Tag Archives: serverless

Visually design your application with AWS Application Composer

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/visually-design-your-application-with-aws-application-composer/

This post is written by Paras Jain, Senior Technical Account Manager and Curtis Darst, Senior Solutions Architect.

AWS Application Composer allows you to design and build applications visually, and previously supported 13 AWS CloudFormation resource types. Today, the service expands support to all available CloudFormation resource types.

Overview

AWS Application Composer provides you with an interactive canvas for visually designing your applications. You use a drag-and-drop interface to create an application design from scratch or import an existing application definition to edit it.

Modern event-driven applications are built on many services. Visualizing an architecture helps you better understand the relationships between those services and identify gaps and areas of improvement.

You can use AWS Application Composer in local sync mode to connect to your local file system, so that your changes are saved to it automatically. This lets you integrate with existing version control systems and your development and deployment workflows.

AWS Application Composer provides a drag-and-drop canvas view and a code editor template view. Changes made to one view reflect on the other view. Similarly, changes made in AWS Application Composer are reflected in your local code editor and vice versa.

What is AWS releasing today?

AWS Application Composer already supports 13 serverless resource types. For these resource types, AWS Application Composer provides enhanced component cards.

Enhanced component cards allow you to configure and join components together. Today’s release gives you the ability to drag and drop 1,134 resource types onto the canvas and configure them using the resource configuration pane.

This blog post shows how you can create a fault tolerant compute architecture involving an Application Load Balancer, two Amazon Elastic Compute Cloud (EC2) instances in different Availability Zones, and an Amazon Relational Database Service (RDS) instance.

Conceptually, this is the application design:

Application design

Designing a scalable and fault tolerant compute stack

For this blog post, you create a fault tolerant compute stack consisting of an Application Load Balancer (ALB), two EC2 instances with automatic scaling capabilities in two different Availability Zones, and an RDS instance.

  1. Navigate to the AWS Application Composer service in the AWS Management Console. Create a new project by choosing Create Project.
  2. If you are using one of the browsers that support local sync (Google Chrome and Microsoft Edge at this time), you can connect the project to the local file system and edit it using a command line interface or an integrated development environment (IDE). To do so:
    1. Choose Menu, and Local sync.
    2. Select a folder on your file system and allow the necessary permissions from the browser when prompted.
  3. Some components in architecture diagrams, like security groups, can be visualized in the canvas, but you don’t necessarily want to represent them as a prominent part of the architecture. Therefore, for brevity, instead of dragging and dropping them, you only configure them in template mode.
    Template mode

    1. Choose Template to switch to the template view.
    2. Paste the following code in the template editor:
      Resources:
        DBEC2SecurityGroup:
          Type: AWS::EC2::SecurityGroup
          Properties:
            GroupDescription: Open database for access
            SecurityGroupIngress:
              - IpProtocol: tcp
                FromPort: '3306'
                ToPort: '3306'
                SourceSecurityGroupId: !Ref WebServerSecurityGroup
            VpcId:
              ParameterId: VpcId
              Format: AWS::EC2::VPC::Id
        WebServerSecurityGroup:
          Type: AWS::EC2::SecurityGroup
          Properties:
            GroupDescription: Enable HTTP access via port 80 locked down to the load balancer + SSH access.
            SecurityGroupIngress:
              - IpProtocol: tcp
                FromPort: '80'
                ToPort: '80'
                SourceSecurityGroupId: !Select
                  - 0
                  - !GetAtt LoadBalancer.SecurityGroups
              - IpProtocol: tcp
                FromPort: '22'
                ToPort: '22'
                CidrIp:
                  ParameterId: SSHLocation
                  Format: String
                  Default: 0.0.0.0/0
            VpcId:
              ParameterId: VpcId
              Format: AWS::EC2::VPC::Id
        WebServerGroup:
          Type: AWS::AutoScaling::AutoScalingGroup
          Properties:
            VPCZoneIdentifier:
              ParameterId: Subnets
              Format: List<AWS::EC2::Subnet::Id>
            LaunchConfigurationName: !Ref LaunchConfiguration
            MinSize: '1'
            MaxSize: '5'
            DesiredCapacity:
              ParameterId: WebServerCapacity
              Format: Number
              Default: '1'
            TargetGroupARNs:
              - !Ref TargetGroup
      
    3. Switch back to canvas view.
  4. Add an Application Load Balancer, Load Balancer Listener, Load Balancer Target Group, Auto Scaling Launch Configuration and an RDS DB instance.
    Add components

    1. Under the resources pane on the left, enter loadbalancer in the search bar.
    2. Drag and drop AWS::ElasticLoadBalancingV2::LoadBalancer from the resources pane to the canvas.
  5. Repeat these steps for the other four resource types. Choose Arrange. Your canvas now appears as follows:
    Canvas layout
  6. Start configuring the remaining component cards. You can connect two cards visually by connecting the right connection port of one card to the left connection port of another card. At the moment, not all component cards support visual connectivity. For those cards, you can establish connectivity using the resource configuration pane. You can also update the template code directly. Either way, the connectivity is reflected in the canvas.
  7. You configure the components in the architecture using the Resource configuration pane. First, configure the Application Load Balancer listener:
    Configure components

    1. Choose the Listener Card in the canvas.
    2. Choose Details.
    3. Paste the following code in the Resource Configuration Section:
      DefaultActions:
        - Type: forward
          TargetGroupArn: !Ref TargetGroup
      LoadBalancerArn: !Ref LoadBalancer
      Port: '80'
      Protocol: HTTP
    4. Choose Save.
  8. Repeat the same steps for the remaining resource types with the following code. The code for the Load Balancer card is:
    Subnets:
      ParameterId: Subnets
      Format: List<AWS::EC2::Subnet::Id>

  9. The code for the Target Group card is:
    HealthCheckPath: /
    HealthCheckIntervalSeconds: 10
    HealthCheckTimeoutSeconds: 5
    HealthyThresholdCount: 2
    Port: 80
    Protocol: HTTP
    UnhealthyThresholdCount: 5
    VpcId:
      ParameterId: VpcId
      Format: AWS::EC2::VPC::Id
    TargetGroupAttributes:
      - Key: stickiness.enabled
        Value: 'true'
      - Key: stickiness.type
        Value: lb_cookie
      - Key: stickiness.lb_cookie.duration_seconds
        Value: '30'
    
  10. This is the code for the Launch Configuration. Replace <image-id> with the right image ID for your Region.
    ImageId: <image-id>
    InstanceType: t2.small
    SecurityGroups:
      - !Ref WebServerSecurityGroup
    
  11. The code for DBInstance is:
    DBName:
      ParameterId: DBName
      Format: String
      Default: wordpressdb
    Engine: MySQL
    MultiAZ:
      ParameterId: MultiAZDatabase
      Format: String
      Default: 'false'
    MasterUsername:
      ParameterId: DBUser
      Format: String
    MasterUserPassword:
      ParameterId: DBPassword
      Format: String
    DBInstanceClass:
      ParameterId: DBClass
      Format: String
      Default: db.t2.small
    AllocatedStorage:
      ParameterId: DBAllocatedStorage
      Format: Number
      Default: '5'
    VPCSecurityGroups:
      - !GetAtt DBEC2SecurityGroup.GroupId
    
  12. Choose Arrange. Your canvas looks like this:
    Canvas layout
  13. This completes the visualization portion of the application architecture. You can export this visualization by using the Export Canvas option in the menu.

Adding observability

After adding the core application components, you now add observability to your application. Observability enables you to collect and analyze important events and metrics for your applications.

To be notified of any changes to the RDS database configuration, use a serverless design pattern to avoid running instances when they are not needed. Conceptually, your observability stack looks like this:

Architecture

  1. Amazon EventBridge captures the events emitted by Amazon RDS.
  2. For any event matching the EventBridge rule, EventBridge invokes AWS Lambda.
  3. Lambda runs your custom logic and publishes a notification to an Amazon Simple Notification Service (SNS) topic. You can subscribe interested parties to this SNS topic (a minimal handler sketch follows this list).
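The custom logic is up to you. As an illustration only, the following TypeScript sketch (using the AWS SDK for JavaScript v3) forwards the incoming RDS event detail to the SNS topic. The TOPIC_ARN environment variable name and the message format are assumptions, not part of the Application Composer output.

import type { EventBridgeEvent } from 'aws-lambda';
import { SNSClient, PublishCommand } from '@aws-sdk/client-sns';

const sns = new SNSClient({});

// TOPIC_ARN is an assumed environment variable name; Application Composer injects
// a topic reference when you connect the function card to the topic card, but the
// exact variable name in your generated template may differ.
export const handler = async (
  event: EventBridgeEvent<'RDS DB Instance Event', Record<string, unknown>>
): Promise<void> => {
  await sns.send(
    new PublishCommand({
      TopicArn: process.env.TOPIC_ARN,
      Subject: 'RDS configuration change detected',
      Message: JSON.stringify(event.detail, null, 2),
    })
  );
};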

There are now two distinct sets of components in the architecture. One set of components comprises the core application while another comprises the observability logic.

AWS Application Composer allows you to organize different components in groups. This allows you and your team to focus on one portion of the architecture at a time. Before adding observability components, first create a group of the existing components.

Adding components

  1. Select a component card.
  2. While holding the Shift key, select the other cards. Once all resources are selected, choose Group action.

Once the group is created, follow these steps to rename the group.

Rename the group

  1. Select the Group card.
  2. Rename the group to Application Stack.
  3. Choose Save.

Now add the observability components. Repeat the process of searching, then dragging and dropping the following components from the Resources pane onto the canvas, outside the Application Stack group.

    1. EventBridge Event rule
    2. Lambda Function
    3. SNS Topic
    4. SNS Subscription

Repeat the grouping process for these four components and name the group Observability.

Some of the components have a small circle on their sides. These are connector ports. A port on the right side of a card indicates an opportunity for the card to invoke another card. A port on the left side indicates an opportunity for a card to be invoked by another card. You can connect two cards by clicking the right port of a card and dragging to the left port of another card.

Create the observability stack with the following steps:

  1. Connect the right port of EventBridge Event Rule card to the left port of Lambda Function Card. This makes the Lambda function a target for the EventBridge rule.
  2. Connect the right port of the Lambda function to the left port of the SNS topic. This adds the necessary AWS Identity and Access Management (IAM) permissions policies and environment variables to the Lambda function so it can interact with the SNS topic.
  3. Select the EventBridge event rule card and replace the event pattern code in the resource properties pane with the following code. This event pattern monitors the RDS instance for an instance change event and pushes this event to Lambda.
    source:
      - aws.rds
    detail-type:
      - RDS DB Instance Event
    
  4. Select the SNS subscription to see the resource configuration pane. Add the following code to the resource configuration. Replace [email protected] with your email address.
        Endpoint: [email protected]
        Protocol: email
        TopicArn: !Ref Topic
    
  5. Repeat the group creation steps to create an observability group comprising the EventBridge event rule, Lambda function, SNS topic, and SNS subscription. Name the group Observability. Your group appears as follows:
    Observability group

Deploying your AWS Architecture

Before you can provision the resources for your architecture, you must make configuration changes in line with your organization’s development and deployment best practices.

For example, you must provide a strong DB password and name the resources according to your organization’s naming conventions. You must also add the Lambda function code containing your custom logic.

AWS Application Composer lets you configure each resource via the resource configuration pane. This enables you to stay in context while creating a complex architecture. You can quickly find the resource you want to edit instead of scrolling through a large template file. If you prefer to edit the template file directly, you can use the Template view of AWS Application Composer.

Alternatively, if you have enabled local sync, you can edit the file directly in your integrated development environment (IDE), where changes made in AWS Application Composer are saved in real time. If you have not enabled local sync, you can export the template using the Save Template File option in the menu. After concluding your changes, you can provision the AWS infrastructure either by using the AWS CloudFormation console or the AWS Command Line Interface (AWS CLI).
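For example, if you saved the exported template as template.yaml and it defines the DBUser and DBPassword parameters configured earlier, a minimal deployment sketch with the AWS CLI could look like this (the stack name and values are placeholders):

aws cloudformation deploy \
  --template-file template.yaml \
  --stack-name my-application-stack \
  --parameter-overrides DBUser=admin DBPassword=<your-strong-password>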

Pricing

AWS Application Composer does not provision any AWS resources. Using AWS Application Composer to design your application architecture is free. You are only charged when you provision AWS Resources using the template file created by AWS Application Composer.

Conclusion

This blog post shows how to use AWS Application Composer to create and update an application architecture using any of the 1,134 CloudFormation resource types. It covers how to configure local sync mode to integrate AWS Application Composer into your development workflow. The post demonstrates how to organize your architecture into two distinct groups. Changes made in the canvas view are reflected in the template view and vice versa.

To learn more about AWS Application Composer visit https://aws.amazon.com/application-composer/.

For more serverless learning resources, visit Serverless Land.

Architecting for scale with Amazon API Gateway private integrations

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/architecting-for-scale-with-amazon-api-gateway-private-integrations/

This post is written by Lior Sadan, Sr. Solutions Architect, and Anandprasanna Gaitonde, Sr. Solutions Architect.

Organizations use Amazon API Gateway to build secure, robust APIs that expose internal services to other applications and external users. When the environment evolves to many microservices, customers must ensure that the API layer can handle the scale without compromising security and performance. API Gateway provides various API types and integration options, and builders must consider how each option impacts the ability to scale the API layer securely and performantly as the microservices environment grows.

This blog post compares architecture options for building scalable, private integrations with API Gateway for microservices. It covers REST and HTTP APIs and their use of private integrations, and shows how to develop secure, scalable microservices architectures.

Overview

Here is a typical API Gateway implementation with backend integrations to various microservices:

A typical API Gateway implementation with backend integrations to various microservices

API Gateway handles the API layer, while integrating with backend microservices running on Amazon EC2, Amazon Elastic Container Service (ECS), or Amazon Elastic Kubernetes Service (EKS). This blog focuses on containerized microservices that expose internal endpoints that the API layer then exposes externally.

To keep microservices secure and protected from external traffic, they are typically implemented within an Amazon Virtual Private Cloud (VPC) in a private subnet, which is not accessible from the internet. API Gateway offers a way to expose these resources securely beyond the VPC through private integrations using VPC link. Private integration forwards external traffic sent to APIs to private resources, without exposing the services to the internet and without leaving the AWS network. For more information, read Best Practices for Designing Amazon API Gateway Private APIs and Private Integration.

The example scenario has four microservices that could be hosted in one or more VPCs. It shows the patterns integrating the microservices with front-end load balancers and API Gateway via VPC links.

While VPC links enable private connections to microservices, customers may have additional needs:

  • Increase scale: Support a larger number of microservices behind API Gateway.
  • Independent deployments: Dedicated load balancers per microservice enable teams to perform blue/green deployments independently without impacting other teams.
  • Reduce complexity: Use existing microservice load balancers instead of introducing additional ones for API Gateway integration.
  • Low latency: Ensure minimal latency in API request/response flow.

API Gateway offers HTTP APIs and REST APIs (see Choosing between REST APIs and HTTP APIs) to build RESTful APIs. For large microservices architectures, the API type influences integration considerations:

API type | VPC link supported integrations | Quota on VPC links per account per Region
REST API | Network Load Balancer (NLB) | 20
HTTP API | Network Load Balancer (NLB), Application Load Balancer (ALB), and AWS Cloud Map | 10

This post presents four private integration options taking into account the different capabilities and quotas of VPC link for REST and HTTP APIs:

  • Option 1: HTTP API using VPC link to multiple NLBs or ALBs.
  • Option 2: REST API using multiple VPC links.
  • Option 3: REST API using VPC link with NLB.
  • Option 4: REST API using VPC link with NLB and ALB targets.

Option 1: HTTP API using VPC link to multiple NLBs or ALBs

HTTP APIs allow connecting a single VPC link to multiple ALBs, NLBs, or resources registered with an AWS Cloud Map service. This provides a fan-out approach to connect with multiple backend microservices. However, load balancers integrated with a particular VPC link must reside in the same VPC.

Option 1: HTTP API using VPC link to multiple NLB or ALBs

Two microservices are in a single VPC, each with its own dedicated ALB. The ALB listeners direct HTTPS traffic to the respective backend microservice target group. A single VPC link is connected to two ALBs in that VPC. API Gateway uses path-based routing rules to forward requests to the appropriate load balancer and associated microservice. This approach is covered in Best Practices for Designing Amazon API Gateway Private APIs and Private Integration – HTTP API. Sample CloudFormation templates to deploy this solution are available on GitHub.

You can add additional ALBs and microservices within VPC IP space limits. Use Network Address Usage (NAU) limits to plan the distribution of microservices across VPCs. Scale beyond one VPC by adding VPC links to connect more VPCs, within VPC link quotas. You can scale further by using routing rules such as path-based routing at the ALB to connect more services behind a single ALB (see Quotas for your Application Load Balancers). This architecture can also be built using an NLB.
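As a hedged illustration of this pattern (a sketch, not the linked GitHub templates), the following CloudFormation snippet defines one path-based route that forwards traffic through a VPC link to an ALB listener. MyHttpApi, MyVpcLink, and OrdersAlbListener are assumed to be defined elsewhere in the template:

OrdersIntegration:
  Type: AWS::ApiGatewayV2::Integration
  Properties:
    ApiId: !Ref MyHttpApi
    IntegrationType: HTTP_PROXY
    IntegrationMethod: ANY
    ConnectionType: VPC_LINK
    ConnectionId: !Ref MyVpcLink
    IntegrationUri: !Ref OrdersAlbListener # ARN of the ALB listener
    PayloadFormatVersion: '1.0'
OrdersRoute:
  Type: AWS::ApiGatewayV2::Route
  Properties:
    ApiId: !Ref MyHttpApi
    RouteKey: 'ANY /orders/{proxy+}'
    Target: !Sub integrations/${OrdersIntegration}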

Benefits:

  • High degree of scalability, fanning out to multiple microservices using a single VPC link and the multiplexing capabilities of ALB/NLB.
  • Direct integration with existing microservice load balancers eliminates the need to introduce new components and reduces operational burden.
  • Lower latency for API request/response thanks to direct integration.
  • Dedicated load balancers per microservice enable independent deployments for microservices teams.

Option 2: REST API using multiple VPC links

For REST APIs, the architecture to support multiple microservices may differ due to these considerations:

  • NLB is the only supported private integration for REST APIs.
  • VPC links for REST APIs can have only one target NLB.

Option 2: REST API using multiple VPC links

A VPC link is required for each NLB, even if the NLBs are in the same VPC. Each NLB serves one microservice, with a listener to route API Gateway traffic to the target group. API Gateway path-based routing sends requests to the appropriate NLB and corresponding microservice. The setup required for this private integration is similar to the example described in Tutorial: Build a REST API with API Gateway private integration.

To scale further, add an additional VPC link and NLB integration for each microservice, either in the same or different VPCs based on your needs and isolation requirements. This approach is limited by the VPC links quota per account per Region.
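Each of those integrations starts with one VPC link per NLB. As a sketch, you can create a VPC link for a REST API from the CLI as follows, where the name and NLB ARN are placeholders:

aws apigateway create-vpc-link \
  --name microservice-a-vpc-link \
  --target-arns arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/net/microservice-a-nlb/1234567890abcdef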

Benefits:

  • Single NLB in the request path reduces operational complexity.
  • Dedicated NLBs for each microservice enable independent deployments.
  • No additional hops in the API request path results in lower latency.

Considerations:

  • Scalability is limited by the one-to-one mapping of VPC links to NLBs and microservices, and by the VPC links quota per account per Region.

Option 3: REST API using VPC link with NLB

The one-to-one mapping of VPC links to NLBs and microservices in option 2 has scalability limits due to VPC link quotas. An alternative is to use multiple microservices per NLB.

Option 3: REST API using VPC link with NLB

A single NLB fronts multiple microservices in a VPC by using multiple listeners, with each listener on a separate port per microservice. Here, NLB1 fronts two microservices in one VPC. NLB2 fronts two other microservices in a second VPC. With multiple microservices per NLB, routing is defined for the REST API when choosing the integration point for a method. You define each service by selecting the VPC link, which is integrated with a specific NLB, and the specific port assigned to that microservice on the NLB listener, addressed through the endpoint URL.
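As a sketch of what this looks like in CloudFormation, following the private integration tutorial pattern (the resource names, NLB DNS name, and port 8081 are assumptions for illustration):

OrdersProxyMethod:
  Type: AWS::ApiGateway::Method
  Properties:
    RestApiId: !Ref RestApi
    ResourceId: !Ref OrdersProxyResource
    HttpMethod: ANY
    AuthorizationType: NONE
    RequestParameters:
      method.request.path.proxy: true
    Integration:
      Type: HTTP_PROXY
      IntegrationHttpMethod: ANY
      ConnectionType: VPC_LINK
      ConnectionId: !Ref VpcLink1
      # Port 8081 selects the NLB listener dedicated to this microservice
      Uri: http://my-internal-nlb.example.internal:8081/{proxy}
      RequestParameters:
        integration.request.path.proxy: method.request.path.proxy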

To scale out further, add additional listeners to existing NLBs, limited by Quotas for your Network Load Balancers. In cases where each microservice has its dedicated load balancer or access point, those are configured as targets to the NLB. Alternatively, integrate additional microservices by adding additional VPC links.

Benefits:

  • Larger scalability – limited by NLB listener quotas and VPC link quotas.
  • Managing fewer NLBs supporting multiple microservices reduces operational complexity.
  • Low latency with a single NLB in the request path.

Considerations:

  • Shared NLB configuration limits independent deployments for individual microservices teams.

Option 4: REST API using VPC link with NLB and ALB targets

Customers often build microservices with ALB as their access point. To expose these via API Gateway REST APIs, you can take advantage of ALB as a target for NLB. This pattern also increases the number of microservices supported compared to the option 3 architecture.

Option 4: REST API using VPC link with NLB and ALB targets

A VPC link (VPCLink1) is created with NLB1 in VPC1. ALB1 and ALB2 front the microservices mS1 and mS2, and are added as NLB targets on separate listeners. VPC2 has a similar configuration. Your isolation needs and IP space determine whether microservices can reside in one or multiple VPCs.

To scale out further:

  • Create additional VPC links to integrate new NLBs.
  • Add NLB listeners to support more ALB targets.
  • Configure ALB with path-based rules to route requests to multiple microservices.

Benefits:

  • High scalability integrating services using NLBs and ALBs.
  • Independent deployments per team are possible when each ALB is dedicated to a single microservice.

Considerations:

  • Multiple load balancers in the request path can increase latency.

Considerations and best practices

Beyond the scaling considerations for VPC link integration discussed in this blog, there are other considerations:

Conclusion

This blog explores building scalable API Gateway integrations for microservices using VPC links. VPC links enable forwarding external traffic to backend microservices without exposing them to the internet or leaving the AWS network. The post covers scaling considerations based on using REST APIs versus HTTP APIs and how they integrate with NLBs or ALBs across VPCs.

While API type and load balancer selection have other design factors, it’s important to keep the scaling considerations discussed in this blog in mind when designing your API layer architecture. By optimizing API Gateway implementation for performance, latency, and operational needs, you can build a robust, secure API to expose microservices at scale.

For more serverless learning resources, visit Serverless Land.

Centralizing management of AWS Lambda layers across multiple AWS Accounts

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/centralizing-management-of-aws-lambda-layers-across-multiple-aws-accounts/

This post is written by Debasis Rath, Sr. Specialist SA-Serverless, Kanwar Bajwa, Enterprise Support Lead, and Xiaoxue Xu, Solutions Architect (FSI).

Enterprise customers often manage an inventory of AWS Lambda layers, which provide shared code and libraries to Lambda functions. These Lambda layers are then shared across AWS accounts and AWS Organizations to promote code uniformity, reusability, and efficiency. However, as enterprises scale on AWS, managing shared Lambda layers across an increasing number of functions and accounts is best handled with automation.

This blog post shows how to centralize the management of Lambda layers to ensure compliance with your enterprise’s governance standards and to promote consistency across your infrastructure. This centralized management uses a detective configuration approach to systematically identify non-compliant Lambda functions that use outdated Lambda layer versions, and corrective measures to remediate these Lambda functions by updating them to the right layer version.

This solution uses AWS services such as AWS Config, Amazon EventBridge Scheduler, AWS Systems Manager (SSM) Automation, and AWS CloudFormation StackSets.

Solution overview

This solution offers two parts for layers management:

  1. On-demand visibility into outdated Lambda functions.
  2. Automated remediation of the affected Lambda functions.

1.	On-demand visibility into outdated Lambda functions

This is the architecture for the first part. Users with the necessary permissions can use AWS Config advanced queries to obtain a list of outdated Lambda functions.

The current configuration state of any Lambda function is captured by the configuration recorder within the member account. This data is then aggregated by the AWS Config Aggregator within the management account. The aggregated data can be accessed using queries.

2.	Automated remediation of the affected Lambda functions

This diagram depicts the architecture for the second part. Administrators must manually deploy CloudFormation StackSets to initiate the automatic remediation of outdated Lambda functions.

The manual remediation trigger is used instead of a fully automated solution. Administrators schedule this manual trigger as part of a change request to minimize disruptions to the business. All business stakeholders owning affected Lambda functions should receive this change request notification and have adequate time to perform unit tests to assess the impact.

Upon receiving confirmation from the business stakeholders, the administrator deploys the CloudFormation StackSets, which in turn deploy the CloudFormation stack to the designated member account and Region. After the CloudFormation stack deployment, the EventBridge scheduler invokes an AWS Config custom rule evaluation. This rule identifies the non-compliant Lambda functions, and later updates them using SSM Automation runbooks.

Centralized approach to layer management

The following walkthrough deploys the two-part architecture described, using a centralized approach to layer management as in the preceding diagram. A decentralized approach scatters management and updates of Lambda layers across accounts, making enforcement more difficult and error-prone.

This solution is also available on GitHub.

Prerequisites

For the solution walkthrough, you should have the following prerequisites:

Writing an on-demand query for outdated Lambda functions

First, you write and run an AWS Config advanced query to identify the accounts and Regions where the outdated Lambda functions reside. This is helpful for end users to determine the scope of impact, and identify the responsible groups to inform based on the affected Lambda resources.

Follow these procedures to understand the scope of impact using the AWS CLI:

  1. Open CloudShell in your AWS account.
  2. Run the following AWS CLI command. Replace YOUR_AGGREGATOR_NAME with the name of your AWS Config aggregator, and YOUR_LAYER_ARN with the outdated Lambda layer Amazon Resource Name (ARN).
    aws configservice select-aggregate-resource-config \
    --expression "SELECT accountId, awsRegion, configuration.functionName, configuration.version WHERE resourceType = 'AWS::Lambda::Function' AND configuration.layers.arn = 'YOUR_LAYER_ARN'" \
    --configuration-aggregator-name 'YOUR_AGGREGATOR_NAME' \
    --query "Results" \
    --output json | \
    jq -r '.[] | fromjson | [.accountId, .awsRegion, .configuration.functionName, .configuration.version] | @csv' > output.csv
    
  3. The results are saved to a CSV file named output.csv in the current working directory. This file contains the account IDs, Regions, names, and versions of the Lambda functions that are currently using the specified Lambda layer ARN. Refer to the documentation on how to download a file from AWS CloudShell.

To explore more configuration data and further improve visualization using services like Amazon Athena and Amazon QuickSight, refer to Visualizing AWS Config data using Amazon Athena and Amazon QuickSight.

Deploying automatic remediation to update outdated Lambda functions

Next, you deploy the automatic remediation CloudFormation StackSets to the affected accounts and Regions where the outdated Lambda functions reside. You can use the query outlined in the previous section to obtain the account IDs and Regions.

Updating Lambda layers may affect the functionality of existing Lambda functions. It is essential to notify affected development groups, and coordinate unit tests to prevent unintended disruptions before remediation.

To create and deploy CloudFormation StackSets from your management account for automatic remediation:

  1. Run the following command in CloudShell to clone the GitHub repository:
    git clone https://github.com/aws-samples/lambda-layer-management.git
  2. Run the following CLI command to upload your template and create the stack set container.
    aws cloudformation create-stack-set \
      --stack-set-name layers-remediation-stackset \
      --template-body file://lambda-layer-management/layer_manager.yaml
    
  3. Run the following CLI command to add stack instances in the desired accounts and Regions to your CloudFormation StackSets. Replace the account IDs, Regions, and parameters before you run this command. You can refer to the syntax in the AWS CLI Command Reference. “NewLayerArn” is the ARN for your updated Lambda layer, while “OldLayerArn” is the original Lambda layer ARN.
    aws cloudformation create-stack-instances \
    --stack-set-name layers-remediation-stackset \
    --accounts <LIST_OF_ACCOUNTS> \
    --regions <YOUR_REGIONS> \
    --parameter-overrides ParameterKey=NewLayerArn,ParameterValue='<NEW_LAYER_ARN>' ParameterKey=OldLayerArn,ParameterValue='<OLD_LAYER_ARN>'
    
  4. Run the following CLI command to verify that the stack instances are created successfully. The operation ID is returned as part of the output from step 3.
    aws cloudformation describe-stack-set-operation \
      --stack-set-name layers-remediation-stackset \
      --operation-id <OPERATION_ID>

This CloudFormation StackSet deploys an EventBridge Scheduler that immediately triggers the AWS Config custom rule for evaluation. This rule, written in AWS CloudFormation Guard, detects all the Lambda functions in the member accounts currently using the outdated Lambda layer version. By using the Auto Remediation feature of AWS Config, the SSM automation document is run against each non-compliant Lambda function to update it with the new layer version.

Other considerations

The provided remediation CloudFormation StackSet uses the UpdateFunctionConfiguration API to modify your Lambda functions’ configurations directly. This method of updating may lead to drift from your original infrastructure as code (IaC) service, such as the CloudFormation stack that you used to provision the outdated Lambda functions. In this case, you might need to add an additional step to resolve drift from your original IaC service.
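For example, you could detect and then review drift on the original stack with the AWS CLI (the stack name here is a placeholder):

aws cloudformation detect-stack-drift --stack-name my-lambda-functions-stack

aws cloudformation describe-stack-resource-drifts \
  --stack-name my-lambda-functions-stack \
  --stack-resource-drift-status-filters MODIFIED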

Alternatively, you might want to update your IaC code directly, referencing the latest version of the Lambda layer, instead of deploying the remediation CloudFormation StackSet as described in the previous section.

Cleaning up

Refer to the documentation for instructions on deleting all the created stack instances from your account. After that, delete the CloudFormation StackSet.

Conclusion

Managing Lambda layers across multiple accounts and Regions can be challenging at scale. By using a combination of AWS Config, EventBridge Scheduler, AWS Systems Manager (SSM) Automation, and CloudFormation StackSets, it is possible to streamline the process.

The example provides on-demand visibility into affected Lambda functions and allows scheduled remediation of impacted functions. AWS SSM Automation further simplifies maintenance, deployment, and remediation tasks. With this architecture, you can efficiently manage updates to your Lambda layers and ensure compliance with your organization’s policies, saving time and reducing errors in your serverless applications.

To learn more about using Lambda layers, visit the AWS documentation. For more serverless learning resources, visit Serverless Land.

How Chime Financial uses AWS to build a serverless stream analytics platform and defeat fraudsters

Post Syndicated from Khandu Shinde original https://aws.amazon.com/blogs/big-data/how-chime-financial-uses-aws-to-build-a-serverless-stream-analytics-platform-and-defeat-fraudsters/

This is a guest post by Khandu Shinde, Staff Software Engineer, and Edward Paget, Senior Software Engineer at Chime Financial.

Chime is a financial technology company founded on the premise that basic banking services should be helpful, easy, and free. Chime partners with national banks to design member-first financial products. This creates a more competitive market with better, lower-cost options for everyday Americans who aren’t being served well by traditional banks. We help drive innovation, inclusion, and access across the industry.

Chime has a responsibility to protect our members against unauthorized transactions on their accounts. Chime’s Risk Analysis team constantly monitors trends in our data to find patterns that indicate fraudulent transactions.

This post discusses how Chime utilizes AWS Glue, Amazon Kinesis, Amazon DynamoDB, and Amazon SageMaker to build an online, serverless fraud detection solution — the Chime Streaming 2.0 system.

Problem statement

In order to keep up with the rapid movement of fraudsters, our decision platform must continuously monitor user events and respond in real time. However, our legacy data warehouse-based solution was not equipped for this challenge. It was designed to handle complex queries and business intelligence (BI) use cases at large scale, but with a minimum data freshness of 10 minutes, the architecture inherently didn’t align with the near real-time fraud detection use case.

To make high-quality decisions, we need to collect user event data from various sources and update risk profiles in real time. We also need to be able to add new fields and metrics to the risk profiles as our team identifies new attacks, without needing engineering intervention or complex deployments.

We decided to explore streaming analytics solutions where we can capture, transform, and store event streams at scale, and serve rule-based fraud detection models and machine learning (ML) models with milliseconds latency.

Solution overview

The following diagram illustrates the design of the Chime Streaming 2.0 system.

The design included the following key components:

  1. We have Amazon Kinesis Data Streams as our streaming data service to capture and store event streams at scale. Our stream pipelines capture various event types, including user enrollment events, user login events, card swipe events, peer-to-peer payments, and application screen actions.
  2. Amazon DynamoDB is another data source for our Streaming 2.0 system. It acts as the application backend and stores data such as blocked devices list and device-user mapping. We mainly use it as lookup tables in our pipeline.
  3. AWS Glue jobs form the backbone of our Streaming 2.0 system. The simple AWS Glue icon in the diagram represents thousands of AWS Glue jobs performing different transformations. To achieve the 5-15 seconds end-to-end data freshness service level agreement (SLA) for the Streaming 2.0 pipeline, we use streaming ETL jobs in AWS Glue to consume data from Kinesis Data Streams and apply near-real-time transformation. We choose AWS Glue mainly due to its serverless nature, which simplifies infrastructure management with automatic provisioning and worker management, and the ability to perform complex data transformations at scale.
  4. The AWS Glue streaming jobs generate derived fields and risk profiles that get stored in Amazon DynamoDB. We use Amazon DynamoDB as our online feature store due to its millisecond performance and scalability.
  5. Our applications call Amazon SageMaker Inference endpoints for fraud detection. The Amazon DynamoDB online feature store supports real-time inference with single-digit millisecond query latency.
  6. We use Amazon Simple Storage Service (Amazon S3) as our offline feature store. It contains historical user activities and other derived ML features.
  7. Our data scientist team can access the dataset and perform ML model training and batch inferencing using Amazon SageMaker.

AWS Glue pipeline implementation deep dive

There are several key design principles for our AWS Glue Pipeline and the Streaming 2.0 project.

  • We want to democratize our data platform and make the data pipeline accessible to all Chime developers.
  • We want to implement cloud financial backend services and achieve cost efficiency.

To achieve data democratization, we needed to enable different personas in the organization to use the platform and define transformation jobs quickly, without worrying about the actual implementation details of the pipelines. The data infrastructure team built an abstraction layer on top of Spark and integrated services. This layer contained API wrappers over integrated services, job tags, scheduling configurations and debug tooling, hiding Spark and other lower-level complexities from end users. As a result, end users were able to define jobs with declarative YAML configurations and define transformation logic with SQL. This simplified the onboarding process and accelerated the implementation phase.
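Chime’s abstraction layer is internal, so as a purely illustrative sketch, a declarative job definition in the style described above might look like the following; every field name, stream name, and SQL statement here is hypothetical:

job:
  name: card-swipe-risk-profile
  team: risk-analysis                # used for cost-allocation tagging
  source:
    kinesis_stream: card-swipe-events
  sink:
    dynamodb_table: risk-profiles
  transform_sql: |
    SELECT user_id,
           COUNT(*)    AS swipes_last_5m,
           MAX(amount) AS max_amount_last_5m
    FROM source
    GROUP BY user_id, window(event_time, '5 minutes')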

To achieve cost efficiency, our team built a cost attribution dashboard based on AWS cost allocation tags. We enforced tagging with the above abstraction layer and had clear cost attribution for all AWS Glue jobs down to the team level. This enabled us to track down less optimized jobs and work with job owners to implement best practices with impact-based priority. One common misconfiguration we found was sizing of AWS Glue jobs. With data democratization, many users lacked the knowledge to right-size their AWS Glue jobs. The AWS team introduced AWS Glue auto scaling to us as a solution. With AWS Glue Auto Scaling, we no longer needed to plan AWS Glue Spark cluster capacity in advance. We could just set the maximum number of workers and run the jobs. AWS Glue monitors the Spark application execution, and allocates more worker nodes to the cluster in near-real time after Spark requests more executors based on our workload requirements. We noticed a 30–45% cost saving across our AWS Glue Jobs once we turned on Auto Scaling.

Conclusion

In this post, we showed you how Chime’s Streaming 2.0 system allows us to ingest events and make them available to the decision platform just seconds after they are emitted from other services. This enables us to write better risk policies, provide fresher data for our machine learning models, and protect our members from unauthorized transactions on their accounts.

Over 500 developers in Chime are using this streaming pipeline and we ingest more than 1 million events per second. We follow the sizing and scaling process from the AWS Glue streaming ETL jobs best practices blog and landed on a 1:1 mapping between Kinesis shards and vCPU cores. The end-to-end latency is less than 15 seconds, and it improves the model score calculation speed by 1200% compared to the legacy implementation. This system has proven to be reliable, performant, and cost-effective at scale.

We hope this post will inspire your organization to build a real-time analytics platform using serverless technologies to accelerate your business goals.


About the Authors

Khandu Shinde Khandu Shinde is a Staff Engineer focused on Big Data Platforms and Solutions for Chime. He helps to make the platform scalable for Chime’s business needs with architectural direction and vision. He’s based in San Francisco where he plays cricket and watches movies.

Edward Paget Edward Paget is a Software Engineer working on building Chime’s capabilities to mitigate risk to ensure our members’ financial peace of mind. He enjoys being at the intersection of big data and programming language theory. He’s based in Chicago where he spends his time running along the lake shore.

Dylan Qu is a Specialist Solutions Architect focused on Big Data & Analytics with Amazon Web Services. He helps customers architect and build highly scalable, performant, and secure cloud-based solutions on AWS.

Implementing idempotent AWS Lambda functions with Powertools for AWS Lambda (TypeScript)

Post Syndicated from Pascal Vogel original https://aws.amazon.com/blogs/compute/implementing-idempotent-aws-lambda-functions-with-powertools-for-aws-lambda-typescript/

This post is written by Alexander Schüren, Sr Specialist SA, Powertools.

One of the design principles of AWS Lambda is to “develop for retries and failures”. If your function fails, the Lambda service will retry and invoke your function again with the same event payload. Therefore, when your function performs tasks such as processing orders or making reservations, it is necessary for your Lambda function to handle requests idempotently to avoid duplicate payment or order processing, which can result in a poor customer experience.

This article explains what idempotency is and how to make your Lambda functions idempotent using the idempotency utility for Powertools for AWS Lambda (TypeScript). The Powertools idempotency utility for TypeScript was co-developed with Vanguard and is now generally available.

Understanding idempotency

Idempotency is the property of an operation that can be applied multiple times without changing the result beyond the initial execution. You can safely run an idempotent operation multiple times without side effects, such as duplicate records or data inconsistencies. This is especially relevant for payment and order processing or third-party API integrations.

There are key concepts to consider when implementing idempotency in AWS Lambda. For each invocation, you specify which subset of the event payload you want to use to identify an idempotent request. This is called the idempotency key. This key can be a single field such as transactionId, a combination of multiple fields such as customerId and requestId, or the entire event payload.

Because timestamps, dates, and other generated values within the payload affect the idempotency key, we recommend that you define specific fields rather than using the entire event payload.

By evaluating the idempotency key, you can then decide if the function needs to run again or send an existing response to the client. To do this, you need to store the following information for each request in a persistence layer (such as Amazon DynamoDB):

  • Status: IN_PROGRESS, EXPIRED, COMPLETE
  • Response data: the response to send back to the client instead of executing the function again
  • Expiration timestamp: when the idempotency record becomes invalid for reuse

The following diagram shows a successful request flow for this idempotency scenario:

Request flow for idempotent Lambda function

When you invoke a Lambda function with a particular event for the first time, it stores a record with a unique idempotency key tied to an event payload in the persistence layer.

The function then executes its code and updates the record in the persistence layer with the function response. For subsequent invocations with the same payload, you must check if the idempotency key exists in the persistence layer. If it exists, the function returns the same response to the client. This prevents multiple invocations of the function, making it idempotent.

There are more edge cases to be mindful of, such as when the idempotency record has expired, or handling of failures between the client, the Lambda function, and the persistence layer. The Powertools for AWS Lambda (TypeScript) documentation covers all request flows in detail.

Idempotency with Powertools for AWS Lambda (TypeScript)

Powertools for AWS Lambda, available in Python, Java, .NET, and TypeScript, provides utilities for Lambda functions to ease the adoption of best practices and to reduce the amount of code needed to perform recurring tasks. In particular, it provides a module to handle idempotency.

This post shows examples using the TypeScript version of Powertools. To get started with the Powertools idempotency module, you must install the library and configure it within your build process. For more details, follow the Powertools for AWS Lambda documentation.

Getting started

Powertools for AWS Lambda (TypeScript) is modular, meaning you can install the idempotency utility independently from the Logger, Tracing, Metrics, or other packages. Install the idempotency utility library and the AWS SDK v3 client for DynamoDB in your project using npm:

npm i @aws-lambda-powertools/idempotency @aws-sdk/client-dynamodb @aws-sdk/lib-dynamodb

Before getting started, you need to create a persistent storage layer where the idempotency utility can store its state. Your Lambda function AWS Identity and Access Management (IAM) role must have dynamodb:GetItem, dynamodb:PutItem, dynamodb:UpdateItem and dynamodb:DeleteItem permissions.

Currently, DynamoDB is the only supported persistent storage layer, so you’ll need to create a table first. Use the AWS Cloud Development Kit (CDK), AWS CloudFormation, AWS Serverless Application Model (SAM) or any Infrastructure as Code tool of your choice that supports DynamoDB resources.
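As a hedged example, a minimal CloudFormation sketch of such a table, assuming the default attribute names shown later in this post (partition key id and TTL attribute expiration), could look like this:

IdempotencyTable:
  Type: AWS::DynamoDB::Table
  Properties:
    TableName: IdempotencyTable
    BillingMode: PAY_PER_REQUEST
    AttributeDefinitions:
      - AttributeName: id
        AttributeType: S
    KeySchema:
      - AttributeName: id
        KeyType: HASH
    # Enable TTL so expired idempotency records are removed automatically
    TimeToLiveSpecification:
      AttributeName: expiration
      Enabled: true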

The following sections illustrate how to instrument your Lambda function code to make it idempotent using a wrapper function or using middy middleware.

Using the function wrapper

Assuming you have created a DynamoDB table with the name IdempotencyTable, create a persistence layer in your Lambda function code:

import { makeIdempotent } from "@aws-lambda-powertools/idempotency";
import { DynamoDBPersistenceLayer } from "@aws-lambda-powertools/idempotency/dynamodb";

const persistenceStore = new DynamoDBPersistenceLayer({
  tableName: "IdempotencyTable",
});

Now, apply the makeIdempotent function wrapper to your Lambda function handler to make it idempotent and use the previously configured persistence store.

import { makeIdempotent } from '@aws-lambda-powertools/idempotency';
import { DynamoDBPersistenceLayer } from '@aws-lambda-powertools/idempotency/dynamodb';
import type { Context } from 'aws-lambda';
import type { Request, Response, SubscriptionResult } from './types';

export const handler = makeIdempotent(
  async (event: Request, _context: Context): Promise<Response> => {
    try {
      const payment = … // create payment
	  
      return {
        paymentId: payment.id,
        message: 'success',
        statusCode: 200,
      };

    } catch (error) {
      throw new Error('Error creating payment');
    }
  },
  {
    persistenceStore,
  }
);

The function processes the incoming event to create a payment and return the paymentId, message, and status back to the client. Making the Lambda function handler idempotent ensures that payments are only processed once, despite multiple Lambda invocations with the same event payload. You can also apply the makeIdempotent function wrapper to any other function outside of your handler.

Use the following type definitions for this example by adding a types.ts file to your source folder:

type Request = {
  user: string;
  productId: string;
};

type Response = {
  [key: string]: unknown;
};

type SubscriptionResult = {
  id: string;
  productId: string;
};

Using middy middleware

If you are using middy middleware, Powertools provides makeHandlerIdempotent middleware to make your Lambda function handler idempotent:

import { makeHandlerIdempotent } from '@aws-lambda-powertools/idempotency/middleware';
import { DynamoDBPersistenceLayer } from '@aws-lambda-powertools/idempotency/dynamodb';
import middy from '@middy/core';
import type { Context } from 'aws-lambda';
import type { Request, Response, SubscriptionResult } from './types';

const persistenceStore = new DynamoDBPersistenceLayer({
  tableName: 'IdempotencyTable',
});

export const handler = middy(
  async (event: Request, _context: Context): Promise<Response> => {
    try {
      const payment = … // create payment object
	  
      return {
        paymentId: payment.id,
        message: 'success',
        statusCode: 200,
      };
    } catch (error) {
      throw new Error('Error creating payment');
    }
  }
).use(
    makeHandlerIdempotent({
      persistenceStore,
  })
);

Configuration options

The Powertools idempotency utility comes with several configuration options to change the idempotency behavior that will fit your use case scenario. This section highlights the most common configurations. You can find all available customization options in the AWS Powertools for Lambda (TypeScript) documentation.

Persistence layer options

When you create a DynamoDBPersistenceLayer object, only the tableName attribute is required. Powertools expects a table with the partition key id and uses default names for the other attributes.

You can change these default values if needed by passing the options parameter:

import { DynamoDBPersistenceLayer } from '@aws-lambda-powertools/idempotency/dynamodb';

const persistenceStore = new DynamoDBPersistenceLayer({
  tableName: 'idempotencyTableName',
  keyAttr: 'idempotencyKey', // default: id
  expiryAttr: 'expiresAt', // default: expiration
  inProgressExpiryAttr: 'inProgressExpiresAt', // default: in_progress_expiration
  statusAttr: 'currentStatus', // default: status
  dataAttr: 'resultData', // default: data
  validationKeyAttr: 'validationKey', // default: validation
});

Using a subset of the event payload

When you configure idempotency for your Lambda function handler, Powertools will use the entire event payload for idempotency handling by hashing the object.

However, events from AWS services such as Amazon API Gateway or Amazon Simple Queue Service (Amazon SQS) often have generated fields, such as timestamp or requestId. This results in Powertools treating each event payload as unique.

To prevent that, create an IdempotencyConfig and configure which part of the payload should be hashed for the idempotency logic.

Create the IdempotencyConfig and set eventKeyJmesPath to a key within your event payload:

import { IdempotencyConfig } from '@aws-lambda-powertools/idempotency';

// Extract the idempotency key from the request headers
const config = new IdempotencyConfig({
  eventKeyJmesPath: 'headers."X-Idempotency-Key"',
});

Use the X-Idempotency-Key header for your idempotency key. Subsequent invocations with the same header value will be idempotent.

You can then add the configuration to the makeIdempotent function wrapper from the previous example:

export const handler = makeIdempotent(
  async (event: Request, _context: Context): Promise<Response> => {
    try {
      const payment = … // create payment
      
	  return {
        paymentId: payment.id,
        message: 'success',
        statusCode: 200,
      };
    } catch (error) {
      throw new Error('Error creating payment');
    }
  },
  {
    persistenceStore,
    config
  }
);

The event payload should contain X-Idempotency-Key in the headers, so Powertools can use this field to handle idempotency:

{
  "version": "2.0",
  "routeKey": "ANY /createpayment",
  "rawPath": "/createpayment",
  "rawQueryString": "",
  "headers": {
    "Header1": "value1",
    "X-Idempotency-Key": "abcdefg"
  },
  "requestContext": {
    "accountId": "123456789012",
    "apiId": "api-id",
    "domainName": "id.execute-api.us-east-1.amazonaws.com",
    "domainPrefix": "id",
    "http": {
      "method": "POST",
      "path": "/createpayment",
      "protocol": "HTTP/1.1",
      "sourceIp": "ip",
      "userAgent": "agent"
    },
    "requestId": "id",
    "routeKey": "ANY /createpayment",
    "stage": "$default",
    "time": "10/Feb/2021:13:40:43 +0000",
    "timeEpoch": 1612964443723
  },
  "body": "{\"user\":\"xyz\",\"productId\":\"123456789\"}",
  "isBase64Encoded": false
}

There are other configuration options you can apply, such as payload validation, expiration duration, local caching, and others. See the Powertools for AWS Lambda (TypeScript) documentation for more information.
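For instance, a configuration that combines a payload subset with a shorter expiration window and local caching might look like the following sketch; the option names here follow the utility’s documentation, but verify them against the current Powertools release before use:

import { IdempotencyConfig } from '@aws-lambda-powertools/idempotency';

const config = new IdempotencyConfig({
  eventKeyJmesPath: 'headers."X-Idempotency-Key"', // hash only this header
  expiresAfterSeconds: 60 * 60, // keep idempotency records for one hour
  useLocalCache: true, // serve repeated requests from an in-memory cache
});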

Customizing the AWS SDK configuration

The DynamoDBPersistenceLayer is built-in and allows you to store the idempotency data for all your requests. Under the hood, Powertools uses the AWS SDK for JavaScript v3. Change the SDK configuration by passing a clientConfig object.

The following sample sets the region to eu-west-1:

import { DynamoDBPersistenceLayer } from '@aws-lambda-powertools/idempotency/dynamodb';

const persistenceStore = new DynamoDBPersistenceLayer({
  tableName: 'IdempotencyTable',
  clientConfig: {
    region: 'eu-west-1',
  },
});

If you are using your own DynamoDB client, you can pass it to the persistence layer:

import { DynamoDBPersistenceLayer } from '@aws-lambda-powertools/idempotency/dynamodb';
import { DynamoDBClient } from '@aws-sdk/client-dynamodb';

const ddbClient = new DynamoDBClient({ region: 'eu-west-1' });

const dynamoDBPersistenceLayer = new DynamoDBPersistenceLayer({
  tableName: 'IdempotencyTable',
  awsSdkV3Client: ddbClient,
});

Conclusion

Making your Lambda functions idempotent can be a challenge and, if not done correctly, can lead to duplicate data, inconsistencies, and a bad customer experience. This post shows how to use Powertools for AWS Lambda (TypeScript) to process your critical transactions only once when using AWS Lambda.

For more details on the Powertools idempotency feature and its configuration options, see the full documentation.

For more serverless learning resources, visit Serverless Land.

Building resilient serverless applications using chaos engineering

Post Syndicated from Marcia Villalba original https://aws.amazon.com/blogs/compute/building-resilient-serverless-applications-using-chaos-engineering/

This post is written by Suranjan Choudhury (Head of TME and ITeS SA) and Anil Sharma (Sr PSA, Migration).

Chaos engineering is the process of stressing an application in testing or production environments by creating disruptive events, such as outages, observing how the system responds, and implementing improvements. Chaos engineering helps you create the real-world conditions needed to uncover hidden issues and performance bottlenecks that are challenging to find in distributed applications.

You can build resilient distributed serverless applications using AWS Lambda and test Lambda functions in real-world operating conditions using chaos engineering. This blog shows an approach to injecting chaos into Lambda functions without changing the Lambda function code. It uses the AWS Fault Injection Simulator (FIS) service to create experiments that inject disruptions into Lambda-based serverless applications.

AWS FIS is a managed service that performs fault injection experiments on your AWS workloads. AWS FIS is used to set up and run fault experiments that simulate real-world conditions to discover application issues that are difficult to find otherwise. You can improve application resilience and performance using results from FIS experiments.

The sample code in this blog introduces random faults to existing Lambda functions, like an increase in response times (latency) or random failures. You can observe application behavior under introduced chaos and make improvements to the application.

Approaches to inject chaos in Lambda functions

AWS FIS currently does not support injecting faults in Lambda functions. However, there are two main approaches to inject chaos in Lambda functions: using external libraries or using Lambda layers.

Developers have created libraries to introduce failure conditions to Lambda functions, such as chaos_lambda and failure-Lambda. These libraries allow developers to inject elements of chaos into Python and Node.js Lambda functions. To inject chaos using these libraries, developers must decorate the existing Lambda function’s code. Decorator functions wrap the existing Lambda function, adding chaos at runtime. This approach requires developers to change the existing Lambda functions.

You can also use Lambda layers to inject chaos, requiring no change to the function code, as the fault injection is separated. Since the Lambda layer is deployed separately, you can independently change the element of chaos, like latency in response or failure of the Lambda function. This blog post discusses this approach.

Injecting chaos in Lambda functions using Lambda layers

A Lambda layer is a .zip file archive that contains supplementary code or data. Layers usually contain library dependencies, a custom runtime, or configuration files. This blog creates an FIS experiment that uses Lambda layers to inject disruptions in existing Lambda functions for Java, Node.js, and Python runtimes.

The Lambda layer contains the fault injection code. It is invoked prior to invocation of the Lambda function and injects random latency or errors. Injecting random latency simulates real world unpredictable conditions. The Java, Node.js, and Python chaos injection layers provided are generic and reusable. You can use them to inject chaos in your Lambda functions.

The Chaos Injection Lambda Layers

Java Lambda Layer for Chaos Injection


The chaos injection layer for Java Lambda functions uses the JAVA_TOOL_OPTIONS environment variable. This environment variable allows specifying the initialization of tools, specifically the launching of native or Java programming language agents. The JAVA_TOOL_OPTIONS has a javaagent parameter that points to the chaos injection layer. This layer uses Java’s premain method and the Byte Buddy library for modifying the Lambda function’s Java class during runtime.

When the Lambda function is invoked, the JVM uses the class specified with the javaagent parameter and invokes its premain method before the Lambda function’s handler invocation. The Java premain method injects chaos before Lambda runs.

The FIS experiment adds the layer association and the JAVA_TOOL_OPTIONS environment variable to the Lambda function.

Python and Node.js Lambda Layer for Chaos Injection


When injecting chaos in Python and Node.js functions, the Lambda function’s handler is replaced with a function in the respective layers by the FIS aws:ssm:start-automation-execution action. The automation, which is an SSM document, saves the original Lambda function’s handler in AWS Systems Manager Parameter Store, so that the changes can be rolled back once the experiment is finished.

The layer function contains the logic to inject chaos. At runtime, the layer function is invoked, injecting chaos in the Lambda function. The layer function in turn invokes the Lambda function’s original handler, so that the functionality is fulfilled.

The result in all runtimes (Java, Python, or Node.js) is invocation of the original Lambda function with latency or failure injected. The observed changes are random latency or failure injected by the layer.
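
As a rough sketch of what such a layer function can look like for the Node.js runtime, the handler below injects random latency or failure and then delegates to the original handler. The ORIGINAL_HANDLER environment variable, the failure rate, and the maximum latency are illustrative assumptions and not the exact implementation in the sample repository:

// chaos-layer/handler.ts – hypothetical wrapper installed by the FIS experiment
// ORIGINAL_HANDLER is assumed to hold the saved "file.export" of the real handler
const MAX_LATENCY_MS = 2000;
const FAILURE_RATE = 0.2;

export const handler = async (event: unknown, context: unknown): Promise<unknown> => {
  // Inject random latency to simulate slow dependencies
  const latency = Math.floor(Math.random() * MAX_LATENCY_MS);
  console.log(`Chaos layer: injecting ${latency}ms of latency`);
  await new Promise((resolve) => setTimeout(resolve, latency));

  // Occasionally fail outright to simulate errors
  if (Math.random() < FAILURE_RATE) {
    throw new Error('Chaos layer: injected failure');
  }

  // Delegate to the original handler saved by the SSM automation
  const [moduleName, exportName] = (process.env.ORIGINAL_HANDLER ?? 'index.handler').split('.');
  const original = await import(`/var/task/${moduleName}.js`);
  return original[exportName](event, context);
};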

Once the experiment is completed, an SSM document is provided. This rolls back the layer’s association to the Lambda function and removes the environment variable, in the case of the Java runtime.

Sample FIS experiments using SSM and Lambda layers

The sample code includes Lambda layers for the Python, Node.js, and Java runtimes, along with sample Lambda functions for each runtime.

The sample deploys the Lambda layers and Lambda functions, the FIS experiment template, the AWS Identity and Access Management (IAM) roles needed to run the experiment, and the AWS Systems Manager (SSM) documents. An AWS CloudFormation template is provided for deployment.

Step 1: Complete the prerequisites

  • To deploy the sample code, clone the repository locally:
    git clone https://github.com/aws-samples/chaosinjection-lambda-samples.git
  • Complete the prerequisites documented here.

Step 2: Deploy using AWS CloudFormation

The CloudFormation template provided with this blog deploys the sample code. Execute runCfn.sh to deploy it.

When this is complete, it returns the StackId that CloudFormation created.

Step 3: Run the chaos injection experiment

By default, the experiment is configured to inject chaos in the Java sample Lambda function. To change it to Python or Node.js Lambda functions, edit the experiment template and configure it to inject chaos using steps from here.

Step 4: Start the experiment

From the FIS Console, choose Start experiment.


Wait until the experiment state changes to “Completed”.

Step 5: Run your test

At this stage, you can inject chaos into your Lambda function. Run the Lambda functions and observe their behavior.

1. Invoke the Lambda function using the command below:

aws lambda invoke --function-name NodeChaosInjectionExampleFn out --log-type Tail --query 'LogResult' --output text | base64 -d

  2. The CLI command's output displays the logs created by the Lambda layer, showing the latency introduced in this invocation.

In this example, the output shows that the Lambda layer injected 1799ms of random latency to the function.

The experiment injects random latency or failure in the Lambda function. Running the Lambda function again results in a different latency or failure. At this stage, you can test the application, and observe its behavior under conditions that may occur in the real world, like an increase in latency or Lambda function’s failure.

Step 6: Roll back the experiment

To roll back the experiment, run the SSM document for rollback. This rolls back the Lambda function to the state before chaos injection. Run this command:

aws ssm start-automation-execution \
--document-name "InjectLambdaChaos-Rollback" \
--document-version "\$DEFAULT" \
--parameters \
'{"FunctionName":["FunctionName"],"LayerArn":["LayerArn"],"assumeRole":["RoleARN"]}' \
--region eu-west-2

Cleaning up

To avoid incurring future charges, clean up the resources created by the CloudFormation template by running the following CLI command. Update the stack name to the one you provided when creating the stack.

aws cloudformation delete-stack --stack-name myChaosStack

Using FIS experiment results

You can use FIS experiment results to validate expected system behavior. An example of expected behavior is: “If application latency increases by 10%, there is less than a 1% increase in sign-in failures.” After the experiment is completed, evaluate whether the application resiliency aligns with your business and technical expectations.

Conclusion

This blog explains an approach for testing reliability and resilience in Lambda functions using chaos engineering. This approach allows you to inject chaos in Lambda functions without changing the Lambda function code, with clear segregation of chaos injection and business logic. It provides a way for developers to focus on building business functionality using Lambda functions.

The Lambda layers that inject chaos can be developed and managed separately. This approach uses AWS FIS to run experiments that inject chaos using Lambda layers and test serverless application’s performance and resiliency. Using the insights from the FIS experiment, you can find, fix, or document risks that surface in the application while testing.

For more serverless learning resources, visit Serverless Land.

Best Practices for Writing Step Functions Terraform Projects

Post Syndicated from Patrick Guha original https://aws.amazon.com/blogs/devops/best-practices-for-writing-step-functions-terraform-projects/

Terraform by HashiCorp is one of the most popular infrastructure-as-code (IaC) platforms. AWS Step Functions is a visual workflow service that helps developers use AWS services to build distributed applications, automate processes, orchestrate microservices, and create data and machine learning (ML) pipelines. In this blog, we showcase best practices for users leveraging Terraform to deploy workflows, also known as Step Functions state machines. We will create a state machine using Workflow Studio for AWS Step Functions, deploy the state machine with Terraform, and introduce best operating practices on topics such as project structure, modules, parameter substitution, and remote state.

We recommend that you have a working understanding of both Terraform and Step Functions before going through this blog. If you are brand new to Step Functions and/or Terraform, please visit the Introduction to Terraform on AWS Workshop and the Terraform option in the Managing State Machines with Infrastructure as Code section of The AWS Step Functions Workshop to learn more.

Step Functions and Terraform Project Structure

One of the most important parts of any software project is its structure. It must be clear and well-organized for yourself or any member of your team to pick up and start coding efficiently. A Step Functions project using Terraform can potentially have many moving parts and components, so it is especially important to modularize and label wherever possible. Let’s take a look at a project structure that will allow for modularization, re-usability, and extensibility:

mkdir sfn-tf-example
cd sfn-tf-example
mkdir -p -- statemachine modules functions/first-function/src
touch main.tf outputs.tf variables.tf .gitignore functions/first-function/src/lambda.py
tree

Before moving forward, let’s analyze the directory, subdirectories, and files created above:

  • /statemachine will hold our Amazon States Language (ASL) JSON code describing the Step Functions state machine definition. This is where the orchestration logic will reside, so it is prudent to keep it separated from the infrastructure code. If you are deploying multiple state machines in your project, each definition will have its own JSON file. If you prefer, you can specify separate folders for each state machine to further modularize and isolate the logic.
  • The /functions subdirectory includes the actual code for the AWS Lambda functions used in our state machine. Keeping this code here is much easier to read than writing it inline in our main.tf file.
  • The last subdirectory is /modules. Terraform modules are higher-level abstractions that encapsulate related resources so they can be reused across your architecture. However, do not fall into the trap of making a custom module for everything. Doing so will make your code harder to maintain, and AWS provider resources will often suffice. There are also very popular modules that you can use from the Terraform Registry, such as Terraform AWS modules. Whenever possible, re-use modules to avoid code duplication in your project.
  • The remaining files in the root of the project are common to all Terraform projects. There are going to be hidden files created by your Terraform project after running terraform init, so we will include a .gitignore. What you include in .gitignore is largely dependent on your codebase and what your tools silently create in the background. In a later section, we will explicitly call out *.tfstate files in our .gitignore, and go over best practices for managing Terraform state securely and remotely.

Initial Code and Project Setup

We are going to create a simple Step Functions state machine that will only execute a single Lambda function. However, we will need to create the Lambda function that the state machine will reference. We first need to create the Lambda function code and save it in the directory structure and file mentioned above: functions/first-function/src/lambda.py.

import boto3

def lambda_handler(event, context):
    # Minimal function for demo purposes
    return True

In Terraform, the main configuration file is named main.tf. This is the file that the Terraform CLI will look for in the local directory. Although you can break down your template into multiple .tf files, main.tf must be one of them. In this file, we will define the required providers and their minimum version, along with the resource definition of our template. In the example below, we define the minimum resources needed for a simple state machine that only executes a Lambda function. We define the two AWS Identity and Access Management (IAM) roles that our Lambda function and state machine will use, respectively. We define a data resource that zips the Lambda function code, which is then used in the Lambda function definition. Also notice that we use the aws_iam_policy_document data source throughout. Using the official IAM policy document means both your integrated development environment (IDE) and Terraform can see if your policy is malformed before running terraform apply. Finally, we define an Amazon CloudWatch Log group that will be used by the Lambda function to store its execution logs.

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~>4.0"
    }
  }
}

provider "aws" {}

provider "random" {}

data "aws_caller_identity" "current_account" {}

data "aws_region" "current_region" {}

resource "random_string" "random" {
  length  = 4
  special = false
}

data "aws_iam_policy_document" "lambda_assume_role_policy" {
  statement {
    effect = "Allow"

    principals {
      type        = "Service"
      identifiers = ["lambda.amazonaws.com"]
    }

    actions = [
      "sts:AssumeRole",
    ]
  }
}

resource "aws_iam_role" "function_role" {
  assume_role_policy  = data.aws_iam_policy_document.lambda_assume_role_policy.json
  managed_policy_arns = ["arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"]
}

# Create the function
data "archive_file" "lambda" {
  type        = "zip"
  source_file = "functions/first-function/src/lambda.py"
  output_path = "functions/first-function/src/lambda.zip"
}

resource "aws_kms_key" "log_group_key" {}

resource "aws_kms_key_policy" "log_group_key_policy" {
  key_id = aws_kms_key.log_group_key.id
  policy = jsonencode({
    Id = "log_group_key_policy"
    Statement = [
      {
        Action = "kms:*"
        Effect = "Allow"
        Principal = {
          AWS = "arn:aws:iam::${data.aws_caller_identity.current_account.account_id}:root"
        }

        Resource = "*"
        Sid      = "Enable IAM User Permissions"
      },
      {
        Effect = "Allow",
        Principal = {
          Service : "logs.${data.aws_region.current_region.name}.amazonaws.com"
        },
        Action = [
          "kms:Encrypt*",
          "kms:Decrypt*",
          "kms:ReEncrypt*",
          "kms:GenerateDataKey*",
          "kms:Describe*"
        ],
        Resource = "*"
      }
    ]
    Version = "2012-10-17"
  })
}

resource "aws_lambda_function" "test_lambda" {
  function_name    = "HelloFunction-${random_string.random.id}"
  role             = aws_iam_role.function_role.arn
  handler          = "lambda.lambda_handler"
  runtime          = "python3.9"
  filename         = "functions/first-function/src/lambda.zip"
  source_code_hash = data.archive_file.lambda.output_base64sha256
}

# Explicitly create the function’s log group to set retention and allow auto-cleanup
resource "aws_cloudwatch_log_group" "lambda_function_log" {
  retention_in_days = 1
  name              = "/aws/lambda/${aws_lambda_function.test_lambda.function_name}"
  kms_key_id        = aws_kms_key.log_group_key.arn
}

# Create an IAM role for the Step Functions state machine
data "aws_iam_policy_document" "state_machine_assume_role_policy" {
  statement {
    effect = "Allow"

    principals {
      type        = "Service"
      identifiers = ["states.amazonaws.com"]
    }

    actions = [
      "sts:AssumeRole",
    ]
  }
}

resource "aws_iam_role" "StateMachineRole" {
  name               = "StepFunctions-Terraform-Role-${random_string.random.id}"
  assume_role_policy = data.aws_iam_policy_document.state_machine_assume_role_policy.json
}

data "aws_iam_policy_document" "state_machine_role_policy" {
  statement {
    effect = "Allow"

    actions = [
      "logs:CreateLogStream",
      "logs:PutLogEvents",
      "logs:DescribeLogGroups"
    ]

    resources = ["${aws_cloudwatch_log_group.MySFNLogGroup.arn}:*"]
  }

  statement {
    effect = "Allow"
    actions = [
      "cloudwatch:PutMetricData",
      "logs:CreateLogDelivery",
      "logs:GetLogDelivery",
      "logs:UpdateLogDelivery",
      "logs:DeleteLogDelivery",
      "logs:ListLogDeliveries",
      "logs:PutResourcePolicy",
      "logs:DescribeResourcePolicies",
    ]
    resources = ["*"]
  }

  statement {
    effect = "Allow"

    actions = [
      "lambda:InvokeFunction"
    ]

    resources = ["${aws_lambda_function.test_lambda.arn}"]
  }

}

# Create an IAM policy for the Step Functions state machine
resource "aws_iam_role_policy" "StateMachinePolicy" {
  role   = aws_iam_role.StateMachineRole.id
  policy = data.aws_iam_policy_document.state_machine_role_policy.json
}

# Create a Log group for the state machine
resource "aws_cloudwatch_log_group" "MySFNLogGroup" {
  name_prefix       = "/aws/vendedlogs/states/MyStateMachine-"
  retention_in_days = 1
  kms_key_id        = aws_kms_key.log_group_key.arn
}

Workflow Studio and Terraform Integration

It is important to understand the recommended steps given the different tools we have available for creating Step Functions state machines. You should use a combination of Workflow Studio and local development with Terraform. This workflow assumes you will define all resources for your application within the same Terraform project, and that you will be leveraging Terraform for managing your AWS resources.


Figure 1 – Workflow for creating Step Functions state machine via Terraform

  1. You will write the Terraform definition for any resources you intend to call with your state machine, such as Lambda functions, Amazon Simple Storage Service (Amazon S3) buckets, or Amazon DynamoDB tables, and deploy them using the terraform apply command. Doing this prior to using Workflow Studio will be useful in designing the first version of the state machine. You can define additional resources after importing the state machine into your local Terraform project.
  2. You can use Workflow Studio to visually design the first version of the state machine. Given that you should have created the necessary resources already, you can drag and drop all of the actions and states, link them, and see how they look. Finally, you can execute the state machine for testing purposes.
  3. Once your initial design is ready, you will export the ASL file and save it in your Terraform project. You can use the Terraform resource type aws_sfn_state_machine and reference the saved ASL file in the definition field.
  4. You will then need to parametrize the ASL file given that Terraform will dynamically name the resources, and the Amazon Resource Name (ARN) may eventually change. You do not want to hardcode an ARN in your ASL file, as this will make updating and refactoring your code more difficult.
  5. Finally, you deploy the state machine via Terraform by running terraform apply.

Simple changes should be made directly in the parametrized ASL file in your Terraform project instead of going back to Workflow Studio. Having the ASL file versioned as part of your project ensures that no manual changes break the state machine. Even if there is a breaking change, you can easily roll back to a previous version. One caveat to this is if you are making major changes to the state machine. In this case, taking advantage of Workflow Studio in the console is preferable.

However, you will most likely want to continue seeing a visual representation of the state machine while developing locally. The good news is that you have another option directly integrated into Visual Studio Code (VS Code) that visually renders the state machine, similar to Workflow Studio. This functionality is part of the AWS Toolkit for VS Code. You can learn more about the state machine integration with the AWS Toolkit for VS Code here. Below is an example of a parametrized ASL file and its rendered visualization in VS Code.


Figure 2 – Step Functions state machine displayed visually in VS Code

Parameter Substitution

In the Terraform template, when you define the Step Functions state machine, you can either include the definition in the template or in an external file. Leaving the definition in the template can cause the template to be less readable and difficult to manage. As a best practice, it is recommended to keep the definition of the state machine in a separate file. This raises the question of how to pass parameters to the state machine. In order to do this, you can use the templatefile function of Terraform. The templatefile function reads a file and renders its content with the supplied set of variables. As shown in the code snippet below, we will use the templatefile function to render the state machine definition file with the Lambda function ARN and any other parameters to pass to the state machine.

resource "aws_sfn_state_machine" "sfn_state_machine" {
  name     = "MyStateMachine-${random_string.random.id}"
  role_arn = aws_iam_role.StateMachineRole.arn
  definition = templatefile("${path.module}/statemachine/statemachine.asl.json", {
    ProcessingLambda = aws_lambda_function.test_lambda.arn
    }
  )
  logging_configuration {
    log_destination        = "${aws_cloudwatch_log_group.MySFNLogGroup.arn}:*"
    include_execution_data = true
    level                  = "ALL"
  }
}

Inside the state machine definition, you have to specify a string template using the interpolation sequences delimited with ${}. Similar to the code snippet below, you will define the state machine with the variable name that will be passed by the templatefile function.

"Lambda Invoke": {
    "Type": "Task",
    "Resource": "arn:aws:states:::lambda:invoke",
    "Parameters": {
        "Payload.$": "$",
        "FunctionName": "${ProcessingLambda}"
    },
    "End": true
}

After the templatefile function runs, it will replace the variable ${ProcessingLambda} with the actual Lambda function ARN generated when the template is deployed.

Remote Terraform State Management

Every time you run Terraform, it stores information about the managed infrastructure and configuration in a state file. By default, Terraform creates the state file called terraform.tfstate in the local directory. As mentioned earlier, you will want to include any .tfstate files in your .gitignore file. This will ensure you do not commit it to source control, which could potentially expose secrets and would most likely lead to errors in state. If you accidentally delete this local file, Terraform cannot track the infrastructure that was previously created. In that case, if you run terraform apply on an updated configuration, Terraform will create it from scratch, which will lead to conflicts. It is recommended that you store the Terraform state remotely in secure storage to enable versioning, encryption, and sharing. Terraform supports storing state in S3 buckets by using the backend configuration block. In order to configure Terraform to write the state file to an S3 bucket, you need to specify the bucket name, the region, and the key name.

It is also recommended that you enable versioning in the S3 bucket and MFA delete to protect the state file from accidental deletion. In addition, you need to make sure that Terraform has the right IAM permissions on the target S3 bucket. In case you have multiple developers working with the same infrastructure simultaneously, Terraform can also use state locking to prevent concurrent runs against the same state. You can use a DynamoDB table to control locking. The DynamoDB table you use must have a partition key named LockID with type String, and Terraform must have the right IAM permissions on the table.

terraform {
    backend "s3" {
        bucket         = "mybucket"
        key            = "path/to/state/file"
        region         = "us-east-1"
        # Enforce HTTPS-only access via a bucket policy on the state bucket
        encrypt        = true
        dynamodb_table = "Table-Name"
    }
}

With this remote state configuration, you will maintain the state securely stored in S3. With every change you apply to your infrastructure, Terraform will automatically pull the latest state from the S3 bucket, lock it using the DynamoDB table, apply the changes, push the latest state again to the S3 bucket and then release the lock.

Cleanup

If you followed along and deployed resources by running terraform apply, such as the Lambda function, the Step Functions state machine, the S3 bucket for backend state storage, or any of the other associated resources, run terraform destroy to tear them down and avoid incurring charges on your AWS account.

Conclusion

In conclusion, this blog provides a comprehensive guide to leveraging Terraform for deploying AWS Step Functions state machines. We discussed the importance of a well-structured project, initial code setup, integration between Workflow Studio and Terraform, parameter substitution, and remote state management. By following these best practices, developers can create and manage their state machines more effectively while maintaining clean, modular, and reusable code. Embracing infrastructure-as-code and using the right tools, such as Workflow Studio, VS Code, and Terraform, will enable you to build scalable and maintainable distributed applications, automate processes, orchestrate microservices, and create data and ML pipelines with AWS Step Functions.

If you would like to learn more about using Step Functions with Terraform, please check out the following patterns and workflows on Serverless Land and view the Step Functions Developer Guide.

About the authors

Ahmad Aboushady

Ahmad Aboushady is a Senior Technical Account Manager at AWS based in UAE. He works with Enterprise Support customers across the region to help them optimize their workloads on AWS and make the best out of their cloud journey.

Patrick Guha

Patrick Guha is a Solutions Architect at AWS based in Austin, TX. He supports non-profit, research customers focused on genomics, healthcare, and high-performance compute workloads in the cloud. Patrick has a BS in Electrical and Computer Engineering, and is currently working towards an MS in Engineering Management.

Aryam Gutierrez

Aryam Gutierrez is a Senior Partner Solutions Architect at AWS based in Madrid. He supports strategic partners to either build highly-scalable solutions or navigate through the various partner programs to differentiate their business, with the ultimate goal of growing business with AWS.

Building a secure webhook forwarder using an AWS Lambda extension and Tailscale

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/building-a-secure-webhook-forwarder-using-an-aws-lambda-extension-and-tailscale/

This post is written by Duncan Parsons, Enterprise Architect, and Simon Kok, Sr. Consultant.

Webhooks can help developers to integrate with third-party systems or devices when building event based architectures.

However, there are times when control over the target’s network environment is restricted or targets change IP addresses. Additionally, some endpoints lack sufficient security hardening, requiring a reverse proxy and additional security checks to inbound traffic from the internet.

It can be complex to set up and maintain highly available secure reverse proxies to inspect and send events to these backend systems for multiple endpoints. This blog shows how to use AWS Lambda extensions to build a cloud native serverless webhook forwarder to meet this need with minimal maintenance and running costs.

The custom Lambda extension forms a secure WireGuard VPN connection to a target in a private subnet behind a stateful firewall and NAT Gateway. This example sets up a public HTTPS endpoint to receive events, selectively filters, and proxies requests over the WireGuard connection. This example uses a serverless architecture to minimize maintenance overhead and running costs.

Example overview

The sample code to deploy the following architecture is available on GitHub. This example uses AWS CodePipeline and AWS CodeBuild to build the code artifacts and deploys this using AWS CloudFormation via the AWS Cloud Development Kit (CDK). It uses Amazon API Gateway to manage the HTTPS endpoint and the Lambda service to perform the application functions. AWS Secrets Manager stores the credentials for Tailscale.

To orchestrate the WireGuard connections, you can use a free account on the Tailscale service. Alternatively, set up your own coordination layer using the open source Headscale example.

Reference architecture

  1. The event producer sends an HTTP request to the API Gateway URL.
  2. API Gateway proxies the request to the Lambda authorizer function. It returns an authorization decision based on the source IP of the request.
  3. API Gateway proxies the request to the Secure Webhook Forwarder Lambda function running the Tailscale extension.
  4. On initial invocation, the Lambda extension retrieves the Tailscale Auth key from Secrets Manager and uses that to establish a connection to the appropriate Tailscale network. The extension then exposes the connection as a local SOCKS5 port to the Lambda function.
  5. The Lambda extension maintains a connection to the Tailscale network via the Tailscale coordination server. Through this coordination server, all other devices on the network can be made aware of the running Lambda function and vice versa. The Lambda function is configured to refuse incoming WireGuard connections – read more about the --shields-up command here.
  6. Once the connection to the Tailscale network is established, the Secure Webhook Forwarder Lambda function proxies the request over the internet to the target using a WireGuard connection. The connection is established via the Tailscale Coordination server, traversing the NAT Gateway to reach the Amazon EC2 instance inside a private subnet. The EC2 instance responds with an HTML response from a local Python webserver.
  7. On deployment and every 60 days, Secrets Manager rotates the Tailscale Auth Key automatically. It uses the Credential Rotation Lambda function, which retrieves the OAuth Credentials from Secrets Manager and uses these to create a new Tailscale Auth Key using the Tailscale API and stores the new key in Secrets Manager.

To separate the network connection layer logically from the application code layer, a Lambda extension encapsulates the code required to form the Tailscale VPN connection and make this available to the Lambda function application code via a local SOCK5 port. You can reuse this connectivity across multiple Lambda functions for numerous use cases by attaching the extension.

To deploy the example, follow the instructions in the repository’s README. Deployment may take 20–30 minutes.

How the Lambda extension works

The Lambda extension creates the network tunnel and exposes it to the Lambda function as a SOCKS5 server running on port 1055. There are three stages of the Lambda lifecycle: init, invoke, and shutdown.
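
To give a feel for how the function code might use that local SOCKS5 port, here is a hedged TypeScript sketch. It assumes the widely used socks-proxy-agent package and a placeholder target hostname; the forwarder implementation in the sample repository may differ:

import * as http from 'node:http';
import { SocksProxyAgent } from 'socks-proxy-agent'; // assumed package; the sample may use a different client

// Route outbound requests through the WireGuard tunnel exposed by the extension on port 1055
const agent = new SocksProxyAgent('socks5://127.0.0.1:1055');

export const handler = async (event: { body?: string }): Promise<{ statusCode: number; body: string }> =>
  new Promise((resolve, reject) => {
    const req = http.request(
      // 'target.internal.example' is a placeholder for the target reachable over the tunnel
      { host: 'target.internal.example', path: '/webhook', method: 'POST', agent },
      (res) => {
        let data = '';
        res.on('data', (chunk) => (data += chunk));
        res.on('end', () => resolve({ statusCode: res.statusCode ?? 502, body: data }));
      },
    );
    req.on('error', reject);
    req.end(event.body ?? '');
  });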

Lambda extension deep dive

With the Tailscale Lambda extension, the majority of the work is performed in the init phase. The webhook forwarder Lambda function has the following lifecycle:

  1. Init phase:
    1. Extension Init – Extension connects to Tailscale network and exposes WireGuard tunnel via local SOCKS5 port.
    2. Runtime Init – Bootstraps the Node.js runtime.
    3. Function Init – Imports required Node.js modules.
  2. Invoke phase:
    1. The extension intentionally doesn’t register to receive any invoke events. The Tailscale network is kept online until the function is instructed to shut down.
    2. The Node.js handler function receives the request from API Gateway in 2.0 format which it then proxies to the SOCKS5 port to send the request over the WireGuard connection to the target. The invoke phase ends once the function receives a response from the target EC2 instance and optionally returns that to API Gateway for onward forwarding to the original event source.
  3. Shutdown phase:
    1. The extension logs out of the Tailscale network and logs the receipt of the shutdown event.
    2. The extension shuts down along with the Lambda function’s execution environment.

Extension file structure

The extension code exists as a zip file along with some metadata set at the time the extension is published as an AWS Lambda layer. The zip file holds three folders:

  1. /extensions – contains the extension code and is the directory that the Lambda service looks for code to run when the Lambda extension is initialized.
  2. /bin – includes the executable dependencies. For example, within the tsextension.sh script, it runs the tailscale, tailscaled, curl, jq, and OpenSSL binaries.
  3. /ssl – stores the certificate authority (CA) trust store (containing the root CA certificates that are trusted to connect with). OpenSSL uses these to verify SSL and TLS certificates.

The tsextension.sh file is the core of the extension. Most of the code is run in the Lambda function’s init phase. The extension code is split into three stages. The first two stages relate to the Lambda function init lifecycle phase, with the third stage covering invoke and shutdown lifecycle phases.

Extension phase 1: Initialization

In this phase, the extension initializes the Tailscale connection and waits for the connection to become available.

The first step retrieves the Tailscale auth key from Secrets Manager. To keep the size of the extension small, the extension uses a series of Bash commands instead of packaging the AWS CLI to make the Sigv4 requests to Secrets Manager.

The temporary credentials of the Lambda function are made available as environment variables by the Lambda execution environment, which the extension uses to authenticate the Sigv4 request. The IAM permissions to retrieve the secret are added to the Lambda execution role by the CDK code. To optimize security, the secret’s policy restricts read permissions to (1) this Lambda function and (2) the Lambda function that rotates it every 60 days.

The Tailscale agent starts using the Tailscale Auth key. Both the tailscaled and tailscale binaries start in userspace networking mode, as each Lambda function runs in its own container on its own virtual machine. More information about userspace networking mode can be found in the Tailscale documentation.

With the Tailscale processes running, the process must wait for the connection to the Tailnet (the name of a Tailscale network) to be established and for the SOCKS5 port to be available to accept connections. To accomplish this, the extension simply waits for the ‘tailscale status’ command not to return a message with ‘stopped’ in it and then moves on to phase 2.

Extension phase 2: Registration

The extension now registers itself as initialized with the Lambda service. This is performed by sending a POST request to the Lambda service extension API with the events that should be forwarded to the extension.

The runtime init starts next (this initializes the Node.js runtime of the Lambda function itself), followed by the function init (the code outside the event handler). In the case of the Tailscale Lambda extension, it only registers the extension to receive ‘SHUTDOWN’ events. Once the SOCKS5 service is up and available, there is no action for the extension to take on each subsequent invocation of the function.

Extension phase 3: Event processing

To signal the extension is ready to receive an event, a GET request is made to the ‘next’ endpoint of the Lambda runtime API. This blocks the extension script execution until a SHUTDOWN event is sent (as that is the only event registered for this Lambda extension).

When this is sent, the extension logs out of the Tailscale service and the Lambda function shuts down. If INVOKE events are also registered, the extension processes the event. It then signals back to the Lambda runtime API that the extension is ready to receive another event by sending a GET request to the ‘next’ endpoint.
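
For illustration, the same register-and-wait flow can be expressed against the documented Lambda Extensions API in a few lines of TypeScript (assuming Node.js 18+ for the global fetch). The sample's extension implements this in Bash inside tsextension.sh; this sketch is not part of the repository:

// Illustrative TypeScript equivalent of the register/next flow (the sample implements this in Bash)
const API = `http://${process.env.AWS_LAMBDA_RUNTIME_API}/2020-01-01/extension`;

async function run(): Promise<void> {
  // Register the extension and subscribe only to SHUTDOWN events
  const registerRes = await fetch(`${API}/register`, {
    method: 'POST',
    headers: { 'Lambda-Extension-Name': 'tsextension.sh' },
    body: JSON.stringify({ events: ['SHUTDOWN'] }),
  });
  const extensionId = registerRes.headers.get('lambda-extension-identifier') ?? '';

  // Block on the 'next' endpoint until the Lambda service delivers an event
  for (;;) {
    const eventRes = await fetch(`${API}/event/next`, {
      headers: { 'Lambda-Extension-Identifier': extensionId },
    });
    const event = (await eventRes.json()) as { eventType: string };
    if (event.eventType === 'SHUTDOWN') {
      // Log out of the Tailscale network here, then exit
      break;
    }
  }
}

run().catch(console.error);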

Access control

A sample Lambda authorizer is included in this example. Note that it is recommended to use the AWS Web Application Firewall service to add additional protection to your public API endpoint, as well as hardening the sample code for production use.

For the purposes of this demo, the implementation demonstrates a basic source IP CIDR range restriction, though you can use any property of the request to base authorization decisions on. Read more about Lambda authorizers for HTTP APIs here. To use the source IP restriction, update the CIDR range of the IPs you want to accept on the Lambda authorizer function AUTHD_SOURCE_CIDR environment variable.
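
As a hedged illustration of what such an authorizer can look like (the sample repository's implementation may differ), the following TypeScript handler uses the HTTP API simple response format and the AUTHD_SOURCE_CIDR environment variable mentioned above:

// Illustrative HTTP API Lambda authorizer using the simple response format
const ipToInt = (ip: string): number =>
  ip.split('.').reduce((acc, octet) => (acc << 8) + parseInt(octet, 10), 0) >>> 0;

const inCidr = (ip: string, cidr: string): boolean => {
  const [base, bits] = cidr.split('/');
  const mask = bits === '0' ? 0 : (~0 << (32 - parseInt(bits, 10))) >>> 0;
  return (ipToInt(ip) & mask) === (ipToInt(base) & mask);
};

export const handler = async (event: {
  requestContext: { http: { sourceIp: string } };
}): Promise<{ isAuthorized: boolean }> => {
  const allowedCidr = process.env.AUTHD_SOURCE_CIDR ?? '0.0.0.0/0';
  const sourceIp = event.requestContext.http.sourceIp;
  return { isAuthorized: inCidr(sourceIp, allowedCidr) };
};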

Costs

You are charged for all the resources used by this project. The NAT Gateway and EC2 instance are destroyed by the pipeline once the final pipeline step is manually released to minimize costs. The AWS Lambda Power Tuning tool can help find the balance between performance and cost while it polls the demo EC2 instance through the Tailscale network.

The following result shows that 256 MB of memory is the optimum for the lowest cost of execution. The cost is estimated at under $3 for 1 million requests per month, once the demo stack is destroyed.

Power Tuning results

Conclusion

Using Lambda extensions can open up a wide range of options to extend the capability of serverless architectures. This blog shows a Lambda extension that creates a secure VPN tunnel using the WireGuard protocol and the Tailscale service to proxy events through to an EC2 instance inaccessible from the internet.

This is set up to minimize operational overhead with an automated deployment pipeline. A Lambda authorizer secures the endpoint, providing the ability to implement custom logic on the basis of the request contents and context.

For more serverless learning resources, visit Serverless Land.

Build streaming data pipelines with Amazon MSK Serverless and IAM authentication

Post Syndicated from Marvin Gersho original https://aws.amazon.com/blogs/big-data/build-streaming-data-pipelines-with-amazon-msk-serverless-and-iam-authentication/

Currently, MSK Serverless only directly supports IAM for authentication using Java. This example shows how to use this mechanism. Additionally, it provides a pattern for creating a proxy that can easily be integrated into solutions built in languages other than Java.

The rising trend in today’s tech landscape is the use of streaming data and event-oriented structures. They are being applied in numerous ways, including monitoring website traffic, tracking industrial Internet of Things (IoT) devices, analyzing video game player behavior, and managing data for cutting-edge analytics systems.

Apache Kafka, a top-tier open-source tool, is making waves in this domain. It’s widely adopted by numerous users for building fast and efficient data pipelines, analyzing streaming data, merging data from different sources, and supporting essential applications.

Amazon’s serverless Apache Kafka offering, Amazon Managed Streaming for Apache Kafka (Amazon MSK) Serverless, is attracting a lot of interest. It’s appreciated for its user-friendly approach, ability to scale automatically, and cost-saving benefits over other Kafka solutions. However, a hurdle encountered by many users is the requirement of MSK Serverless to use AWS Identity and Access Management (IAM) access control. At the time of writing, the Amazon MSK library for IAM is exclusive to Kafka libraries in Java, creating a challenge for users of other programming languages. In this post, we aim to address this issue and present how you can use Amazon API Gateway and AWS Lambda to navigate around this obstacle.

SASL/SCRAM authentication vs. IAM authentication

Compared to the traditional authentication methods like Salted Challenge Response Authentication Mechanism (SCRAM), the IAM extension into Apache Kafka through MSK Serverless provides a lot of benefits. Before we delve into those, it’s important to understand what SASL/SCRAM authentication is. Essentially, it’s a traditional method used to confirm a user’s identity before giving them access to a system. This process requires users or clients to provide a user name and password, which the system then cross-checks against stored credentials (for example, via AWS Secrets Manager) to decide whether or not access should be granted.

Compared to this approach, IAM simplifies permission management across AWS environments, enables the creation and strict enforcement of detailed permissions and policies, and uses temporary credentials rather than the typical user name and password authentication. Another benefit of using IAM is that you can use IAM for both authentication and authorization. If you use SASL/SCRAM, you have to additionally manage ACLs via a separate mechanism. In IAM, you can use the IAM policy attached to the IAM principal to define the fine-grained access control for that IAM principal. All of these improvements make the IAM integration a more efficient and secure solution for most use cases.

However, for applications not built in Java, utilizing MSK Serverless becomes tricky. The standard SASL/SCRAM authentication isn’t available, and non-Java Kafka libraries don’t have a way to use IAM access control. This calls for an alternative approach to connect to MSK Serverless clusters.

But there’s an alternative pattern. Without having to rewrite your existing application in Java, you can employ API Gateway and Lambda as a proxy in front of a cluster. They can handle API requests and relay them to Kafka topics instantly. API Gateway takes in producer requests and channels them to a Lambda function, written in Java using the Amazon MSK IAM library. It then communicates with the MSK Serverless Kafka topic using IAM access control. After the cluster receives the message, it can be further processed within the MSK Serverless setup.

You can also utilize Lambda on the consumer side of MSK Serverless topics, bypassing the Java requirement on the consumer side. You can do this by setting Amazon MSK as an event source for a Lambda function. When the Lambda function is triggered, the data sent to the function includes an array of records from the Kafka topic—no need for direct contact with Amazon MSK.
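
To illustrate the shape of the event a non-Java consumer receives from the Amazon MSK event source mapping (the demo repository's consumer is written in Java), here is a hedged TypeScript sketch that decodes and logs the records:

// Illustrative consumer for an Amazon MSK event source mapping (not part of the sample repository)
interface MSKRecord {
  topic: string;
  partition: number;
  offset: number;
  timestamp: number;
  key?: string;   // base64-encoded
  value: string;  // base64-encoded
}

interface MSKEvent {
  eventSource: string;
  records: Record<string, MSKRecord[]>; // keyed by "topic-partition"
}

export const handler = async (event: MSKEvent): Promise<void> => {
  for (const [topicPartition, records] of Object.entries(event.records)) {
    for (const record of records) {
      const payload = Buffer.from(record.value, 'base64').toString('utf8');
      console.log(`Received from ${topicPartition} at offset ${record.offset}: ${payload}`);
    }
  }
};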

Solution overview

This example walks you through how to build a serverless real-time stream producer application using API Gateway and Lambda.

For testing, this post includes a sample AWS Cloud Development Kit (AWS CDK) application. This creates a demo environment, including an MSK Serverless cluster, three Lambda functions, and an API Gateway that consumes the messages from the Kafka topic.

The following diagram shows the architecture of the resulting application including its data flows.

The data flow contains the following steps:

  1. The infrastructure is defined in an AWS CDK application. By running this application, a set of AWS CloudFormation templates is created.
  2. AWS CloudFormation creates all infrastructure components, including a Lambda function that runs during the deployment process to create a topic in the MSK Serverless cluster and to retrieve the authentication endpoint needed for the producer Lambda function. On destruction of the CloudFormation stack, the same Lambda function gets triggered again to delete the topic from the cluster.
  3. An external application calls an API Gateway endpoint.
  4. API Gateway forwards the request to a Lambda function.
  5. The Lambda function acts as a Kafka producer and pushes the message to a Kafka topic using IAM authentication.
  6. The Lambda event source mapping mechanism triggers the Lambda consumer function and forwards the message to it.
  7. The Lambda consumer function logs the data to Amazon CloudWatch.

Note that we don’t need to worry about Availability Zones. MSK Serverless automatically replicates the data across multiple Availability Zones to ensure high availability of the data.

The demo additionally shows how to use Lambda Powertools for Java to streamline logging and tracing and the IAM authenticator for the simple authentication process outlined in the introduction.

The following sections take you through the steps to deploy, test, and observe the example application.

Prerequisites

The example has the following prerequisites:

  • An AWS account. If you haven’t signed up, create and activate a new AWS account first.
  • The required software installed on your development machine, or an AWS Cloud9 environment, which comes with all requirements preinstalled.
  • Appropriate AWS credentials for interacting with resources in your AWS account.

Deploy the solution

Complete the following steps to deploy the solution:

  1. Clone the project GitHub repository and change the directory to subfolder serverless-kafka-iac:
git clone https://github.com/aws-samples/apigateway-lambda-msk-serverless-integration
cd apigateway-lambda-msk-serverless-integration/serverless-kafka-iac
  2. Configure environment variables:
export CDK_DEFAULT_ACCOUNT=$(aws sts get-caller-identity --query 'Account' --output text)
export CDK_DEFAULT_REGION=$(aws configure get region)
  3. Prepare the virtual Python environment:
python3 -m venv .venv

source .venv/bin/activate

pip3 install -r requirements.txt
  4. Bootstrap your account for AWS CDK usage:
cdk bootstrap aws://$CDK_DEFAULT_ACCOUNT/$CDK_DEFAULT_REGION
  5. Run cdk synth to build the code and test the requirements (ensure the Docker daemon is running on your machine):
cdk synth
  6. Run cdk deploy to deploy the code to your AWS account:
cdk deploy --all

Test the solution

To test the solution, we generate messages for the Kafka topics by sending calls through the API Gateway from our development machine or AWS Cloud9 environment. We then go to the CloudWatch console to observe incoming messages in the log files of the Lambda consumer function.

  1. Open a terminal on your development machine to test the API with the Python script provided under /serverless_kafka_iac/test_api.py:
python3 test-api.py

  2. On the Lambda console, open the Lambda function named ServerlessKafkaConsumer.

  3. On the Monitor tab, choose View CloudWatch logs to access the logs of the Lambda function.

  4. Choose the latest log stream to access the log files of the last run.

You can review the log entry of the received Kafka messages in the log of the Lambda function.

Trace a request

All components integrate with AWS X-Ray. With AWS X-Ray, you can trace the entire application, which is useful to identify bottlenecks when load testing. You can also trace method runs at the Java method level.

Lambda Powertools for Java allows you to shortcut this process by adding the @Trace annotation to a method to see traces on the method level in X-Ray.

To trace a request end to end, complete the following steps:

  1. On the CloudWatch console, choose Service map in the navigation pane.
  2. Select a component to investigate (for example, the Lambda function where you deployed the Kafka producer).
  3. Choose View traces.

  4. Choose a single Lambda method invocation and investigate further at the Java method level.

Implement a Kafka producer in Lambda

Kafka natively supports Java. To stay open and cloud native, and to avoid third-party dependencies, the producer is written in that language. Currently, the IAM authenticator is only available for Java. In this example, the Lambda handler receives a message from an API Gateway source and pushes this message to an MSK topic called messages.

Typically, Kafka producers are long-lived, and pushing a message to a Kafka topic is an asynchronous process. Because Lambda execution environments are ephemeral, you must enforce a full flush of a submitted message before the Lambda invocation ends by calling producer.flush():

// Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
// SPDX-License-Identifier: MIT-0
package software.amazon.samples.kafka.lambda;
 
// This class is part of the AWS samples package and specifically deals with Kafka integration in a Lambda function.
// It serves as a simple API Gateway to Kafka Proxy, accepting requests and forwarding them to a Kafka topic.
public class SimpleApiGatewayKafkaProxy implements RequestHandler<APIGatewayProxyRequestEvent, APIGatewayProxyResponseEvent> {
 
    // Specifies the name of the Kafka topic where the messages will be sent
    public static final String TOPIC_NAME = "messages";
 
    // Logger instance for logging events of this class
    private static final Logger log = LogManager.getLogger(SimpleApiGatewayKafkaProxy.class);
    
    // Factory to create properties for Kafka Producer
    public KafkaProducerPropertiesFactory kafkaProducerProperties = new KafkaProducerPropertiesFactoryImpl();
    
    // Instance of KafkaProducer
    private KafkaProducer<String, String> producer;
 
    // Overridden method from the RequestHandler interface to handle incoming API Gateway proxy events
    @Override
    @Tracing
    @Logging(logEvent = true)
    public APIGatewayProxyResponseEvent handleRequest(APIGatewayProxyRequestEvent input, Context context) {
        
        // Creating a response object to send back 
        APIGatewayProxyResponseEvent response = createEmptyResponse();
        try {
            // Extracting the message from the request body
            String message = getMessageBody(input);
 
            // Create a Kafka producer
            KafkaProducer<String, String> producer = createProducer();
 
            // Creating a record with topic name, request ID as key and message as value 
            ProducerRecord<String, String> record = new ProducerRecord<String, String>(TOPIC_NAME, context.getAwsRequestId(), message);
 
            // Sending the record to Kafka topic and getting the metadata of the record
            Future<RecordMetadata> send = producer.send(record);
            producer.flush();
 
            // Retrieve metadata about the sent record
            RecordMetadata metadata = send.get();
 
            // Logging the partition where the message was sent
            log.info(String.format("Message was sent to partition %s", metadata.partition()));
 
            // If the message was successfully sent, return a 200 status code
            return response.withStatusCode(200).withBody("Message successfully pushed to kafka");
        } catch (Exception e) {
            // In case of exception, log the error message and return a 500 status code
            log.error(e.getMessage(), e);
            return response.withBody(e.getMessage()).withStatusCode(500);
        }
    }
 
    // Creates a Kafka producer if it doesn't already exist
    @Tracing
    private KafkaProducer<String, String> createProducer() {
        if (producer == null) {
            log.info("Connecting to kafka cluster");
            producer = new KafkaProducer<String, String>(kafkaProducerProperties.getProducerProperties());
        }
        return producer;
    }
 
    // Extracts the message from the request body. If it's base64 encoded, it's decoded first.
    private String getMessageBody(APIGatewayProxyRequestEvent input) {
        String body = input.getBody();
 
        if (input.getIsBase64Encoded()) {
            body = decode(body);
        }
        return body;
    }
 
    // Creates an empty API Gateway proxy response event with predefined headers.
    private APIGatewayProxyResponseEvent createEmptyResponse() {
        Map<String, String> headers = new HashMap<>();
        headers.put("Content-Type", "application/json");
        headers.put("X-Custom-Header", "application/json");
        APIGatewayProxyResponseEvent response = new APIGatewayProxyResponseEvent().withHeaders(headers);
        return response;
    }
}

Connect to Amazon MSK using IAM authentication

This post uses IAM authentication to connect to the respective Kafka cluster. For information about how to configure the producer for connectivity, refer to IAM access control.

Because you configure the cluster via IAM, grant Connect and WriteData permissions to the producer so that it can push messages to Kafka:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "kafka-cluster:Connect"
            ],
            "Resource": "arn:aws:kafka:region:account-id:cluster/cluster-name/cluster-uuid"
        }
    ]
}


{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "kafka-cluster:Connect",
                "kafka-cluster:DescribeTopic",
                "kafka-cluster:WriteData"
            ],
            "Resource": "arn:aws:kafka:region:account-id:topic/cluster-name/cluster-uuid/topic-name"
        }
    ]
}

This shows the Kafka excerpt of the IAM policy, which must be applied to the Kafka producer. When using IAM authentication, be aware of the current limits of IAM Kafka authentication, which affect the number of concurrent connections and IAM requests for a producer. Refer to Amazon MSK quota and follow the recommendation for authentication backoff in the producer client:

        Map<String, String> configuration = Map.of(
                "key.serializer", "org.apache.kafka.common.serialization.StringSerializer",
                "value.serializer", "org.apache.kafka.common.serialization.StringSerializer",
                "bootstrap.servers", getBootstrapServer(),
                "security.protocol", "SASL_SSL",
                "sasl.mechanism", "AWS_MSK_IAM",
                "sasl.jaas.config", "software.amazon.msk.auth.iam.IAMLoginModule required;",
                "sasl.client.callback.handler.class",
                "software.amazon.msk.auth.iam.IAMClientCallbackHandler",
                "connections.max.idle.ms", "60",
                "reconnect.backoff.ms", "1000"
        );

Additional considerations

Each MSK Serverless cluster can handle 100 requests per second. To reduce IAM authentication requests from the Kafka producer, place it outside of the handler. For frequent calls, there is a chance that Lambda reuses the previously created class instance and only reruns the handler.

For bursting workloads with a high number of concurrent API Gateway requests, this can lead to dropped messages. Although this might be tolerable for some workloads, for others this might not be the case.

In these cases, you can extend the architecture with a buffering technology like Amazon Simple Queue Service (Amazon SQS) or Amazon Kinesis Data Streams between API Gateway and Lambda.

To reduce latency, reduce cold start times for Java by changing the tiered compilation level to 1, as described in Optimizing AWS Lambda function performance for Java. Provisioned concurrency ensures that polling Lambda functions don’t need to warm up before requests arrive.

Conclusion

In this post, we showed how to create a serverless integration Lambda function between API Gateway and MSK Serverless as a way to do IAM authentication when your producer is not written in Java. You also learned about the native integration of Lambda and Amazon MSK on the consumer side. Additionally, we showed how to deploy such an integration with the AWS CDK.

The general pattern is suitable for many use cases where your producers or consumers are not written in Java but you want to use IAM authentication and still take advantage of the benefits of MSK Serverless, like its ability to scale up and down with unpredictable or spiky workloads and its little to no operational overhead of running Apache Kafka.

You can also use MSK Serverless to reduce operational complexity by automating provisioning and the management of capacity needs, including the need to constantly monitor brokers and storage.

For more serverless learning resources, visit Serverless Land.

For more information on MSK Serverless, see the Amazon MSK Serverless documentation.


About the Authors

Philipp Klose is a Global Solutions Architect at AWS based in Munich. He works with enterprise FSI customers and helps them solve business problems by architecting serverless platforms. In his free time, Philipp spends time with his family and enjoys every geek hobby possible.

Daniel Wessendorf is a Global Solutions Architect at AWS based in Munich. He works with enterprise FSI customers and is primarily specialized in machine learning and data architectures. In his free time, he enjoys swimming, hiking, skiing, and spending quality time with his family.

Marvin Gersho is a Senior Solutions Architect at AWS based in New York City. He works with a wide range of startup customers. He previously worked for many years in engineering leadership and hands-on application development, and now focuses on helping customers architect secure and scalable workloads on AWS with a minimum of operational overhead. In his free time, Marvin enjoys cycling and strategy board games.

Nathan Lichtenstein is a Senior Solutions Architect at AWS based in New York City. Primarily working with startups, he ensures his customers build smart on AWS, delivering creative solutions to their complex technical challenges. Nathan has worked in cloud and network architecture in the media, financial services, and retail spaces. Outside of work, he can often be found at a Broadway theater.

How Vercel Shipped Cron Jobs in 2 Months Using Amazon EventBridge Scheduler

Post Syndicated from Marcia Villalba original https://aws.amazon.com/blogs/aws/how-vercel-shipped-cron-jobs-in-2-months-using-amazon-eventbridge-scheduler/

Vercel implemented Cron Jobs using Amazon EventBridge Scheduler, enabling their customers to create, manage, and run scheduled tasks at scale. The adoption of this feature was rapid, reaching over 7 million weekly cron invocations within a few months of release. This article shows how they did it and how they handle the massive scale they’re experiencing.

Vercel builds a front-end cloud that makes it easier for engineers to deploy and run their front-end applications. With more than 100 million deployments in Vercel in the last two years, Vercel helps users take advantage of best-in-class AWS infrastructure with zero configuration by relying heavily on serverless technology. Vercel provides a lot of features that help developers host their front-end applications. However, until the beginning of this year, they hadn’t built Cron Jobs yet.

A cron job is a scheduled task that automates running specific commands or scripts at predetermined intervals or fixed times. It enables users to set up regular, repetitive actions, such as backups, sending notification emails to customers, or processing payments when a subscription needs to be renewed. Cron jobs are widely used in computing environments to improve efficiency and automate routine operations, and they were a commonly requested feature from Vercel’s customers.

In December 2022, Vercel hosted an internal hackathon to foster innovation. That’s where Vincent Voyer and Andreas Schneider joined forces to build a prototype cron job feature for the Vercel platform. They formed a team of five people and worked on the feature for a week. The team worked on different tasks, from building a user interface to display the cron jobs to creating the backend implementation of the feature.

Amazon EventBridge Scheduler
When the hackathon team started thinking about solving the cron job problem, their first idea was to use Amazon EventBridge rules that run on a schedule. However, they realized quickly that this feature has a limit of 300 rules per account per AWS Region, which wasn’t enough for their intended use. Luckily, one of the team members had read the announcement of Amazon EventBridge Scheduler in the AWS Compute blog and they thought this would be a perfect tool for their problem.

By using EventBridge Scheduler, they could schedule millions of one-time or recurring tasks across over 270 AWS services without provisioning or managing the underlying infrastructure.

How cron jobs work

To create a new cron job in Vercel, a customer defines the frequency at which the task should run and the API they want to invoke. In the backend, Vercel creates a new EventBridge Scheduler schedule whenever a cron job is created.

To call the endpoint, the team used an AWS Lambda function that receives the path that needs to be invoked as an input parameter.

How cron jobs work

When the time comes for the cron job to run, EventBridge Scheduler invokes the function, which then calls the customer website endpoint that was configured.

By the end of the week, Vincent and his team had a working prototype version of the cron jobs feature, and they won a prize at the hackathon.

Building Vercel Cron Jobs
After working for one week on this prototype in December, the hackathon ended, and Vincent and his team returned to their regular jobs. In early January 2023, Vincent and the Vercel team decided to take the project and turn it into a real product.

During the hackathon, the team built the fundamental parts of the feature, but there were some details that they needed to polish to make it production ready. Vincent and Andreas worked on the feature, and in less than two months, on February 22, 2023, they announced Vercel Cron Jobs to the public. The announcement tweet got over 400 thousand views, and the community loved the launch.

Tweet from Vercel announcing cron jobs

The adoption of this feature was very rapid. Within a few months of launching Cron Jobs, Vercel reached over 7 million cron invocations per week, and they expect the adoption to continue growing.

Cron jobs adoption

How Vercel Cron Jobs Handles Scale
With this pace of adoption, scaling this feature is crucial for Vercel. To scale the number of cron invocations at this pace, they had to make some business and architectural decisions.

From the business perspective, they defined limits for their free-tier customers. Free-tier customers can create a maximum of two cron jobs in their account, and they can only have hourly schedules. This means that free customers cannot run a cron job every 30 minutes; instead, they can do it at most every hour. Only customers on Vercel paid tiers can take advantage of EventBridge Scheduler minute granularity for scheduling tasks.

Also, for free customers, minute precision isn’t guaranteed. To implement this, Vincent took advantage of the flexible time window configuration from EventBridge Scheduler, which allows you to start a schedule within a window of time. This means that the scheduled tasks are dispersed across the time window to reduce the impact of multiple requests on downstream services. This is very useful if, for example, many customers want to run their jobs at midnight. By using the flexible time window, the load can spread across a set window of time.
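As a minimal sketch of what creating such a schedule can look like (not Vercel’s actual code), assuming the AWS SDK for JavaScript v3 and hypothetical names, ARNs, and input:

import { SchedulerClient, CreateScheduleCommand } from '@aws-sdk/client-scheduler';

const scheduler = new SchedulerClient({});

// Hypothetical hourly schedule, dispersed across a 15-minute flexible window,
// targeting the Lambda function that calls the customer's endpoint.
await scheduler.send(new CreateScheduleCommand({
    Name: 'customer-cron-hourly', // hypothetical schedule name
    ScheduleExpression: 'cron(0 * * * ? *)', // every hour
    FlexibleTimeWindow: { Mode: 'FLEXIBLE', MaximumWindowInMinutes: 15 },
    Target: {
        Arn: 'arn:aws:lambda:us-east-1:123456789012:function:cron-invoker', // hypothetical
        RoleArn: 'arn:aws:iam::123456789012:role/scheduler-invoke-role', // hypothetical
        Input: JSON.stringify({ path: '/api/cron/cleanup' }), // the endpoint path to call
    },
}));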

From the architectural perspective, Vercel took advantage of hosting the APIs and owning the functions that the cron jobs invoke.

Validating the calls

This means that when the Lambda function is started by EventBridge Scheduler, the function ends its run without waiting for a response from the API. Vercel then validates that the cron job ran by checking through its observability mechanisms that the API and the Vercel function completed correctly. In this way, the function duration is very short, less than 400 milliseconds. This allows Vercel to run many functions per second without affecting their concurrency limits.

Lambda invocations and duration dashboard

What Was The Impact?
Vercel’s implementation of Cron Jobs is an excellent example of what serverless technologies enable. In two months, with two people working full time, they were able to launch a feature that their community needed and enthusiastically adopted. It demonstrates the completeness of Vercel’s platform and is an important incentive for customers to move to a paid account.

If you want to get started with EventBridge Scheduler, see Serverless Land patterns for EventBridge Scheduler, where you’ll find a broad range of examples to help you.

Marcia

AWS SAM support for HashiCorp Terraform now generally available

Post Syndicated from Eric Johnson original https://aws.amazon.com/blogs/compute/aws-sam-support-for-hashicorp-terraform-now-generally-available/

In November 2022, AWS announced the public preview of AWS Serverless Application Model (AWS SAM) support for HashiCorp Terraform. The public preview introduced a subset of features to help Terraform users test serverless applications locally. Today, AWS is announcing the general availability of Terraform support in AWS SAM. This GA release expands AWS SAM’s feature set to enhance the local development of serverless applications.

Terraform and AWS SAM are both open-source frameworks allowing developers to define infrastructure as code (IaC). Developers can version and share infrastructure definitions in the same way they share code. However, because AWS SAM is specifically designed for serverless, it includes a command line interface (CLI) designed for serverless development. The CLI enables developers to create, debug, and deploy serverless applications using local emulators along with build and deployment tools. In this release, AWS SAM is making a subset of those tools available to Terraform users as well.

Terraform support

The public preview blog demonstrated the initial support for Terraform. This post demonstrates AWS SAM’s expanded feature set for local development. It also simplifies the implementation by using the Serverless.tf modules for AWS Lambda functions and layers rather than the native Terraform resources.

The modules build the deployment artifacts for the Lambda functions and layers, and automatically generate the metadata required by AWS SAM to interface with the Terraform resources. To use the native Terraform resources instead, refer to the preview blog for metadata configuration.

Downloading the code

To explore AWS SAM’s support for Terraform, visit the aws-sam-terraform-examples repository. Clone the repository and change to the ga directory to get started:

git clone https://github.com/aws-samples/aws-sam-terraform-examples

cd ga

In this directory, there are two demo applications. Both applications are identical, except that api_gateway_v1 uses an Amazon API Gateway REST API (v1) and api_gateway_v2 uses an Amazon API Gateway HTTP API (v2). Choose one and change to the tf-resources folder in that directory.

cd api_gateway_v1/tf-resources

Unless indicated otherwise, examples in this post reference the api_gateway_v1 application.

Code structure

Code structure diagram

Terraform supports spreading IaC across multiple files. Because of this, developers often collect all the Terraform files in a single directory and keep the resource files elsewhere. The example applications are configured this way.

Any Terraform or AWS SAM command must run from the location of the main.tf file, in this case, the tf-resources directory. Because AWS SAM commands are generally run from the project root, AWS SAM has an option to support nested structures. If running the sam build command from a nested folder, pass the --terraform-project-root-path flag with a relative or absolute path to the root of the project.

Local invoke

The preview version of the Terraform support included local invocation, but the team simplified the experience with support for Serverless.tf. The demonstration applications have two functions in them. A responder function is the backend integration for the API Gateway endpoints, and the Auth function is a custom authorizer. Find both module definitions in the functions.tf file.

Responder function

module "lambda_function_responder" {
  source        = "terraform-aws-modules/lambda/aws"
  version       = "~> 6.0"
  timeout       = 300
  source_path   = "${path.module}/src/responder/"
  function_name = "responder"
  handler       = "app.open_handler"
  runtime       = "python3.9"
  create_sam_metadata = true
  publish       = true
  allowed_triggers = {
    APIGatewayAny = {
      service    = "apigateway"
      source_arn = "${aws_api_gateway_rest_api.api.execution_arn}/*/*"
    }
  }
}

There are two important parameters:

  • source_path, which points to a local folder. Because this is not a zip file, Serverless.tf builds the artifacts as needed.
  • create_sam_metadata, which generates the metadata required for AWS SAM to locate the necessary files and modules.

To invoke the function locally, run the following commands:

  1. Run build to run any build scripts
    sam build --hook-name terraform --terraform-project-root-path ../
  2. Run local invoke to invoke the desired Lambda function
    sam local invoke --hook-name terraform --terraform-project-root-path ../ 'module.lambda_function_responder.aws_lambda_function.this[0]'

Because the project is Terraform, the hook-name parameter with the value terraform is required to let AWS SAM know how to proceed. The function name is a combination of the module name and the resource type that it becomes. If you are unsure of the name, run the command without the name:

sam local invoke --hook-name terraform

AWS SAM evaluates the template. If there is only one function, AWS SAM proceeds to invoke it. If there are more than one, as is the case here, AWS SAM asks you which one and provides a list of options.

Example error text

Auth function

The authorizer function requires some input data as a mock event. To generate a mock event for the api_gateway_v1 project:

sam local generate-event apigateway authorizer

For the api_gateway_v2 project use:

sam local generate-event apigateway request-authorizer

The resulting events are different because API Gateway REST and HTTP APIs can handle custom authorizers differently. In these examples, REST uses a standard token authorizer and returns the proper AWS Identity and Access Management (IAM) policy. The HTTP API example uses a simple pass or fail option.

Each of the examples already has the properly formatted event for testing included at events/auth.json. To invoke the Auth function, run the following:

sam local invoke --hook-name terraform 'module.lambda_function_auth.aws_lambda_function.this[0]' -e events/auth.json

There is no need to run the sam build command again because the application has not changed.

Local start-api

You can now emulate a local version of API Gateway with the generally available release. Each of these examples has two endpoints. One endpoint is open and a custom authorizer secures the other. Both return the same response:

{
  "message": "Hello TF World",
  "location": "ip address"
}

To start the local emulator, run the following:

sam local start-api --hook-name terraform

AWS SAM starts the emulator and exposes the two endpoints for local testing.

Open endpoint

Using curl, test the open endpoint:

curl --location http://localhost:3000/open

The local emulator processes the request and provides a response in the terminal window. The emulator also includes logs from the Lambda function.

Open endpoint example output

Auth endpoint

Test the secure endpoint and pass the extra required header, myheader:

curl -v --location http://localhost:3000/secure --header 'myheader: 123456789'

The endpoint returns an authorized response with the “Hello TF World” messaging. Try the endpoint again with an invalid header value:

curl --location http://localhost:3000/secure --header 'myheader: IamInvalid'

The endpoint returns an unauthenticated response.

Unauthenticated response

Parameters

There are several options when using AWS SAM with Terraform:

  • Hook-name: required for every command when working with Terraform. This informs AWS SAM that the project is a Terraform application.
  • Skip-prepare-infra: AWS SAM uses the terraform plan command to identify and process all the required artifacts. However, it should only be run when new resources are added or modified. This option keeps AWS SAM from running the terraform plan command. If this flag is passed and a plan does not exist, AWS SAM ignores the flag and runs the terraform plan command anyway.
  • Prepare-infra: forces AWS SAM to run the terraform plan command.
  • Terraform-project-root-path: overrides the current directory as the root of the project. You can use an absolute path (/path/to/project/root) or relative path (../ or ../../).
  • Terraform-plan-file: allows a developer to specify a specific Terraform plan file. This option also enables Terraform users to use local commands.

Combining these options can create long commands:

sam build --hook-name terraform --terraform-project-root-path ../

or

sam local invoke --hook-name terraform --skip-prepare-infra 'module.lambda_function_responder.aws_lambda_function.this[0]'

You can use the samconfig file to set defaults, shorten commands, and optimize the development process. Using the new samconfig YAML support, the file looks like this:

version: 0.1
default:
  global:
    parameters:
      hook_name: terraform
      skip_prepare_infra: true
  build:
    parameters:
      terraform_project_root_path: ../

By setting these defaults, the command is now shorter:

sam local invoke 'module.lambda_function_responder.aws_lambda_function.this[0]'

AWS SAM now knows it is a Terraform project and skips the preparation task unless the Terraform plan is missing. If a plan refresh is required, add the --prepare-infra flag to override the default setting.

Deployment and remote debugging

The applications in these projects are regular Terraform applications. Deploy them as any other Terraform project.

terraform plan
terraform apply

Currently, AWS SAM Accelerate does not support Terraform projects. However, because Terraform applies changes directly through API calls, serverless applications deploy quickly. Use a third-party file watcher together with the terraform apply --auto-approve command to approximate this experience.

For logging, take advantage of the sam logs command. Refer to the deploy output of the projects for an example of tailing the logs for one or all of the resources.

HashiCorp Cloud Platform

HashiCorp Cloud Platform allows developers to run deployments using a centralized location to maintain security and state. When developers run builds in the cloud, a local plan file is not available for AWS SAM to use in local testing and debugging. However, developers can generate a plan in the cloud and use the plan locally for development. For instructions, refer to the documentation.

Conclusion

HashiCorp Terraform is a popular IaC framework for building applications in the AWS Cloud. AWS SAM is an IaC framework and the CLI is specifically designed to help developers build serverless applications.

This blog covers the new AWS SAM support for Terraform and how developers can use them together to maximize the development experience. The blog covers locally invoking a single function, emulating API Gateway endpoints locally, and testing a Lambda authorizer locally before deploying. Finally, the blog deploys the application and uses AWS SAM to monitor the deployed resources.

For more serverless learning resources, visit Serverless Land.

Enhancing Workflow Studio with new features for streamlined authoring

Post Syndicated from Benjamin Smith original https://aws.amazon.com/blogs/compute/enhancing-workflow-studio-with-new-features-for-streamlined-authoring/

AWS Step Functions is emerging as a foundational tool for building scalable and distributed serverless applications through workflows. In 2021, the Step Functions team launched Workflow Studio, a low-code visual tool for creating Step Functions workflows in the AWS Management Console. This made workflow building accessible even to those with limited coding experience.

In response to feedback from customers, today the Step Functions team introduces a comprehensive set of new features. Addressing some of the most common requests, these make the authoring experience even more intuitive, versatile, and aligned with your specific development approach.

What’s new?

The latest release includes three new components:

1. Enhanced Starter Template Experience: This update offers developers and business users an advanced starting point, streamlining the process of creating and prototyping workflows swiftly.

2. Code Mode for Workflow Studio: Today, Workflow Studio introduces a new code mode, enabling builders to alternate between design and code authoring views. This feature expedites workflow construction by reducing the need for context switching. For instance, you can seamlessly paste an Amazon States Language (ASL) workflow definition from the Step Functions workflows collection directly into Workflow Studio. You can then transition to the design view to continue your workflow development. Alternatively, opt for a starter template from the new authoring experience. If necessary, you can switch to the new code mode for meticulous adjustments.

3. Enhanced Workflow Execution and Configuration: This version of Workflow Studio also incorporates the capability to execute your workflows directly from the authoring view within Workflow Studio. Additionally, you can configure supplementary workflow settings such as permissions, logging, and tracing to enhance your workflow management.

Introducing the starter template experience

A standout feature is the introduction of the improved starter template experience. This is a new interface designed to expedite the workflow creation process.

By allowing you to filter templates by use-case or service, this feature provides a curated selection that aligns with your project’s needs. The starter template experience serves as a powerful stepping stone, equipping you with a robust foundation to build upon.

To create a workflow from a template:

  1. Navigate to the Step Functions state machines page in the AWS Management Console.
  2. Choose Create state machine.
  3. This presents you with the new template selection. Search by keyword, or filter by use-case and service:
  4. Choose “Distributed Map to Process a CSV file in S3” and choose Select.
  5. The following view shows a visual representation of the workflow, along with a detailed description.

    There are two usage options for each template:

    • Run a demo: Step Functions automatically deploys an AWS CloudFormation stack to your account, equipped with the state machine and all related resources. This ready-to-run demo workflow not only showcases the capabilities of your chosen template, but also serves as a springboard for your unique creations. Building upon this foundation, customize, fine tune, and tailor workflows to meet your exact specifications.
    • Build on it: This places the workflow’s ASL into the new Workflow Studio code view. Importantly, this transition does not deploy any associated resources. The goal is to provide you with an expedited workflow creation process that uses best practices templates, while allowing you to customize and adapt them to your specific needs without the need to build from scratch.
  6. Choose Run a demo, and then choose Use template. This places the workflow template into Workflow Studio in read-only mode, allowing you to inspect the workflow definition further before deploying the demo resources.
  7. To deploy the demo, choose Deploy and run:

    After a few moments, the demo application is deployed to your account.

Seamless transitions between drag-and-drop design and code mode

Another enhancement in Workflow Studio is the ability to switch seamlessly between the drag-and-drop design view and the new code mode. This versatility allows you to transition between visual design and code-based authoring, catering to varying preferences and skill sets. While the design view offers an intuitive approach to creating workflows, the code mode provides a dynamic space akin to familiar coding environments.

Open up the previously deployed workflow demo by selecting it from the state machines console and choosing Edit:

Choose the Code button to switch to the code authoring view:

Here you are presented with an interface reminiscent of industry standard coding environments such as Visual Studio Code. This transformation lets experienced developers use the full potential of ASL, enabling intricate customization and fine-tuning. It also allows you to use the graph visualization on the right to quickly re-order, duplicate, or delete steps.

Choose the Design button to toggle back to the low code editor:

This is ideal for builders who are less experienced in ASL, or for experienced developers who need to rapidly build workflow mocks, templates for further editing, or prototype workflows.

Execute workflows directly from Workflow Studio

Workflow Studio now enables you to start a workflow from within the interface. This feature bridges the gap between design and execution, allowing developers to start their workflow from the Workflow Studio authoring environment.

To start a workflow from within Workflow Studio, choose the Execute button:

This takes you directly to the Step Functions executions interface where you can enter an input payload and inspect the workflow execution. This feature reduces the need to switch between interfaces, enabling developers to iterate more swiftly and efficiently. Choose Edit to jump directly back into Workflow Studio and continue iteratively refining your workflow.

From Workflow Studio, you can now also view and edit execution role permissions, configure logging, and adjust additional parameters. To access this view, choose the Config button from Workflow Studio:

Availability for existing workflows

The new features are automatically available for all your existing workflows at no additional cost. This ensures that you can use the enhanced capabilities of Workflow Studio without any additional steps or configuration.

Workflow Studio’s new features allow developers to amplify their efforts. By simplifying the creation and execution of workflows, developers can channel more time and energy into the creative aspects of application development. Workflow Studio’s enhancements not only boost productivity but also provide a platform for turning creative designs into tangible, impactful applications.

Conclusion

Workflow Studio continues to evolve with the ongoing goal of simplifying and enhancing the process of building Step Functions workflows. The introduction of seamless authoring mode transitions, direct execution capabilities, and the improved starter template experience represents a pragmatic step towards improving authoring efficiency and flexibility, establishing Workflow Studio as the default authoring experience for Step Functions.

For additional starter templates, patterns, and best practices, visit the Serverless Workflows Collection on Serverless Land.

Let’s Architect! Cost-optimizing AWS workloads

Post Syndicated from Luca Mezzalira original https://aws.amazon.com/blogs/architecture/lets-architect-cost-optimizing-aws-workloads/

Every software component built by engineers and architects is designed with a purpose: to offer particular functionalities and, ultimately, contribute to the generation of business value. We should consider fundamental factors, such as the scalability of the software and the ease of evolution during times of business changes. However, performance and cost are important factors as well, since they can impact business profitability.

This edition of Let’s Architect! follows a similar series post from 2022, which discusses optimizing the cost of an architecture. Today, we focus on architectural patterns, services, and best practices to design cost-optimized cloud workloads. We also want to identify solutions, such as the use of Graviton processors, for increased performance at a lower price. Cost optimization is a continuous process that requires the identification of the right tools for each job, as well as the adoption of efficient designs for your system.

AWS re:Invent 2022 – Manage and control your AWS costs

Govern cloud usage and avoid cost surprises without slowing down innovation within your organization. In this re:Invent 2022 session, you can learn how to set up guardrails and operationalize cost control within your organizations using services, such as AWS Budgets and AWS Cost Anomaly Detection, and explore the latest enhancements in the AWS cost control space. Additionally, Mercado Libre shares how they automate their cloud cost control through central management and automated algorithms.

Take me to this re:Invent 2022 video!

Work backwards from team needs to define/deploy cloud governance in AWS environments

Compute optimization

When it comes to optimizing compute workloads, there are many tools available, such as AWS Compute Optimizer, Amazon EC2 Spot Instances, Amazon EC2 Reserved Instances, and Graviton instances. Modernizing your applications can also lead to cost savings, but you need to know how to use the right tools and techniques in an effective and efficient way.

For AWS Lambda functions, you can use the AWS Lambda Cost Optimization video to learn how to optimize your costs. The video covers topics such as understanding and graphing performance versus cost, code optimization techniques, and avoiding idle wait time. If you are using Amazon Elastic Container Service (Amazon ECS) and AWS Fargate, you can watch a Twitch video on cost optimization using Amazon ECS and AWS Fargate to learn how to adjust your costs. The video covers topics like using Spot Instances, choosing the right instance type, and using Fargate Spot.

Finally, with Amazon Elastic Kubernetes Service (Amazon EKS), you can use Karpenter, an open-source Kubernetes cluster autoscaler, to help optimize compute workloads. Karpenter can help you launch right-sized compute resources in response to changing application load and help you adopt Spot and Graviton instances. To learn more about Karpenter, read the post How CoStar uses Karpenter to optimize their Amazon EKS Resources on the AWS Containers Blog.

Take me to Cost Optimization using Amazon ECS and AWS Fargate!
Take me to AWS Lambda Cost Optimization!
Take me to How CoStar uses Karpenter to optimize their Amazon EKS Resources!

Karpenter launches and terminates nodes to reduce infrastructure costs

AWS Lambda general guidance for cost optimization

AWS Graviton deep dive: The best price performance for AWS workloads

The choice of hardware is a fundamental driver for performance, cost, and resource consumption of the systems we build. Graviton is a family of processors designed by AWS to support cloud-based workloads and deliver improvements in terms of performance and cost. This re:Invent 2022 presentation introduces Graviton, addresses the problems it can solve, explains how the underlying CPU architecture is designed, and shows how to get started with it. Furthermore, you can learn about the journey of moving different types of workloads to this architecture, such as containers, Java applications, and C applications.

Take me to this re:Invent 2022 video!

AWS Graviton processors are specifically designed by AWS for cloud workloads to deliver the best price performance

AWS Well-Architected Labs: Cost Optimization

The Cost Optimization section of the AWS Well-Architected Labs helps you learn how to optimize your AWS costs by using features such as AWS Compute Optimizer, Spot Instances, and Reserved Instances. The workshop includes hands-on labs that walk you through the process of optimizing costs for different types of workloads and services, such as Amazon Elastic Compute Cloud, Amazon ECS, and Lambda.

Take me to this AWS Well-Architected lab!

Savings Plans is a flexible pricing model that can help reduce expenses compared with on-demand pricing

See you next time!

Thanks for joining us to discuss cost optimization! In 2 weeks, we’ll talk about in-memory databases and caching systems.

To find all the blogs from this series, visit the Let’s Architect! list of content on the AWS Architecture Blog.

Monitoring Amazon OpenSearch Serverless using AWS User Notifications

Post Syndicated from Raj Ramasubbu original https://aws.amazon.com/blogs/big-data/monitoring-amazon-opensearch-serverless-using-aws-user-notifications/

Amazon OpenSearch Serverless is a serverless deployment option for Amazon OpenSearch Service that makes it simple for you to run search and analytics workloads without having to think about infrastructure management. The compute capacity used for data ingestion, and search and query in OpenSearch Serverless is measured in OpenSearch Compute Units (OCUs). Customers can configure maximum OCU limits in their AWS account to control costs. In the past, customers had to monitor resource usage metrics to make sure that the collections did not deplete their configured storage and computational capacity. With the new AWS User Notification integration, you can configure the system to send notifications whenever the capacity threshold is breached. The User Notification feature eliminates the need to monitor the service constantly. AWS User Notifications enables users to centrally set up and view notifications from various AWS services across accounts and regions in a human-friendly format. Users can view notifications in a Console Notifications Center and also configure various delivery channels.

You can now use AWS User Notifications to set up delivery channels to get notified when resource usage is nearing or exceeding the capacity threshold. You receive a notification when an event matches a rule that you specify. There are multiple channels for receiving notifications, including email, AWS Chatbot chat notifications, and AWS Console Mobile Application push notifications. In this post, you will see how you can use AWS user notifications to receive notifications for OCU threshold breaches across all your OpenSearch Serverless collections.

Solution overview

OpenSearch Serverless allows you to receive OCU utilization notifications for both search and indexing in the two scenarios listed below. OCU utilization percent is calculated based on your configured maximum capacity limit and the current OCU consumption.

  • OCU Utilization Approaching Max Limit – OpenSearch Serverless sends this event through AWS User Notifications when current OCU usage percent reaches greater than or equal to 75 percent of the configured maximum OCU capacity.
  • OCU Utilization Reached Max Limit – OpenSearch Serverless sends this event when OCU usage percent reaches 100 percent of the configured maximum OCU capacity.

If you receive an OCU utilization notification, you can adjust the maximum capacity limit for your collection using either the console or the AWS CLI.
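For example, the following AWS CLI call raises the account-level capacity limits. The values are placeholders; verify the parameter shape against the current CLI reference before using it:

aws opensearchserverless update-account-settings --capacity-limits '{"maxIndexingCapacityInOCU":8,"maxSearchCapacityInOCU":8}'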

The following sections detail the steps you can take to receive both these OCU utilization notifications for all collections.

Prerequisites

As a prerequisite to receive OCU utilization notifications, you will set up notification configuration in notification hubs to store the notifications data. This has to be done only once, and at least one AWS region should be selected. The following screenshot shows a sample notification hub configuration for the US East (Ohio) AWS Region.

Also, configure the maximum OCU capacity for all collections according to your requirements.

Set up OCU utilization notifications

To set up the notifications, complete the following steps:

  1. On the AWS User Notifications console, create a notification configuration to receive notifications about OCU utilization.
  2. Choose Amazon OpenSearch Serverless for the service name and choose OCU Utilization Approaching Max Limit for the event type.
  3. Click Add another event rule and choose OCU Utilization Reached Max Limit for the event type.
  4. Using AWS User Notifications, you can receive the notifications as soon as they occur or receive them within 5 minutes to avoid receiving too many notifications all at once. We recommend receiving notifications every 5 minutes. Choose Receive within 5 minutes (recommended).
  5. You can opt to receive notifications through many delivery channels like the AWS Console Mobile Application and chat channels like Slack. For this blog post, to keep it simple, you can add your email address as the delivery channel, as shown in the following image.
  6. After you complete the configuration, your notification configurations page should look like the following screenshot.
  7. Once the OCU consumption for any of your collections reaches greater than or equal to 75 percent of the configured maximum OCU capacity, you will get a notification to your configured email address within 5 minutes, as shown in the following image.
  8. Once the OCU consumption reaches 100 percent of allocated OCU capacity for any of your collections, you will get a notification to your configured email address within 5 minutes, as shown in the following image.
  9. You can also see notifications in the AWS User Notifications console, as shown in the following screenshots.

Summary

You can use AWS User Notifications to get notifications from various AWS services in one place. Now with Amazon OpenSearch Serverless integration, you can receive OCU utilization notifications as well. In this post, we explored how to enable notifications for all Amazon OpenSearch Serverless collections and receive notifications using an email delivery channel. If you have any questions or suggestions, please write to us in the comments section.


About the Author

Raj Ramasubbu is a Senior Analytics Specialist Solutions Architect focused on big data and analytics and AI/ML with Amazon Web Services. He helps customers architect and build highly scalable, performant, and secure cloud-based solutions on AWS. Raj provided technical expertise and leadership in building data engineering, big data analytics, business intelligence, and data science solutions for over 18 years prior to joining AWS. He helped customers in various industry verticals like healthcare, medical devices, life science, retail, asset management, car insurance, residential REIT, agriculture, title insurance, supply chain, document management, and real estate.

Enhancing file sharing using Amazon S3 and AWS Step Functions

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/enhancing-file-sharing-using-amazon-s3-and-aws-step-functions/

This post is written by Islam Elhamaky, Senior Solutions Architect and Adrian Tadros, Senior Solutions Architect.

Amazon S3 is a cloud storage service that many customers use for secure file storage. S3 offers a feature called presigned URLs to generate temporary download links, which are an effective and secure way to upload and download data for authorized users.

There are times when customers need more control over how data is accessed. For example, they may want to limit downloads based on IAM roles instead of presigned URLs, or limit the number of downloads per object to control data access costs. Additionally, it can be useful to track which individuals access those download URLs.

This blog post presents an example application that can provide this extra functionality, using AWS serverless services.

Overview

The code included in this example uses a variety of serverless services:

  • Amazon API Gateway receives all incoming requests from users and authorizes access using Amazon Cognito.
  • AWS Step Functions coordinates file sharing and downloading activities such as user validation, checking download eligibility, recording events, request routing, and response formatting.
  • AWS Lambda implements admin activities such as retrieving metadata, listing files, and deleting them.
  • Amazon DynamoDB stores permissions to ensure users only have access to files that have been shared with them.
  • Amazon S3 provides durable storage for users to upload and download files.
  • Amazon Athena provides an efficient way to query S3 Access Logs to extract download and bandwidth usage.
  • Amazon QuickSight provides a visual dashboard to view download and bandwidth analytics.

AWS Cloud Development Kit (AWS CDK) deploys the AWS resources and can plug into your preferred CI/CD process.

Architecture Overview

Architecture

  1. User Interface: The front end is a static React single page application hosted on S3 and served via Amazon CloudFront. The UI uses AWS NorthStar and Cloudscape design components. Amplify UI simplifies interactions with Amazon Cognito such as providing the ability to log in, sign up, and perform email verification.
  2. API Gateway: Users interact via an API Gateway REST API.
  3. Authentication: Amazon Cognito manages user identities and access. Users sign up using their email address and then verify their email address. Requests to the API include an access token, which is verified using an Amazon Cognito authorizer.
  4. Microservices: The core operations are built with Lambda. The primary workflows allow users to share and download files and Step Functions orchestrates multiple steps in the process. These can include validating requests, authorizing that users have the correct permissions to access files, sending notifications, auditing, and keeping track of who is accessing files.
  5. Permission store: DynamoDB stores essential information about files such as ownership details and permissions for sharing. It tracks who owns a file and who has been granted access to download it.
  6. File store: An S3 bucket is the central file repository. Each user has a dedicated folder within the S3 bucket to store files.
  7. Notifications: The solution uses Amazon Simple Notification Service (SNS) to send email notifications to recipients when a file is shared.
  8. Analytics: S3 Access Logs are generated whenever users download or upload files to the file storage bucket. Amazon Athena filters these logs to generate a download report, extracting key information (such as the identity of the users who downloaded files and the total bandwidth consumed during the downloads).
  9. Reporting: Amazon QuickSight provides an interface for administrators to view download reports and dashboards.

Walkthrough

As prerequisites, you need:

  • Node.js version 16+.
  • AWS CLI version 2+.
  • An AWS account and a profile set up on your computer.

Follow the instructions in the code repository to deploy the example to your AWS account. Once the application is deployed, you can access the user interface.

In this example, you walk through the steps to upload a file and share it with a recipient:

  1. The example requires users to identify themselves using an email address. Choose Create Account then Sign In with your credentials.
    Create account
  2. Select Share a file.
    Share a file
  3. Select Choose file to browse and select a file to share. Choose Next.
    Choose file
  4. You must populate at least one recipient. Choose Add recipient to add more recipients. Choose Next.
    Step 4
  5. Set Expire date and Limit downloads to configure share expiry date and limit the number of allowed downloads. Choose Next.
    Step 5
  6. Review the share request details. You can navigate to previous screens to modify. Choose Submit once done.
    Step 6
  7. Choose My files to view your shared file.
    Step 7

Extending the solution

The example uses Step Functions to allow you to extend and customize the workflows. This implements a default workflow, providing you with the ability to override logic or introduce new steps to meet your requirements.

This section walks through the default behavior of the Share File and Download File Step Functions workflows.

The Share File workflow

Share File workflow

The share file workflow consists of the following steps:

  1. Validate: check that the share request contains all mandatory fields.
  2. Get User Info: retrieve the logged in user’s information such as name and email address from Amazon Cognito.
  3. Authorize: check the permissions stored in DynamoDB to verify if the user owns the file and has permission to share the file.
  4. Audit: record the share attempt for auditing purposes.
  5. Process: update the permission store in DynamoDB (a hypothetical record shape is sketched after this list).
  6. Send notifications: send email notifications to recipients to let them know that a new file has been shared with them.
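
The following is a sketch of what such a permission record could look like, expressed as a TypeScript type. This shape is hypothetical and illustrative only; the repository's actual DynamoDB schema may differ:

// Hypothetical shape of an item in the permission store; field names are illustrative.
interface FilePermissionItem {
    fileId: string;               // identifier of the shared file
    ownerId: string;              // Amazon Cognito user id of the file owner
    recipientId: string;          // user granted permission to download the file
    expiresAt?: number;           // optional share expiry (epoch seconds)
    downloadsRemaining?: number;  // optional remaining download count
}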

The Download File workflow

Download File workflow

The download file workflow consists of the following steps:

  1. Validate: check that the download request contains the required fields (for example, user ID and file ID).
  2. Get user info: retrieve the user’s information from Amazon Cognito such as their name and email address.
  3. Authorize: check the permission store in DynamoDB to verify that the user owns the file or is a valid recipient with permission to download it.
  4. Audit: record the download attempt.
  5. Process: generate a short-lived S3 pre-signed download URL and return it to the user (a sketch of the presigning step follows this list).
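
As a minimal sketch of that presigning step, assuming the AWS SDK for JavaScript v3 (the function name, bucket, key, and expiry below are hypothetical; the repository's implementation may differ):

import { S3Client, GetObjectCommand } from '@aws-sdk/client-s3';
import { getSignedUrl } from '@aws-sdk/s3-request-presigner';

const s3 = new S3Client({});

// Generate a short-lived download URL for a shared object.
export async function createDownloadUrl(bucket: string, key: string): Promise<string> {
    const command = new GetObjectCommand({ Bucket: bucket, Key: key });
    return getSignedUrl(s3, command, { expiresIn: 300 }); // URL valid for 5 minutes
}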

Step Functions API data mapping

The example uses API Gateway request and response data mappings to allow the REST API to communicate directly with Step Functions. This section shows how to customize the mapping based on your use case.

Request data mapping

The API Gateway REST API uses Apache VTL templates to transform and construct requests to the underlying service. This solution abstracts the construction of these templates using a CDK construct:

api.root
.addResource('share')
.addResource('{fileId}')
.addMethod(
  'POST',
   StepFunctionApiIntegration(shareStepFunction, [
      { name: 'fileId', sourceType: 'params' },
      { name: 'recipients', sourceType: 'body' },
      /* your custom input fields */
   ]),
   authorizerSettings,
);

The StepFunctionApiIntegration construct handles the request mapping allowing you to extract fields from the incoming API request and pass these as inputs to a Step Functions workflow. This generates the following VTL template:

{
  "name": "$context.requestId",
  "input": "{\"userId\":\"$context.authorizer.claims.sub\",\"fileId\":\"$util.escap eJavaScript($input.params('fileId'))\",\"recipients\":$util.escapeJavaScript($input.json('$.recipients'))}",
  "stateMachineArn": "...stateMachineArn"
}

In this scenario, fields are extracted from the API request parameters, body, and authorization header and passed to the workflow. You can customize the configuration to meet your requirements.

Response data mapping

The example has response mapping templates using Apache VTL. The output of the last step in a workflow is mapped as a JSON response and returned to the user through API Gateway. The response also includes CORS headers:

#set($context.responseOverride.header.Access-Control-Allow-Headers = '*')
#set($context.responseOverride.header.Access-Control-Allow-Origin = '*')
#set($context.responseOverride.header.Access-Control-Allow-Methods = '*')
#if($input.path('$.status').toString().equals("FAILED"))
#set($context.responseOverride.status = 500)
{
  "error": "$input.path('$.error')",
  "cause": "$input.path('$.cause')"
}
#else
  $input.path('$.output')
#end

You can customize this response template to meet your requirements. For example, you may provide custom behavior for different response codes.

Conclusion

In this blog post, you learn how you can securely share files with authorized external parties and track their access using AWS serverless services. The sample application presented uses Step Functions to allow you to extend and customize the workflows to meet your use case requirements.

To learn more about the concepts discussed, visit:

For more serverless learning resources, visit Serverless Land. Learn about data processing in Step Functions by reading the guide: Introduction to Distributed Map for Serverless Data Processing.

Protecting an AWS Lambda function URL with Amazon CloudFront and Lambda@Edge

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/protecting-an-aws-lambda-function-url-with-amazon-cloudfront-and-lambdaedge/

This post is written by Jerome Van Der Linden, Senior Solutions Architect Builder.

A Lambda function URL is a dedicated HTTPS endpoint for an AWS Lambda function. When configured, you can invoke the function directly with an HTTP request. You can choose to make it public by setting the authentication type to NONE for an open API. Or you can protect it with AWS IAM, setting the authentication type to AWS_IAM. In that case, only authenticated users and roles are able to invoke the function via the function URL.

Lambda@Edge is a feature of Amazon CloudFront that can run code closer to the end user of an application. It is generally used to manipulate incoming HTTP requests or outgoing HTTP responses between the user client and the application’s origin. In particular, it can add extra headers to the request (‘Authorization’, for example).

This blog post shows how to use CloudFront and Lambda@Edge to protect a Lambda function URL configured with the AWS_IAM authentication type by adding the appropriate headers to the request before it reaches the origin.

Overview

There are four main components in this example:

  • Lambda functions with function URLs enabled: This is the heart of the ‘application’, the functions that contain the business code exposed to the frontend. The function URL is configured with AWS_IAM authentication type, so that only authenticated users/roles can invoke it.
  • A CloudFront distribution: CloudFront is a content delivery network (CDN) service used to deliver content to users with low latency. It also improves the security with traffic encryption and built-in DDoS protection. In this example, using CloudFront in front of the Lambda URL can add this layer of security and potentially cache content closer to the users.
  • A Lambda function at the edge: CloudFront also provides the ability to run Lambda functions close to the users: Lambda@Edge. This example does this to sign the request made to the Lambda function URL and adds the appropriate headers to the request so that invocation of the URL is authenticated with IAM.
  • A web application that invokes the Lambda function URLs: The example also contains a single page application built with React, from which the users make requests to one or more Lambda function URLs. The static assets (for example, HTML and JavaScript files) are stored in Amazon S3 and also exposed and cached by CloudFront.

This is the example architecture:

Architecture

The request flow is:

  1. The user performs requests via the client to reach static assets from the React application or Lambda function URLs.
  2. For a static asset, CloudFront retrieves it from S3 or its cache and returns it to the client.
  3. If the request is for a Lambda function URL, it first goes to a Lambda@Edge. The Lambda@Edge function has the lambda:InvokeFunctionUrl permission on the target Lambda function URL and uses this to sign the request with the signature V4. It adds the Authorization, X-Amz-Security-Token, and X-Amz-Date headers to the request.
  4. After the request is properly signed, CloudFront forwards it to the Lambda function URL.
  5. Lambda triggers the execution of the function that performs any kind of business logic. The current solution is handling books (create, get, update, delete).
  6. Lambda returns the response of the function to CloudFront.
  7. Finally, CloudFront returns the response to the client.

There are several types of events where a Lambda@Edge function can be triggered:

Lambda@Edge events

  • Viewer request: After CloudFront receives a request from the client.
  • Origin request: Before the request is forwarded to the origin.
  • Origin response: After CloudFront receives the response from the origin.
  • Viewer response: Before the response is sent back to the client.

To update the request before it is sent to the origin (the Lambda function URL), the current example uses the “Origin Request” event type.

You can find the complete example, based on the AWS Cloud Development Kit (CDK), on GitHub.

Backend stack

The backend contains the different Lambda functions and Lambda function URLs. It uses the AWS_IAM auth type and a CORS (Cross-Origin Resource Sharing) definition when adding the function URL to the Lambda function. Use a more restrictive allowedOrigins for a real application.

const getBookFunction = new NodejsFunction(this, 'GetBookFunction', {
    runtime: Runtime.NODEJS_18_X,  
    memorySize: 256,
    timeout: Duration.seconds(30),
    entry: path.join(__dirname, '../functions/books/books.ts'),
    environment: {
      TABLE_NAME: bookTable.tableName
    },
    handler: 'getBookHandler',
    description: 'Retrieve one book by id',
});
bookTable.grantReadData(getBookFunction);
const getBookUrl = getBookFunction.addFunctionUrl({
    authType: FunctionUrlAuthType.AWS_IAM,
    cors: {
        allowedOrigins: ['*'],
        allowedMethods: [HttpMethod.GET],
        allowedHeaders: ['*'],
        allowCredentials: true,
    }
});

Frontend stack

The Frontend stack contains the CloudFront distribution and the Lambda@Edge function. This is the Lambda@Edge definition:

const authFunction = new cloudfront.experimental.EdgeFunction(this, 'AuthFunctionAtEdge', {
    handler: 'auth.handler',
    runtime: Runtime.NODEJS_16_X,  
    code: Code.fromAsset(path.join(__dirname, '../functions/auth')),
 });

The following policy allows the Lambda@Edge function to sign the request with the appropriate permission and to invoke the function URLs:

authFunction.addToRolePolicy(new PolicyStatement({
    sid: 'AllowInvokeFunctionUrl',
    effect: Effect.ALLOW,
    actions: ['lambda:InvokeFunctionUrl'],
    resources: [getBookArn, getBooksArn, createBookArn, updateBookArn, deleteBookArn],
    conditions: {
        "StringEquals": {"lambda:FunctionUrlAuthType": "AWS_IAM"}
    }
}));

The function code uses the AWS JavaScript SDK and more precisely the V4 Signature part of it. There are two important things here:

  • The service for which we want to sign the request: Lambda
  • The credentials of the function (with the InvokeFunctionUrl permission)
const request = new AWS.HttpRequest(new AWS.Endpoint(`https://${host}${path}`), region);
// ... set the headers, body and method ...
const signer = new AWS.Signers.V4(request, 'lambda', true);
signer.addAuthorization(AWS.config.credentials, AWS.util.date.getDate());

You can get the full code of the function here.

CloudFront distribution and behaviors definition

The CloudFront distribution has a default behavior with an S3 origin for the static assets of the React application.

It also has one behavior per function URL, as defined in the following code. You can notice the configuration of the Lambda@Edge function with the type ORIGIN_REQUEST and the behavior referencing the function URL:

const getBehaviorOptions: AddBehaviorOptions  = {
    viewerProtocolPolicy: ViewerProtocolPolicy.HTTPS_ONLY,
    cachePolicy: CachePolicy.CACHING_DISABLED,
    originRequestPolicy: OriginRequestPolicy.CORS_CUSTOM_ORIGIN,
    responseHeadersPolicy: ResponseHeadersPolicy.CORS_ALLOW_ALL_ORIGINS_WITH_PREFLIGHT,
    edgeLambdas: [{
        functionVersion: authFunction.currentVersion,
        eventType: LambdaEdgeEventType.ORIGIN_REQUEST,
        includeBody: false, // GET, no body
    }],
    allowedMethods: AllowedMethods.ALLOW_GET_HEAD_OPTIONS,
}
this.distribution.addBehavior('/getBook/*', new HttpOrigin(Fn.select(2, Fn.split('/', getBookUrl)),), getBehaviorOptions);

Regional consideration

The Lambda@Edge function must be in the us-east-1 Region (N. Virginia), as must the frontend stack. If you deploy the backend stack in another Region, you must pass the Lambda function URLs (and ARNs) to the frontend. Using a custom resource in CDK, it’s possible to create parameters in AWS Systems Manager Parameter Store in the us-east-1 Region containing this information. For more details, review the code in the GitHub repo.
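
A sketch of that approach, assuming a CDK custom resource that writes a function URL to Parameter Store in us-east-1 (the construct id and parameter name are hypothetical; see the repository for the actual implementation):

import { AwsCustomResource, AwsCustomResourcePolicy, PhysicalResourceId } from 'aws-cdk-lib/custom-resources';

// Write the function URL into Parameter Store in us-east-1 so that the
// frontend stack (deployed in that Region) can read it at deploy time.
new AwsCustomResource(this, 'GetBookUrlParameter', {
    onUpdate: {
        service: 'SSM',
        action: 'putParameter',
        parameters: {
            Name: '/protected-lambda-url/getBookUrl', // hypothetical parameter name
            Value: getBookUrl.url,
            Type: 'String',
            Overwrite: true,
        },
        region: 'us-east-1',
        physicalResourceId: PhysicalResourceId.of('getBookUrlParameter'),
    },
    policy: AwsCustomResourcePolicy.fromSdkCalls({
        resources: AwsCustomResourcePolicy.ANY_RESOURCE,
    }),
});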

Walkthrough

Before deploying the solution, follow the README in the GitHub repo and make sure to meet the prerequisites.

Deploying the solution

  1. From the solution directory, install the dependencies:
    npm install
  2. Start the deployment of the solution (it can take up to 15 minutes):
    cdk deploy --all
  3. Once the deployment succeeds, the outputs contain both the Lambda function URLs and the URLs “protected” behind the CloudFront distribution:
    Outputs

Testing the solution

  1. Using cURL, query the Lambda Function URL to retrieve all books (GetBooksFunctionURL in the CDK outputs):
    curl -v https://qwertyuiop1234567890.lambda-url.eu-west-1.on.aws/
    

    You should get the following output. As expected, it’s forbidden to directly access the Lambda function URL without the proper IAM authentication:

    Output

  2. Now query the “protected” URL to retrieve all books (GetBooksURL in the CDK outputs):
    curl -v https://q1w2e3r4t5y6u.cloudfront.net/getBooks
    

    This time you should get a HTTP 200 OK with an empty list as a result.

    Output

The logs of the Lambda@Edge function (search for “AuthFunctionAtEdge” in CloudWatch Logs in the closest Region) show:

  • The incoming request:
    Incoming request
  • The signed request, with the additional headers (Authorization, X-Amz-Security-Token, and X-Amz-Date). These headers make the difference when the Lambda URL receives the request and validates it with IAM.
    Headers

You can test the complete solution throughout the frontend, using the FrontendURL in the CDK outputs.

Cleaning up

The Lambda@Edge function is replicated in all Regions where you have users. You must delete the replicas before deleting the rest of the solution.

To delete the deployed resources, run the cdk destroy --all command from the solution directory.

Conclusion

This blog post shows how to protect a Lambda Function URL, configured with IAM authentication, using a CloudFront distribution and Lambda@Edge. CloudFront helps protect from DDoS, and the function at the edge adds appropriate headers to the request to authenticate it for Lambda.

Lambda function URLs provide a simpler way to invoke your function using HTTP calls. However, if you need more advanced features like user authentication with Amazon Cognito, request validation or rate throttling, consider using Amazon API Gateway.

For more serverless learning resources, visit Serverless Land.

AWS Weekly Roundup – AWS AppSync, AWS CodePipeline, Events and More – August 21, 2023

Post Syndicated from Marcia Villalba original https://aws.amazon.com/blogs/aws/aws-weekly-roundup-aws-appsync-aws-codepipeline-events-and-more-august-21-2023/

In a few days, I will board a plane towards the south. My tour around Latin America starts. But I won't be alone in this adventure: you can find some other News Blog authors, like Jeff or Seb, speaking at AWS Community Days and local events in Peru, Argentina, Chile, and Uruguay. If you see us, come and say hi. We would love to meet you.

Latam Community at re:Invent 2022

Last Week’s Launches
Here are some launches that got my attention during the previous week.

AWS AppSync now supports JavaScript for all resolvers in GraphQL APIs – Last year, we announced support for JavaScript pipeline resolvers in AppSync. Starting last week, developers can use JavaScript to write unit resolvers, pipeline resolvers, and AppSync functions that run on the AppSync JavaScript runtime.

AWS CodePipeline now supports GitLab – Now you can use your GitLab.com source repository to build, test, and deploy code changes using AWS CodePipeline, in addition to other providers like AWS CodeCommit, Bitbucket, GitHub.com, and GitHub Enterprise Server.

Amazon CloudWatch Agent adds support for OpenTelemetry traces and AWS X-Ray – With the new version of the agent, you can now collect metrics, logs, and traces with a single agent, not only for CloudWatch but also for OpenTelemetry and AWS X-Ray. This simplifies the installation, configuration, and management of telemetry collection.

New instance types: Amazon EC2 M7a and Amazon EC2 Hpc7a – The new Amazon EC2 M7a is a general purpose instance type powered by 4th Gen AMD EPYC processors. In the announcement blog, you can find all the specifics for this instance type. The new Amazon EC2 Hpc7a instances are also powered by 4th Gen AMD EPYC processors. These instance types are optimized for high performance computing, and Channy Yun wrote a blog post describing the different characteristics of the Amazon EC2 Hpc7a instance type.

AWS DeepRacer Educator Playbooks – Last week we introduced the AWS DeepRacer educator playbooks, a tool for educators to integrate foundational machine learning (ML) curriculum and labs into their classrooms. Educators can use these playbooks to easily upskill students in the basics of ML with autonomous vehicles.

For a full list of AWS announcements, be sure to keep an eye on the What’s New at AWS page.

Other AWS News
Some other updates and news that you might have missed:

Guide for using AWS Lambda to process Apache Kafka Streams – Julian Wood just published the most complete guide you can find on how to use Lambda with Apache Kafka. If you are an Amazon Kinesis user, don't worry. We've got you covered with this video series where you will find similar topics.

Using AWS Lambda with Kafka guide

The Official AWS Podcast – Listen each week for updates on the latest AWS news and deep dives into exciting use cases. There are also official AWS podcasts in several languages. Check out the ones in French, German, Italian, and Spanish.

AWS Open-Source News and Updates – This is a newsletter curated by my colleague Ricardo to bring you the latest open source projects, posts, events, and more.

Upcoming AWS Events
Check your calendars and sign up for these AWS events:

Join AWS Hybrid Cloud & Edge Day to learn how to deploy your applications in the everywhere cloud

AWS Global Summits – The 2023 AWS Summits season is almost over, with the last two in-person events in Mexico City (August 30) and Johannesburg (September 26).

AWS re:Invent (November 27–December 1) – Don't worry, because re:Invent season is coming closer. Join us to hear the latest from AWS, learn from experts, and connect with the global cloud community. Registration is now open.

AWS Community Days – Join a community-led conference run by AWS user group leaders in your region: Taiwan (August 26), Aotearoa (September 6), Lebanon (September 9), Munich (September 14), Argentina (September 16), Spain (September 23), and Chile (September 30). Check all the upcoming AWS Community Days here.

CDK Day (September 29) – A community-led, fully virtual event with tracks in English and in Spanish about CDK and related projects. Learn more on the website.

That’s all for this week. Check back next Monday for another Week in Review!

This post is part of our Week in Review series. Check back each week for a quick roundup of interesting news and announcements from AWS!

— Marcia

Amazon OpenSearch Serverless expands support for larger workloads and collections

Post Syndicated from Prashant Agrawal original https://aws.amazon.com/blogs/big-data/amazon-opensearch-serverless-expands-support-for-larger-workloads-and-collections/

We recently announced new enhancements to Amazon OpenSearch Serverless that allow it to scan and search source data of up to 6 TB. At launch, OpenSearch Serverless supported searching one or more indexes within a collection with a total combined size of up to 1 TB. With support for 6 TB of source data, you can now scale up your log analytics, machine learning applications, and ecommerce data more effectively. With OpenSearch Serverless, you can enjoy the benefits of these expanded limits without having to worry about sizing, monitoring your usage, or manually scaling an OpenSearch domain. If you are new to OpenSearch Serverless, refer to Log analytics the easy way with Amazon OpenSearch Serverless to get started.

The compute capacity that OpenSearch Serverless uses for data ingestion, search, and query is measured in OpenSearch Compute Units (OCUs). To support larger datasets, we have raised the OCU limit from 50 to 100 for indexing and search, including redundancy for Availability Zone outages and infrastructure failures. These OCUs are shared among various collections, each containing one or more indexes of varied sizes. You can configure maximum OCU limits for search and indexing independently using the AWS Command Line Interface (AWS CLI), SDK, or AWS Management Console to manage costs. Furthermore, you can have multiple 6 TB collections. If you wish to expand the OCU limits for indexes and collection sizes beyond 6 TB, reach out to us through AWS Support.

Set max OCU to 100

To get started, you must first change the OCU limits for indexing and search to 100. Note that you only pay for the resources consumed and not for the max OCU configuration.
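
If you prefer to script this step rather than use the console, the following is a minimal sketch using the AWS SDK for JavaScript v3 OpenSearch Serverless client and its UpdateAccountSettings operation. The package and property names are assumptions based on that API shape, so verify them against the current SDK documentation:

// Minimal sketch: raise the account-level OCU limits for indexing and search
// to 100. Client package and property names are assumed from the
// UpdateAccountSettings API; the Region is an example.
import {
    OpenSearchServerlessClient,
    UpdateAccountSettingsCommand,
} from '@aws-sdk/client-opensearchserverless';

const client = new OpenSearchServerlessClient({ region: 'us-east-1' });

export async function setMaxOcuLimits(): Promise<void> {
    await client.send(new UpdateAccountSettingsCommand({
        capacityLimits: {
            maxIndexingCapacityInOCU: 100,
            maxSearchCapacityInOCU: 100,
        },
    }));
}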

Ingesting the data

You can use the load generation scripts shared in the following workshop, or you can use your own application or data generator to create load. You can run multiple instances of these scripts to generate a burst in indexing requests. As seen in the following screenshot, in this test we created six indexes of approximately 1 TB or more.

Auto scaling resources in OpenSearch Serverless

The highlighted points in the following figures show how OpenSearch Serverless responds to increasing indexing traffic, from 2,000 to 7,000 bulk requests per second, by auto scaling the OCUs. Each bulk request includes 7,500 documents. OpenSearch Serverless uses various system signals to automatically scale out the OCUs based on your workload demand.

OpenSearch Serverless also scales down indexing OCUs when there is a decrease in your workload’s activity level. The highlighted points in the following figures show a gradual decrease in indexing traffic from 7,000 bulk ingest operations to less than 1,000 operations per second. OpenSearch Serverless reacts to the changes in load by reducing the number of OCUs.

Conclusion

We encourage you to take advantage of the 6 TB index support and put it to the test! Migrate your data, explore the improved throughput, and take advantage of the enhanced scaling capabilities. Our goal is to deliver a seamless and efficient experience that aligns with your requirements.

To get started, refer to Log analytics the easy way with Amazon OpenSearch Serverless. To get hands-on with OpenSearch Serverless, follow the Getting started with Amazon OpenSearch Serverless workshop, which has a step-by-step guide for configuring and setting up an OpenSearch Serverless collection.

If you have feedback about this post, share it in the comments section. If you have questions about this post, start a new thread on the Amazon OpenSearch Service forum or contact AWS Support.


About the author

Prashant Agrawal is a Sr. Search Specialist Solutions Architect with Amazon OpenSearch Service. He works closely with customers to help them migrate their workloads to the cloud and helps existing customers fine-tune their clusters to achieve better performance and save on cost. Before joining AWS, he helped various customers use OpenSearch and Elasticsearch for their search and log analytics use cases. When not working, you can find him traveling and exploring new places. In short, he likes doing Eat → Travel → Repeat.

Implementing the transactional outbox pattern with Amazon EventBridge Pipes

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/implementing-the-transactional-outbox-pattern-with-amazon-eventbridge-pipes/

This post is written by Sayan Moitra, Associate Solutions Architect, and Sangram Sonawane, Senior Solutions Architect.

Microservice architecture is an architectural style that structures an application as a collection of loosely coupled and independently deployable services. Services must communicate with each other to exchange messages and perform business operations. Ensuring message reliability while maintaining loose coupling between services is crucial for building robust and scalable systems.

This blog demonstrates how to use Amazon DynamoDB, a fully managed serverless key-value NoSQL database, and Amazon EventBridge, a managed serverless event bus, to implement reliable messaging for microservices using the transactional outbox pattern.

Business operations can span multiple systems or databases that must remain consistent and synchronized with each other. One approach often used in distributed systems or architectures where data must be replicated across multiple locations or components is dual writes. In a dual write scenario, when a write operation is performed on one system or database, the same data or event is also written to another system in real time or near real time. This ensures that both systems always have the same data, minimizing data inconsistencies.

Dual writes can also introduce data integrity challenges in distributed systems. Failure to update the database or to send events to other downstream systems after an initial system update can lead to data loss and leave the application in an inconsistent state. One design approach to overcome this challenge is to combine dual writes with the transactional outbox pattern.

Challenges with dual writes

Consider an online food ordering application to illustrate the challenges with dual writes. Once the user submits the order, the order service updates the order status in a persistent data store. The order status update should also be sent to notify_restaurant and order_tracking services using a message bus for asynchronous communication. After successfully updating the order status in the database, the order service writes the event to the message bus. The order_service performs a dual write operation of updating the database and publishing the event details on the message bus for other services to read.

This approach works until there are issues encountered in publishing the event to the message bus. Publishing events can fail for multiple reasons like a network error or a message bus outage. When failure occurs, the notify_restaurant and order_tracking service will not be notified of the order update event, leaving the system in an inconsistent state. Implementing the transactional outbox pattern with dual writes can help ensure reliable messaging between systems after a database update.

This illustration shows a sequence diagram for an online food ordering application and the challenges with dual writes:

Sequence diagram

Overview of the transactional outbox pattern

In the transactional outbox pattern, a second persistent data store is introduced to store the outgoing messages. In the online food order example, updating the database with order details and storing the event information in the outbox table becomes a single atomic transaction.

The transaction is only successful when writing to both the database and the outbox table succeeds. Any failure to write to the outbox table rolls back the transaction. A separate process then reads the event from the outbox table and publishes it on the message bus. Once the message is available on the message bus, it can be read by the notify_restaurant and order_tracking services. Combining the transactional outbox pattern with dual writes allows for data consistency across systems and reliable message delivery within the transactional context.
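
To make the atomic write concrete, here is a minimal sketch (table names, keys, and attributes are assumptions for illustration) that writes the order and the corresponding outbox entry in a single DynamoDB transaction, so either both writes succeed or both are rolled back:

// Sketch: write the order and its outbox event atomically.
// Table names, keys, and attributes are assumptions for illustration.
import { DynamoDBClient, TransactWriteItemsCommand } from '@aws-sdk/client-dynamodb';

const dynamodb = new DynamoDBClient({});

export async function placeOrder(orderId: string): Promise<void> {
    await dynamodb.send(new TransactWriteItemsCommand({
        TransactItems: [
            {
                // Business write: persist the order
                Put: {
                    TableName: 'orders',
                    Item: {
                        order_id: { S: orderId },
                        status: { S: 'Created' },
                    },
                },
            },
            {
                // Outbox write: record the event to be published later
                Put: {
                    TableName: 'orders_outbox',
                    Item: {
                        event_id: { S: `${orderId}#OrderCreated` },
                        event_type: { S: 'OrderCreated' },
                        payload: { S: JSON.stringify({ orderId, status: 'Created' }) },
                    },
                },
            },
        ],
    }));
}

In the implementation described in the rest of this post, DynamoDB Streams removes the need for a separate outbox table, but the transactional guarantee illustrated here is the same idea.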

The following illustration shows a sequence diagram for an online food ordering application with transactional outbox pattern for reliable message delivery.

Sequence diagram 2

Implementing the transaction outbox pattern

DynamoDB includes a feature called DynamoDB Streams to capture a time-ordered sequence of item-level modifications in the DynamoDB table and stores this information in a log for up to 24 hours. Applications can access this log and view the data items as they appeared before and after they were modified, in near real time.

Whenever an application creates, updates, or deletes items in the table, DynamoDB Streams writes a stream record with the primary key attributes of the items that were modified. A stream record contains information about a data modification to a single item in a DynamoDB table. DynamoDB Streams writes stream records in near real time and these can be consumed for processing based on the contents. Enabling this feature removes the need to maintain a separate outbox table and lowers the management and operational overhead.

EventBridge Pipes connects event producers to consumers with options to transform, filter, and enrich messages. EventBridge Pipes can integrate with DynamoDB Streams to capture table events without writing any code. There is no need to write and maintain a separate process to read from the stream. EventBridge Pipes also supports retries, and any failed events can be routed to a dead-letter queue (DLQ) for further analysis and reprocessing.

EventBridge polls shards in the DynamoDB stream for records and invokes the pipe as soon as records are available. You can configure the pipe to read records from DynamoDB only after it has gathered a specified batch size or when the batch window expires. EventBridge Pipes maintains the order of records from the data stream when sending that data to the destination. You can optionally filter or enrich these records before sending them to a target for processing.
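
The sample application below uses AWS SAM, but as a rough CDK sketch (construct names, keys, and the event detail type are assumptions), the wiring of a DynamoDB stream to an EventBridge event bus through a pipe could look like this:

// Sketch: DynamoDB table with a stream, connected to an EventBridge event bus
// through an EventBridge Pipe. Names and the pipe role policy are assumed.
import { aws_dynamodb as dynamodb, aws_events as events, aws_iam as iam, aws_pipes as pipes } from 'aws-cdk-lib';
import { Construct } from 'constructs';

export function buildOutboxPipe(scope: Construct) {
    const table = new dynamodb.Table(scope, 'OrdersTable', {
        partitionKey: { name: 'order_id', type: dynamodb.AttributeType.STRING },
        stream: dynamodb.StreamViewType.NEW_AND_OLD_IMAGES,   // enables DynamoDB Streams
    });

    const bus = new events.EventBus(scope, 'OrdersBus');

    // Role assumed by the pipe to read the stream and put events on the bus
    const pipeRole = new iam.Role(scope, 'PipeRole', {
        assumedBy: new iam.ServicePrincipal('pipes.amazonaws.com'),
    });
    table.grantStreamRead(pipeRole);
    bus.grantPutEventsTo(pipeRole);

    new pipes.CfnPipe(scope, 'OutboxPipe', {
        roleArn: pipeRole.roleArn,
        source: table.tableStreamArn!,
        sourceParameters: {
            dynamoDbStreamParameters: { startingPosition: 'LATEST', batchSize: 1 },
        },
        target: bus.eventBusArn,
        targetParameters: {
            eventBridgeEventBusParameters: { detailType: 'OrderUpdated', source: 'order.service' },
        },
    });
}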

Example overview

The following diagram illustrates the implementation of the transactional outbox pattern with DynamoDB Streams and EventBridge Pipes. Amazon API Gateway is used to trigger a DynamoDB operation via a POST request. The change in the DynamoDB table triggers an EventBridge event bus via Amazon EventBridge Pipes. This event bus invokes the Lambda functions through an SQS queue, depending on the filters applied.

Architecture overview

  1. In this sample implementation, Amazon API Gateway makes a POST call to the DynamoDB table for database updates. Amazon API Gateway supports CRUD operations for Amazon DynamoDB without the need for a compute layer for database calls.
  2. DynamoDB Streams is enabled on the table, which captures a time-ordered sequence of item-level modifications in the DynamoDB table in near real time.
  3. EventBridge Pipes integrates with DynamoDB Streams to capture the events and can optionally filter and enrich the data before it is sent to a supported target. In this example, events are sent to Amazon EventBridge, which acts as a message bus. This can be replaced with any of the supported targets as detailed in Amazon EventBridge Pipes targets. A DLQ can be configured to handle any failed events, which can be analyzed and retried.
  4. Consumers listening to the event bus receive messages. You can optionally fan out and deliver the events to multiple consumers and apply filters. You can configure a DLQ to handle any failures and retries, as shown in the sketch after this list.
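
Continuing the CDK sketch from the previous section (again with assumed names and event fields), a consumer can be wired up with an EventBridge rule that filters events from the bus and delivers them to an SQS queue, with a DLQ for failed deliveries:

// Sketch: a filtered consumer of the event bus. The event source and
// detail-type values used in the pattern are assumptions for illustration.
import { Duration, aws_events as events, aws_events_targets as targets, aws_sqs as sqs } from 'aws-cdk-lib';
import { Construct } from 'constructs';

export function buildOrderTrackingConsumer(scope: Construct, bus: events.EventBus) {
    const dlq = new sqs.Queue(scope, 'OrderTrackingDlq');
    const queue = new sqs.Queue(scope, 'OrderTrackingQueue', {
        visibilityTimeout: Duration.seconds(60),
    });

    new events.Rule(scope, 'OrderCreatedRule', {
        eventBus: bus,
        eventPattern: {
            source: ['order.service'],            // matches the pipe's source field above
            detailType: ['OrderUpdated'],
        },
        targets: [
            new targets.SqsQueue(queue, {
                deadLetterQueue: dlq,             // undeliverable events land here
                retryAttempts: 2,
            }),
        ],
    });
}

A Lambda function can then consume from this queue through an event source mapping, as in the deployed sample.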

Prerequisites

  1. AWS SAM CLI, version 1.85.0 or higher
  2. Python 3.10

Deploying the example application

  1. Clone the repository:
    git clone https://github.com/aws-samples/amazon-eventbridge-pipes-dynamodb-stream-transactional-outbox.git
  2. Change to the root directory of the project and run the following AWS SAM CLI commands:
    cd amazon-eventbridge-pipes-dynamodb-stream-transactional-outbox               
    sam build
    sam deploy --guided
    
  3. Enter the name for your stack during guided deployment. During the deploy process, select the default option for all the additional steps.
    SAM deployment
  4. The resources are deployed.

Testing the application

Once the deployment is complete, the output provides the API Gateway URL, which you can use for testing. To test the application, use Postman to make a POST call to the API Gateway prod URL:

Postman

You can also test using the curl command:

curl -s --header "Content-Type: application/json" \
  --request POST \
  --data '{"Status":"Created"}' \
  <API_ENDPOINT>

This produces the following output:

Expected output

To verify that the order details are updated in the DynamoDB table, run the following command to perform a scan operation on the table:

aws dynamodb scan \
    --table-name <DynamoDB Table Name>

Handling failures

DynamoDB Streams captures a time-ordered sequence of item-level modifications in the DynamoDB table and stores this information in a log for up to 24 hours. If EventBridge cannot read from the DynamoDB stream, due to a misconfiguration for example, the records remain available in the log for 24 hours. Once EventBridge is reconnected, it retrieves all undelivered records from the last 24 hours. For integration issues between EventBridge Pipes and the target application, all failed messages can be sent to the DLQ for reprocessing at a later time.

Cleaning up

To clean up your AWS-based resources, run the following AWS SAM CLI command, answering “y” to all questions:

sam delete --stack-name <stack_name>

Conclusion

Reliable interservice communication is an important consideration in microservice design, especially when faced with dual writes. Combining the transactional outbox pattern with dual writes provides a robust way of improving message reliability.

This blog demonstrates an architecture pattern to tackle the challenge of dual writes by combining it with the transactional outbox pattern using DynamoDB and EventBridge Pipes. This solution provides a no-code approach with AWS Managed Services, reducing management and operational overhead.

For more serverless learning resources, visit Serverless Land.

Integrating IBM MQ with Amazon SQS and Amazon SNS using Apache Camel

Post Syndicated from Pascal Vogel original https://aws.amazon.com/blogs/compute/integrating-ibm-mq-with-amazon-sqs-and-amazon-sns-using-apache-camel/

This post is written by Joaquin Rinaudo, Principal Security Consultant and Gezim Musliaj, DevOps Consultant.

IBM MQ is a message-oriented middleware (MOM) product used by many enterprise organizations, including global banks, airlines, and healthcare and insurance companies.

Customers often ask us for guidance on how they can integrate their existing on-premises MOM systems with new applications running in the cloud. They’re looking for a cost-effective, scalable and low-effort solution that enables them to send and receive messages from their cloud applications to these messaging systems.

This blog post shows how to set up a bi-directional bridge from on-premises IBM MQ to Amazon MQ, Amazon Simple Queue Service (Amazon SQS), and Amazon Simple Notification Service (Amazon SNS).

This allows your producer and consumer applications to integrate using fully managed AWS messaging services and Apache Camel. Learn how to deploy such a solution and how to test the running integration using SNS, SQS, and a demo IBM MQ cluster environment running on Amazon Elastic Container Service (ECS) with AWS Fargate.

This solution can also be used as part of a step-by-step migration using the approach described in the blog post Migrating from IBM MQ to Amazon MQ using a phased approach.

Solution overview

The integration consists of an Apache Camel broker cluster that bi-directionally integrates an IBM MQ system and target systems, such as Amazon MQ running ActiveMQ, SNS topics, or SQS queues.

In the following example, AWS services, in this case AWS Lambda and SQS, receive messages published to IBM MQ via an SNS topic:

Solution architecture overview for sending messages

  1. The cloud message consumers (Lambda and SQS) subscribe to the solution’s target SNS topic.
  2. The Apache Camel broker connects to IBM MQ using secrets stored in AWS Secrets Manager and reads new messages from the queue using IBM MQ’s Java library. Only IBM MQ messages are supported as a source.
  3. The Apache Camel broker publishes these new messages to the target SNS topic. It uses the Amazon SNS Extended Client Library for Java to store any messages larger than 256 KB in an Amazon Simple Storage Service (Amazon S3) bucket.
  4. Apache Camel stores any message that cannot be delivered to SNS after two retries in an S3 dead letter queue bucket.

The next diagram demonstrates how the solution sends messages back from an SQS queue to IBM MQ:

Solution architecture overview for sending messages back to IBM MQ

  1. A sample message producer using Lambda sends messages to an SQS queue. It uses the Amazon SQS Extended Client Library for Java to send messages larger than 256 KB.
  2. The Apache Camel broker receives the messages published to SQS, using the SQS Extended Client Library if needed.
  3. The Apache Camel broker sends the message to the IBM MQ target queue.
  4. As before, the broker stores messages that cannot be delivered to IBM MQ in the S3 dead letter queue bucket.

A phased live migration consists of two steps:

  1. Deploy the broker service to allow reading messages from and writing to existing IBM MQ queues.
  2. Once the consumer or producer is migrated, migrate its counterpart to the newly selected service (SNS or SQS).

Next, you will learn how to set up the solution using the AWS Cloud Development Kit (AWS CDK).

Deploying the solution

Prerequisites

  • AWS CDK
  • TypeScript
  • Java
  • Docker
  • Git
  • Yarn

Step 1: Cloning the repository

Clone the repository using git:

git clone https://github.com/aws-samples/aws-ibm-mq-adapter

Step 2: Setting up test IBM MQ credentials

This demo uses IBM MQ’s mutual TLS authentication. To do this, you must generate X.509 certificates and store them in AWS Secrets Manager by running the following commands in the app folder:

  1. Generate X.509 certificates:
    ./deploy.sh generate_secrets
  2. Set up the secrets required for the Apache Camel broker (replace <integration-name> with, for example, dev):
    ./deploy.sh create_secrets broker <integration-name>
  3. Set up secrets for the mock IBM MQ system:
    ./deploy.sh create_secrets mock
  4. Update the cdk.json file with the secrets ARN output from the previous commands:
    • IBM_MOCK_PUBLIC_CERT_ARN
    • IBM_MOCK_PRIVATE_CERT_ARN
    • IBM_MOCK_CLIENT_PUBLIC_CERT_ARN
    • IBMMQ_TRUSTSTORE_ARN
    • IBMMQ_TRUSTSTORE_PASSWORD_ARN
    • IBMMQ_KEYSTORE_ARN
    • IBMMQ_KEYSTORE_PASSWORD_ARN

If you are using your own IBM MQ system and already have X.509 certificates available, you can use the script to upload those certificates to AWS Secrets Manager after running the script.

Step 3: Configuring the broker

The solution deploys two brokers, one to read messages from the test IBM MQ system and one to send messages back to it. A separate Apache Camel cluster is used per integration to make better use of Auto Scaling functionality and to avoid issues across the different integration operations (consuming and producing messages).

Update the cdk.json file with the following values:

  • accountId: AWS account ID to deploy the solution to.
  • region: name of the AWS Region to deploy the solution to.
  • defaultVPCId: specify a VPC ID for an existing VPC in the AWS account where the broker and mock are deployed.
  • allowedPrincipals: add your account ARN (e.g., arn:aws:iam::123456789012:root) to allow this AWS account to send messages to and receive messages from the broker. You can use this parameter to set up cross-account relationships for both SQS and SNS integrations and support multiple consumers and producers.

Step 4: Bootstrapping and deploying the solution

  1. Make sure you have the correct AWS_PROFILE and AWS_REGION environment variables set for your development account.
  2. Run yarn cdk bootstrap --qualifier mq aws://<account-id>/<region> to bootstrap CDK.
  3. Run yarn install to install CDK dependencies.
  4. Finally, execute yarn cdk deploy '*-dev' --qualifier mq --require-approval never to deploy the solution to the dev environment.

Step 5: Testing the integrations

Use AWS Systems Manager Session Manager and port forwarding to establish tunnels to the test IBM MQ instance to access the web console and send messages manually. For more information on port forwarding, see Amazon EC2 instance port forwarding with AWS System Manager.

  1. In a command line terminal, make sure you have the correct AWS_PROFILE and AWS_REGION environment variables set for your development account.
  2. In addition, set the following environment variables:
    • IBM_ENDPOINT: endpoint for IBM MQ. Example: network load balancer for IBM mock mqmoc-mqada-1234567890.elb.eu-west-1.amazonaws.com.
    • BASTION_ID: instance ID for the bastion host. You can retrieve this output from Step 4: Bootstrapping and deploying the solution listed after the mqBastionStack deployment.

    Use the following command to set the environment variables:

    export IBM_ENDPOINT=mqmoc-mqada-1234567890.elb.eu-west-1.amazonaws.com
    export BASTION_ID=i-0a1b2c3d4e5f67890
  3. Run the script test/connect.sh.
  4. Log in to the IBM web console via https://127.0.0.1:9443/admin using the default IBM user (admin) and the password stored in AWS Secrets Manager as mqAdapterIbmMockAdminPassword.

Sending data from IBM MQ and receiving it in SNS:

  1. In the IBM MQ console, access the local queue manager QM1 and DEV.QUEUE.1.
  2. Send a message with the content Hello AWS. This message will be processed by AWS Fargate and published to SNS.
  3. Access the SQS console and choose the snsIntegrationStack-dev-2 prefix queue. This is an SQS queue subscribed to the SNS topic for testing.
  4. Select Send and receive message.
  5. Select Poll for messages to see the Hello AWS message previously sent to IBM MQ.

Sending data back from Amazon SQS to IBM MQ:

  1. Access the SQS console and choose the queue with the prefix sqsPublishIntegrationStack-dev-3-dev.
  2. Select Send and receive messages.
  3. For Message Body, add Hello from AWS.
  4. Choose Send message.
  5. In the IBM MQ console, access the local queue manager QM1 and DEV.QUEUE.2 to find your message listed under this queue.

Step 6: Cleaning up

Run cdk destroy '*-dev' to destroy the resources deployed as part of this walkthrough.

Conclusion

In this blog, you learned how you can exchange messages between IBM MQ and your cloud applications using Amazon SQS and Amazon SNS.

If you’re interested in getting started with your own integration, follow the README file in the GitHub repository. If you’re migrating existing applications using industry-standard APIs and protocols such as JMS, NMS, or AMQP 1.0, consider integrating with Amazon MQ using the steps provided in the repository.

If you’re interested in running Apache Camel in Kubernetes, you can also adapt the architecture to use Apache Camel K instead.

For more serverless learning resources, visit Serverless Land.