Tag Archives: contributed

Managing federated schema with AWS Lambda and Amazon S3

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/managing-federated-schema-with-aws-lambda-and-amazon-s3/

This post is written by Krzysztof Lis, Senior Software Development Engineer, IMDb.

GraphQL schema management is one of the biggest challenges in the federated setup. IMDb has 19 subgraphs (graphlets), each of which owns and publishes its part of the schema as part of an independent CI/CD pipeline.

To manage federated schema effectively, IMDb introduced a component called Schema Manager. This component is responsible for fetching the latest schema changes and validating them before publishing the composed schema to the Gateway.

Part 1 presents the migration from a monolithic REST API to a federated GraphQL (GQL) endpoint running on AWS Lambda. This post focuses on schema management in federated GQL systems. It shows the challenges that the teams faced when designing this component and how they addressed them. It also shares best practices and processes for schema management, based on our experience.

Comparing monolithic and federated GQL schema

In the standard, monolithic implementation of GQL, there is a single file used to manage the whole schema. This makes it easier to ensure that there are no conflicts between the new changes and the earlier schema. Everything can be validated at build time and there is no risk that external changes break the endpoint at runtime.

This is not true for the federated GQL endpoint. The gateway fetches service definitions from the graphlets at runtime and composes the overall schema. If any of the graphlets introduces a breaking change, the gateway fails to compose the schema and cannot serve requests.

The more graphlets we federate to, the higher the risk of introducing a breaking change. In enterprise-scale systems, you need a component that protects the production environment from potential downtime. It must notify graphlet owners that they are about to introduce a breaking change, preferably during development before releasing the change.

Federated schema challenges

There are other aspects of handling federated schema to consider. If you use AWS Lambda, the default schema composition increases the gateway startup time, which impacts the endpoint’s performance. If any of the declared graphlets are unavailable at the time of schema composition, there may be gateway downtime or at least an incomplete overall schema. If schemas are pre-validated and stored in a highly available store such as Amazon S3, you mitigate both of these issues.

Another challenge is schema consistency. Ideally, you want to propagate the changes to the gateway in a timely manner after a schema change is published. You also need to consider handling field deprecation and field transfer across graphlets (change of ownership). To catch potential errors early, the system should support dry-run-like functionality that will allow developers to validate changes against the current schema during the development stage.

The Schema Manager


To mitigate these challenges, the Gateway/Platform team introduced a Schema Manager component to the workload. Whenever there’s a deployment in any of the graphlet pipelines, the schema validation process is triggered.

Schema Manager fetches the most recent sub-schemas from all the graphlets and attempts to compose an overall schema. If there are no errors or conflicts, the change is approved and can be safely promoted to production.

In the case of a validation failure, the breaking change is blocked in the graphlet deployment pipeline and the owning team must resolve the issue before they can proceed with the change. Deployments of graphlet code changes also depend on this approval step, so there is no risk of the schema and backend logic getting out of sync when the approval step blocks a schema change.

Integration with the Gateway

To handle versioning of the composed schema, a manifest file stores the locations of the latest approved set of graphlet schemas. The manifest is a JSON file mapping the name of the graphlet to the S3 key of the schema file, in addition to the endpoint of the graphlet service.

The file name of each graphlet schema is a hash of the schema. The Schema Manager pulls the current manifest and uses the hash of the validated schema to determine if it has changed:

{
   "graphlets": {
     "graphletOne": {
        "schemaPath": "graphletOne/1a3121746e75aafb3ca9cccb94f23d89",
        "endpoint": "arn:aws:lambda:us-east-1:123456789:function:GraphletOne"
     },
     "graphletTwo": { 
        "schemaPath": "graphletTwo/213362c3684c89160a9b2f40cd8f191a",
        "endpoint": "arn:aws:lambda:us-east-1:123456789:function:GraphletTwo"
     },
     ...
  }
}
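For illustration, this is a minimal Python sketch of how a schema updater could use the manifest to decide whether anything changed before publishing. The bucket name, manifest key, function name, and choice of hash are assumptions for the example, not details of IMDb's implementation:

import hashlib
import json

import boto3

s3 = boto3.client("s3")
BUCKET = "schema-store-bucket"   # assumed bucket name
MANIFEST_KEY = "manifest.json"   # assumed manifest key


def update_manifest_if_changed(graphlet_name, schema_sdl, endpoint_arn):
    # The schema file name is a hash of the schema contents.
    schema_hash = hashlib.md5(schema_sdl.encode("utf-8")).hexdigest()
    schema_path = f"{graphlet_name}/{schema_hash}"

    # Pull the current manifest and compare the stored path with the new hash.
    manifest = json.loads(s3.get_object(Bucket=BUCKET, Key=MANIFEST_KEY)["Body"].read())
    if manifest["graphlets"].get(graphlet_name, {}).get("schemaPath") == schema_path:
        return  # schema unchanged, nothing to publish

    # Upload the new schema file, then the updated manifest.
    s3.put_object(Bucket=BUCKET, Key=schema_path, Body=schema_sdl.encode("utf-8"))
    manifest["graphlets"][graphlet_name] = {"schemaPath": schema_path, "endpoint": endpoint_arn}
    s3.put_object(Bucket=BUCKET, Key=MANIFEST_KEY, Body=json.dumps(manifest).encode("utf-8"))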

Based on these details, the Gateway fetches the graphlet schemas from S3 as part of service startup and stores them in the in-memory cache. It then polls for updates every 5 minutes.

Using S3 as the schema store addresses the latency, availability, and validation concerns of fetching schemas directly from the graphlets at runtime.
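On the Gateway side, a schema loader might read the same manifest and populate the cache roughly as in this sketch (again, the bucket and key names are assumptions); the function would run once at startup and then from the 5-minute polling timer:

import json

import boto3

s3 = boto3.client("s3")
BUCKET = "schema-store-bucket"  # assumed bucket name

schema_cache = {}


def refresh_schemas():
    """Load the latest approved graphlet schemas into the in-memory cache."""
    manifest = json.loads(s3.get_object(Bucket=BUCKET, Key="manifest.json")["Body"].read())
    for name, entry in manifest["graphlets"].items():
        schema_object = s3.get_object(Bucket=BUCKET, Key=entry["schemaPath"])
        schema_cache[name] = {
            "sdl": schema_object["Body"].read().decode("utf-8"),
            "endpoint": entry["endpoint"],
        }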

Eventual schema consistency

Since there are multiple graphlets that can be updated at the same time, there is no guarantee that one schema validation workflow will not overwrite the results of another.

For example:

  1. SchemaUpdater 1 runs for graphlet A.
  2. SchemaUpdater 2 runs for graphlet B.
  3. SchemaUpdater 1 pulls the manifest v1.
  4. SchemaUpdater 2 pulls the manifest v1.
  5. SchemaUpdater 1 uploads manifest v2 with changes to graphlet A.
  6. SchemaUpdater 2 uploads manifest v3, overwriting the changes in v2; it contains only the changes to graphlet B.

This is not a critical issue: no matter which version of the manifest wins in this scenario, both manifests represent a valid schema and the gateway does not have any issues. When SchemaUpdater runs for graphlet A again, it sees that the current manifest does not contain the changes uploaded before, so it uploads them again.

To reduce the risk of schema inconsistency, Schema Manager polls for schema changes every 15 minutes and the Gateway polls every 5 minutes.

Local schema development

Schema validation runs automatically for any graphlet change as a part of deployment pipelines. However, that feedback loop happens too late for an efficient schema development cycle. To reduce friction, the team uses a tool that performs this validation step without publishing any changes. Instead, it outputs the results of the validation to the developer.

Schema validation

The Schema Validator script can be added as a dependency to any of the graphlets. It fetches the graphlet’s schema definition, described in Schema Definition Language (SDL), and passes it as a payload to Schema Manager. Schema Manager performs the full schema validation and returns any validation errors (or success codes) to the user.
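As a rough sketch of what such a validator could look like, the following Python snippet reads a local SDL file and invokes Schema Manager; the function name and payload shape are assumptions for the example:

import json

import boto3

lambda_client = boto3.client("lambda")


def validate_schema(sdl_path, graphlet_name):
    """Send the local SDL to Schema Manager and print any validation errors."""
    with open(sdl_path) as sdl_file:
        sdl = sdl_file.read()

    response = lambda_client.invoke(
        FunctionName="SchemaManager",  # assumed function name
        Payload=json.dumps({"graphlet": graphlet_name, "schema": sdl}).encode("utf-8"),
    )
    result = json.loads(response["Payload"].read())
    if result.get("errors"):
        for error in result["errors"]:
            print(f"Validation error: {error}")
    else:
        print("Schema is valid against the current federated graph")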

Best practices for federated schema development

Schema Manager addresses the most critical challenges that come from federated schema development. However, there are other issues to consider when organizing schema work processes at your organization.

It is crucial for the long-term maintainability of the federated schema to maintain a high-quality bar for incoming schema changes. Since there are multiple owners of sub-schemas, it’s good to keep a communication channel open between the graphlet teams so that they can provide feedback on planned schema changes.

It is also good to extract common parts of the graph to a shared library and generate typings and resolvers from it. This lets the graphlet developers benefit from strongly typed code. We use open-source libraries to do this.

Conclusion

Schema management is a non-trivial challenge in federated GQL systems. The highest risk to your system availability is a graphlet introducing a breaking schema change, after which your system cannot serve any requests. There is also the problem of a delayed feedback loop for engineers working on schema changes, and the impact of runtime schema composition on service latency.

IMDb addresses these issues with a Schema Manager component running on Lambda, using S3 as the schema store. We have put guardrails in our deployment pipelines to ensure that no breaking change is deployed to production. Our graphlet teams are using common schema libraries with automatically generated typings and review the planned schema changes during schema working group meetings to streamline the development process.

These factors enable us to have stable and highly maintainable federated graphs, with automated change management. Next, our solution must provide mechanisms to prevent still-in-use fields from getting deleted and to allow schema changes coordinated between multiple graphlets. There are still plenty of interesting challenges to solve at IMDb.

For more serverless learning resources, visit Serverless Land.

Building federated GraphQL on AWS Lambda

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/building-federated-graphql-on-aws-lambda/

This post is written by Krzysztof Lis, Senior Software Development Engineer, IMDb.

IMDb is the world’s most popular source for movie, TV, and celebrity content. It deals with a complex business domain including movies, shows, celebrities, industry professionals, events, and a distributed ownership model. There are clear boundaries between systems and data owned by various teams.

Historically, IMDb used a monolithic REST gateway system to serve clients. Over the years, it became challenging to manage effectively: there are thousands of files, business logic that lacks clear ownership, and unreliable integration tests tied to the data. To fix this, the team adopted GraphQL (GQL), a query language for APIs that lets you request only the data that you need, and a runtime for fulfilling those queries with your existing data.

It’s common to implement this protocol by creating a monolithic service that hosts the complete schema and resolves all fields requested by the client. This works well for applications with a relatively small domain and clear, single-threaded ownership. IMDb chose the federated approach, which allows us to federate GQL requests to all existing data teams. This post shows how to build federated GraphQL on AWS Lambda.

Overview

This article covers the migration from a monolithic REST API and monolithic frontend to a federated backend system powering a modern frontend. It enumerates the challenges in the earlier system and explains how federated GraphQL addresses them.

I present the architecture overview and describe the decisions made when designing the new system. I also present our experiences with developing and running high-traffic and high-visibility pages on the new endpoint: improvements in IMDb’s ownership model and development lifecycle, in addition to ease of scaling.

Comparing GraphQL with federated GraphQL


Federated GraphQL allows you to combine GraphQL APIs from multiple microservices into a single API. Clients can make a single request and fetch data from multiple sources, including joining across data sources, without additional support from the source services.

This is an example schema fragment:

type TitleQuote {
  "Quote ID"
  id: ID!
  "Is this quote a spoiler?"
  isSpoiler: Boolean!
  "The lines that make up this quote"
  lines: [TitleQuoteLine!]!
  "Votes from users about this quote..."
  interestScore: InterestScore!
  "The language of this quote"
  language: DisplayableLanguage!
}
"A specific line in the Title Quote. Can be a verbal line with characters speaking or stage directions"
type TitleQuoteLine {
  "The characters who speak this line, e.g.  'Rick'. Not required: a line may be non-verbal"
  characters: [TitleQuoteCharacter!]
  "The body of the quotation line, e.g 'Here's looking at you kid. '.  Not required: you may have stage directions with no dialogue."
  text: String
  "Stage direction, e.g. 'Rick gently places his hand under her chin and raises it so their eyes meet'. Not required."
  stageDirection: String
}

This is an example monolithic query: “Get the top 2 quotes from The A-Team (title identifier: tt0084967)”:

{ 
  title(id:"tt0084967"){ 
    quotes(first:2){ 
      lines { text } 
    } 
  }
}

Here is an example response:

{ 
  "data": { 
    "title": { 
      "quotes": { 
        "lines": [
          { 
            "text": "I love it when a plan comes together!"
          },
          {
            "text": "10 years ago a crack commando unit was sent to prison by a military court for a crime they didn't commit..."
          }
        ]
      } 
    }
  }
}

This is an example federated query: “What is Jackie Chan (id nm0000329) known for? Get the text, rating and image for each title”

{
  name(id: "nm0000329") {
    knownFor(first: 4) {
      title {
        titleText {
          text
        }
        ratingsSummary {
          aggregateRating
        }
        primaryImage {
          url
        }
      }
    }
  }
}

The monolithic example fetches quotes from a single service. In the federated example, knownFor, titleText, ratingsSummary, and primaryImage are fetched transparently by the gateway from separate services. IMDb federates the requests across 19 graphlets, which are transparent to the clients that call the gateway.

Architecture overview


IMDb supports three channels for users: website, iOS, and Android apps. Each of the channels can request data from a single federated GraphQL gateway, which federates the request to multiple graphlets (sub-graphs). Each of the invoked graphlets returns a partial response, which the gateway merges with responses returned by other graphlets. The client receives only the data that they requested, in the shape specified in the query. This can be especially useful when the developers must be conscious of network usage (for example, over mobile networks).

This is the architecture diagram:

Architecture diagram

There are two core components in the architecture: the Gateway and Schema Manager, which run on Lambda. The Gateway is a Node.js-based Lambda function that is built on top of open-source Apollo Gateway code. It is customized with code responsible predominantly for handling authentication, caching, metrics, and logging.

Other noteworthy components are the Upstream Graphlets and an A/B Testing Service that enables A/B tests in the graph. The Gateway is connected to an Application Load Balancer, which is protected by AWS WAF and fronted by Amazon CloudFront as our CDN layer. The Gateway uses Amazon ElastiCache for Redis to cache partial responses coming from graphlets. All logs and metrics generated by the system are collected in Amazon CloudWatch.

Choosing the compute platform

The workload uses Lambda because it scales on demand. IMDb uses Lambda’s Provisioned Concurrency to reduce cold start latency. The infrastructure is abstracted away, so there is no need for us to manage our own capacity or handle patches.

Additionally, Application Load Balancer (ALB) has support for directing HTTP traffic to Lambda. This is an alternative to API Gateway. The workload does not need many of the features that API Gateway provides, since the gateway has a single endpoint, making ALB a better choice. ALB also supports AWS WAF.

Using Lambda, the team designed a way to fetch schema updates without needing to fetch the schema with every request. This is addressed with the Schema Manager component, which improves latency and the overall customer experience.

Integration with legacy data services

The main purpose of the federated GQL migration is to deprecate a monolithic service that aggregates data from multiple backend services before sending it to the clients. Some of the data in the federated graph comes from brand new graphlets that are developed with the new system in mind.

However, much of the data powering the GQL endpoint is sourced from the existing backend services. One benefit of running on Lambda is the flexibility to choose the runtime environment that works best with the underlying data sources and data services.

For graphlets relying on legacy services, IMDb uses lightweight Java Lambda functions that use the provided Java client libraries. They connect to legacy backends via AWS PrivateLink, fetch the data, and shape it into the format expected by the GQL request. For modern graphlets, we recommend that graphlet teams explore Node.js as the first option due to improved performance and ease of development.

Caching

The gateway supports two caching modes for graphlets: static and dynamic. Static caching allows graphlet owners to specify a default TTL for responses returned by a graphlet. Dynamic caching calculates TTL based on a custom caching extension returned with the partial response. It allows graphlet owners to decide on the optimal TTL for content returned by their graphlet. For example, it can be 24 hours for queries containing only static text.
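As an illustration of the idea only, a gateway could resolve the TTL for a partial response along these lines; the extension shape, field names, and default value are assumptions for the sketch, not the contract used by IMDb:

DEFAULT_TTL_SECONDS = 60  # assumed fallback when a graphlet has no static TTL configured


def resolve_ttl(partial_response, graphlet_config):
    """Pick a TTL: dynamic if the graphlet returned a caching extension, else its static default."""
    dynamic_ttl = partial_response.get("extensions", {}).get("cacheControl", {}).get("ttlSeconds")
    if graphlet_config.get("dynamic_caching") and dynamic_ttl is not None:
        return int(dynamic_ttl)
    return graphlet_config.get("static_ttl", DEFAULT_TTL_SECONDS)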

Permissions

Each of the graphlets runs in a separate AWS account. The graphlet accounts grant the gateway AWS account (as AWS principal) invoke permissions on the graphlet Lambda function. This uses a cross-account IAM role in the development environment that is assumed by stacks deployed in engineers’ personal accounts.
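For example, a graphlet account could grant this with a resource-based policy on its function. Here is a sketch using the AWS SDK for Python; the function name and gateway account ID are placeholders:

import boto3

lambda_client = boto3.client("lambda")

# Allow the gateway's AWS account (placeholder ID) to invoke this graphlet function.
lambda_client.add_permission(
    FunctionName="GraphletOne",        # placeholder graphlet function name
    StatementId="AllowGatewayInvoke",
    Action="lambda:InvokeFunction",
    Principal="123456789012",          # placeholder gateway account ID
)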

Experience with developing on federated GraphQL

The migration to federated GraphQL delivered on expected results. We moved the business logic closer to the teams that have the right expertise – the graphlet teams. At the same time, a dedicated platform team owns and develops the core technical pieces of the ecosystem. This includes the Gateway and Schema Manager, in addition to the common libraries and CDK constructs that can be reused by the graphlet teams. As a result, there is a clear ownership structure, which is aligned with the way IMDb teams are organized.

For the platform team’s operational excellence, this reduced the number of support tickets related to business logic. Previously, these were routed to the appropriate data service team with a delay. Integration tests are now stable and independent of the underlying data, which increases our confidence in the Continuous Deployment process. It also eliminates changing data as a potential root cause for failing tests and blocked pipelines.

The graphlet teams now have full ownership of the data available in the graph. They own the partial schema and the resolvers that provide data for that part of the graph. Since they have the most expertise in that area, the potential issues are identified early on. This leads to a better customer experience and overall health of the system.

The web and app developer groups were also impacted by the migration. Tools like GraphQL Playground and Apollo Client eased the learning curve, and the teams closed the gap quickly and started delivering new features.

One of the main pages at IMDb.com is the Title Page (for example, Shutter Island). This was successfully migrated to use the new GQL endpoint. This proves that the new, serverless federated system can serve production traffic at over 10,000 TPS.

Conclusion

A single, highly discoverable, and well-documented backend endpoint enabled our clients to experiment with the data available in the graph. We were able to clean up the backend API layer, introduce clear ownership boundaries, and give our clients powerful tools to speed up their development cycle.

The infrastructure uses Lambda to remove the burden of managing, patching, and scaling our EC2 fleets. The team dedicates this time instead to working on features and the operational excellence of our systems.

Part two will cover how IMDb manages the federated schema and the guardrails used to ensure high availability of the federated GraphQL endpoint.

For more serverless learning resources, visit Serverless Land.

Building a serverless distributed application using a saga orchestration pattern

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/building-a-serverless-distributed-application-using-a-saga-orchestration-pattern/

This post is written by Anitha Deenadayalan, Developer Specialist SA, DevAx (Developer Acceleration).

This post shows how to use the saga design pattern to preserve data integrity in distributed transactions across multiple services. In a distributed transaction, multiple services can be called before a transaction is completed. When the services store data in different data stores, it can be challenging to maintain data consistency across these data stores.

To maintain consistency in a transaction, relational databases provide two-phase commit (2PC). This consists of a prepare phase and a commit phase. In the prepare phase, the coordinating process requests the transaction’s participating processes (participants) to promise to commit or roll back the transaction. In the commit phase, the coordinating process requests the participants to commit the transaction. If the participants cannot agree to commit in the prepare phase, then the transaction is rolled back.

In distributed systems architected with microservices, two-phase commit is not an option as the transaction is distributed across various databases. In this case, one solution is to use the saga pattern.

A saga consists of a sequence of local transactions. Each local transaction in the saga updates the database and triggers the next local transaction. If a transaction fails, then the saga runs compensating transactions to revert the database changes made by the preceding transactions.

There are two types of implementations for the saga pattern: choreography and orchestration.

Saga choreography

The saga choreography pattern depends on the events emitted by the microservices. The saga participants (microservices) subscribe to the events and act based on the event triggers. For example, the order service in the following diagram emits an OrderPlaced event. The inventory service subscribes to that event and updates the inventory when the OrderPlaced event is emitted. Similarly, the other participant services act based on the context of the emitted events.

Solution architecture

Saga orchestration

The saga orchestration pattern has a central coordinator called an orchestrator. The saga orchestrator manages and coordinates the entire transaction lifecycle. It is aware of the series of steps to be performed to complete the transaction. To run a step, it sends a message to the participant microservice to perform the operation. The participant microservice completes the operation and sends a message to the orchestrator. Based on the received message, the orchestrator decides which microservice to run next in the transaction:

Saga orchestrator flow

You can use AWS Step Functions to implement the saga orchestration when the transaction is distributed across multiple databases.

Overview

This example uses a Step Functions workflow to implement the saga orchestration pattern, using the following architecture:

API Gateway to Lambda to Step Functions

When a customer calls the API, API Gateway invokes a Lambda function that performs any pre-processing. The function then starts the Step Functions workflow to begin processing the distributed transaction.

The Step Functions workflow calls the individual services for order placement, inventory update, and payment processing to complete the transaction, and sends an event notification for further processing. The workflow acts as the orchestrator to coordinate the transactions. If there is any error in the workflow, the orchestrator runs the compensatory transactions to ensure that data integrity is maintained across the services.

When pre-processing is not required, you can also trigger the Step Functions workflow directly from API Gateway without the Lambda function.
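For illustration, a minimal pre-processing handler could start the state machine as in this Python sketch; the STATE_MACHINE_ARN environment variable is an assumption of the example, not part of the sample project:

import json
import os

import boto3

sfn = boto3.client("stepfunctions")


def lambda_handler(event, context):
    # Start the saga orchestration workflow with the API request body as input.
    execution = sfn.start_execution(
        stateMachineArn=os.environ["STATE_MACHINE_ARN"],  # assumed environment variable
        input=event.get("body", "{}"),
    )
    return {
        "statusCode": 200,
        "body": json.dumps({"executionArn": execution["executionArn"]}),
    }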

The Step Functions workflow

The following diagram shows the steps that are run inside the Step Functions workflow. The green boxes show the steps that are run successfully. The order is placed, inventory is updated, and payment is processed before a Success state is returned to the caller.

The orange boxes indicate the compensatory transactions that are run when any one of the steps in the workflow fails. If the workflow fails at the Update inventory step, then the orchestrator calls the Revert inventory and Remove order steps before returning a Fail state to the caller. These compensatory transactions ensure that data integrity is maintained: the inventory reverts to its original level and the order is removed.

Step Functions workflow

The preceding workflow is an example of a distributed transaction: the transaction data is stored across different databases, and each service writes to its own database.

Prerequisites

For this walkthrough, you need:

Setting up the environment

For this walkthrough, use the AWS CDK code in the GitHub repository to create the AWS resources. These include IAM roles, a REST API using API Gateway, DynamoDB tables, the Step Functions workflow, and Lambda functions.

  1. You need an AWS access key ID and secret access key for configuring the AWS Command Line Interface (AWS CLI). To learn more about configuring the AWS CLI, follow these instructions.
  2. Clone the repo:
    git clone https://github.com/aws-samples/saga-orchestration-netcore-blog
  3. After cloning, this is the directory structure:
    Directory structure
  4. The Lambda functions in the saga-orchestration directory must be packaged and copied to the cdk-saga-orchestration\lambdas directory before deployment. Run these commands to process the PlaceOrderLambda function:
    cd PlaceOrderLambda/src/PlaceOrderLambda 
    dotnet lambda package
    cp bin/Release/netcoreapp3.1/PlaceOrderLambda.zip ../../../../cdk-saga-orchestration/lambdas
    
  5. Repeat the same commands for all the Lambda functions in the saga-orchestration directory.
  6. Build the CDK code before deploying:
    cd cdk-saga-orchestration/src/CdkSagaOrchestration
    dotnet build
    
  7. Install the aws-cdk package:
    npm install -g aws-cdk 
  8. The cdk synth command causes the resources defined in the application to be translated into an AWS CloudFormation template. The cdk deploy command deploys the stacks into your AWS account. Run:
    cd cdk-saga-orchestration
    cdk synth 
    cdk deploy
    
  9. CDK deploys the environment to AWS. You can monitor the progress using the CloudFormation console. The stack name is CdkSagaOrchestrationStack:
    CloudFormation console

The Step Functions configuration

The CDK creates the Step Functions workflow, DistributedTransactionOrchestrator. The following snippet defines the workflow with AWS CDK for .NET:

var stepDefinition = placeOrderTask
    .Next(new Choice(this, "Is order placed")
        .When(Condition.StringEquals("$.Status", "ORDER_PLACED"), updateInventoryTask
            .Next(new Choice(this, "Is inventory updated")
                .When(Condition.StringEquals("$.Status", "INVENTORY_UPDATED"),
                    makePaymentTask.Next(new Choice(this, "Is payment success")
                        .When(Condition.StringEquals("$.Status", "PAYMENT_COMPLETED"), successState)
                        .When(Condition.StringEquals("$.Status", "ERROR"), revertPaymentTask)))
                .When(Condition.StringEquals("$.Status", "ERROR"), waitState)))
        .When(Condition.StringEquals("$.Status", "ERROR"), failState));

Step Functions workflow

Compare the Amazon States Language definition for the state machine with the CDK definition above. Also observe the inputs and outputs for each step and how the conditions are configured. The steps with type Task call a Lambda function for processing. The steps with type Choice are decision-making steps that define the workflow.

Setting up the DynamoDB table

The Orders and Inventory DynamoDB tables are created using AWS CDK. The following snippet creates a DynamoDB table with AWS CDK for .NET:

var inventoryTable = new Table(this, "Inventory", new TableProps
{
    TableName = "Inventory",
    PartitionKey = new Attribute
    {
        Name = "ItemId",
        Type = AttributeType.STRING
    },
    RemovalPolicy = RemovalPolicy.DESTROY
});
  1. Open the DynamoDB console and select the Inventory table.
  2. Choose Create Item.
  3. Select Text, paste the following contents, then choose Save.
    {
      "ItemId": "ITEM001",
      "ItemName": "Soap",
      "ItemsInStock": 1000,
      "ItemStatus": ""
    }
    

    Create Item dialog

  4. Create two more items in the Inventory table:
    {
      "ItemId": "ITEM002",
      "ItemName": "Shampoo",
      "ItemsInStock": 500,
      "ItemStatus": ""
    }
    
    {
      "ItemId": "ITEM003",
      "ItemName": "Toothpaste",
      "ItemsInStock": 2000,
      "ItemStatus": ""
    }
    

The Lambda functions UpdateInventoryLambda and RevertInventoryLambda increment and decrement the ItemsInStock attribute value. The Lambda functions PlaceOrderLambda and UpdateOrderLambda insert and delete items in the Orders table. These are invoked by the saga orchestration workflow.
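The sample project implements these functions in .NET. As an illustrative sketch only, written in Python rather than the project's C#, the inventory update amounts to a single conditional UpdateItem call, and the revert step adds the same quantity back:

import boto3

dynamodb = boto3.resource("dynamodb")
inventory_table = dynamodb.Table("Inventory")


def update_inventory(item_id, quantity):
    """Decrement ItemsInStock for the ordered item; the revert step adds it back."""
    inventory_table.update_item(
        Key={"ItemId": item_id},
        UpdateExpression="SET ItemsInStock = ItemsInStock - :q",
        ConditionExpression="ItemsInStock >= :q",  # never let stock go below zero
        ExpressionAttributeValues={":q": quantity},
    )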

Triggering the saga orchestration workflow

The API Gateway endpoint, SagaOrchestratorAPI, is created using AWS CDK. To invoke the endpoint:

  1. From the API Gateway service page, select the SagaOrchestratorAPI:
    List of APIs
  2. Select Stages in the left menu panel:
    Stages menu
  3. Select the prod stage and copy the Invoke URL:
    Invoke URL
  4. From Postman, open a new tab. Select POST in the dropdown and enter the copied URL in the textbox. Move to the Headers tab and add a new header with the key ‘Content-Type’ and the value ‘application/json’:
    Postman configuration
  5. In the Body tab, enter the following input and choose Send.
    {
      "ItemId": "ITEM001",
      "CustomerId": "ABC/002",
      "MessageId": "",
      "FailAtStage": "None"
    }
    
  6. You see the output:
    Output
  7. Open the Step Functions console and view the execution. The graph inspector shows that the execution has completed successfully.
    Successful workflow execution
  8. Check the items in the DynamoDB tables, Orders & Inventory. You can see an item in the Orders table indicating that an order is placed. The ItemsInStock in the Inventory table has been deducted.
    Changes in DynamoDB tables
  9. To simulate the failure workflow in the saga orchestrator, send the following JSON as body in the Postman call. The FailAtStage parameter injects the failure in the workflow. Select Send in Postman after updating the Body:
    {
      "ItemId": "ITEM002",
      "CustomerId": "DEF/002",
      "MessageId": "",
      "FailAtStage": "UpdateInventory"
    }
    
  10. Open the Step Functions console to see the execution.
  11. While the workflow waits in the wait state, look at the items in the DynamoDB tables. A new item is added to the Orders table and the stock for Shampoo is deducted in the Inventory table.
    Changes in DynamoDB table
  12. Once the wait completes, the compensatory transaction steps are run:
    Workflow execution result
  13. In the graph inspector, select the Update Inventory step. On the right pane, choose the Step output tab. The status is ERROR, which changes the control flow to run the compensatory transactions.
    Step output
  14. Look at the items in the DynamoDB table again. The data is now back to a consistent state, as the compensatory transactions have run to preserve data integrity:
    DynamoDB table items

The Step Functions workflow implements the saga orchestration pattern. It performs the coordination across distributed services and runs the transactions. It also performs compensatory transactions to preserve the data integrity.

Cleaning up

To avoid incurring additional charges, clean up all the resources that have been created. Run the following command from a terminal window. This deletes all the resources that were created as part of this example.

cdk destroy

Conclusion

This post showed how to implement the saga orchestration pattern using API Gateway, Step Functions, Lambda, DynamoDB, and .NET Core 3.1. This can help maintain data integrity in distributed transactions across multiple services. Step Functions makes it easier to implement the orchestration in the saga pattern.

To learn more about developing microservices on AWS, refer to the whitepaper on microservices. To learn more about the features, refer to the AWS CDK Features page.

Increasing performance of Java AWS Lambda functions using tiered compilation

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/increasing-performance-of-java-aws-lambda-functions-using-tiered-compilation/

This post is written by Mark Sailes, Specialist Solutions Architect, Serverless and Richard Davison, Senior Partner Solutions Architect.

The Operating Lambda: Performance optimization blog series covers important topics for developers, architects, and system administrators who are managing applications using AWS Lambda functions. This post explains how you can reduce the initialization time to start a new execution environment when using the Java-managed runtimes.

Lambda lifecycle

Many Lambda workloads are designed to deliver fast responses to synchronous or asynchronous requests in a fraction of a second. Examples include public APIs that deliver dynamic content to websites or a near-real-time data pipeline doing small batch processing.

As usage of these systems increases, Lambda creates new execution environments. When a new environment is created and used for the first time, additional work is done to make it ready to process an event. This creates two different performance profiles: one with and one without the additional work.

To improve the response time, you can minimize the effect of this additional work. One way to minimize the time taken to create a new managed Java execution environment is to tune the JVM. It can be optimized specifically for workloads that do not have long execution durations.

One example of this is configuring a feature of the JVM called tiered compilation. From version 8 of the Java Development Kit (JDK), the two just-in-time compilers C1 and C2 have been used in combination. C1 is designed for use on the client side and to enable short feedback loops for developers. C2 is designed for use on the server side and to achieve higher performance after profiling.

Tiering is used to determine which compiler to use to achieve better performance. These are represented as five levels:

Tiering levels

Profiling has an overhead, and performance improvements are only achieved after a method has been invoked a number of times, the default being 10,000. Lambda customers wanting to achieve faster startup times can use level 1 with little risk of reducing warm start performance. The article “Startup, containers & Tiered Compilation” explains tiered compilation further.

For customers who are doing highly repetitive processing, this configuration might not be suitable. Applications that repeat the same code paths many times benefit from the JVM profiling and optimizing those paths. Concrete examples are using Lambda to run Monte Carlo simulations or hash calculations. You can run the same simulations thousands of times, and JVM profiling can reduce the total execution time significantly.

Performance improvements

The example project is a Java 11-based application used to analyze the impact of this change. The application is triggered by Amazon API Gateway and then puts an item into Amazon DynamoDB. To compare the performance difference caused by this change, there is one Lambda function with the additional changes and one without. There are no other differences in the code.

Download the code for this example project from the GitHub repo: https://github.com/aws-samples/aws-lambda-java-tiered-compilation-example.

To install prerequisite software:

  1. Install the AWS CDK.
  2. Install Apache Maven, or use your preferred IDE.
  3. Build and package the Java application in the software folder:
    cd software/ExampleFunction/
    mvn package
  4. Zip the execution wrapper script:
    cd ../OptimizationLayer/
    ./build-layer.sh
    cd ../../
  5. Synthesize CDK. This previews changes to your AWS account before it makes them:
    cd infrastructure
    cdk synth
  6. Deploy the Lambda functions:
    cdk deploy --outputs-file outputs.json

The API Gateway endpoint URL is displayed in the output and saved in a file named outputs.json. The contents are similar to:

InfrastructureStack.apiendpoint = https://{YOUR_UNIQUE_ID_HERE}.execute-api.eu-west-1.amazonaws.com

Using Artillery to load test the changes

First, install prerequisites:

  1. Install jq and Artillery Core.
  2. Run the following two commands from the /infrastructure directory:
    artillery run -t $(cat outputs.json | jq -r '.InfrastructureStack.apiendpoint') -v '{ "url": "/without" }' loadtest.yml
    
    artillery run -t $(cat outputs.json | jq -r '.InfrastructureStack.apiendpoint') -v '{ "url": "/with" }' loadtest.yml

Check the results using Amazon CloudWatch Insights

  1. Navigate to Amazon CloudWatch.
  2. Select Logs then Logs Insights.
  3. Select the following two log groups from the drop-down list:
    /aws/lambda/example-with-layer
    /aws/lambda/example-without-layer
  4. Copy the following query and choose Run query:
        filter @type = "REPORT"
        | parse @log /\d+:\/aws\/lambda\/example-(?<function>\w+)-\w+/
        | stats
        count(*) as invocations,
        pct(@duration, 0) as p0,
        pct(@duration, 25) as p25,
        pct(@duration, 50) as p50,
        pct(@duration, 75) as p75,
        pct(@duration, 90) as p90,
        pct(@duration, 95) as p95,
        pct(@duration, 99) as p99,
        pct(@duration, 100) as p100
        group by function, ispresent(@initDuration) as coldstart
        | sort by function, coldstart
    

    Query window

You see results similar to:

Query results

Here is a simplified table of the results:

Settings                                  Type        # of invocations  p90 (ms)  p95 (ms)  p99 (ms)
Default settings                          Cold start  754               5,212     5,338     5,517
Default settings                          Warm start  35,247            58        93        255
Tiered compilation stopping at level 1    Cold start  383               2,071     2,086     2,221
Tiered compilation stopping at level 1    Warm start  35,618            23        32        86

The results are from testing 120 concurrent requests over 5 minutes using an open-source software project called Artillery. You can find instructions on how to run these tests in the GitHub repo. The results show that for this application, cold starts for 90% of invocations improve by 3,141 ms (60%). These numbers are specific to this application and your application may behave differently.

Using wrapper scripts for Lambda functions

Wrapper scripts are a feature available in Java 8 and Java 11 on Amazon Linux 2 managed runtimes. They are not available for the Java 8 on Amazon Linux 1 managed runtime.

To apply this optimization flag to Java Lambda functions, create a wrapper script and add it to a Lambda layer zip file. The script alters the JVM flags that Java is started with inside the execution environment.

#!/bin/sh
# Drop the first argument (the path to the Java runtime), set the tiered
# compilation flags, then start Java with the remaining arguments.
shift
export _JAVA_OPTIONS="-XX:+TieredCompilation -XX:TieredStopAtLevel=1"
java "$@"

Read the documentation to learn how to create and share a Lambda layer.

Console walkthrough

This change can be configured using AWS Serverless Application Model (AWS SAM), the AWS Command Line Interface (AWS CLI), AWS CloudFormation, or from within the AWS Management Console.

Using the AWS Management Console:

  1. Navigate to the AWS Lambda console.
  2. Select Functions and choose the Lambda function to add the layer to.
    Lambda functions
  3. The Code tab is selected by default. Scroll down to the Layers panel.
  4. Select Add a layer.
    Add a layer
  5. Select Custom layers and choose your layer.
    Add layer
  6. Select the Version. Choose Add.
  7. From the menu, select the Configuration tab and Environment variables. Choose Edit.
    Configuration tab
  8. Choose Add environment variable. Add the following:
    – Key: AWS_LAMBDA_EXEC_WRAPPER
    – Value: /opt/java-exec-wrapper
    Edit environment variables
  9. Choose Save. You can verify that the changes are applied by invoking the function and viewing the log events. The log line Picked up _JAVA_OPTIONS: -XX:+TieredCompilation -XX:TieredStopAtLevel=1 is added.
    Log events
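Alternatively, you can apply the same layer and environment variable with the AWS SDK. This is a minimal Python sketch; the function name and layer ARN are placeholders, not values from the example project:

import boto3

lambda_client = boto3.client("lambda")

# Attach the wrapper-script layer and point AWS_LAMBDA_EXEC_WRAPPER at it.
# Note that Environment replaces all existing environment variables on the function.
lambda_client.update_function_configuration(
    FunctionName="example-with-layer",  # placeholder function name
    Layers=["arn:aws:lambda:eu-west-1:123456789012:layer:OptimizationLayer:1"],  # placeholder ARN
    Environment={"Variables": {"AWS_LAMBDA_EXEC_WRAPPER": "/opt/java-exec-wrapper"}},
)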

Conclusion

Tiered compilation stopping at level 1 reduces the time the JVM spends optimizing and profiling your code. This could help reduce startup times for Java applications that require fast responses, where the workload doesn’t meet the requirements to benefit from profiling.

You can make further reductions in startup time using GraalVM. Read more about GraalVM and the Quarkus framework in the architecture blog. View the code example at https://github.com/aws-samples/aws-lambda-java-tiered-compilation-example to see how you can apply this to your Lambda functions.

For more serverless learning resources, visit Serverless Land.

Sending mobile push notifications and managing device tokens with serverless applications

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/sending-mobile-push-notifications-and-managing-device-tokens-with-serverless-application/

This post is written by Rafa Xu, Cloud Architect, Serverless and Joely Huang, Cloud Architect, Serverless.

Amazon Simple Notification Service (SNS) is a fast, flexible, fully managed push messaging service in the cloud. SNS can send mobile push notifications, such as message alerts and badge updates, directly to applications on mobile devices. SNS sends push notifications to a mobile endpoint created by supplying a mobile token and a platform application.

When publishing mobile push notifications, a device token is used to generate an endpoint. This identifies where the push notification is sent (target destination). To push notifications successfully, the token must be up to date and the endpoint must be validated and enabled.

A common challenge when pushing notifications is keeping the token up to date. Tokens can automatically change due to reasons such as mobile operating system (OS) updates and application store updates.

This post provides a serverless solution to this challenge. It also provides a way to publish push notifications to specific end users by maintaining a mapping between users, endpoints, and tokens.

Overview

To publish mobile push notifications using SNS, generate an SNS endpoint to use as a destination target for the push notification. To create the endpoint, you must supply:

  1. A mobile application token: The mobile operating system (OS) issues the token to the application. It is a unique identifier for the application and mobile device pair.
  2. Platform Application Amazon Resource Name (ARN): SNS provides this ARN when you create a platform application object. The platform application object requires a valid set of credentials issued by the mobile platform, which you provide to SNS.

Once the endpoint is generated, you can store and reuse it again. This prevents the application from creating endpoints indefinitely, which could exhaust the SNS endpoint limit.

To reuse the endpoints and successfully push notifications, there are a number of challenges:

  • Mobile application tokens can change due to a number of reasons, such as application updates. As a result, the publisher must update the platform endpoint to ensure it uses an up-to-date token.
  • Mobile application tokens can become invalid. When this happens, messages won’t be published, and SNS disables the endpoint with the invalid token. To resolve this, publishers must retrieve a valid token and re-enable the platform endpoint.
  • Mobile applications can have many users, each user could have multiple devices, or one device could have multiple users. To send a push notification to a specific user, a mapping between the user, device, and platform endpoints should be maintained.

For more information on best practices for managing mobile tokens, refer to this post.

Follow along with this blog post to learn how to implement a serverless workflow for managing and maintaining valid endpoints and user mappings.

Solution overview

The solution uses the following AWS services:

  • Amazon API Gateway: Provides a token registration endpoint URL used by the mobile application. Once called, it invokes an AWS Lambda function via the Lambda integration.
  • Amazon SNS: Generates and maintains the target endpoint and manages platform application objects.
  • Amazon DynamoDB: Serverless database for storing endpoints that also maintains a mapping between the user, endpoint, and mobile operating system.
  • AWS Lambda: Retrieves endpoints from DynamoDB, validates and generates endpoints, and publishes notifications by making requests to SNS.

The following diagram represents a simplified interaction flow between the AWS services:

Solution architecture

To register the token, the mobile app invokes the registration token endpoint URL generated by Amazon API Gateway. The token registration happens every time a user logs in or opens the application. This ensures that the token and endpoints are always valid during the application usage.

The mobile application passes the token, user, and mobileOS as parameters to API Gateway, which forwards the request to the Lambda function.

The Lambda function validates the token and endpoint for the user by making API calls to DynamoDB and SNS:

  1. The Lambda function checks DynamoDB to see if the endpoint has been previously created.
    1. If the endpoint does not exist, it creates a platform endpoint via SNS.
  2. Obtain the endpoint attributes from SNS:
    1. Check the “Enabled” endpoint attribute and set it to “true”, if necessary, to enable the platform endpoint.
    2. Validate the “Token” endpoint attribute against the token provided in the API Gateway request. If it does not match, update the “Token” attribute.
    3. Send a request to SNS to update the endpoint attributes.
  3. If a new endpoint is created, update DynamoDB with the new endpoint.
  4. Return a successful response to API Gateway.
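Once a valid, enabled endpoint is stored for the user, a publisher can look it up in DynamoDB and send the notification with the SNS Publish API. This is a minimal sketch; the table name and the APNS payload shape are assumptions for the example:

import json

import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("DeviceTokens")  # assumed table name
sns = boto3.client("sns")


def push_to_user(username, appos, title, body):
    """Publish a push notification to the endpoint registered for this user and OS."""
    item = table.get_item(Key={"username": username, "appos": appos})["Item"]
    sns.publish(
        TargetArn=item["endpoint"],
        MessageStructure="json",
        Message=json.dumps({
            "default": body,
            "APNS": json.dumps({"aps": {"alert": {"title": title, "body": body}}}),
        }),
    )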

Deploying the AWS Serverless Application Model (AWS SAM) template

Use the AWS SAM template to deploy the infrastructure for this workflow. Before deploying the template, first create a platform application in SNS.

  1. Navigate to the SNS console. Select Push Notifications on the left-hand menu to create a platform application:
    Mobile push notifications
  2. This shows the creation of a platform application for iOS applications:
    Create platform application
  3. To install AWS SAM, visit the installation page.
  4. To deploy the AWS SAM template, navigate to the directory where the template is located. Run the commands in the terminal:
    git clone https://github.com/aws-samples/serverless-mobile-push-notification
    cd serverless-mobile-push-notification
    sam build
    sam deploy --guided

Lambda function code snippets

The following section explains code from the Lambda function for the workflow.

Create the platform endpoint

If the platform endpoint already exists in the DynamoDB table, the code stores it in a variable. If it does not exist, the code creates a new endpoint:

        need_update_ddb = False
        response = table.get_item(Key={'username': username, 'appos': appos})
        if 'Item' not in response:
            # create endpoint
            response = snsClient.create_platform_endpoint(
                PlatformApplicationArn=SUPPORTED_PLATFORM[appos],
                Token=token,
            )
            devicePushEndpoint = response['EndpointArn']
            need_update_ddb = True
        else:
            # update the endpoint
            devicePushEndpoint = response['Item']['endpoint']

Check and update endpoint attributes

Check that the Token attribute for the platform endpoint matches the token received from the mobile application through the request. The code also checks the endpoint “Enabled” attribute and re-enables the endpoint if necessary:

response = snsClient.get_endpoint_attributes(
    EndpointArn=devicePushEndpoint
)
endpointAttributes = response['Attributes']

previousToken = endpointAttributes['Token']
previousStatus = endpointAttributes['Enabled']
if previousStatus.lower() != 'true' or previousToken != token:
    snsClient.set_endpoint_attributes(
        EndpointArn=devicePushEndpoint,
        Attributes={
            'Token': token,
            'Enabled': 'true'
        }
    )

Update the DynamoDB table with the newly generated endpoint

If a platform endpoint is newly created, meaning there is no item in the DynamoDB table, create a new item in the table:

        if need_update_ddb:
            table.update_item(
                Key={
                    'username': username,
                    'appos': appos
                },
                UpdateExpression="set endpoint=:e",
                ExpressionAttributeValues={
                    ':e': devicePushEndpoint
                },
                ReturnValues="UPDATED_NEW"
            )

As a best practice, the code cleans up the table in case there are multiple entries for the same endpoint mapped to different users. This can happen when the mobile application is used by multiple users on the same device. When one user logs out and a different user logs in, this creates a new entry in the DynamoDB table to map the endpoint with the new user.

As a result, you must remove the entry that maps the same endpoint to the previously logged in user. This way, you only keep the endpoint that matches the user provided by the mobile application through the request.

result = table.query(
    # Add the name of the index you want to use in your query.
    IndexName="endpoint-index",
    KeyConditionExpression=Key('endpoint').eq(devicePushEndpoint),
)
for item in result['Items']:
    if item['username'] != username and item['appos'] == appos:
        print(f"deleting orphan item: username {item['username']}, os {appos}")
        table.delete_item(
            Key={
                'username': item['username'],
                'appos': appos
            },
        )

Conclusion

This blog shows how to deploy a serverless solution for validating and managing SNS platform endpoints and tokens. To publish push notifications successfully, use SNS to check the endpoint attributes and ensure that the endpoint is mapped to the correct token and is enabled.

This approach uses DynamoDB to store the device token and platform endpoints for each user. This allows you to send push notifications to specific users and to retrieve and reuse previously created endpoints. You create a Lambda function to facilitate the workflow, including validating the endpoint and storing an enabled, up-to-date token in DynamoDB.

Visit this link to learn more about Amazon SNS mobile push notifications: http://docs.aws.amazon.com/sns/latest/dg/SNSMobilePush.html

For more serverless learning resources, visit Serverless Land.

Adding resiliency to AWS CloudFormation custom resource deployments

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/adding-resiliency-to-aws-cloudformation-custom-resource-deployments/

This post is written by Dathu Patil, Solutions Architect and Naomi Joshi, Cloud Application Architect.

AWS CloudFormation custom resources allow you to write custom provisioning logic in templates. These run anytime you create, update, or delete stacks. Using AWS Lambda-backed custom resources, you can associate a Lambda function with a CloudFormation custom resource. The function is invoked whenever the custom resource is created, updated, or deleted.

When CloudFormation asynchronously invokes the function, it passes the request data, such as the request type and resource properties, to the function. The customizability of Lambda functions in combination with CloudFormation allows a wide range of scenarios. For example, you can dynamically look up Amazon Machine Image (AMI) IDs during stack creation or use utilities such as string reversal functions.

Unhandled exceptions or transient errors in the custom resource Lambda function can cause your code to exit without sending a response. CloudFormation requires an HTTPS response to confirm whether the operation succeeded. An unreported exception causes CloudFormation to wait until the operation times out before starting a stack rollback.

If the exception occurs again on rollback, CloudFormation waits for a timeout exception before ending in a rollback failure. During this time, your stack is unusable. You can learn more about this and best practices by reviewing Best Practices for CloudFormation Custom Resources.
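For context, a custom resource handler reports its result by sending an HTTPS PUT to the pre-signed ResponseURL that CloudFormation includes in the event. In Python, the cfnresponse helper module does this for you; the following sketch shows the equivalent call by hand, to make clear what CloudFormation is waiting for when an exception goes unreported:

import json
import urllib.request


def send_cfn_response(event, context, status, data=None, reason=None):
    """PUT the result to the pre-signed ResponseURL so CloudFormation does not
    wait for the timeout before continuing or rolling back."""
    body = json.dumps({
        "Status": status,  # "SUCCESS" or "FAILED"
        "Reason": reason or f"See CloudWatch log stream: {context.log_stream_name}",
        "PhysicalResourceId": event.get("PhysicalResourceId", context.log_stream_name),
        "StackId": event["StackId"],
        "RequestId": event["RequestId"],
        "LogicalResourceId": event["LogicalResourceId"],
        "Data": data or {},
    }).encode("utf-8")

    request = urllib.request.Request(
        event["ResponseURL"], data=body, method="PUT",
        headers={"Content-Type": ""},
    )
    urllib.request.urlopen(request)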

In this blog, you learn how you can use Amazon SQS and Lambda to add resiliency to your Lambda-backed CloudFormation custom resource deployments. The example shows how to use a CloudFormation custom resource to look up an AMI ID dynamically during Amazon EC2 instance creation.

Overview

CloudFormation templates that declare an EC2 instance must also specify an AMI ID. The AMI includes an operating system and other software and configuration information used to launch the instance. The correct AMI ID depends on the instance type and the Region in which you’re launching your stack. AMI IDs change regularly, such as when an AMI is updated with software patches.

Customers often implement a CloudFormation custom resource to look up an AMI ID while creating an EC2 instance. In this example, the lookup Lambda function calls the EC2 API. It fetches the available AMI IDs, uses the latest AMI ID, and checks for a compliance tag. This implementation assumes that there are separate processes for creating AMI and running compliance checks. The process that performs compliance and security checks creates a compliance tag on a successful scan.

This solution shows how you can use SQS and Lambda to add resiliency and handle such an exception. In this case, the exception occurs in the AMI lookup custom resource due to a missing compliance tag. When the AMI lookup function fails to process the request, it uses the Lambda destination configuration to send the request to an SQS queue. The message is then reprocessed using the SQS queue and a Lambda function.

Solution architecture

  1. The CloudFormation custom resource asynchronously invokes the AMI lookup Lambda function to perform appropriate actions.
  2. The AMI lookup Lambda function calls the EC2 API to fetch the list of AMIs and checks for a compliance tag. If the tag is missing, it throws an unhandled exception.
  3. On failure, the Lambda destination configuration sends the request to the retry queue that is configured as a dead-letter queue (DLQ). SQS adds a custom delay between retry attempts to support more than two retries.
  4. The retry Lambda function processes messages in the retry queue using Lambda with SQS. Lambda polls the queue and invokes the retry Lambda function synchronously with an event that contains queue messages.
  5. The retry function then synchronously invokes the AMI lookup function using the information from the request SQS message.

The AMI Lookup Lambda function

An AWS Serverless Application Model (AWS SAM) template is used to create the AMI lookup Lambda function. You can configure asynchronous invocation options, such as the number of retries, on the Lambda function. The maximum number of retries allowed is 2, and there is no option to set a delay between the invocation attempts.

When a transient failure or unhandled error occurs, the request is forwarded to the retry queue. This part of the AWS SAM template creates AMI lookup Lambda function:

  AMILookupLambda:
    Type: AWS::Serverless::Function 
    Properties:
      CodeUri: amilookup/
      Handler: app.lambda_handler
      Runtime: python3.8
      Timeout: 300
      EventInvokeConfig:
          MaximumEventAgeInSeconds: 60
          MaximumRetryAttempts: 2
          DestinationConfig:
            OnFailure:
              Type: SQS
              Destination: !GetAtt RetryQueue.Arn
      Policies:
        - AMIDescribePolicy: {}

This function calls the EC2 API using the boto3 AWS SDK for Python. It calls the describe_images method to get a list of images with given filter conditions. The Lambda function iterates through the AMI list and checks for compliance tags. If the tag is not present, it raises an exception:

import boto3

# 'region' and 'architectures' are defined earlier in the handler
ec2_client = boto3.client('ec2', region_name=region)

# Get AMI IDs with the specified name pattern, owner, and compliance tag
describe_response = ec2_client.describe_images(
    Filters=[{'Name': 'name', 'Values': architectures},
             {'Name': 'tag-key', 'Values': ['ami-compliance-check']}],
    Owners=['amazon']
)
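
The rest of the lookup is not shown above. As a rough sketch only, and not the exact code in the repository, the handler could pick the newest matching image and fail the invocation if none exists, so that the Lambda destination forwards the request to the retry queue:

images = describe_response.get('Images', [])
if not images:
    # No AMI carries the ami-compliance-check tag yet. Raising an unhandled
    # exception fails the asynchronous invocation, so the request is sent to
    # the retry queue via the OnFailure destination.
    raise RuntimeError('No compliant AMI found for the given filters')

# describe_images returns CreationDate as an ISO 8601 string, so a simple
# max() by that field selects the most recently created compliant AMI
latest_image = max(images, key=lambda image: image['CreationDate'])
ami_id = latest_image['ImageId']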

The queue and the retry Lambda function

The retry queue adds a 60-second delay before a message is available for processing. The time delay between retry processing attempts provides time for transient errors to be corrected. This is the AWS SAM template for creating these resources:

  RetryQueue:
    Type: AWS::SQS::Queue
    Properties:
      VisibilityTimeout: 60
      DelaySeconds: 60
      MessageRetentionPeriod: 600

  RetryFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: retry/
      Handler: app.lambda_handler
      Runtime: python3.8
      Timeout: 60
      Events:
        MySQSEvent:
          Type: SQS
          Properties:
            Queue: !GetAtt RetryQueue.Arn
            BatchSize: 1
      Policies:
        - LambdaInvokePolicy:
            FunctionName: !Ref AMILookupLambda

The Lambda service polls the retry queue and invokes the retry Lambda function with new messages. The retry function synchronously invokes the AMI lookup Lambda function. On success, a response is sent to CloudFormation. This process repeats until the AMI lookup function returns a successful response or the message expires from the SQS queue, based on the MessageRetentionPeriod, which is set to 600 seconds in this case.

import json
import boto3

lambda_client = boto3.client('lambda')

for record in event['Records']:
    body = json.loads(record['body'])
    # Synchronously re-invoke the AMI lookup function using the ARN and
    # payload captured from the original failed asynchronous invocation
    response = lambda_client.invoke(
        FunctionName=body['requestContext']['functionArn'],
        InvocationType='RequestResponse',
        Payload=json.dumps(body['requestPayload']).encode()
    )
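
For the stack to finish, the custom resource must eventually report a result back to CloudFormation. A common pattern, shown here as a simplified sketch rather than the repository's exact code, is for the lookup function to PUT a response document to the pre-signed ResponseURL included in the custom resource request:

import json
import urllib3

http = urllib3.PoolManager()

def send_cfn_response(event, status, data):
    # Response document format expected by CloudFormation custom resources
    body = json.dumps({
        'Status': status,  # 'SUCCESS' or 'FAILED'
        'Reason': 'See the CloudWatch log stream for details',
        'PhysicalResourceId': data.get('ami_id', 'ami-lookup'),
        'StackId': event['StackId'],
        'RequestId': event['RequestId'],
        'LogicalResourceId': event['LogicalResourceId'],
        'Data': data  # for example, {'ami_id': ami_id}
    })
    # CloudFormation waits on this pre-signed S3 URL for the result
    http.request('PUT', event['ResponseURL'], body=body)

On a successful lookup, calling send_cfn_response(event, 'SUCCESS', {'ami_id': ami_id}) lets the stack continue and create the EC2 instance with the returned image.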

Deployment walkthrough

Prerequisites

To get started with this solution, you need:

  • AWS CLI and AWS SAM CLI installed to deploy the solution.
  • An existing Amazon EC2 public image. You can choose any of the AMIs from the AWS Management Console with Architecture = X86_64 and Owner = amazon for test purposes. Note the AMI ID.

Download the source code from the resilient-cfn-custom-resource GitHub repository. The template.yaml file is an AWS SAM template. It deploys the Lambda functions, SQS, and IAM roles required for the Lambda function. It uses Python 3.8 as the runtime and assigns 128 MB of memory for the Lambda functions.

  1. To build and deploy this application using the AWS SAM CLI build and guided deploy:
    sam build --use-container
    sam deploy --guided

The custom resource stack creation invokes the AMI lookup Lambda function. This fetches the AMI IDs of the public EC2 images available in your account that carry the ami-compliance-check tag. Typically, the compliance tags are created by a process that performs security scans.

In this example, the security scan process is not running and the tag is not yet added to any AMIs. As a result, the custom resource throws an exception, which goes to the retry queue. This is retried by the retry function until it is successfully processed.

  1. Use the console or AWS CLI to add the tag to the chosen EC2 AMI. In this example, this is analogous to a separate governance process that checks for AMI compliance and adds the compliance tag if passed. Replace the $AMI-ID with the AMI ID captured in the prerequisites:
    aws ec2 create-tags --resources $AMI-ID --tags Key=ami-compliance-check,Value=True
  2. After the tag is added, the custom resource Lambda function sends a successful response to the CloudFormation stack. The response includes your $AMI-ID, and a test EC2 instance is created using that image. The stack creation completes successfully with all resources deployed.

Conclusion

This blog post demonstrates how to use SQS and Lambda to add resiliency to CloudFormation custom resource deployments. This solution can be customized for use cases where CloudFormation stacks have a dependency on a custom resource.

CloudFormation custom resource failures can happen due to unhandled exceptions. These are caused by issues with a dependent component, internal service, or transient system errors. Using this solution, you can handle the failures automatically without the need for manual intervention. To get started, download the code from the GitHub repo and start customizing.

For more serverless learning resources, visit Serverless Land.

Authenticating and authorizing Amazon MQ users with Lightweight Directory Access Protocol

Post Syndicated from Talia Nassi original https://aws.amazon.com/blogs/compute/authenticating-and-authorizing-amazon-mq-users-with-lightweight-directory-access-protocol/

This post is written by Dominic Gagné and Mithun Mallick.

Amazon MQ is a managed message broker service for Apache ActiveMQ and RabbitMQ that simplifies setting up and operating message brokers in the AWS Cloud. Integrating an Amazon MQ broker with a Lightweight Directory Access Protocol (LDAP) server allows you to manage credentials and permissions for users in a single location. There is also the added benefit of not requiring a message broker reboot for new authorization rules to take effect.

This post explores concepts around Amazon MQ’s authentication and authorization model. It covers the steps to set up Amazon MQ access for a Microsoft Active Directory user.

Authentication and authorization

Amazon MQ for ActiveMQ uses native ActiveMQ authentication to manage user permissions by default. Users are created within Amazon MQ to allow broker access, and are mapped to read, write, and admin operations on various destinations. This local user model is referred to as the simple authentication type.

As an alternative to simple authentication, you can maintain broker access control authorization rules within an LDAP server on a per-destination or destination set basis. Wildcards are also supported for rules that apply to multiple destinations.

The LDAP integration feature uses the ActiveMQ standard Java Authentication and Authorization Service (JAAS) plugin. Additional details on the plugin can be found within ActiveMQ security documentation. Authentication details are defined as part of the ldapServerMetadata attribute. Authorization settings are configured as part of the cachedLDAPAuthorizationMap node in the broker’s activemq.xml configuration.

Here is an overview of the integration:

overview graph

  1. The client requests access to a queue or topic.
  2. The broker authenticates and authorizes the client via JAAS.
  3. Access to the specified queue or topic is granted or denied.
  4. If access is granted, the client can read, write, or create destinations.

Integration with LDAP

ActiveMQ integration with LDAP sets up a secure LDAP access connection between an Amazon MQ for ActiveMQ broker and a Microsoft Active Directory server. You can also use other implementations of LDAP as the directory server, such as OpenLDAP.

Amazon MQ encrypts all data between a broker and LDAP server, and enforces secure LDAP (LDAPS) via public certificates. Unsecured LDAP on port 389 is not supported; traffic must communicate via the secure LDAP port 636. In this example, a Microsoft Active Directory server has LDAPS configured with a public certificate. To set up a Simple AD server with LDAPS and a public certificate, read this blog post.

To integrate with a Microsoft Active Directory server:

  1. Configure users in the Microsoft Active Directory directory information tree (DIT) structure for client authentication to the broker.
  2. Configure destinations in the Microsoft Active Directory DIT structure to allow destination-level authorization for individual users or entire groups.
  3. Create an ActiveMQ configuration to allow authorization via LDAP.
  4. Create a broker and perform a basic test to validate authentication and authorization access for a test user.

Configuring Microsoft Active Directory for client authentication

Create the hierarchy structure within the Microsoft Active Directory DIT to provision users. The server must be part of the domain and have a domain admin user. The domain admin user is needed in the broker configuration.

In this DIT, the domain corp.example.com is used, though you can use any domain name. An organizational unit (OU) named corp exists under the root. ActiveMQ related entities are defined under the corp OU.

This OU is the user base that the broker uses to search for users when performing authentication operations. Represented as LDIF, the user base is:

OU=Users,OU=corp,DC=corp,DC=example,DC=com

To create this OU and user:

  1. Log on to the Windows Server using a domain admin user.
  2. Open Active Directory Users and Computers by running dsa.msc from the command line.
  3. Choose corp and create an OU named Users, located within corp.
  4. In the Users OU, create a new user and enter the name mquser.
  5. Deselect the option to change password on next logon.
  6. Finally, choose Next to create the user.

Because the ActiveMQ source code hardcodes the attribute name for users to uid, make sure that each user has this attribute set. For simplicity, use the user’s connection user name. For more information, see the ActiveMQ source code and knowledgebase article.

Users must belong to the amazonmq-console-admins group to enable console access. Members of this group can create and delete any destinations via the console, regardless of other authorization rules in place. Access to this group should be granted sparingly.
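
Before configuring the broker, it can help to confirm that the user base and the uid attribute resolve correctly over LDAPS. The following is a sketch using the ldap3 Python library; the hostname, bind DN, and password are placeholders for your own directory and are not part of the original walkthrough:

from ldap3 import ALL, Connection, Server

# Placeholders - replace with your own directory values
LDAP_URL = 'ldaps://ad.corp.example.com:636'
BIND_DN = 'CN=Admin,OU=Users,OU=corp,DC=corp,DC=example,DC=com'
BIND_PASSWORD = 'your-domain-admin-password'
USER_BASE = 'OU=Users,OU=corp,DC=corp,DC=example,DC=com'

server = Server(LDAP_URL, get_info=ALL)
conn = Connection(server, user=BIND_DN, password=BIND_PASSWORD, auto_bind=True)

# Verify that mquser exists under the user base and has the uid attribute set
conn.search(USER_BASE, '(uid=mquser)', attributes=['uid', 'memberOf'])
for entry in conn.entries:
    print(entry.entry_dn, entry.uid, entry.memberOf)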

Configuring Microsoft Active Directory for authorization

Now that our broker knows where to search for users, configure the DIT such that the broker can search for user permissions relating to authorization.

Back in the root OU corp where the Users OU was previously created:

  1. Create a new OU named Destination.
  2. Within the Destination OU, create an OU for each type of destination that ActiveMQ offers. These are Queue, Topic, and Temp.

For each destination that you want to allow authorization:

  1. Add an OU under the type of destination.
  2. Provide the name of the destination as the name of the OU. Wildcards are also supported, as found in ActiveMQ documentation.

This example shows three OUs that require authorization. These are DEMO.MYQUEUE, DEMO.MYSECONDQUEUE, and DEMO.EVENTS.$. The queue search base, which provides authorization information for destinations of type Queue, has the following location in the DIT:

OU=Queue,OU=Destination,OU=corp,DC=corp,DC=example,DC=com

Note the DEMO.EVENTS.$ wildcard queue name. Permissions in that OU apply to all queue names matching that wildcard.

Within each OU representing a destination or wildcard destination set, create three security groups. These groups relate to specific permissions on the relevant destination, using the same admin, read, and write permissions rules as ActiveMQ documentation describes.

There is a conflict with the group name “admin”. Legacy “pre-Windows 2000” rules do not allow groups to share the same name, even in different locations of the DIT. The value in the “pre-Windows 2000” text box does not impact the setup, but it must be globally unique. In this example, a UUID suffix is appended to each admin group name.

Adding a user to the admin security group for a particular destination enables the user to create and delete that destination. Adding them to the read security group enables them to read from the destination, and adding them to the write group enables them to write to the destination.

In this example, mquser is added to the admin and write groups for the queue DEMO.MYQUEUE. Later, you test this user’s authorization permissions to confirm that the integration works as expected.

In addition to adding individual users to security group permissions, you can add entire groups. Because ActiveMQ hardcodes attribute names for groups, ensure that the group has the object class groupOfNames, as shown in the ActiveMQ source code.

To do this, follow the same process as with the UID for users. See the knowledgebase article for additional information.

The LDAP server is now compatible with ActiveMQ. Next, create a broker and configure LDAP values based on the LDAP deployment.

Creating a configuration to enable authorization via LDAP

Authorization rules in ActiveMQ are sourced from the broker’s activemq.xml configuration file.

  1. Begin by navigating to the Amazon MQ console to create a configuration with the Authentication Type set as LDAP.
  2. Edit this configuration to include the cachedLDAPAuthorizationMap, which is the node used to configure the locations in the LDAP DIT where authorization rules are stored. For more information on this topic, visit ActiveMQ documentation.
  3. Within the cachedLDAPAuthorizationMap node, add the location of the OUs related to authorization.
  4. Under the authorizationPlugin tag, enter a cachedLDAPAuthorizationMap node.
  5. Do not specify connectionUrl, connectionUsername, or connectionPassword. These values are filled in using the LDAP Server Metadata specified when creating the broker. If you specify these values, they are ignored.

Creating a broker and testing Active Directory integration

Start by creating a broker using the default durability optimized storage.

  1. Select a Single-instance broker. You can use Active/standby broker or Network of Brokers if required.
  2. Choose Next.
  3. In the next page, under Configure Settings, set a name for the broker.
  4. Select an instance type.
  5. In the ActiveMQ Access section, select LDAP Authentication & Authorization. The input fields display parameters for connecting to the LDAP server. The service account must be associated with a user that can bind to your LDAP server. The server does not need to be public, but the domain name must be publicly resolvable.
  6. The next section of the page includes the search configuration for Active Directory users who are authorized to access the queues and topics. The values depend on the org structure created in the Active Directory setup. These values are based on your DIT.
  7. Once users and role search metadata are provided, configure the broker to launch with the configuration created in the previous section (named my-ldap-authorization-conf). Do this by selecting the Additional Settings drop-down and choosing the correct configuration file.
  8. Use the configuration where you defined cachedLDAPAuthorizationMap. This enables the broker to enforce read/write/admin permissions for client connections to the broker. These are defined in the LDAP server’s Destination OU.

Once the broker is running, authentication and authorization rules are enforced using the users and authorization rules defined in the configured LDAP server. During the Microsoft Active Directory setup, mquser is added to the admin and write groups for the queue DEMO.MYQUEUE. This means mquser can create and write to the queue DEMO.MYQUEUE but cannot perform any actions on other queues.

Test this by writing to the queue:

The client can connect to the broker and send messages to the queue DEMO.MYQUEUE using the credentials for mquser.
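
As an example of such a test, the STOMP protocol is one convenient option, since Amazon MQ for ActiveMQ brokers expose a stomp+ssl endpoint (typically on port 61614). The sketch below uses the stomp.py Python library with placeholder endpoint and credentials; TLS setup details can vary between stomp.py versions:

import stomp

# Placeholders - use the stomp+ssl endpoint shown in the Amazon MQ console
BROKER_HOST = 'b-1234abcd-56ef-78gh-90ij-example.mq.eu-west-1.amazonaws.com'
BROKER_PORT = 61614

conn = stomp.Connection(host_and_ports=[(BROKER_HOST, BROKER_PORT)])
conn.set_ssl(for_hosts=[(BROKER_HOST, BROKER_PORT)])  # API differs slightly across stomp.py versions

# Authentication is delegated to the LDAP server configured on the broker
conn.connect('mquser', 'mquser-password', wait=True)

# mquser is in the write group for DEMO.MYQUEUE, so this send should succeed
conn.send(destination='/queue/DEMO.MYQUEUE', body='hello from mquser')
conn.disconnect()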

Conclusion

This post shows the steps to integrate an LDAP server with an Amazon MQ broker. After the integration, you can manage authentication and authorization rules for your users, without rebooting the broker.

For more serverless learning resources, visit Serverless Land.

Understanding VPC links in Amazon API Gateway private integrations

Post Syndicated from Eric Johnson original https://aws.amazon.com/blogs/compute/understanding-vpc-links-in-amazon-api-gateway-private-integrations/

This post is written by Jose Eduardo Montilla Lugo, Security Consultant, AWS.

A VPC link is a resource in Amazon API Gateway that allows for connecting API routes to private resources inside a VPC. A VPC link acts like any other integration endpoint for an API and is an abstraction layer on top of other networking resources. This helps simplify configuring private integrations.

This post looks at the underlying technologies that make VPC links possible. I further describe what happens under the hood when a VPC link is created for both REST APIs and HTTP APIs. Understanding these details can help you better assess the features and benefits provided by each type. This also helps you make better architectural decisions when designing API Gateway APIs.

This article assumes you have experience in creating APIs in API Gateway. The main purpose is to provide a deeper explanation of the technologies that make private integrations possible. For more information on creating API Gateway APIs with private integrations, refer to the Amazon API Gateway documentation.

Overview

AWS Hyperplane and AWS PrivateLink

There are two types of VPC links: VPC links for REST APIs and VPC links for HTTP APIs. Both provide access to resources inside a VPC. They are built on top of an internal AWS service called AWS Hyperplane. This is an internal network virtualization platform, which supports inter-VPC connectivity and routing between VPCs. Internally, Hyperplane supports multiple network constructs that AWS services use to connect with the resources in customers’ VPCs. One of those constructs is AWS PrivateLink, which is used by API Gateway to support private APIs and private integrations.

AWS PrivateLink allows access to AWS services and services hosted by other AWS customers, while maintaining network traffic within the AWS network. Since the service is exposed via a private IP address, all communication is virtually local and private. This reduces the exposure of data to the public internet.

In AWS PrivateLink, a VPC endpoint service is a networking resource on the service provider side that enables other AWS accounts to access the exposed service from their own VPCs. VPC endpoint services allow for sharing a specific service located inside the provider’s VPC by extending a virtual connection via an elastic network interface in the consumer’s VPC.

An interface VPC endpoint is a networking resource on the service consumer side, which represents a collection of one or more elastic network interfaces. This is the entry point that allows for connecting to services powered by AWS PrivateLink.

Comparing private APIs and private integrations

Private APIs are different from private integrations. Both use AWS PrivateLink, but they are used in different ways.

A private API means that the API endpoint is reachable only through the VPC. Private APIs are accessible only from clients within the VPC or from clients that have network connectivity to the VPC. For example, from on-premises clients via AWS Direct Connect. To enable private APIs, an AWS PrivateLink connection is established between the customer’s VPC and API Gateway’s VPC.

Clients connect to private APIs via an interface VPC endpoint, which routes requests privately to the API Gateway service. The traffic is initiated from the customer’s VPC and flows through the AWS PrivateLink to the API Gateway’s AWS account:

Consumer connected to provider through VPC Link

When the VPC endpoint for API Gateway is enabled, all requests to API Gateway APIs made from inside the VPC go through the VPC endpoint. This is true for private APIs and public APIs. Public APIs are still accessible from the internet and private APIs are accessible only from the interface VPC endpoint. Currently, you can only configure REST APIs as private.

A private integration means that the backend endpoint resides within a VPC and it’s not publicly accessible. With a private integration, API Gateway service can access the backend endpoint in the VPC without exposing the resources to the public internet.

A private integration uses a VPC link to encapsulate connections between API Gateway and targeted VPC resources. VPC links allow access to HTTP/HTTPS resources within a VPC without having to deal with advanced network configurations. Both REST APIs and HTTP APIs offer private integrations but only VPC links for REST APIs use AWS PrivateLink internally.

VPC links for REST APIs

When you create a VPC link for a REST API, a VPC endpoint service is also created, making the AWS account a service provider. The service consumer in this case is API Gateway’s account. The API Gateway service creates an interface VPC endpoint in their account for the Region where the VPC link is being created. This establishes an AWS PrivateLink from the API Gateway VPC to your VPC. The target of the VPC endpoint service and the VPC link is a Network Load Balancer, which forwards requests to the target endpoints:

VPC Link for REST APIs

Before establishing any AWS PrivateLink connection, the service provider must approve the connection request. Requests from the API Gateway accounts are automatically approved in the VPC link creation process. This is because the AWS accounts that serve API Gateway for each Region are allow-listed in the VPC endpoint service.

When a Network Load Balancer is associated with an endpoint service, the traffic to the targets is sourced from the NLB. The targets receive the private IP addresses of the NLB, not the IP addresses of the service consumers.

This is helpful when configuring the security groups of the instances behind the NLB for two reasons. First, you do not know the IP address range of the VPC that’s connecting to the service. Second, NLB’s elastic network interfaces do not have any security groups attached. This means that they cannot be used as a source in the security groups of the targets. To learn more, read how to find the internal IP addresses assigned to an NLB.

To create a private API with a private integration, two AWS PrivateLink connections are established. The first is from a customer VPC to API Gateway’s VPC so that clients in the VPC can reach the API Gateway service endpoint. The other is from API Gateway’s VPC to the customer VPC so that API Gateway can reach the backend endpoint. Here is an example architecture:

Private API with private integrations

VPC links for HTTP APIs

HTTP APIs are the latest type of API Gateway APIs that are cheaper and faster than REST APIs. VPC links for HTTP APIs do not require the creation of VPC endpoint services so a Network Load Balancer is not necessary. With VPC Links for HTTP APIs, you can now use an ALB or an AWS Cloud Map service to target private resources. This allows for more flexibility and scalability in the configuration required on both sides.

Configuring multiple integration targets is also easier with VPC links for HTTP APIs. For example, VPC links for REST APIs can be associated only with a single NLB. Configuring multiple backend endpoints requires some workarounds such as using multiple listeners on the NLB, associated with different target groups.

In contrast, a single VPC link for HTTP APIs can be associated with multiple backend endpoints without additional configuration. Also, with the new VPC link, customers with containerized applications can use ALBs instead of NLBs and take advantage of layer-7 load-balancing capabilities and other features such as authentication and authorization.
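
To illustrate the difference in what each VPC link type expects, here is a sketch using boto3; the load balancer ARN, subnet IDs, and security group ID are placeholders. The REST API variant takes a Network Load Balancer ARN, while the HTTP API variant takes subnets and security groups:

import boto3

# VPC link for REST APIs: targets a Network Load Balancer (AWS PrivateLink based)
apigw = boto3.client('apigateway')
rest_vpc_link = apigw.create_vpc_link(
    name='rest-api-vpc-link',
    targetArns=['arn:aws:elasticloadbalancing:eu-west-1:123456789012:loadbalancer/net/my-nlb/abc123']
)

# VPC link for HTTP APIs: targets subnets and security groups directly, so the
# integration can route to ALBs, NLBs, or AWS Cloud Map services
apigwv2 = boto3.client('apigatewayv2')
http_vpc_link = apigwv2.create_vpc_link(
    Name='http-api-vpc-link',
    SubnetIds=['subnet-0123456789abcdef0', 'subnet-0fedcba9876543210'],
    SecurityGroupIds=['sg-0123456789abcdef0']
)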

AWS Hyperplane supports multiple types of network virtualization constructs, including AWS PrivateLink. VPC links for REST APIs rely on AWS PrivateLink. However, VPC links for HTTP APIs use VPC-to-VPC NAT, which provides a higher level of abstraction.

The new construct is conceptually similar to a tunnel between both VPCs. These are created via elastic network interface attachments on the provider and consumer ends, which are both managed by AWS Hyperplane. This tunnel allows a service hosted in the provider’s VPC (API Gateway) to initiate communications to resources in a consumer’s VPC. API Gateway has direct connectivity to these elastic network interfaces and can reach the resources in the VPC directly from their own VPC. Connections are permitted according to the configuration of the security groups attached to the elastic network interfaces on the customer side.

Although it seems to provide the same functionality as AWS PrivateLink, these constructs differ in implementation details. A service endpoint in AWS PrivateLink allows for multiple connections to a single endpoint (the NLB), whereas the new approach allows a source VPC to connect to multiple destination endpoints. As a result, a single VPC link can integrate with multiple Application Load Balancers, Network Load Balancers, or resources registered with an AWS Cloud Map service on the customer side:

VPC Link for HTTP APIs

This approach is similar to the way that other services such as Lambda access resources inside customer VPCs.

Conclusion

This post explores how VPC links can set up API Gateway APIs with private integrations. VPC links for REST APIs encapsulate AWS PrivateLink resources such as interface VPC endpoints and VPC endpoint services to configure connections from API Gateway’s VPC to the customer’s VPC to access private backend endpoints.

VPC links for HTTP APIs use a different construct in the AWS Hyperplane service to provide API Gateway with direct network access to VPC private resources. Understanding the differences between the two is important when adding private integrations as part of your API architecture design.

For more serverless learning resources, visit Serverless Land.

Building a serverless multiplayer game that scales: Part 2

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/building-a-serverless-multiplayer-game-that-scales-part-2/

This post is written by Vito De Giosa, Sr. Solutions Architect and Tim Bruce, Sr. Solutions Architect, Developer Acceleration.

This series discusses solutions for scaling serverless games, using the Simple Trivia Service, a game that relies on user-generated content. Part 1 describes the overall architecture, how to deploy to your AWS account, and different communications methods.

This post discusses how to scale via automation and asynchronous processes. You can use automation to minimize the need to scale personnel to review player-generated content for acceptability. It also introduces asynchronous processing, which allows you to run non-critical processes in the background and batch data together. This helps to improve resource usage and game performance. Both scaling techniques can also reduce overall spend.

To set up the example, see the instructions in the GitHub repo and the README.md file. This example uses services beyond the AWS Free Tier and incurs charges. Instructions to remove the example application from your account are also in the README.md file.

Technical implementation

The game requires a mechanism to support auto-moderated avatars. Specifically, this consists of an upload process that allows the player to send content to the game, a content moderation process that removes unacceptable content, and a messaging process that provides players with a status regarding their content.

Here is the architecture for this feature in Simple Trivia Service, which is combined within the avatar workflow:

Architecture diagram

This architecture processes images uploaded to Amazon S3 and notifies the user of the processing result via HTTP WebPush. This solution uses AWS Serverless services and the Amazon Rekognition moderation API.

Uploading avatars

Players start the process by uploading avatars via the game client. Using presigned URLs, the client allows players to upload images directly to S3 without sharing AWS credentials or exposing the bucket publicly.

The URL embeds all the parameters of the S3 request. It includes a Signature Version 4 (SigV4) signature generated with AWS credentials on the backend, allowing S3 to authorize the request.

S3 upload process

  1. The front end retrieves the presigned URL by invoking an AWS Lambda function through an Amazon API Gateway HTTP API endpoint.
  2. The front end uses the URL to send a PUT request to S3 with the image.
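
As a sketch of step 1, the Lambda function behind the HTTP API endpoint can generate the presigned URL with boto3. The bucket name, key scheme, and expiry shown here are assumptions for illustration rather than the exact Simple Trivia Service code:

import json
import boto3

s3 = boto3.client('s3')
BUCKET = 'my-avatar-upload-bucket'  # placeholder bucket name

def lambda_handler(event, context):
    player_id = event['queryStringParameters']['playerId']  # assumed request shape
    key = f'avatars/{player_id}.png'

    # Presigned PUT URL embedding a SigV4 signature, valid for five minutes
    url = s3.generate_presigned_url(
        'put_object',
        Params={'Bucket': BUCKET, 'Key': key, 'ContentType': 'image/png'},
        ExpiresIn=300
    )
    return {'statusCode': 200, 'body': json.dumps({'uploadUrl': url})}

The client must then send its PUT request with the same Content-Type header that was signed into the URL.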

Processing avatars

After the upload completes, the backend performs a set of activities. These include content moderation, generating the thumbnail variant, and saving the image URL to the player profile. AWS Step Functions orchestrates the workflow by coordinating tasks and integrating with AWS services, such as Lambda and Amazon DynamoDB. Step Functions enables creating workflows without writing code and handles errors, retries, and state management. This enables traffic control to avoid overloading single components when traffic surges.

The avatar processing workflow runs asynchronously. This allows players to play the game without being blocked and enables you to batch the requests. The Step Functions workflow is triggered from an Amazon EventBridge event. When the user uploads an image to S3, an event is published to EventBridge. The event is routed to the avatar processing Step Functions workflow.

The single avatar feature runs in seconds and uses Step Functions Express Workflows, which are ideal for high-volume event-processing use cases. Step Functions can also support longer running processes and manual steps, depending on your requirements.

To keep performance at scale, the solution adopts four strategies. First, it moderates content automatically, requiring no human intervention. This is done via Amazon Rekognition moderation API, which can discover inappropriate content in uploaded avatars. Developers do not need machine learning expertise to use this API. If it identifies unacceptable content, the Step Functions workflow deletes the uploaded picture.
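
As an illustration of this moderation step, a Lambda task in the workflow could call the API roughly as follows; the bucket and key arguments and the confidence threshold are assumptions for the sketch:

import boto3

rekognition = boto3.client('rekognition')

def is_image_acceptable(bucket, key, min_confidence=80):
    """Return True when Rekognition finds no moderation labels above the threshold."""
    response = rekognition.detect_moderation_labels(
        Image={'S3Object': {'Bucket': bucket, 'Name': key}},
        MinConfidence=min_confidence
    )
    return len(response['ModerationLabels']) == 0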

Second, it uses avatar thumbnails on the top navigation bar and on leaderboards. This speeds up page loading and uses less network bandwidth. Image-editing software runs in a Lambda function to modify the uploaded file and store the result in S3 with the original.

Third, it uses Amazon CloudFront as a content delivery network (CDN) with the S3 bucket hosting images. This improves performance by implementing caching and serving static content from locations closer to the player. Additionally, using CloudFront allows you to keep the bucket private and provide greater security for the content stored within S3.

Finally, it stores profile picture URLs in DynamoDB and replicates the thumbnail URL in an Amazon Cognito user attribute named picture. This allows the game to retrieve the avatar URL as part of the login process, saving an HTTP GET request for the player profile.

The last step of the workflow publishes the result via an event to EventBridge for downstream systems to consume. The service routes the event to the notification component to inform the player about the moderation status.

Notifying users of the processing result

The result of the avatar workflow is important to the player but not urgent. Players want to know the result without it impacting their gameplay experience. A solution for this challenge is to use HTTP web push. It uses the HTTP protocol and does not require a constant communication channel between backend and front end. This allows players to keep playing without being blocked and without introducing latency to the game communications channel.

Applications requiring low latency fully bidirectional communication, such as highly interactive multi-player games, typically use WebSockets. This creates a persistent two-way channel for front end and backend to exchange information. The web push mechanism can provide non-urgent data and messages to the player without interrupting the WebSockets channel.

The web push protocol describes how to use a consolidated push service as a broker between the web-client and the backend. It accepts subscriptions from the client and receives push message delivery requests from the backend. Each browser vendor provides a push service implementation that is compliant with the W3C Push API specification and is external to both client and backend.

The web client is typically a browser where a JavaScript application interacts with the push service to subscribe and listen for incoming notifications. The backend is the application that notifies the front end. Here is an overview of the protocol with all the parties involved.

Notification process

  1. A component on the client subscribes to the configured push service by sending an HTTP POST request. The client keeps a background connection waiting for messages.
  2. The push service returns a URL identifying a push resource that the client distributes to backend applications that are allowed to send notifications.
  3. Backend applications request a message delivery by sending an HTTP POST request to the previously distributed URL.
  4. The push service forwards the information to the client.

This approach has four advantages. First, it reduces the effort to manage the reliability of the delivery process by off-loading it to an external and standardized component. Second, it minimizes cost and resource consumption. This is because it doesn’t require the backend to keep a persistent communication channel or compute resources to be constantly available. Third, it keeps complexity to a minimum because it relies on HTTP only without requiring additional technologies. Finally, HTTP web push addresses concepts such as message urgency and time-to-live (TTL) by using a standard.

Serverless HTTP web push

The implementation of the web push protocol requires the following components, per the Push API specification. First, the front end is required to create a push subscription. This is implemented through a service worker, a script running in the origin of the application. The service worker exposes operations to access the push service either creating subscriptions or listening for push events.

Serverless HTTP web push

  1. The client uses the service worker to subscribe to the push service via the Push API.
  2. The push service responds with a payload including a URL, which is the client’s push endpoint. The URL is used to create notification delivery requests.
  3. The browser enriches the subscription with public cryptographic keys, which are used to encrypt messages ensuring confidentiality.
  4. The backend must receive and store the subscription for when a delivery request is made to the push service. This is provided by API Gateway, Lambda, and DynamoDB. API Gateway exposes an HTTP API endpoint that accepts POST requests with the push service subscription as payload. The payload is stored in DynamoDB alongside the player identifier.

This front end code implements the process:

//Once service worker is ready
navigator.serviceWorker.ready
  .then(function (registration) {
    //Retrieve existing subscription or subscribe
    return registration.pushManager.getSubscription()
      .then(async function (subscription) {
        if (subscription) {
          console.log('got subscription!', subscription)
          return subscription;
        }
        /*
         * Using Public key of our backend to make sure only our
         * application backend can send notifications to the returned
         * endpoint
         */
        const convertedVapidKey = self.vapidKey;
        return registration.pushManager.subscribe({
          userVisibleOnly: true,
          applicationServerKey: convertedVapidKey
        });
      });
  }).then(function (subscription) {
    //Distributing the subscription to the application backend
    console.log('register!', subscription);
    const body = JSON.stringify(subscription);
    const parms = {jwt: jwt, playerName: playerName, subscription: body};
    //Call to the API endpoint to save the subscription
    const res = DataService.postPlayerSubscription(parms);
    console.log(res);
  });

Next, the backend reacts to the avatar workflow completed custom event to create a delivery request. This is accomplished with EventBridge and Lambda.

Backend process after avatar workflow completed

  1. EventBridge routes the event to a Lambda function.
  2. The function retrieves the player’s agent subscriptions, including push endpoint and encryption keys, from DynamoDB.
  3. The function sends an HTTP POST to the push endpoint with the encrypted message as payload.
  4. When the push service delivers the message, the browser activates the service worker updating local state and displaying the notification.

The push service allows creating delivery requests based on the knowledge of the endpoint and the front end allows the backend to deliver messages by distributing the endpoint. HTTPS provides encryption for data in transit while DynamoDB encrypts all your data at rest to provide confidentiality and security for the endpoint.

Security of WebPush can be further improved by using Voluntary Application Server Identification (VAPID). With WebPush, the clients authenticate messages at delivery time. VAPID allows the push service to perform message authentication on behalf of the web client avoiding denial-of-service risk. Without the additional security of VAPID, any application knowing the push service endpoint might successfully create delivery requests with an invalid payload. This can cause the player’s agent to accept messages from unauthorized services and, possibly, cause a denial-of-service to the client by overloading its capabilities.

VAPID requires backend applications to own a key pair. In Simple Trivia Service, a Lambda function, which is an AWS CloudFormation custom resource, generates the key pair when deploying the stack. It securely saves the values in AWS Systems Manager (SSM) Parameter Store.

Here is a representation of VAPID in action:

VAPID process architecture

  1. The front end specifies which backend the push service can accept messages from. It does this by including the public key from VAPID in the subscription request.
  2. When requesting a message delivery, the backend self-identifies by including the public key and a token signed with the private key in the HTTP Authorization header. If the keys match and the client uses the public key at subscription, the message is sent. If not, the message is blocked by the push service.

The Lambda function that sends delivery requests to the push service reads the key values from SSM. It uses them to generate the Authorization header to include in the request, allowing for successful delivery to the client endpoint.
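
A simplified sketch of that delivery function is shown below. It assumes the pywebpush library is packaged with the function; the SSM parameter name and contact address are placeholders, and the actual Simple Trivia Service implementation may differ:

import json
import boto3
from pywebpush import webpush

ssm = boto3.client('ssm')

def send_push(subscription, message):
    # Read the VAPID private key stored at stack deployment time
    private_key = ssm.get_parameter(
        Name='/webpush/vapid-private-key',  # placeholder parameter name
        WithDecryption=True
    )['Parameter']['Value']

    # pywebpush encrypts the payload and signs the VAPID Authorization header
    webpush(
        subscription_info=subscription,  # the stored Push API subscription object
        data=json.dumps(message),
        vapid_private_key=private_key,
        vapid_claims={'sub': 'mailto:admin@example.com'}
    )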

Conclusion

This post shows how you can add scaling support for a game via automation. The example uses Amazon Rekognition to check images for unacceptable content and uses asynchronous architecture patterns with Step Functions and HTTP WebPush. These scaling approaches can help you to maximize your technical and personnel investments.

For more serverless learning resources, visit Serverless Land.

Automating Amazon CloudWatch dashboards and alarms for Amazon Managed Workflows for Apache Airflow

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/automating-amazon-cloudwatch-dashboards-and-alarms-for-amazon-managed-workflows-for-apache-airflow/

This post is written by Mark Richman, Senior Solutions Architect.

Amazon Managed Workflows for Apache Airflow (MWAA) is a fully managed service that makes it easier to run open-source versions of Apache Airflow on AWS. It allows you to build workflows to run your extract-transform-load (ETL) jobs and data pipelines.

When working with MWAA, you often need to know more about the performance of your Airflow environment to achieve full insight and observability of your system. Airflow emits a number of useful metrics to Amazon CloudWatch, which are described in the product documentation. MWAA allows customers to define CloudWatch dashboards and alarms based upon the metrics and logs that Apache Airflow emits.

Airflow exposes metrics such as number of Directed Acyclic Graph (DAG) processes, DAG bag size, number of currently running tasks, task failures, and successes. Airflow is already set up to send metrics for an MWAA environment to CloudWatch.

This blog demonstrates a solution that automatically detects any deployed Airflow environments associated with the AWS account. It builds a CloudWatch dashboard and some useful alarms for each. All source code for this blog post is available on GitHub.

You automate the creation of a CloudWatch dashboard, which displays several of these key metrics together with CloudWatch alarms. These alarms receive notifications when the metrics fall outside of the thresholds that you configure, and allow you to perform actions in response.

Prerequisites

Deploying this solution requires:

  • An AWS account with credentials that allow deploying the stack, and one or more MWAA environments to monitor (environments created later are detected automatically).
  • The AWS SAM CLI and Docker installed, because the build step uses the --use-container flag.

Overview

Based on AWS serverless services, this solution includes:

  1. An Amazon EventBridge rule that runs on a schedule to invoke an AWS Step Functions workflow.
  2. Step Functions orchestrates several AWS Lambda functions to query the existing MWAA environments.
  3. The Lambda functions update the CloudWatch dashboard definition to include metrics such as QueuedTasks, RunningTasks, SchedulerHeartbeat, TasksPending, and TotalParseTime.
  4. CloudWatch alarms are created for unhealthy workers and heartbeat failure across all MWAA environments. These alarms are removed for any environments that no longer exist.
  5. Any MWAA environments that no longer exist also have their respective CloudWatch dashboards removed.

Reference architecture

EventBridge, a serverless event service, is configured to invoke a Step Functions workflow every 10 minutes. You can configure this for your preferred interval. Step Functions invokes a number of Lambda functions in parallel. If a function throws an error, the failed step in the state machine transitions to its respective failed state and the entire workflow ends.

Each of the Lambda functions performs a single task, orchestrated by Step Functions. Each function has a descriptive name for the task it performs in the workflow.
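
For example, the function that discovers environments can be little more than a paginated call to the MWAA API. This is a sketch rather than the exact repository code:

import boto3

mwaa = boto3.client('mwaa')

def list_mwaa_environments():
    """Return the names of all MWAA environments in this account and Region."""
    environments = []
    response = mwaa.list_environments()
    environments.extend(response['Environments'])
    # Follow the pagination token until all environment names are collected
    while 'NextToken' in response:
        response = mwaa.list_environments(NextToken=response['NextToken'])
        environments.extend(response['Environments'])
    return environments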

Understanding the CreateDashboardFunction function

When you deploy the AWS SAM template, the SeedDynamoDBFunction Lambda function is invoked. The function populates a DynamoDB table called DashboardTemplateTable with a CloudWatch dashboard definition. This definition is a template for any new CloudWatch dashboard created by CreateDashboardFunction.

You can see this definition in the GitHub repo:

{
  "widgets": [{
      "type": "metric",
      "x": 0,
      "y": 0,
      "width": 12,
      "height": 6,
      "properties": {
        "view": "timeSeries",
        "stacked": true,
        "metrics": [
          [
            "AmazonMWAA",
            "QueuedTasks",
            "Function",
            "Executor",
            "Environment",
            "${EnvironmentName}"
          ]
        ],
        "region": "${AWS::Region}",
        "title": "QueuedTasks ${EnvironmentName}",
        "period": 300
      }
    },
    ...
}

When this Step Functions workflow runs, it runs CreateDashboardFunction. This iterates through all the MWAA environments in the account, creating or updating its corresponding CloudWatch dashboard. You can see the code in /functions/create_dashboard/app.py:

for env in mwaa_environments:
    dashboard_name = f"Airflow-{env}"

    dashboard_body = dashboard_template.replace(
        "${AWS::Region}", os.getenv("AWS_REGION", "us-east-1")
    ).replace("${EnvironmentName}", env)

    logger.info(f"Creating/updating dashboard: {dashboard_name}")
    logger.debug(dashboard_body)

    response = cloudwatch.put_dashboard(
        DashboardName=dashboard_name, DashboardBody=dashboard_body
    )

    logger.info(json.dumps(response, indent=2))

Step Functions state machine

Here is a visualization of the Step Functions state machine:

State machine visualization

Step Functions is based on state machines and tasks. A state machine is a workflow. A task is a state in a workflow that represents a single unit of work that another AWS service performs. Each step in a workflow is a state. The state machine in this solution is defined in JSON format in the repo.

Building and deploying the example application

To build and deploy this solution:

  1. Clone the repo from GitHub:
    git clone https://github.com/aws-samples/mwaa-dashboard
    cd mwaa-dashboard
    
  2. Build the application. Since Lambda functions may depend on packages that have natively compiled programs, use the --use-container flag. This flag compiles your functions locally in a Docker container that behaves like the Lambda environment:
    sam build --use-container
  3. Deploy the application to your AWS account:
    sam deploy --guided

This command packages and deploys the application to your AWS account. It provides a series of prompts:

  • Stack Name: The name of the stack to deploy to AWS CloudFormation. This should be unique to your account and Region. This walkthrough uses mwaa-dashboard throughout this project.
  • AWS Region: The AWS Region you want to deploy your app to.
  • Confirm changes before deploy: If set to yes, any change sets will be shown to you for manual review. If set to no, the AWS SAM CLI automatically deploys application changes.
  • Respond to the remaining prompts per the SAM CLI command reference.

Updating the CloudWatch dashboard template definition in DynamoDB

The CloudWatch dashboard template definition is stored in DynamoDB. This is a one-time setup step, performed by the functions/seed_dynamodb Lambda custom resource at stack deployment time.

To override the template, you can edit the data directly in DynamoDB using the AWS Management Console. Alternatively, modify the scripts/dashboard-template.json file and update DynamoDB using the scripts/seed.py script.

cd scripts
./seed.py -t <dynamodb-table-name>
cd ..

Here, <dynamodb-table-name> is the name of the DynamoDB table created during deployment. For example:

./seed.py -t mwaa-dashboard-DashboardTemplateTable-VA2M5945RCF1

Viewing the CloudWatch dashboards and alarms

If you have any existing MWAA environments, or create new ones, a dashboard for each appears with the Airflow- prefix. If you delete an MWAA environment, the corresponding dashboard is also deleted. No CloudWatch metrics are deleted.

Upon successful completion of the Step Functions workflow, you see a list of custom dashboards. There is one for each of your MWAA environments:

Custom dashboards view

Choosing the dashboard name displays the widgets defined in the JSON described previously. Each widget corresponds to an Airflow key performance indicator (KPI). The dashboards can be customized through the AWS Management Console without any code changes.

Example dashboard

These are the metrics:

  • QueuedTasks: The number of tasks with queued state. Corresponds to the executor.queued_tasks Airflow metric.
  • TasksPending: The number of tasks pending in executor. Corresponds to the scheduler.tasks.pending Airflow metric.
  • RunningTasks: The number of tasks running in executor. Corresponds to the executor.running_tasks Airflow metric.
  • SchedulerHeartbeat: The number of check-ins Airflow performs on the scheduler job. Corresponds to the scheduler_heartbeat Airflow metric.
  • TotalParseTime: Number of seconds taken to scan and import all DAG files once. Corresponds to the dag_processing.total_parse_time Airflow metric.

More information on all MWAA metrics available in CloudWatch can be found in the documentation. CloudWatch alarms are also created for each MWAA environment:

CloudWatch alarms list

By default, you create two alarms:

  • {environment name}-UnhealthyWorker: This alarm triggers if the number of QueuedTasks is greater than the number of RunningTasks, and the number of RunningTasks is zero, for a period of 15 minutes.
  • {environment name}-HeartbeatFail: This alarm triggers if SchedulerHeartbeat is zero for a period of five minutes.

You can configure actions in response to these alarms, such as an Amazon SNS notification to email or a Slack message.
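
As an indication of what the heartbeat alarm looks like when created programmatically, here is a sketch using boto3. The SNS topic ARN is a placeholder, the dimensions follow the pattern used in the dashboard template above, and the exact thresholds in the repository may differ:

import boto3

cloudwatch = boto3.client('cloudwatch')
env_name = 'my-mwaa-environment'  # placeholder environment name

cloudwatch.put_metric_alarm(
    AlarmName=f'{env_name}-HeartbeatFail',
    Namespace='AmazonMWAA',
    MetricName='SchedulerHeartbeat',
    # Dimension names assumed to match the dashboard template shown earlier
    Dimensions=[
        {'Name': 'Function', 'Value': 'Scheduler'},
        {'Name': 'Environment', 'Value': env_name}
    ],
    Statistic='Sum',
    Period=60,
    EvaluationPeriods=5,
    Threshold=0,
    ComparisonOperator='LessThanOrEqualToThreshold',
    # Placeholder SNS topic for email or Slack notifications
    AlarmActions=['arn:aws:sns:eu-west-1:123456789012:mwaa-alerts']
)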

Cleaning up

After testing this application, delete the resources created to avoid ongoing charges. You can use the AWS CLI, AWS Management Console, or the AWS APIs to delete the CloudFormation stack deployed by AWS SAM.

To delete the stack via the AWS CLI, run the following command:

aws cloudformation delete-stack --stack-name mwaa-dashboard

The log groups all share the prefix /aws/lambda/mwaa-dashboard. Delete these with the command:

aws logs delete-log-group --log-group-name <log group>

Conclusion

With Amazon MWAA, you can spend more time building workflows and less time managing and scaling infrastructure. This article shows a serverless example that automatically creates CloudWatch dashboards and alarms for all existing and new MWAA environments. With this example, you can achieve better observability for your MWAA environments.

To get started with MWAA, visit the user guide. To deploy this solution in your own AWS account, visit the GitHub repo for this article.

For more serverless learning resources, visit Serverless Land.

Integrating Amazon API Gateway private endpoints with on-premises networks

Post Syndicated from Eric Johnson original https://aws.amazon.com/blogs/compute/integrating-amazon-api-gateway-private-endpoints-with-on-premises-networks/

This post was written by Ahmed ElHaw, Sr. Solutions Architect

Using AWS Direct Connect or AWS Site-to-Site VPN, customers can establish a private virtual interface from their on-premises network directly to their Amazon Virtual Private Cloud (VPC). Hybrid networking enables customers to benefit from the scalability, elasticity, and ease of use of AWS services while using their corporate network.

Amazon API Gateway can make it easier for developers to interface with and expose other services in a uniform and secure manner. You can use it to interface with other AWS services such as Amazon SageMaker endpoints for real-time machine learning predictions or serverless compute with AWS Lambda. API Gateway can also integrate with HTTP endpoints and VPC links in the backend.

This post shows how to set up a private API Gateway endpoint with a Lambda integration. It uses a Route 53 resolver, which enables on-premises clients to resolve AWS private DNS names.

Overview

API Gateway private endpoints allow you to use private API endpoints inside your VPC. When used with Route 53 resolver endpoints and hybrid connectivity, you can access APIs and their integrated backend services privately from on-premises clients.

You can deploy the example application using the AWS Serverless Application Model (AWS SAM). The deployment creates a private API Gateway endpoint with a Lambda integration and a Route 53 inbound endpoint. I explain the security configuration of the AWS resources used. This is the solution architecture:

Private API Gateway with a Hello World Lambda integration.

  1. The client calls the private API endpoint (for example, GET https://abc123xyz0.execute-api.eu-west-1.amazonaws.com/demostage).
  2. The client asks the on-premises DNS server to resolve (abc123xyz0.execute-api.eu-west-1.amazonaws.com). You must configure the on-premises DNS server to forward DNS queries for the AWS-hosted domains to the IP addresses of the inbound resolver endpoint. Refer to the documentation for your on-premises DNS server to configure DNS forwarders.
  3. After the client successfully resolves the API Gateway private DNS name, it receives the private IP address of the VPC Endpoint of the API Gateway.
    Note: Call the DNS endpoint of the API Gateway for the HTTPS certificate to work. You cannot call the IP address of the endpoint directly.
  4. Amazon API Gateway passes the payload to Lambda through an integration request.
  5. If Route 53 Resolver query logging is configured, queries from on-premises resources that use the endpoint are logged.

Prerequisites

To deploy the example application in this blog post, you need:

  • AWS credentials that provide the necessary permissions to create the resources. This example uses admin credentials.
  • AWS Site-to-Site VPN or AWS Direct Connect with routing rules that allow DNS traffic to pass through to the Amazon VPC.
  • The AWS SAM CLI installed.
  • Clone the GitHub repository.

Deploying with AWS SAM

  1. Navigate to the cloned repo directory. Alternatively, use the sam init command and paste the repo URL:

    SAM init example

  2. Build the AWS SAM application:
    sam build
  3. Deploy the AWS SAM application:
    sam deploy --guided

This stack creates and configures a virtual private cloud (VPC) with two private subnets (for resiliency) and DNS resolution enabled. It also creates a VPC endpoint with the service name com.amazonaws.{region}.execute-api, private DNS names enabled, and a security group that allows TCP port 443 inbound from a managed prefix list. You can edit the created prefix list with one or more CIDR blocks.

It also deploys an API Gateway private endpoint and an API Gateway resource policy that restricts access to the API, except from the VPC endpoint. There is also a “Hello world” Lambda function and a Route 53 inbound resolver with a security group that allows inbound TCP and UDP traffic on the DNS port (53) from the on-premises prefix list.

A VPC endpoint is a logical construct consisting of elastic network interfaces deployed in subnets. The elastic network interface is assigned a private IP address from your subnet space. For high availability, deploy in at least two Availability Zones.

Private API Gateway VPC endpoint

Route 53 inbound resolver endpoint

Route 53 resolver is the Amazon DNS server. It is sometimes referred to as “AmazonProvidedDNS” or the “.2 resolver” that is available by default in all VPCs. Route 53 resolver responds to DNS queries from AWS resources within a VPC for public DNS records, VPC-specific DNS names, and Route 53 private hosted zones.

Integrating your on-premises DNS server with AWS DNS server requires a Route 53 resolver inbound endpoint (for DNS queries that you’re forwarding to your VPCs). When creating an API Gateway private endpoint, a private DNS name is created by API Gateway. This endpoint is resolved automatically from within your VPC.

However, the on-premises servers learn about this hostname from AWS. For this, create a Route 53 inbound resolver endpoint and point your on-premises DNS server to it. This allows your corporate network resources to resolve AWS private DNS hostnames.

To improve reliability, the resolver requires that you specify two IP addresses for DNS queries. AWS recommends configuring IP addresses in two different Availability Zones. After you add the first two IP addresses, you can optionally add more in the same or different Availability Zone.

The inbound resolver is a logical resource consisting of two elastic network interfaces. These are deployed in two different Availability Zones for resiliency.
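
If you want to script the endpoint outside of the AWS SAM template, a sketch of the equivalent API call looks like the following; the subnet and security group IDs are placeholders:

import uuid
import boto3

resolver = boto3.client('route53resolver')

response = resolver.create_resolver_endpoint(
    CreatorRequestId=str(uuid.uuid4()),
    Name='apigw-inbound-resolver',
    Direction='INBOUND',
    SecurityGroupIds=['sg-0123456789abcdef0'],  # placeholder for ResolverSG
    IpAddresses=[
        # Two IP addresses in different Availability Zones for resiliency
        {'SubnetId': 'subnet-0123456789abcdef0'},
        {'SubnetId': 'subnet-0fedcba9876543210'}
    ]
)
print(response['ResolverEndpoint']['Id'])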

Route 53 inbound resolver

Configuring the security groups and resource policy

In the security pillar of the AWS Well-Architected Framework, one of the seven design principles is applying security at all layers: Apply a defense in depth approach with multiple security controls. Apply to all layers (edge of network, VPC, load balancing, every instance and compute service, operating system, application, and code).

A few security configurations are required for the solution to function:

  • The resolver security group (referred to as ‘ResolverSG’ in solution diagram) inbound rules must allow TCP and UDP on port 53 (DNS) from your on-premises network-managed prefix list (source). Note: configure the created managed prefix list with your on-premises network CIDR blocks.
  • The security group of the VPC endpoint of the API Gateway “VPCEndpointSG” must allow HTTPS access from your on-premises network-managed prefix list (source). Note: configure the created managed prefix list with your on-premises network CIDR blocks.
  • For a private API Gateway to work, a resource policy must be configured. The AWS SAM deployment sets up an API Gateway resource policy that allows access to your API from the VPC endpoint. We are telling API Gateway to deny any request explicitly unless it is originating from a defined source VPC endpoint.
    Note: The AWS SAM template creates the following policy:

    {
      "Version": "2012-10-17",
      "Statement": [
          {
              "Effect": "Allow",
              "Principal": "*",
              "Action": "execute-api:Invoke",
              "Resource": "arn:aws:execute-api:eu-west-1:12345678901:dligg9dxuk/DemoStage/GET/hello"
          },
          {
              "Effect": "Deny",
              "Principal": "*",
              "Action": "execute-api:Invoke",
              "Resource": "arn:aws:execute-api:eu-west-1: 12345678901:dligg9dxuk/DemoStage/GET/hello",
              "Condition": {
                  "StringNotEquals": {
                      "aws:SourceVpce": "vpce-0ac4147ba9386c9z7"
                  }
              }
          }
      ]
    }

     

The AWS SAM deployment creates a Hello World Lambda function. For demonstration purposes, the Lambda function always returns a successful response, conforming to the API Gateway integration response format.

Testing the solution

To test, invoke the API using a curl command from an on-premises client. To get the API URL, copy it from the on-screen AWS SAM deployment outputs. Alternatively, copy it from the Outputs section of the stack in the AWS CloudFormation console.

CloudFormation outputs

Next, go to Route 53 resolvers, select the created inbound endpoint, and note the endpoint IP addresses. Configure your on-premises DNS forwarder with these IP addresses. Refer to the documentation for your on-premises DNS server to configure DNS forwarders.

Route 53 resolver IP addresses
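To confirm that the forwarding path works, you can resolve the private API hostname directly against the inbound resolver IP addresses from an on-premises host. This is a minimal sketch, assuming the dnspython package is installed; the resolver IPs are placeholders, and the query should return the private IP addresses of the API Gateway VPC endpoint.

import dns.resolver

# Point the resolver at the Route 53 inbound endpoint IP addresses (placeholders)
resolver = dns.resolver.Resolver(configure=False)
resolver.nameservers = ["10.0.1.10", "10.0.2.10"]

# Resolve the private DNS name created for the API Gateway private endpoint
answer = resolver.resolve("dligg9dxuk.execute-api.eu-west-1.amazonaws.com", "A")
for record in answer:
    print(record.address)   # expect the VPC endpoint private IPs, for example 10.0.x.x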

Finally, log on to your on-premises client and call the API Gateway endpoint. You should get a success response from the API Gateway as shown.

curl https://dligg9dxuk.execute-api.eu-west-1.amazonaws.com/DemoStage/hello

{"response": {"resultStatus": "SUCCESS"}}

Monitoring and troubleshooting

Route 53 resolver query logging allows you to log the DNS queries that originate in your VPCs. It shows which domain names are queried, the originating AWS resources (including source IP and instance ID) and the responses.

You can log the DNS queries that originate in VPCs that you specify, in addition to the responses to those DNS queries. You can also log DNS queries from on-premises resources that use an inbound resolver endpoint, and DNS queries that use an outbound resolver endpoint for recursive DNS resolution.

After configuring query logging from the console, you can use Amazon CloudWatch as the destination for the query logs. You can use this feature to view and troubleshoot the resolver.
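Query logging can also be enabled programmatically. This is a minimal boto3 sketch, assuming an existing CloudWatch log group; the log group ARN is a placeholder and the VPC ID matches the sample log entry that follows.

import boto3

resolver = boto3.client("route53resolver")

# Create a query log configuration that delivers logs to CloudWatch Logs
config = resolver.create_resolver_query_log_config(
    Name="demo-query-logging",
    DestinationArn="arn:aws:logs:eu-west-1:123456789012:log-group:/route53/queries",  # placeholder
    CreatorRequestId="demo-query-logging-001",
)

# Associate the configuration with the VPC whose queries you want to log
resolver.associate_resolver_query_log_config(
    ResolverQueryLogConfigId=config["ResolverQueryLogConfig"]["Id"],
    ResourceId="vpc-0c00ca6aa29c8472f",  # VPC ID from this example
)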

{
    "version": "1.100000",
    "account_id": "1234567890123",
    "region": "eu-west-1",
    "vpc_id": "vpc-0c00ca6aa29c8472f",
    "query_timestamp": "2021-04-25T12:37:34Z",
    "query_name": "dligg9dxuk.execute-api.eu-west-1.amazonaws.com.",
    "query_type": "A",
    "query_class": "IN",
    "rcode": "NOERROR",
    "answers": [
        {
            "Rdata": "10.0.140.226”, API Gateway VPC Endpoint IP#1
            "Type": "A",
            "Class": "IN"
        },
        {
            "Rdata": "10.0.12.179", API Gateway VPC Endpoint IP#2
            "Type": "A",
            "Class": "IN"
        }
    ],
    "srcaddr": "172.31.6.137", ONPREMISES CLIENT
    "srcport": "32843",
    "transport": "UDP",
    "srcids": {
        "resolver_endpoint": "rslvr-in-a7dd746257784e148",
        "resolver_network_interface": "rni-3a4a0caca1d0412ab"
    }
}

Cleaning up

To remove the example application, navigate to CloudFormation and delete the stack.

Conclusion

API Gateway private endpoints allow use cases for building private API–based services inside your VPCs. You can keep both the frontend to your application (API Gateway) and the backend service private inside your VPC.

I discuss how to access your private APIs from your corporate network through Direct Connect or Site-to-Site VPN without exposing your endpoints to the internet. You deploy the demo using AWS Serverless Application Model (AWS SAM). You can also change the template for your own needs.

To learn more, visit the API Gateway tutorials and workshops page in the API Gateway developer guide.

For more serverless learning resources, visit Serverless Land.

Using serverless to load test Amazon API Gateway with authorization

Post Syndicated from Eric Johnson original https://aws.amazon.com/blogs/compute/using-serverless-to-load-test-amazon-api-gateway-with-authorization/

This post was written by Ashish Mehra, Sr. Solutions Architect and Ramesh Chidirala, Solutions Architect

Many customers design their applications to use Amazon API Gateway as the front door and load test their API endpoints before deploying to production. Customers want to simulate the actual usage scenario, including authentication and authorization. The load test ensures that the application works as expected under high traffic and spiky load patterns.

This post demonstrates using AWS Step Functions for orchestration, AWS Lambda to simulate the load and Amazon Cognito for authentication and authorization. There is no need to use any third-party software or containers to implement this solution.

The serverless load test solution shown here can scale from 1,000 to 1,000,000 calls in a few minutes. It invokes API Gateway endpoints but you can reuse the solution for other custom API endpoints.

Overall architecture

Overall architecture diagram

Solution design 

The serverless API load test framework is built using Step Functions that invoke Lambda functions using a fan-out design pattern. Each Lambda function obtains a user-specific JWT access token from an Amazon Cognito user pool and invokes the authenticated API Gateway route.

The solution contains two workflows.

1. Load test workflow

The load test workflow comprises a multi-step process that includes a combination of sequential and parallel steps. The sequential steps include user pool configuration, user creation, and access token generation followed by API invocation in a fan-out design pattern. Step Functions provides a reliable way to build and run such multi-step workflows with support for logging, retries, and dynamic parallelism.

Step Functions workflow diagram for load test

The Step Functions state machine orchestrates the following workflow:

  1. Validate input parameters.
  2. Invoke Lambda function to create a user ID array in the series loadtestuser0, loadtestuser1, and so on. This array is passed as an input to subsequent Lambda functions.
  3. Invoke Lambda to create:
    1. Amazon Cognito user pool
    2. Test users
    3. App client configured for admin authentication flow.
  4. Invoke Lambda functions in a fan-out pattern using dynamic parallelism support in Step Functions. Each function does the following:
    1. Retrieves an access token (one token per user) from Amazon Cognito
    2. Sends an HTTPS request to the specified API Gateway endpoint by passing an access token in the header.

For testing purposes, users can configure mock integration or use Lambda integration for the backend.
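To illustrate what each fanned-out function does in step 4, the following is a minimal Python sketch using boto3 and the requests library (the repository's implementation may differ). It assumes an app client with the admin authentication flow enabled, a test user that already exists, and placeholder pool, client, credential, and URL values.

import boto3
import requests

cognito = boto3.client("cognito-idp")

def invoke_api(user_pool_id, app_client_id, username, password, api_url, calls):
    # Obtain a JWT access token for the test user (admin auth flow)
    auth = cognito.admin_initiate_auth(
        UserPoolId=user_pool_id,
        ClientId=app_client_id,
        AuthFlow="ADMIN_USER_PASSWORD_AUTH",
        AuthParameters={"USERNAME": username, "PASSWORD": password},
    )
    token = auth["AuthenticationResult"]["AccessToken"]

    # Call the authenticated API Gateway route repeatedly, passing the token in the header
    headers = {"Authorization": f"Bearer {token}"}
    for _ in range(calls):
        response = requests.get(api_url, headers=headers)
        response.raise_for_status()

# Example usage with placeholder values
invoke_api(
    user_pool_id="eu-west-1_EXAMPLE",
    app_client_id="exampleclientid",
    username="loadtestuser0",
    password="ExamplePassw0rd!",
    api_url="https://abc123.execute-api.eu-west-1.amazonaws.com/hello",
    calls=10,
)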

2. Cleanup workflow

Step Functions workflow diagram for cleanup

As part of the cleanup workflow, the Step Functions state machine invokes a Lambda function to delete the specified number of users from the Amazon Cognito user pool.

Prerequisites to implement the solution

The following prerequisites are required for this walk-through:

  1. AWS account
  2. AWS SAM CLI
  3. Python 3.7
  4. Pre-existing non-production API Gateway HTTP API deployed with a JWT authorizer that uses Amazon Cognito as an identity provider. Refer to this video from the Twitch series #SessionsWithSAM, which provides a walkthrough for building and deploying a simple HTTP API with a JWT authorizer.

Since this solution involves modifying the API Gateway endpoint’s authorizer settings, it is recommended to load test non-production environments or production-comparable APIs. Revert these settings after the load test is complete. Also, first check the Lambda and Amazon Cognito service quotas in the AWS account you plan to use.

Step-by-step instructions

Use AWS CloudShell to deploy the AWS Serverless Application Model (AWS SAM) template. AWS CloudShell is a browser-based shell pre-installed with common development tools. It includes 1 GB of free persistent storage per Region and is pre-authenticated with your console credentials. You can also use AWS Cloud9 or your preferred IDE. You can check for AWS CloudShell supported Regions here.

Depending on your load test requirements, you can specify the total number of unique users to be created. You can also specify the number of API Gateway requests to be invoked per user every time you run the load test. These factors influence the overall test duration, concurrency, and cost. Refer to the cost optimization section of this post for tips on minimizing the overall cost of the solution, and to the cleanup section for instructions to delete the resources and stop incurring further charges.

  1. Clone the repository by running the following command:
    git clone https://github.com/aws-snippets/sam-apiloadtest.git
  2. Change to the sam-apiloadtest directory and run the following command to build the application source:
    sam build
  3. Run the following command to package and deploy the application to AWS, with a series of prompts. When prompted for apiGatewayUrl, provide the API Gateway URL route you intend to load test.
    sam deploy --guided

    Example of SAM deploy

  4. After the stack creation is complete, you should see UserPoolID and AppClientID in the outputs section.

    Example of stack outputs

  5. Navigate to the API Gateway console and choose the HTTP API you intend to load test.
  6. Choose Authorization and select the authenticated route configured with a JWT authorizer.

    API Gateway console display after stack is deployed

  7. Choose Edit Authorizer and update the IssuerURL with Amazon Cognito user pool ID and audience app client ID with the corresponding values from the stack output section in step 4.

    Editing the issuer URL

  8. Set authorization scope to aws.cognito.signin.user.admin.

    Setting the authorization scopes

  9. Open the Step Functions console and choose the state machine named apiloadtestCreateUsersAndFanOut-xxx.
  10. Choose Start Execution and provide the following JSON input. Configure the number of users for the load test and the number of calls per user:
    {
      "users": {
        "NumberOfUsers": "100",
        "NumberOfCallsPerUser": "100"
      }
    }
  11. After the execution, you see the status updated to Succeeded.

 

Checking the load test results

The load test’s primary goal is to achieve high concurrency. The main metric to check the test’s effectiveness is the count of successful API Gateway invocations. While load testing your application, find other metrics that may identify potential bottlenecks. Refer to the following steps to inspect CloudWatch Logs after the test is complete:

  1. Navigate to API Gateway service within the console, choose Monitor → Logging, select the $default stage, and choose the Select button.
  2. Choose View Logs in CloudWatch to navigate to the CloudWatch Logs service, which loads the log group and displays the most recent log streams.
  3. Choose the “View in Logs Insights” button to navigate to the Log Insights page. Choose Run Query.
  4. The query results appear along with a bar graph showing the log group’s distribution of log events. The number of records indicates the number of API Gateway invocations.

    Histogram of API Gateway invocations

  5. To visualize p95 metrics, navigate to CloudWatch metrics, choose ApiGateway → ApiId → Latency.
  6. Choose the “Graphed metrics (1)” tab.

    Adding latency metric

  7. Select p95 from the Statistic dropdown.

    Setting the p95 value

  8. The percentile metrics help visualize the distribution of latency metrics. They can help you find critical outliers or unusual behaviors, and discover potential bottlenecks in your application’s backend. A programmatic way to retrieve the same p95 metric is sketched after this list.

    Example of the p95 data
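The following is a minimal boto3 sketch for retrieving the same p95 latency figure for the HTTP API; the API ID and time window are placeholder values.

import boto3
from datetime import datetime, timedelta

cloudwatch = boto3.client("cloudwatch")

# Fetch the p95 latency for the HTTP API over the last hour (placeholder API ID)
stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/ApiGateway",
    MetricName="Latency",
    Dimensions=[{"Name": "ApiId", "Value": "abc123"}],
    StartTime=datetime.utcnow() - timedelta(hours=1),
    EndTime=datetime.utcnow(),
    Period=300,
    ExtendedStatistics=["p95"],
)

for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], point["ExtendedStatistics"]["p95"])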

Cleanup 

  1. To delete Amazon Cognito users, run the Step Functions workflow apiloadtestDeleteTestUsers. Provide the following input JSON with the same number of users that you created earlier:
    {
    "NumberOfUsers": "100"
    }
  2. Step Functions invokes the cleanUpTestUsers Lambda function. It is configured with the test Amazon Cognito user pool ID and app client ID environment variables created during the stack deployment. The users are deleted from the test user pool.
  3. The Lambda function also schedules the corresponding KMS keys for deletion after seven days, the minimum waiting period.
  4. After the state machine is finished, navigate to Cognito → Manage User Pools → apiloadtest-loadtestidp → Users and Groups. Refresh the page to confirm that all users are deleted.
  5. To delete all the resources permanently and stop incurring cost, navigate to the CloudFormation console, select aws-apiloadtest-framework stack, and choose Delete → Delete stack.

Cost optimization

The load test workflow is repeatable and can be reused multiple times for the same or different API Gateway routes. You can reuse Amazon Cognito users for multiple tests since Amazon Cognito pricing is based on the monthly active users (MAUs). Repeatedly deleting and recreating users may exceed the AWS Free Tier or incur additional charges.

Customizations

You can change the number of users and number of calls per user to adjust the API Gateway load. The apiloadtestCreateUsersAndFanOut state machine validation step allows a maximum value of 1,000 for input parameters NumberOfUsers and NumberOfCallsPerUser.

You can customize and increase these values within the Step Functions input validation logic based on your account limits. To load test a different API Gateway route, configure the authorizer as per the step-by-step instructions provided earlier. Next, modify the api_url environment variable within aws-apiloadtest-framework-triggerLoadTestPerUser Lambda function. You can then run the load test using the apiloadtestCreateUsersAndFanOut state machine.
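The environment variable change described above can also be made with an SDK instead of the console. This boto3 sketch reads the function's current variables first so that only api_url is changed; the function name suffix and the URL are placeholders.

import boto3

lambda_client = boto3.client("lambda")
function_name = "aws-apiloadtest-framework-triggerLoadTestPerUser"  # add your stack's suffix

# Read the existing environment so other variables are preserved
config = lambda_client.get_function_configuration(FunctionName=function_name)
variables = config.get("Environment", {}).get("Variables", {})

# Point the load test at a different API Gateway route (placeholder URL)
variables["api_url"] = "https://abc123.execute-api.eu-west-1.amazonaws.com/other-route"

lambda_client.update_function_configuration(
    FunctionName=function_name,
    Environment={"Variables": variables},
)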

Conclusion

The blog post shows how to use Step Functions and its features to orchestrate a multi-step load test solution. I show how changing input parameters could increase the number of calls made to the API Gateway endpoint without worrying about scalability. I also demonstrate how to achieve cost optimization and perform clean-up to avoid any additional charges. You can modify this example to load test different API endpoints, identify bottlenecks, and check if your application is production-ready.

For more serverless learning resources, visit Serverless Land.

Developing evolutionary architecture with AWS Lambda

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/developing-evolutionary-architecture-with-aws-lambda/

This post was written by Luca Mezzalira, Principal Solutions Architect, Media and Entertainment.

Agility enables you to evolve a workload quickly, adding new features, or introducing new infrastructure as required. The key characteristics for achieving agility in a code base are loosely coupled components and strong encapsulation.

Loose coupling can help improve test coverage and create atomic refactoring. With encapsulation, you expose only what is needed to interact with a service without revealing the implementation logic.

Evolutionary architectures can help achieve agility in your design. In the book “Building Evolutionary Architectures”, this architecture is defined as one that “supports guided, incremental change across multiple dimensions”.

This blog post focuses on how to structure code for AWS Lambda functions in a modular fashion. It shows how to embrace the evolutionary aspect provided by the hexagonal architecture pattern and apply it to different use cases.

Introducing ports and adapters

Hexagonal architecture is also known as the ports and adapters architecture. It is an architectural pattern used for encapsulating domain logic and decoupling it from other implementation details, such as infrastructure or client requests.

Ports and adapters

  1. Domain logic: Represents the task that the application should perform, abstracting any interaction with the external world.
  2. Ports: Provide a way for the primary actors (on the left) to interact with the application, via the domain logic. The domain logic also uses ports for interacting with secondary actors (on the right) when needed.
  3. Adapters: A design pattern for transforming one interface into another interface. They wrap the logic for interacting with a primary or secondary actor.
  4. Primary actors: Users of the system such as a webhook, a UI request, or a test script.
  5. Secondary actors: Used by the application, these services are either a Repository (for example, a database) or a Recipient (such as a message queue).

Hexagonal architecture with Lambda functions

Lambda functions are units of compute logic that accomplish a specific task. For example, a function could manipulate data in an Amazon Kinesis stream, or process messages from an Amazon SQS queue.

In Lambda functions, hexagonal architecture can help you implement new business requirements and improve the agility of a workload. This approach can help create separation of concerns and separate the domain logic from the infrastructure. For development teams, it can also simplify the implementation of new features and parallelize the work across different developers.

The following example introduces a service for returning a stock value. The service supports different currencies for a frontend application that displays the information in a dashboard. The translation of a stock value between currencies happens in real time. The service must retrieve the exchange rates with every request made by the client.

The architecture for this service uses an Amazon API Gateway endpoint that exposes a REST API. When the client calls the API, it triggers a Lambda function. This gets the stock value from a DynamoDB table and the currency information from a third-party endpoint. The domain logic uses the exchange rate to convert the stock value to other currencies before responding to the client request.

The full example is available in the AWS GitHub samples repository. Here is the architecture for this service:

Hexagonal architecture example

  1. A client makes a request to the API Gateway endpoint, which invokes the Lambda function.
  2. The primary adapter receives the request. It captures the stock ID and passes it to the port:
    exports.lambdaHandler = async (event) => {
        try {
            // retrieve the stockID from the request
            const stockID = event.pathParameters.StockID;
            // pass the stockID to the port
            const response = await getStocksRequest(stockID);
            return response;
        } catch (err) {
            // return an error response if the request cannot be handled
            return { statusCode: 500, body: JSON.stringify({ error: err.message }) };
        }
    };
  3. The port is an interface for communicating with the domain logic. It enforces the separation between an adapter and the domain logic. With this approach, you can change and test the infrastructure and domain logic in isolation without impacting another part of the code base:
    const retrieveStock = async (stockID) => {
        try {
            // use the port "stock" to access the domain logic
            const stockWithCurrencies = await stock.retrieveStockValues(stockID);
            return stockWithCurrencies;
        } catch (err) {
            // log and rethrow so the primary adapter can handle the failure
            console.error(err);
            throw err;
        }
    }
    
  4. The port passes the stock ID to the domain logic entry point. The domain logic fetches the stock value from a DynamoDB table, then requests the exchange rates. It returns the computed values to the primary adapter via the port. The domain logic always uses a port to interact with an adapter because the ports are the interfaces with the external world:
    const CURRENCIES = ["USD", "CAD", "AUD"]
    const retrieveStockValues = async (stockID) => {
        try {
            // retrieve the stock value from DynamoDB using a port
            const stockValue = await Repository.getStockData(stockID);
            // fetch the currencies value using a port
            const currencyList = await Currency.getCurrenciesData(CURRENCIES);
            // calculate the stock value in different currencies
            const stockWithCurrencies = {
                stock: stockValue.STOCK_ID,
                values: {
                    "EUR": stockValue.VALUE
                }
            };
            for (const currency in currencyList.rates) {
                stockWithCurrencies.values[currency] = (stockValue.VALUE * currencyList.rates[currency]).toFixed(2);
            }
            // return the final computation to the port
            return stockWithCurrencies;
        } catch (err) {
            // log and rethrow so the calling port can handle the failure
            console.error(err);
            throw err;
        }
    }
    

 

This is how the domain logic interacts with the DynamoDB table:

DynamoDB interaction

  1. The domain logic uses the Repository port for interacting with the database. There is not a direct connection between the domain and the adapter:
    const getStockData = async (stockID) => {
        try {
            // the domain logic passes the request to fetch the stock value to this port
            const data = await getStockValue(stockID);
            return data.Item;
        } catch (err) {
            console.error(err);
            throw err;
        }
    }
    
  2. The secondary adapter encapsulates the logic for reading an item from a DynamoDB table. All the logic for interacting with DynamoDB is encapsulated in this module:
    const getStockValue = async (stockID) => {
        const params = {
            TableName: DB_TABLE,
            Key: {
                'STOCK_ID': stockID
            }
        }
        try {
            const stockData = await documentClient.get(params).promise();
            return stockData;
        } catch (err) {
            console.error(err);
            throw err;
        }
    }
    

 

The domain logic uses an adapter for fetching the exchange rates from the third-party service. It then processes the data and responds to the client request:

 

  1. The second operation in the business logic is retrieving the currency exchange rates. The domain logic requests the operation via a port that proxies the request to the adapter:
    const getCurrenciesData = async (currencies) => {
        try {
            const data = await getCurrencies(currencies);
            return data;
        } catch (err) {
            console.error(err);
            throw err;
        }
    }
    
  2. The currencies service adapter fetches the data from a third-party endpoint and returns the result to the domain logic.
    const getCurrencies = async (currencies) => {
        try {
            const res = await axios.get(`http://api.mycurrency.io?symbols=${currencies.toString()}`);
            return res.data;
        } catch (err) {
            console.error(err);
            throw err;
        }
    }
    

These eight steps show how to structure the Lambda function code using a hexagonal architecture.

Adding a cache layer

In this scenario, the production stock service experiences traffic spikes during the day. The external endpoint for the exchange rates cannot support the level of traffic. To address this, you can implement a caching strategy with Amazon ElastiCache using a Redis cluster. This approach uses a cache-aside pattern for offloading traffic to the external service.

Typically, it can be challenging to evolve code to implement this change without the separation of concerns in the code base. However, in this example, there is an adapter that interacts with the external service. Therefore, you can change the implementation to add the cache-aside pattern and maintain the same API contract with the rest of the application:

const getCurrencies = async (currencies) => {
    try {
        // Check whether the exchange rates are available in the Redis cluster
        let res = await asyncClient.get("CURRENCIES");
        if (res) {
            // If present, return the value retrieved from Redis
            return JSON.parse(res);
        }
        // Otherwise, fetch the data from the external service
        const getCurr = await axios.get(`http://api.mycurrency.io?symbols=${currencies.toString()}`);
        // Store the new values in the Redis cluster with an expiry time of 20 seconds
        await asyncClient.set("CURRENCIES", JSON.stringify(getCurr.data), "ex", 20);
        // Return the data to the port
        return getCurr.data;
    } catch (err) {
        console.error(err);
        throw err;
    }
}

This is a low-effort change only affecting the adapter. The domain logic and port interacting with the adapter are untouched and maintain the same API contract. The encapsulation provided by this architecture helps to evolve the code base. It also preserves many of the tests in place, considering only an adapter is modified.

Moving domain logic from a container to a Lambda function

In this example, the team working on this workload originally wrapped all the functionality inside a container using AWS Fargate with Amazon ECS. In this case, the developers define a route for the GET method for retrieving the stock value:

// This web application uses the Fastify framework
fastify.get('/stock/:StockID', async (request, reply) => {
    try {
        const stockID = request.params.StockID;
        const response = await getStocksRequest(stockID);
        return response;
    } catch (err) {
        // let Fastify return an error response
        reply.code(500);
        return { error: err.message };
    }
})

In this case, the route’s entry point is exactly the same for the Lambda function. The team does not need to change anything else in the code base, thanks to the characteristics provided by the hexagonal architecture.

This pattern can help you more easily refactor code from containers or virtual machines to multiple Lambda functions. It introduces a level of code portability that can be more challenging with other solutions.

Benefits and drawbacks

As with any pattern, there are benefits and drawbacks to using hexagonal architecture.

The main benefits are:

  • The domain logic is agnostic and independent from the outside world.
  • The separation of concerns increases code testability.
  • It may help reduce technical debt in workloads.

The drawbacks are:

  • The pattern requires an upfront investment of time.
  • The domain logic implementation is not opinionated.

Whether you should use this architecture for developing Lambda functions depends upon the needs of your application. With an evolving workload, the extra implementation effort may be worthwhile.

The pattern can help improve code testability because of the encapsulation and separation of concerns provided. This approach can also be used with compute solutions other than Lambda, which may be useful in code migration projects.

Conclusion

This post shows how you can evolve a workload using hexagonal architecture. It explains how to add new functionality, change underlying infrastructure, or port the code base between different compute solutions. The main characteristics enabling this are loose coupling and strong encapsulation.

To learn more about hexagonal architecture and similar patterns, read:

For more serverless learning resources, visit Serverless Land.

Translating content dynamically by using Amazon S3 Object Lambda

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/translating-content-dynamically-by-using-amazon-s3-object-lambda/

This post is written by Sandeep Mohanty, Senior Solutions Architect.

The recent launch of Amazon S3 Object Lambda creates many possibilities to transform data in S3 buckets dynamically. S3 Object Lambda can be used with other AWS serverless services to transform content stored in S3 in many creative ways. One example is using S3 Object Lambda with Amazon Translate to translate and serve content from S3 buckets on demand.

Amazon Translate is a serverless machine translation service that delivers fast and customizable language translation. With Amazon Translate, you can localize content such as websites and applications to serve a diverse set of users.

Using S3 Object Lambda with Amazon Translate, you do not need to translate content in advance for all possible permutations of source to target languages. Instead, you can transform content in near-real time using a data driven model. This can serve multiple language-specific applications simultaneously.

S3 Object Lambda enables you to process and transform data using Lambda functions as objects are being retrieved from S3 by a client application. S3 GET object requests invoke the Lambda function and you can customize it to transform the content to meet specific requirements.

For example, if you run a website or mobile application with global visitors, you must provide translations in multiple languages. Artifacts such as forms, disclaimers, or product descriptions can be translated to serve a diverse global audience using this approach.

Solution architecture

This is the high-level architecture diagram for the example application that translates dynamic content on demand:

Solution architecture

In this example, you create an S3 Object Lambda that intercepts S3 GET requests for an object. It then translates the file to a target language, passed as an argument appended to the S3 object key. At a high level, the steps can be summarized as follows:

  1. Create a Lambda function to translate data from a source language to a target language using Amazon Translate.
  2. Create an S3 Object Lambda Access Point from the S3 console.
  3. Select the Lambda function created in step 1.
  4. Provide a supporting S3 Access Point to give S3 Object Lambda access to the original object.
  5. Retrieve a file from S3 by invoking the S3 GetObject API, and pass the Object Lambda Access Point ARN as the bucket name instead of the actual S3 bucket name.

Creating the Lambda function

In the first step, you create the Lambda function, DynamicFileTranslation. This Lambda function is invoked by an S3 GET Object API call and it translates the requested object. The target language is passed as an argument appended to the S3 object key, corresponding to the object being retrieved.

For example, the object key passed in the S3 GetObject API call is customized to look something like “ContactUs/contact-us.txt#fr”, where the characters after the pound sign represent the code for the target language. In this case, ‘fr’ is French. The full list of supported languages and language codes can be found here.

This Lambda function dynamically translates the content of an object in S3 to a target language:

import json
import boto3
from urllib.parse import urlparse, unquote
from pathlib import Path
def lambda_handler(event, context):
    print(event)

   # Extract the outputRoute and outputToken from the object context
    object_context = event["getObjectContext"]
    request_route = object_context["outputRoute"]
    request_token = object_context["outputToken"]

   # Extract the user requested URL and the supporting access point arn
    user_request_url = event["userRequest"]["url"]
    supporting_access_point_arn = event["configuration"]["supportingAccessPointArn"]

    print("USER REQUEST URL: ", user_request_url)
   
   # The S3 object key is after the Host name in the user request URL.
   # The user request URL looks something like this, 
   # https://<User Request Host>/ContactUs/contact-us.txt#fr.
   # The target language code in the S3 GET request is after the "#"
    
    user_request_url = unquote(user_request_url)
    result = user_request_url.split("#")
    user_request_url = result[0]
    targetLang = result[1]
    
   # Extract the S3 Object Key from the user requested URL
    s3Key = str(Path(urlparse(user_request_url).path).relative_to('/'))
       
   # Get the original object from S3
    s3 = boto3.resource('s3')
    
   # To get the original object from S3,use the supporting_access_point_arn 
    s3Obj = s3.Object(supporting_access_point_arn, s3Key).get()
    srcText = s3Obj['Body'].read()
    srcText = srcText.decode('utf-8')

   # Translate original text
    translateClient = boto3.client('translate')
    response = translateClient.translate_text(
                                                Text = srcText,
                                                SourceLanguageCode='en',
                                                TargetLanguageCode=targetLang)
    
  # Write object back to S3 Object Lambda
    s3client = boto3.client('s3')
    s3client.write_get_object_response(
                                        Body=response['TranslatedText'],
                                        RequestRoute=request_route,
                                        RequestToken=request_token )
    
    return { 'statusCode': 200 }

The code in the Lambda function:

  • Extracts the outputRoute and outputToken from the object context. This defines where the WriteGetObjectResponse request is delivered.
  • Extracts the user-requested url from the event object.
  • Parses the S3 object key and the target language that is appended to the S3 object key.
  • Calls S3 GetObject to fetch the raw text of the source object.
  • Invokes Amazon Translate with the raw text extracted.
  • Puts the translated output back to S3 using the WriteGetObjectResponse API.

Configuring the Lambda IAM role

The Lambda function needs permissions to call back to the S3 Object Lambda access point with the WriteGetObjectResponse. It also needs permissions to call S3 GetObject and Amazon Translate. Add the following permissions to the Lambda execution role:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "S3ObjectLambdaAccess",
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3-object-lambda:WriteGetObjectResponse"
            ],
            "Resource": [
                "<arn of your S3 access point/*>”,
                "<arn of your Object Lambda accesspoint>"
            ]
        },
        {
            "Sid": "AmazonTranslateAccess",
            "Effect": "Allow",
            "Action": "translate:TranslateText",
            "Resource": "*"
        }
    ]
}

Deploying the Lambda using an AWS SAM template

Alternatively, deploy the Lambda function with the IAM role by using an AWS SAM template. The code for the Lambda function and the AWS SAM template is available for download from GitHub.

Creating the S3 Access Point

  1. Navigate to the S3 console and create a bucket with a unique name.
  2. In the S3 console, select “Access Points” and choose “Create access point”. Enter a name for the access point.
  3. For Bucket name, enter the S3 bucket name you entered in step 1.

    Create access point

This access point is the supporting access point for the Object Lambda Access Point you create in the next step. Keep all other settings on this page as default.

After creating the S3 access point, create the S3 Object Lambda Access Point using the supporting Access Point. The Lambda function you created earlier uses the supporting Access Point to download the original untransformed objects from S3.

Create Object Lambda Access Point

In the S3 console, go to the Object Lambda Access Point configuration and create an Object Lambda Access Point. Enter a name.

Create Object Lambda Access Point

For the Lambda function configurations, associate this with the Lambda function created earlier. Select the latest version of the Lambda function and keep all other settings as default.

Select Lambda function

To understand how to use the other settings of the S3 Object Lambda configuration, refer to the product documentation.
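The same two access points can also be created programmatically. This is a minimal boto3 sketch approximating the console steps above, assuming the bucket and access point names used later in the client script; the account ID, supporting access point name, Region, and Lambda function ARN are placeholders.

import boto3

s3control = boto3.client("s3control")
account_id = "123456789012"  # placeholder account ID

# Supporting access point on the bucket holding the original objects
s3control.create_access_point(
    AccountId=account_id,
    Name="my-s3ol-supporting-ap",          # placeholder name
    Bucket="my-s3ol-bucket",
)

# Object Lambda Access Point that invokes the translation function on GetObject
s3control.create_access_point_for_object_lambda(
    AccountId=account_id,
    Name="my-s3ol-access-point",
    Configuration={
        "SupportingAccessPoint": f"arn:aws:s3:us-west-2:{account_id}:accesspoint/my-s3ol-supporting-ap",
        "TransformationConfigurations": [
            {
                "Actions": ["GetObject"],
                "ContentTransformation": {
                    "AwsLambda": {
                        "FunctionArn": "arn:aws:lambda:us-west-2:123456789012:function:DynamicFileTranslation"
                    }
                },
            }
        ],
    },
)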

Testing dynamic content translation

In this section, you create a Python script and invoke the S3 GetObject API twice. First, against the S3 bucket and then against the Object Lambda Access Point. You can then compare the output to see how content is transformed using Object Lambda:

  1. Upload a text file to the S3 bucket using the Object Lambda Access Point you configured. For example, upload a sample “Contact Us” file in English to S3.
  2. To use the object Lambda Access Point, locate its ARN from the Properties tab of the Object Lambda Access Point.

    Object Lambda Access Point
  3. Create a local file called s3ol_client.py that contains the following Python script:
    import json
    import boto3
    import sys, getopt
      
    def main(argv):
    	
      try:	
        targetLang = sys.argv[1]
        print("TargetLang = ", targetLang)
        
        s3 = boto3.client('s3')
        s3Bucket = "my-s3ol-bucket"
        s3Key = "ContactUs/contact-us.txt"
    
        # Call get_object using the S3 bucket name
        response = s3.get_object(Bucket=s3Bucket,Key=s3Key)
        print("Original Content......\n")
        print(response['Body'].read().decode('utf-8'))
    
        print("\n")
    
        # Call get_object using the S3 Object Lambda access point ARN
        s3Bucket = "arn:aws:s3-object-lambda:us-west-2:123456789012:accesspoint/my-s3ol-access-point"
        s3Key = "ContactUs/contact-us.txt#" + targetLang
        response = s3.get_object(Bucket=s3Bucket,Key=s3Key)
        print("Transformed Content......\n")
        print(response['Body'].read().decode('utf-8'))
    
        return {'Success':200}             
      except:
        print("\n\nUsage: s3ol_client.py <Target Language Code>")
    
    #********** Program Entry Point ***********
    if __name__ == '__main__':
        main(sys.argv[1: ])
    
  4. Run the client program from the command line, passing the target language code as an argument. The full list of supported languages and codes in Amazon Translate can be found here.

    python s3ol_client.py "fr"

The output looks like this:

Example output

The first output is the original content that was retrieved when calling GetObject with the S3 bucket name. The second output is the transformed content when calling GetObject against the Object Lambda access point. The content is transformed by the Lambda function as it is being retrieved, and translated to French in near-real time.

Conclusion

This blog post shows how you can use S3 Object Lambda with Amazon Translate to simplify dynamic content translation by using a data driven approach. With user-provided data as arguments, you can dynamically transform content in S3 and generate a new object.

It is not necessary to create a copy of this new object in S3 before returning it to the client. We also saw that it is not necessary for the object with the same name to exist in the S3 bucket when using S3 Object Lambda. This pattern can be used to address several real world use cases that can benefit from the ability to transform and generate S3 objects on the fly.

For more serverless learning resources, visit Serverless Land.

Implementing a LIFO task queue using AWS Lambda and Amazon DynamoDB

Post Syndicated from Eric Johnson original https://aws.amazon.com/blogs/compute/implementing-a-lifo-task-queue-using-aws-lambda-and-amazon-dynamodb/

This post was written by Diggory Briercliffe, Senior IoT Architect.

When implementing a task queue, you can use Amazon SQS standard or FIFO (First-In-First-Out) queue types. Both queue types give priority to tasks created earlier over tasks that are created later. However, there are use cases where you need a LIFO (Last-In-First-Out) queue.

This post shows how to implement a serverless LIFO task queue. This uses AWS Lambda, Amazon DynamoDB, AWS Serverless Application Model (AWS SAM), and other AWS Serverless technologies.

The LIFO task queue gives priority to newer queue tasks over earlier tasks. Under heavy load, earlier tasks are deprioritized and eventually removed. This is useful when your workload must communicate with a system that is throughput-constrained and newer tasks should have priority.

To help understand the approach, consider the following use case. As part of optimizing the responsiveness of a mobile application, an IoT application validates device IP addresses after connecting to AWS IoT Core. Users open the application soon after the device connects so the most recent connection events should take priority for the validation work.

If the validation work is not done at connection time, it can be done later. A legacy system validates the IP addresses, but its throughput capacity cannot match the peak connection rate of the IoT devices. A LIFO queue can manage this load, by prioritizing validation of newer connection events. It can buffer or load shed earlier connection event validation.

For a more detailed discussion around insurmountable queue backlogs and queuing theory, read “Avoiding insurmountable queue backlogs” in the Amazon Builders’ Library.

Example application

An example application implementing the LIFO queue approach is available at https://github.com/aws-samples/serverless-lifo-queue-demonstration.

The application uses AWS SAM and the Lambda functions are written in Node.js. The AWS SAM template describes AWS resources required by the application. These include a DynamoDB table, Lambda functions, and Amazon SNS topics.

The README file contains instructions on deploying and testing the application, with detailed information on how it works.

Overview

The example application has the following queue characteristics:

  1. Newer queue tasks are prioritized over earlier tasks.
  2. Queue tasks are buffered if they cannot be processed.
  3. Queue tasks are eventually deleted if they are never processed, such as when the queue is under insurmountable load.
  4. Correct queue task state transition is maintained (such as PENDING to TAKEN, but not PENDING to SUCCESS).

A DynamoDB table stores queue task items. It uses the following DynamoDB features:

  • A global secondary index (GSI) sorts queue task items by a created timestamp, in reverse chronological (LIFO) order.
  • Update expressions and condition expressions provide atomic and exclusive queue task item updates. This prevents duplicate processing of queue tasks and ensures that the queue task state transitions are valid.
  • Time to live (TTL) deletes queue task items once they expire. Under insurmountable load, this ensures that tasks are deleted if they are never processed from the queue. It also deletes queue task items once they have been processed.
  • DynamoDB Streams invoke a Lambda function when new queue task items are inserted into the table and must be processed.

The application consists of the following resources defined in the AWS SAM template:

  • QueueTable: A DynamoDB table containing queue task items, which is configured for DynamoDB Streams to invoke a TriggerFunction.
  • TriggerFunction: A Lambda function, which governs triggering of queue task processing. Source code: app/trigger.js
  • ProcessTasksFunction: A Lambda function, which processes queue tasks and ensures consistent queue task state flow. Source code: app/process_tasks.js
  • CreateTasksFunction: A Lambda function, which inserts queue task items into the QueueTable. Source code: app/create_tasks.js
  • TriggerTopic: An SNS topic which TriggerFunction subscribes to.
  • ProcessTasksTopic: An SNS topic which ProcessTasksFunction subscribes to.

The following diagram illustrates how those resources interact to implement the LIFO queue.

LIFO Architecture diagram

  1. CreateTasksFunction inserts queue task items into QueueTable with PENDING state.
  2. A DynamoDB stream invokes TriggerFunction for all queue task item activity in QueueTable.
  3. TriggerFunction publishes a notification on ProcessTasksTopic if queue tasks should be processed.
  4. ProcessTasksFunction subscribes to ProcessTasksTopic.
  5. ProcessTasksFunction queries for PENDING queue task items in QueueTable for up to 1 minute, or until no PENDING queue task items remain.
  6. ProcessTasksFunction processes each PENDING queue task by calling the throughput constrained legacy system.
  7. ProcessTasksFunction updates each queue task item during processing to reflect state (first to TAKEN, and then to SUCCESS, FAILURE, or PENDING).
  8. ProcessTasksFunction publishes an SNS notification on TriggerTopic if PENDING tasks remain in the queue.
  9. TriggerFunction subscribes to TriggerTopic.

Application activity continues while DynamoDB Streams receives QueueTable events (2) or TriggerTopic receives notifications (9).

LIFO queue DynamoDB table

A DynamoDB table stores the LIFO queue task items. The AWS SAM template defines this resource (named QueueTable):

  • Each item in the table represents a queue task. It has the item attributes taskId (hash key), taskStatus, taskCreated, and taskUpdated.
  • The table has a single global secondary index (GSI) with taskStatus as the hash key and taskCreated as the range key. This GSI is fundamental to LIFO queue characteristics. It allows you to query for PENDING queue tasks, in reverse chronological order, so that the newest tasks can be processed first.
  • The DynamoDB TTL attribute causes earlier queue tasks to expire and be deleted. This prevents the queue from growing indefinitely if there is insurmountable load.
  • DynamoDB Streams invokes the TriggerFunction Lambda function for all changes in QueueTable.
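As a concrete illustration of this table design, the following is a minimal boto3 sketch of inserting a new PENDING queue task. The TTL attribute name (expiration) and the 24-hour expiry are assumptions for illustration; the example repository's AWS SAM template defines the actual attribute names it uses.

import time
import uuid
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("QueueTable")

now_ms = int(time.time() * 1000)

# Insert a new queue task in the PENDING state. The "expiration" attribute name
# and 24-hour TTL are assumptions; under insurmountable load, unprocessed tasks
# are eventually removed by DynamoDB TTL.
table.put_item(
    Item={
        "taskId": str(uuid.uuid4()),
        "taskStatus": "PENDING",
        "taskCreated": now_ms,               # epoch milliseconds, GSI range key
        "taskUpdated": now_ms,
        "expiration": int(time.time()) + 24 * 60 * 60,  # TTL is in epoch seconds
    }
)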

Triggering queue task processing

The application continuously processes all PENDING queue tasks until there is none remaining. With no PENDING queue tasks, the application will be idle.

As the application is serverless, task processing is triggered by events. If a single Lambda function cannot process the volume of PENDING tasks, the application notifies itself so that processing can continue in another invocation. This is a tail call, which is an SNS notification sent by ProcessTasksFunction to TriggerTopic.

The Lambda functions, which collaborate on managing the LIFO queue are:

  • TriggerFunction is a proxy to ProcessTasksFunction and decides if task processing should be triggered. This function is invoked by DynamoDB Streams events on item changes in QueueTable or by a tail call SNS notification received from TriggerTopic.
  • ProcessTasksFunction performs the processing of queue tasks and implements the LIFO queue behavior. An SNS notification published on ProcessTasksTopic invokes this function.

Processing queue task items

The ProcessTasksFunction function processes queue tasks:

  1. The function is invoked by an SNS notification on ProcessTasksTopic.
  2. While the function runs, it polls QueueTable for PENDING queue tasks.
  3. The function processes each queue task and then updates the item.
  4. The function stops polling after 1 minute or if there are no PENDING queue tasks remaining.
  5. If there are more PENDING tasks in the queue, the function triggers another task. It sends a tail call SNS notification to TriggerTopic.

This uses DynamoDB expressions to ensure that tasks are not processed more than once during periods of concurrent function invocations. To prevent higher concurrency, the reserved concurrent executions attribute is set to 1.

Before processing a queue task, the taskStatus item attribute is transitioned from PENDING to TAKEN. Following queue task processing, the taskStatus item attribute is transitioned from TAKEN to SUCCESS or FAILURE.

If a queue task cannot be processed (for example, an external system has reached capacity), the item taskStatus attribute is set to PENDING again. Any aging PENDING queue tasks that cannot be processed are buffered. They are eventually deleted once they expire, due to the TTL configuration.

Querying for queue task items

To get the most recently created PENDING queue tasks, query the task-status-created-index GSI. The following shows the DynamoDB query action request parameters for the task-status-created-index. By using a Limit of 10 and setting ScanIndexForward to false, it retrieves the 10 most recently created queue task items:

{
  "TableName": "QueueTable",
  "IndexName": "task-status-created-index",
  "ExpressionAttributeValues": {
    ":taskStatus": {
      "S": "PENDING"
    }
  },
  "KeyConditionExpression": "taskStatus = :taskStatus",
  "Limit": 10,
  "ScanIndexForward": false
}
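The equivalent query using boto3 and the DynamoDB Table resource looks like the following sketch; the GSI name matches the request parameters above.

import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource("dynamodb").Table("QueueTable")

# Fetch the 10 most recently created PENDING tasks (LIFO order)
response = table.query(
    IndexName="task-status-created-index",
    KeyConditionExpression=Key("taskStatus").eq("PENDING"),
    Limit=10,
    ScanIndexForward=False,  # newest taskCreated values first
)
for item in response["Items"]:
    print(item["taskId"], item["taskCreated"])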

Updating queue tasks items

The following code shows request parameters for the DynamoDB UpdateItem action. This sets the taskStatus attribute of a queue task item (to TAKEN from PENDING). The update expression and condition expression ensure that the taskStatus is set (to TAKEN) only if the current value is as expected (from PENDING). It also ensures that the update is atomic. This prevents more-than-once processing of a queue task.

{
  "TableName": "QueueTable",
  "Key": {
    "taskId": {
      "S": "task-123"
    }
  },
  "UpdateExpression": "set taskStatus = :toTaskStatus, taskUpdated = :taskUpdated",
  "ConditionExpression": "taskStatus = :fromTaskStatus",
  "ExpressionAttributeValues": {
    ":fromTaskStatus": {
      "S": "PENDING"
    },
    ":toTaskStatus": {
      "S": "TAKEN"
    },
    ":taskUpdated": {
      "N": "1623241938151"
    }
  }
}
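In application code, the conditional update is typically paired with handling of the ConditionalCheckFailedException, which signals that another invocation has already taken the task. A minimal boto3 sketch:

import time
import boto3
from botocore.exceptions import ClientError

table = boto3.resource("dynamodb").Table("QueueTable")

def take_task(task_id):
    """Transition a task from PENDING to TAKEN; return False if it was already taken."""
    try:
        table.update_item(
            Key={"taskId": task_id},
            UpdateExpression="set taskStatus = :toTaskStatus, taskUpdated = :taskUpdated",
            ConditionExpression="taskStatus = :fromTaskStatus",
            ExpressionAttributeValues={
                ":fromTaskStatus": "PENDING",
                ":toTaskStatus": "TAKEN",
                ":taskUpdated": int(time.time() * 1000),
            },
        )
        return True
    except ClientError as err:
        if err.response["Error"]["Code"] == "ConditionalCheckFailedException":
            return False  # another invocation already took this task
        raise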

Conclusion

This post describes how to implement a LIFO queue with AWS serverless technologies, using an example application. Newer tasks in the queue are prioritized over earlier tasks. Tasks that cannot be processed are buffered and eventually load shed. This helps in use cases with heavy load where newer queue tasks must take priority.

For more serverless learning resources, visit Serverless Land.

Using GitHub Actions to deploy serverless applications

Post Syndicated from Julian Wood original https://aws.amazon.com/blogs/compute/using-github-actions-to-deploy-serverless-applications/

This post is written by Gopi Krishnamurthy, Senior Solutions Architect.

Continuous integration and continuous deployment (CI/CD) is one of the major DevOps components. This allows you to build, test, and deploy your applications rapidly and reliably, while improving quality and reducing time to market.

GitHub is an AWS Partner Network (APN) Partner with the AWS DevOps Competency. GitHub Actions is a GitHub feature that allows you to automate tasks within your software development lifecycle. You can use GitHub Actions to run a CI/CD pipeline to build, test, and deploy software directly from GitHub.

The AWS Serverless Application Model (AWS SAM) is an open-source framework for building serverless applications. It provides shorthand syntax to express functions, APIs, databases, and event source mappings. With a few lines per resource, you can define the application you want and model it using YAML.

During deployment, AWS SAM transforms and expands the AWS SAM syntax into AWS CloudFormation syntax, enabling you to build serverless applications faster. The AWS SAM CLI allows you to build, test, and debug applications locally, defined by AWS SAM templates. You can also use the AWS SAM CLI to deploy your applications to AWS. For AWS SAM example code, see the serverless patterns collection.

In this post, you learn how to create a sample serverless application using AWS SAM. You then use GitHub Actions to build, and deploy the application in your AWS account.

New GitHub action setup-sam

A GitHub Actions runner is the application that runs a job from a GitHub Actions workflow. You can use a GitHub hosted runner, which is a virtual machine hosted by GitHub with the runner application installed. You can also host your own runners to customize the environment used to run jobs in your GitHub Actions workflows.

AWS has released a GitHub action called setup-sam to install AWS SAM, which is pre-installed on GitHub hosted runners. You can use this action to install a specific, or the latest AWS SAM version.

This demo uses AWS SAM to create a small serverless application using one of the built-in templates. When the code is pushed to GitHub, a GitHub Actions workflow triggers a GitHub CI/CD pipeline. This builds, and deploys your code directly from GitHub to your AWS account.

Prerequisites

  1. A GitHub account: This post assumes you have the required permissions to configure GitHub repositories, create workflows, and configure GitHub secrets.
  2. Create a new GitHub repository and clone it to your local environment. For this example, create a repository called github-actions-with-aws-sam.
  3. An AWS account with permissions to create the necessary resources.
  4. Install AWS Command Line Interface (CLI) and AWS SAM CLI locally. This is separate from using the AWS SAM CLI in a GitHub Actions runner. If you use AWS Cloud9 as your integrated development environment (IDE), AWS CLI and AWS SAM are pre-installed.
  5. Create an Amazon S3 bucket in your AWS account to store the build package for deployment.
  6. An AWS user with access keys, which the GitHub Actions runner uses to deploy the application. The user also requires write access to the S3 bucket.

Creating the AWS SAM application

You can create a serverless application by defining all required resources in an AWS SAM template. AWS SAM provides a number of quick-start templates to create an application.

  1. Open a terminal, navigate to the parent of the cloned repository directory, and enter the following:

    sam init -r python3.8 -n github-actions-with-aws-sam --app-template "hello-world"

  2. When asked to select package type (zip or image), select zip.

This creates an AWS SAM application in the root of the repository named github-actions-with-aws-sam, using the default configuration. This consists of a single AWS Lambda Python 3.8 function invoked by an Amazon API Gateway endpoint.

To see additional runtimes supported by AWS SAM and options for sam init, enter sam init -h.

Local testing

AWS SAM allows you to test your applications locally. AWS SAM provides a default event in events/event.json that includes a message body of {"message": "hello world"}.

  1. Invoke the HelloWorldFunction Lambda function locally, passing the default event:

    sam local invoke HelloWorldFunction -e events/event.json

    The function response is:

    {"message": "hello world"}

  2. Test the API Gateway functionality in front of the Lambda function by first starting the API locally:

    sam local start-api

    AWS SAM launches a Docker container with a mock API Gateway endpoint listening on localhost:3000.

  3. Use curl to call the hello API:

    curl http://127.0.0.1:3000/hello

    The API response should be:

    {"message": "hello world"}

Creating the sam-pipeline.yml file

GitHub CI/CD pipelines are configured using a YAML file. This file configures what specific action triggers a workflow, such as push on main, and what workflow steps are required.

In the root of the repository containing the files generated by sam init, create the directory: .github/workflows.

  1. Create a new file called sam-pipeline.yml under the .github/workflows directory.

    sam-pipeline.yml file

  2. Edit the sam-pipeline.yml file and add the following:
    on:
      push:
        branches:
          - main
    jobs:
      build-deploy:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v2
          - uses: actions/setup-python@v2
          - uses: aws-actions/setup-sam@v1
          - uses: aws-actions/configure-aws-credentials@v1
            with:
              aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
              aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
              aws-region: ##region##
          # sam build
          - run: sam build --use-container

          # Run unit tests - specify unit tests here

          # sam deploy
          - run: sam deploy --no-confirm-changeset --no-fail-on-empty-changeset --stack-name sam-hello-world --s3-bucket ##s3-bucket## --capabilities CAPABILITY_IAM --region ##region##

  3. Replace ##s3-bucket## with the name of the S3 bucket previously created to store the deployment package.
  4. Replace both ##region## with your AWS Region.

The configuration triggers the GitHub Actions CI/CD pipeline when code is pushed to the main branch. You can amend this if you are using another branch. For a full list of supported events, refer to the GitHub documentation page.

You can further customize the sam build --use-container command if necessary. By default, the Docker image used to create the build artifact is pulled from Amazon ECR Public. The default Python 3.8 image in this example is based on the language specified during sam init. To pull a different container image, use the --build-image option as specified in the documentation.

The AWS CLI and AWS SAM CLI are installed in the runner using the GitHub action setup-sam. To install a specific version, use the version parameter:

uses: aws-actions/setup-sam@v1
with:
  version: 1.23.0

    As part of the CI/CD process, we recommend you scan your code for quality and vulnerabilities in bundled libraries. You can find these security offerings from our AWS Lambda Technology Partners.

    Configuring AWS credentials in GitHub

    The GitHub Actions CI/CD pipeline requires AWS credentials to access your AWS account. The credentials must include AWS Identity and Access Management (IAM) policies that provide access to Lambda, API Gateway, AWS CloudFormation, S3, and IAM resources.

    These credentials are stored as GitHub secrets within your GitHub repository, under Settings > Secrets. For more information, see “GitHub Actions secrets”.

    In your GitHub repository, create two secrets named AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY and enter the key values. We recommend following IAM best practices for the AWS credentials used in GitHub Actions workflows, including:

    • Do not store credentials in your repository code. Use GitHub Actions secrets to store credentials and redact credentials from GitHub Actions workflow logs.
    • Create an individual IAM user with an access key for use in GitHub Actions workflows, preferably one per repository. Do not use the AWS account root user access key. A CLI sketch for creating such a user follows this list.
    • Grant least privilege to the credentials used in GitHub Actions workflows. Grant only the permissions required to perform the actions in your GitHub Actions workflows.
    • Rotate the credentials used in GitHub Actions workflows regularly.
    • Monitor the activity of the credentials used in GitHub Actions workflows.
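
    The following sketch shows one way to provision a dedicated IAM user and store its credentials as repository secrets. The user name, policy ARN, and repository are hypothetical placeholders, and the least-privilege policy is assumed to already exist:

    # Create a dedicated IAM user for this repository's workflow (example name)
    aws iam create-user --user-name sam-github-actions-deployer

    # Attach a pre-created least-privilege policy (replace the ARN with your own)
    aws iam attach-user-policy --user-name sam-github-actions-deployer \
      --policy-arn arn:aws:iam::123456789012:policy/sam-deploy-minimal

    # Generate the access key pair to store as GitHub secrets
    aws iam create-access-key --user-name sam-github-actions-deployer

    # Optionally, set the secrets with the GitHub CLI instead of the web console
    # (you are prompted for each value)
    gh secret set AWS_ACCESS_KEY_ID --repo my-org/my-sam-repo
    gh secret set AWS_SECRET_ACCESS_KEY --repo my-org/my-sam-repo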

    Deploying your application

    Add all the files to your local git repository, commit the changes, and push to GitHub.

    git add .
    git commit -am "Add AWS SAM files"
    git push

    Once the files are pushed to GitHub on the main branch, this automatically triggers the GitHub Actions CI/CD pipeline as configured in the sam-pipeline.yml file.

    The GitHub Actions runner performs the pipeline steps specified in the file. It checks out the code from your repository, sets up Python, and configures the AWS credentials based on the GitHub secrets. The runner uses the GitHub action setup-sam to install the AWS SAM CLI.

    The pipeline triggers the sam build process to build the application artifacts, using the default container image for Python 3.8.

    sam deploy runs to configure the resources in your AWS account using the securely stored credentials.

    To view the application deployment progress, select Actions in the repository menu. Select the workflow run and select the job name build-deploy.

    GitHub Actions progress

    If the build fails, you can view the error message. Common errors are:

    • Incompatible software versions such as the Python runtime being different from the Python version on the build machine. Resolve this by installing the proper software versions.
    • Credentials could not be loaded. Verify that AWS credentials are stored in GitHub secrets.
    • Missing permissions. Ensure that your AWS credentials have the necessary permissions to deploy the resources defined in the AWS SAM template, in addition to access to the S3 deployment bucket.

    Testing the application

    1. Within the workflow run, expand the Run sam deploy section.
    2. Navigate to the AWS SAM Outputs section. The HelloWorldAPI value shows the API Gateway endpoint URL deployed in your AWS account.
    AWS SAM outputs

  3. Use curl to test the API:
curl https://<api-id>.execute-api.us-east-1.amazonaws.com/Prod/hello/

The API response should be:
{"message": "hello world"}

Cleanup

To remove the application resources, navigate to the CloudFormation console and delete the stack. Alternatively, you can use an AWS CLI command to remove the stack:

aws cloudformation delete-stack --stack-name sam-hello-world

Empty and delete the S3 deployment bucket.
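
For example, assuming ##s3-bucket## is the bucket name used earlier, the following commands empty the bucket and then remove it:

aws s3 rm s3://##s3-bucket## --recursive
aws s3 rb s3://##s3-bucket##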

Conclusion

GitHub Actions is a GitHub feature that allows you to run a CI/CD pipeline to build, test, and deploy software directly from GitHub. AWS SAM is an open-source framework for building serverless applications.

In this post, you use GitHub Actions CI/CD pipeline functionality and AWS SAM to create, build, test, and deploy a serverless application. You use sam init to create a serverless application and test the functionality locally. You create a sam-pipeline.yml file to define the pipeline steps for GitHub Actions.

The GitHub action setup-sam installs AWS SAM on the GitHub-hosted runner. The GitHub Actions workflow uses sam build to create the application artifacts and sam deploy to deploy them to your AWS account.

For more serverless learning resources, visit https://serverlessland.com.

Announcing migration of the Java 8 runtime in AWS Lambda to Amazon Corretto

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/announcing-migration-of-the-java-8-runtime-in-aws-lambda-to-amazon-corretto/

This post is written by Jonathan Tuliani, Principal Product Manager, AWS Lambda.

What is happening?

Beginning July 19, 2021, the Java 8 managed runtime in AWS Lambda will migrate from the current Open Java Development Kit (OpenJDK) implementation to the latest Amazon Corretto implementation.

To reflect this change, the AWS Management Console will change how Java 8 runtimes are displayed. The display name for runtimes using the ‘java8’ identifier will change from ‘Java 8’ to ‘Java 8 on Amazon Linux 1’. The display name for runtimes using the ‘java8.al2’ identifier will change from ‘Java 8 (Corretto)’ to ‘Java 8 on Amazon Linux 2’. The ‘java8’ and ‘java8.al2’ identifiers themselves, as used by tools such as the AWS CLI, CloudFormation, and AWS SAM, will not change.

Why are you making this change?

This change enables customers to benefit from the latest innovations and extended support of the Amazon Corretto JDK distribution. Amazon Corretto is a no-cost, multiplatform, production-ready distribution of the OpenJDK. Corretto is certified as compatible with the Java SE standard and used internally at Amazon for many production services.

Amazon is committed to Corretto, and provides regular updates that include security fixes and performance enhancements. With this change, these benefits are available to all Lambda customers. For more information on improvements provided by Amazon Corretto 8, see Amazon Corretto 8 change logs.

How does this affect existing Java 8 functions?

Amazon Corretto 8 is designed as a drop-in replacement for OpenJDK 8. Most functions benefit seamlessly from the enhancements in this update without any action from you.

In rare cases, switching to Amazon Corretto 8 introduces compatibility issues. See below for known issues and guidance on how to verify compatibility in advance of this change.

When will this happen?

This migration to Amazon Corretto takes place in several stages:

  • June 15, 2021: Availability of Lambda layers for testing the compatibility of functions with the Amazon Corretto runtime. Start of AWS Management Console changes to java8 and java8.al2 display names.
  • July 19, 2021: Any new functions using the java8 runtime will use Amazon Corretto. If you update an existing function, it will transition to Amazon Corretto automatically. The public.ecr.aws/lambda/java:8 container base image is updated to use Amazon Corretto.
  • August 16, 2021: For functions that have not been updated since June 28, AWS will begin an automatic transition to the new Corretto runtime.
  • September 10, 2021: Migration completed.

These changes are only applied to functions not using the arn:aws:lambda:::awslayer:Java8Corretto or arn:aws:lambda:::awslayer:Java8OpenJDK layers described below.

Which of my Lambda functions are affected?

Lambda supports two versions of the Java 8 managed runtime: the java8 runtime, which runs on Amazon Linux 1, and the java8.al2 runtime, which runs on Amazon Linux 2. This change only affects functions using the java8 runtime. Functions using the java8.al2 runtime are already using the Amazon Corretto implementation of Java 8 and are not affected.

The following command shows how to use the AWS CLI to list all functions in a specific Region using the java8 runtime. To find all such functions in your account, repeat this command for each Region:

aws lambda list-functions --function-version ALL --region us-east-1 --output text --query "Functions[?Runtime=='java8'].FunctionArn"
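
As a convenience, the following shell loop runs the same query against every Region enabled in your account:

for region in $(aws ec2 describe-regions --query "Regions[].RegionName" --output text); do
  echo "Checking ${region}"
  aws lambda list-functions --function-version ALL --region "${region}" --output text \
    --query "Functions[?Runtime=='java8'].FunctionArn"
done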

What do I need to do?

If you are using the java8 runtime, your functions will be updated automatically. For production workloads, we recommend that you test functions in advance for compatibility with Amazon Corretto 8.

For Lambda functions using container images, the existing public.ecr.aws/lambda/java:8 container base image will be updated to use the Amazon Corretto Java implementation. You must manually update your functions to use the updated container base image.
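
As a sketch, once the updated base image is published, rebuilding and repointing an image-based function might look like the following. The function name, repository URI, and tag are hypothetical, and authenticating Docker to Amazon ECR is assumed to be done separately:

# Rebuild on top of the updated base image referenced in your Dockerfile
docker build -t 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-java-function:corretto .

# Push the image to your private ECR repository
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-java-function:corretto

# Point the Lambda function at the new image
aws lambda update-function-code --function-name my-java-function \
  --image-uri 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-java-function:corretto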

How can I test for compatibility with Amazon Corretto 8?

If you are using the java8 managed runtime, you can test functions with the new version of the runtime by adding the layer reference arn:aws:lambda:::awslayer:Java8Corretto to the function configuration. This layer instructs the Lambda service to use the Amazon Corretto implementation of Java 8. It does not contain any data or code.
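
For example, a hypothetical function named my-test-function can be switched to the test layer with the AWS CLI. Note that update-function-configuration replaces the entire layer list, so include any other layers the function already uses:

aws lambda update-function-configuration \
  --function-name my-test-function \
  --layers arn:aws:lambda:::awslayer:Java8Corretto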

If you are using container images, update the JVM in your image to Amazon Corretto for testing. Here is an example Dockerfile:

FROM public.ecr.aws/lambda/java:8

# Update the JVM to the latest Corretto version
## Import the Corretto public key
RUN rpm --import https://yum.corretto.aws/corretto.key

## Add the Corretto yum repository to the system list
RUN curl -L -o /etc/yum.repos.d/corretto.repo https://yum.corretto.aws/corretto.repo

## Install the latest version of Corretto 8
RUN yum install -y java-1.8.0-amazon-corretto-devel

# Copy function code and runtime dependencies from Gradle layout
COPY build/classes/java/main ${LAMBDA_TASK_ROOT}
COPY build/dependency/* ${LAMBDA_TASK_ROOT}/lib/

# Set the CMD to your handler
CMD [ "com.example.LambdaHandler::handleRequest" ]

Can I continue to use the OpenJDK version of Java 8?

You can continue to use the OpenJDK version of Java 8 by adding the layer reference arn:aws:lambda:::awslayer:Java8OpenJDK to the function configuration. This layer tells the Lambda service to use the OpenJDK implementation of Java 8. It does not contain any data or code.

This option gives you more time to address any code incompatibilities with Amazon Corretto 8. We do not recommend using this option to continue running Lambda’s OpenJDK Java implementation in the long term. Following this migration, it no longer receives bug fixes and security updates. After addressing any compatibility issues, remove this layer reference so that the function uses the Amazon Corretto implementation of Java 8 managed by Lambda.

What are the known differences between OpenJDK 8 and Amazon Corretto 8 in Lambda?

Amazon Corretto caches TCP sessions for longer than OpenJDK 8. Functions that create new connections (for example, new AWS SDK clients) on each invoke without closing them may experience an increase in memory usage. In the worst case, this could cause the function to consume all the available memory, which results in an invoke error and a subsequent cold start.

We recommend that you do not create AWS SDK clients in your function handler on every function invocation. Instead, create SDK clients outside the function handler as static objects that can be used by multiple invocations. For more information, see static initialization in the Lambda Operator Guide.

If you must use a new client on every invocation, make sure it is shut down at the end of every invocation. This avoids TCP session caches using unnecessary resources.

What if I need additional help?

Contact AWS Support, the AWS Lambda discussion forums, or your AWS account team if you have any questions or concerns.

For more serverless learning resources, visit Serverless Land.

Extending SaaS products with serverless functions

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/extending-saas-products-with-serverless-functions/

This post was written by Santiago Cardenas, Sr Partner SA. and Nir Mashkowski, Principal Product Manager.

Increasingly, customers turn to software as a service (SaaS) solutions for the potential to lower the total cost of ownership (TCO). This enables customers to focus their teams on business priorities instead of managing and maintaining software and infrastructure. Startups are building SaaS products for a wide variety of common application types to take advantage of these market needs.

As SaaS adoption accelerates, enterprise customers expect the same capabilities that are available with traditional, on-premises software. They want the ability to customize system behavior and use rich integrations that can help build solutions rapidly.

For customization and extensibility, many independent software vendors (ISVs) are building application programming interfaces (APIs) and integration hooks. To extend these capabilities, many SaaS builders expose a common set of APIs:

  • Event APIs emit events when SaaS entities change. Synchronous event APIs block the SaaS action until the API completes a request. Asynchronous event APIs are non-blocking and use mechanisms like pub/sub and webhooks to inform the caller of updates. Event APIs are used for many purposes, such as enriching incoming data or triggering workflows.
  • CRUD APIs allow developers to interact with entities within the SaaS product. They can be used by mobile or web clients to add, update, and remove records, for example.
  • Schema APIs allow developers to create data entities in the SaaS product, such as tables, key-value stores, or document repositories.
  • User experience (UX) components. Many SaaS products include an SDK that helps provide a consistent look-and-feel and built-in support for common functions, such as authentication. Components are sometimes delivered as code libraries or as an online API that renders the UI.

Business systems expose different subsets of the APIs based on the application domain. Extensibility models are built on top of those APIs and can take various forms. ISVs use these APIs to build features such as “no code” workflow engines, UX, and report generators. In those cases, the SaaS product runs a domain-specific language (DSL) where it controls compute, storage, and memory consumption.

Figure 1: Example of various APIs providing extensibility within a SaaS app

This level of customization is acceptable for many business users. However, more sophisticated customization requires the ability to write custom code. When coding is needed, some business systems choose to provide sandboxing for the user code within the service. Others choose to ask developers to host the extensibility model themselves.

The growth of vendor-hosted SaaS extensions

First-generation SaaS products essentially “lift and shift” on-premises enterprise software, where each customer has a copy of the entire stack. This single tenant model offers simplicity, a smaller blast radius, and faster time to market.

Newer, born-in-the-cloud SaaS products implement a multi-tenant approach, where all resources are shared across customers. This model may be easier to maintain but can present challenges for handling security, isolation, and resource allocation.

Multi-tenancy challenges are harder when customers can run custom code inside the SaaS infrastructure. To solve this, SaaS builders may start with a customer-hosted approach, where customers implement their own extensions by consuming SaaS APIs. This means customers must learn and install an SDK, then deploy and maintain an app in their own cloud environment. This often results in higher cost and slower time to market.

To simplify this model, SaaS builders are finding ways to allow developers to write code directly within the SaaS product. The event driven, pay-per-execution, and polyglot nature of serverless functions provides new capabilities for implementing SaaS extensibility. This model is called vendor hosted SaaS extensions.

SaaS builders are using AWS Lambda for serverless functions to provide flexible compute options to their customers. The goal is to abstract away and simplify the consumption model. AWS provides SaaS builders with features and controls to customize the execution environments as part of their own SaaS product. This allows SaaS owners more flexibility when deciding on isolation models, usability, and cost considerations.

Isolating tenant requests

Isolation of customer requests is important both at the product level and at the tenant level. Product-level isolation focuses on controlling and enforcing the access to data between tenants. It ensures that one tenant is separated from another tenant’s functions. Tenant-level isolation focuses on resources allocated to serve requests. These may include identity, network and internet access, file system access, and memory/CPU allocation.

Figure 2: Example of hierarchical levels of abstraction

Usability

SaaS product owners can allow customers to use familiar programming languages within the serverless functions. This allows customers to grow with the service and potentially host and scale independently, using their own infrastructure.

Usability considers the domain and industry of the product. For example, if the SaaS product enables data processing, it may enable invocation of serverless functions during these workflows. Additionally, these functions may provide the customer the context of the user, application, tenant, and the domain. A streamlined, opinionated deployment workflow that abstracts away initial configuration can also aid customer adoption.

Managing costs

Cost is an important factor in driving adoption. It’s an important differentiator to pay only for the resources used, while being able to scale in response to events. This can help reduce costs that are passed on to SaaS customers.

Examples of SaaS product extensibility

Multiple AWS Partners are extending their SaaS product using Lambda for on-demand scalable compute. This enables them to focus on enriching the customer experience that is associated with their business domain. Examples include:

  • Segment Functions, which seamlessly integrates as a source or destination. The service uses code snippets to allow customers to enrich data, enforce consistency, and connect to APIs and services that power their workflows.
  • Freshworks’ Neo platform provides extensibility using the concept of apps. These are powered by Lambda functions hosting the core business logic and backends. Apps are triggered by unplanned and scheduled Freshworks events (customer support tickets, IT service cases, contacts, and deal updates), in addition to app-specific and external events.
  • Netlify Functions enables customers to supercharge frontend code with functions in their development workflow. These can power automated triggers, connect to third-party APIs, or provide user authentication.

All of these SaaS partners abstract away the deployment, versioning, and configuration of custom code using Lambda.

Conclusion

As customers increasingly use SaaS solutions in their businesses, they want the same customization and extensibility available in on-premises solutions. SaaS partners have developed APIs and integration hooks to help address this need. For more sophisticated customization, products enable custom code to run within their SaaS workflows.

This presents SaaS partners with isolation, usability, and cost challenges and many of them are now using serverless functions to address these challenges. Lambda provides a pay-per-value compute service that scales automatically to meet customer demand. Segment Functions, Freshworks, and Netlify Functions have all used Lambda to provide extensibility to their customers.

Lambda continues to develop features and functionality to power the extensibility of SaaS products. We look forward to seeing the new ways you use Lambda to extend your SaaS product for your customers. Share your Lambda extensibility story with us at [email protected].

For more serverless learning resources, visit Serverless Land.

Building server-side rendering for React in AWS Lambda

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/building-server-side-rendering-for-react-in-aws-lambda/

This post is courtesy of Roman Boiko, Solutions Architect.

React is a popular front-end framework used to create single-page applications (SPAs). It is rendered and run on the client-side in the browser. However, for SEO or performance reasons, you may need to render parts of a React application on the server. This is where the server-side rendering (SSR) is useful.

This post introduces the concepts and demonstrates rendering a React application with AWS Lambda. To deploy this solution and to provision the AWS resources, I use the AWS Cloud Development Kit (CDK). This is an open-source framework, which helps you reduce the amount of code required to automate deployment.

Overview

This solution uses Amazon S3, Amazon CloudFront, Amazon API Gateway, AWS Lambda, and Lambda@Edge. It creates a fully serverless SSR implementation, which automatically scales according to the workload. This solution addresses three scenarios.

1. A static React app hosted in an S3 bucket with a CloudFront distribution in front of the website. The backend is running behind API Gateway, implemented as a Lambda function. Here, the application is fully downloaded to the client and rendered in a web browser. It sends requests to the backend.

SSR app 1

2. The React app is rendered with a Lambda function. The CloudFront distribution is configured to forward requests from the /ssr path to the API Gateway endpoint. This calls the Lambda function where the rendering is happening. While rendering the requested page, the Lambda function calls the backend API to fetch the data. It returns a static HTML page with all the data. This page may be cached in CloudFront to optimize subsequent requests.

SSR app 2

 

3. The React app is rendered with a Lambda@Edge function. This scenario is similar but rendering happens at edge locations. The requests to /edgessr are handled by the Lambda@Edge function. This sends requests to the backend and returns a static HTML page.

SSR app 3

 

Walkthrough

The example application shows how the preceding scenarios are implemented with the AWS CDK. This solution requires the AWS CDK, Node.js, and npm installed locally, and AWS credentials configured for your account.

This solution deploys a Lambda@Edge function so it must be provisioned in the US East (N. Virginia) Region.
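
Assuming the stacks do not pin a Region explicitly, one simple way to target the required Region is to set the default Region in your shell before running the CDK commands:

# Deploy to us-east-1 so the Lambda@Edge function can be provisioned
export AWS_DEFAULT_REGION=us-east-1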

To get started, download and configure the sample:

  1. From a terminal, clone the GitHub repository:
    git clone https://github.com/aws-samples/react-ssr-lambda
  2. Provide a unique name for the S3 bucket, which is created by the stack and used for React application hosting. Change the placeholder <your bucket name> to your bucket name. To install the solution, run:
    cd react-ssr-lambda
    cd ./cdk
    npm install
    npm run build
    cdk bootstrap
    cdk deploy SSRApiStack --outputs-file ../simple-ssr/src/config.json
    
    cd ../simple-ssr
    npm install
    npm run build-all
    cd ../cdk
    cdk deploy SSRAppStack --parameters mySiteBucketName=<your bucket name>
  3. Note the following values from the output:
    • SSRAppStack.CFURL – the URL of the CloudFront distribution. Its root path returns the React application stored in S3.
    • SSRAppStack.LambdaSSRURL – the URL of the CloudFront /ssr distribution, which returns a page rendered by the Lambda function.
    • SSRAppStack.LambdaEdgeSSRURL – the URL of the CloudFront /edgessr distribution, which returns a page rendered by the Lambda@Edge function.

      Stack outputs

  4. In a browser, open each of the URLs from step 3. You see the same page with a different footer, indicating how it is rendered.

      Comparing the served pages

Understanding the React app

The application is created by the create-react-app utility. You can run and test this application locally by navigating to the simple-ssr directory and running the npm start command.
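
For example, assuming the repository is already cloned, you can start the local development server with:

cd react-ssr-lambda/simple-ssr
npm install
npm start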

This small application consists of two components that render the list of products received from the backend. The App.js file sends the request, parses the result, and passes it to the component.

import React, { useEffect, useState } from "react";
import ProductList from "./components/ProductList";
import config from "./config.json";
import axios from "axios";

const App = ({ isSSR, ssrData }) => {
  const [err, setErr] = useState(false);
  const [result, setResult] = useState({ loading: true, products: null });
  useEffect(() => {
    const getData = async () => {
      const url = config.SSRApiStack.apiurl;
      try {
        let result = await axios.get(url);
        setResult({ loading: false, products: result.data });
      } catch (error) {
        setErr(error);
      }
    };
    getData();
  }, []);
  if (err) {
    return <div>Error {err}</div>;
  } else {
    return (
      <div>
        <ProductList result={result} />
      </div>
    );
  }
};

export default App;

Adding server-side rendering

To support SSR, I change the preceding application so that the rendering happens in Lambda functions. Because the data is now retrieved from the backend during rendering, I remove this code from App.js. Instead, the data is retrieved in the Lambda function and injected into the application during the rendering process.

The new file SSRApp.js reflects these changes:

import React, { useState } from "react";
import ProductList from "./components/ProductList";

const SSRApp = ({ data }) => {
  const [result, setResult] = useState({ loading: false, products: data });
  return (
    <div>
      <ProductList result={result} />
    </div>
  );
};

export default SSRApp;

Next, I implement the SSR logic in the Lambda function. For simplicity, I use the ReactDOMServer.renderToString method, which returns an HTML string. You can find the corresponding file in simple-ssr/src/server/index.js. The handler function fetches data from the backend, renders the React components, and injects them into the HTML template. It returns the response to API Gateway, which responds to the client.

const handler = async function (event) {
  try {
    const url = config.SSRApiStack.apiurl;
    const result = await axios.get(url);
    const app = ReactDOMServer.renderToString(<SSRApp data={result.data} />);
    const html = indexFile.replace(
      '<div id="root"></div>',
      `<div id="root">${app}</div>`
    );
    return {
      statusCode: 200,
      headers: { "Content-Type": "text/html" },
      body: html,
    };
  } catch (error) {
    console.log(`Error ${error.message}`);
    return `Error ${error}`;
  }
};

For rendering the same code on Lambda@Edge, I change the code to work with CloudFront events and also modify the response format. This function searches for a specific path (/edgessr). All other logic stays the same. You can view the full code at simple-ssr/src/edge/index.js:

const handler = async function (event) {
  try {
    const request = event.Records[0].cf.request;
    if (request.uri === "/edgessr") {
      const url = config.SSRApiStack.apiurl;
      const result = await axios.get(url);
      const app = ReactDOMServer.renderToString(<SSRApp data={result.data} />);
      const html = indexFile.replace(
        '<div id="root"></div>',
        `<div id="root">${app}</div>`
      );
      return {
        status: "200",
        statusDescription: "OK",
        headers: {
          "cache-control": [
            {
              key: "Cache-Control",
              value: "max-age=100",
            },
          ],
          "content-type": [
            {
              key: "Content-Type",
              value: "text/html",
            },
          ],
        },
        body: html,
      };
    } else {
      return request;
    }
  } catch (error) {
    console.log(`Error ${error.message}`);
    return `Error ${error}`;
  }
};

The create-react-app utility configures tools such as Babel and webpack for the client-side React application. However, it is not designed to work with SSR. To make the functions work as expected, I transpile these into CommonJS format in addition to transpiling React JSX files. The standard tool for this task is Babel. To add it to this project, I create the configuration file .babelrc.json with instructions to transpile the functions into Node.js v12 format:

{
  "presets": [
    [
      "@babel/preset-env",
      {
        "targets": {
          "node": 12
        }
      }
    ],
    "@babel/preset-react"
  ]
}

I also need to bundle the dependencies. I use the popular frontend tool webpack, which also works for Lambda functions. It includes only the necessary dependencies and minimizes the package size. For this purpose, I create configurations for both functions. You can find these in the webpack.edge.js and webpack.server.js files:

const path = require("path");

module.exports = {
  entry: "./src/edge/index.js",

  target: "node",

  externals: [],

  output: {
    path: path.resolve("edge-build"),
    filename: "index.js",
    library: "index",
    libraryTarget: "umd",
  },

  module: {
    rules: [
      {
        test: /\.js$/,
        use: "babel-loader",
      },
      {
        test: /\.css$/,
        use: "css-loader",
      },
    ],
  },
};

The result of running webpack is one file for each build. I use these files to deploy the Lambda and Lambda@Edge functions. To automate the build process, I add several scripts to package.json.

"build-server": "webpack --config webpack.server.js --mode=development",
"build-edge": "webpack --config webpack.edge.js --mode=development",
"build-all": "npm-run-all --parallel build build-server build-edge"

Launch the build by running npm run build-all.

Deploying the application

After the application successfully builds, I deploy to the AWS Cloud. I use AWS CDK for an infrastructure as code approach. The code is located in cdk/lib/ssr-stack.ts.

First, I create an S3 bucket for storing the static content and I pass the name of the bucket as a parameter. To ensure only CloudFront can access my S3 bucket, I use an access identity configuration:

const mySiteBucketName = new CfnParameter(this, "mySiteBucketName", {
      type: "String",
      description: "The name of S3 bucket to upload react application"
    });

const mySiteBucket = new s3.Bucket(this, "ssr-site", {
      bucketName: mySiteBucketName.valueAsString,
      websiteIndexDocument: "index.html",
      websiteErrorDocument: "error.html",
      publicReadAccess: false,
      //only for demo not to use in production
      removalPolicy: cdk.RemovalPolicy.DESTROY
    });

new s3deploy.BucketDeployment(this, "Client-side React app", {
      sources: [s3deploy.Source.asset("../simple-ssr/build/")],
      destinationBucket: mySiteBucket
    });

const originAccessIdentity = new cloudfront.OriginAccessIdentity(
      this,
      "ssr-oia"
    );
    mySiteBucket.grantRead(originAccessIdentity);

I deploy the Lambda function from the build directory and configure an integration with API Gateway. I also note the API Gateway domain name for later use in the CloudFront distribution.

const ssrFunction = new lambda.Function(this, "ssrHandler", {
      runtime: lambda.Runtime.NODEJS_12_X,
      code: lambda.Code.fromAsset("../simple-ssr/server-build"),
      memorySize: 128,
      timeout: Duration.seconds(5),
      handler: "index.handler"
    });

const ssrApi = new apigw.LambdaRestApi(this, "ssrEndpoint", {
      handler: ssrFunction
    });

const apiDomainName = `${ssrApi.restApiId}.execute-api.${this.region}.amazonaws.com`;

I configure the Lambda@Edge function. It’s important to create a function version explicitly to use with CloudFront:

const ssrEdgeFunction = new lambda.Function(this, "ssrEdgeHandler", {
      runtime: lambda.Runtime.NODEJS_12_X,
      code: lambda.Code.fromAsset("../simple-ssr/edge-build"),
      memorySize: 128,
      timeout: Duration.seconds(5),
      handler: "index.handler"
    });

const ssrEdgeFunctionVersion = new lambda.Version(
      this,
      "ssrEdgeHandlerVersion",
      { lambda: ssrEdgeFunction }
    );

Finally, I configure the CloudFront distribution to communicate with all the origins:

const distribution = new cloudfront.CloudFrontWebDistribution(
      this,
      "ssr-cdn",
      {
        originConfigs: [
          {
            s3OriginSource: {
              s3BucketSource: mySiteBucket,
              originAccessIdentity: originAccessIdentity
            },
            behaviors: [
              {
                isDefaultBehavior: true,
                lambdaFunctionAssociations: [
                  {
                    eventType: cloudfront.LambdaEdgeEventType.ORIGIN_REQUEST,
                    lambdaFunction: ssrEdgeFunctionVersion
                  }
                ]
              }
            ]
          },
          {
            customOriginSource: {
              domainName: apiDomainName,
              originPath: "/prod",
              originProtocolPolicy: cloudfront.OriginProtocolPolicy.HTTPS_ONLY
            },
            behaviors: [
              {
                pathPattern: "/ssr"
              }
            ]
          }
        ]
      }
    );

The template is now ready for deployment. This approach allows you to use this code in your Continuous Integration and Continuous Delivery/Deployment (CI/CD) pipelines to automate deployments of your SSR applications. Also, you can create a CDK construct to reuse this code in different applications.

Cleaning up

To delete all the resources used in this solution, run:

cd react-ssr-lambda/cdk
cdk destroy SSRApiStack
cdk destroy SSRAppStack
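
CloudFormation cannot delete an S3 bucket that still contains objects, so if the SSRAppStack deletion fails on the bucket, empty it first and re-run the destroy command. For example:

aws s3 rm s3://<your bucket name> --recursive
cdk destroy SSRAppStack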

Conclusion

This post demonstrates two ways you can implement and deploy a solution for server-side rendering in React applications, by using Lambda or Lambda@Edge.

It also shows how to use open-source tools and AWS CDK to automate the building and deployment of such applications.

For more serverless learning resources, visit Serverless Land.

Building a Jenkins Pipeline with AWS SAM

Post Syndicated from Eric Johnson original https://aws.amazon.com/blogs/compute/building-a-jenkins-pipeline-with-aws-sam/

This post is courtesy of Tarun Kumar Mall, SDE at AWS.

This post shows how to set up a multi-stage pipeline on a Jenkins host for a serverless application, using the AWS Serverless Application Model (AWS SAM).

Overview

This tutorial uses the Jenkins Pipeline plugin. A commit to the main branch of the repository starts the pipeline, which builds and deploys the application using the AWS SAM CLI. This tutorial deploys a small serverless API application called HelloWorldApi.

The pipeline consists of stages to build and deploy the application. Jenkins first ensures that the build environment is set up and installs any necessary tools. Next, Jenkins prepares the build artifacts. It promotes the artifacts to the next stage, where they are deployed to a beta environment using the AWS SAM CLI. Integration tests are run after deployment. If the tests pass, the application is deployed to the production environment.

CICD workflow diagram

The following prerequisites are required: a Jenkins host, the AWS SAM CLI, the AWS CLI, Node.js and npm, a GitHub account, and an AWS account.

Setting up the backend application and development stack

Using AWS CloudFormation to define the infrastructure, you can create multiple environments or stacks from the same infrastructure definition. A “dev stack” is a copy of production infrastructure deployed to a developer account for testing purposes.

As serverless services use a pay-for-value model, it can be cost effective to use a high-fidelity copy of your production stack. Dev stacks are created by each developer as needed and deleted without having any negative impact on production.

For complex applications, it may not be feasible for every developer to have their own stack. However, for this tutorial, setting up the dev stack first for testing is recommended. Setting up a dev stack takes you through a manual process of how a stack is created. Later, this process is used to automate the setup using Jenkins.

To create a dev stack:

  1. Clone backend application repository https://github.com/aws-samples/aws-sam-jenkins-pipeline-tutorial
    git clone https://github.com/aws-samples/aws-sam-jenkins-pipeline-tutorial.git
  2. Build the application and run the guided deploy command:
    cd aws-sam-jenkins-pipeline-tutorial
    sam build
    sam deploy --guided

    AWS SAM guided deploy output

This sets up a development stack and saves the settings in the samconfig.toml file, with a configuration environment specific to the user. This also triggers a deployment.

  1. After deployment, make a small code change. For example, in the file hello-world/app.js change the message Hello world to Hello world from user <your name>.
  2. Deploy the updated code:
    sam build
    sam deploy --config-env <your_username>

With this command, each developer can create their own configuration environment. They can use this for deploying to their development stack and testing changes before pushing changes to the repository.

Once deployment finishes, the API endpoint is displayed in the console output. You can use this endpoint to make GET requests and test the API manually.
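
For example, assuming the sample exposes a /hello resource on the Prod stage, a quick manual check looks like:

curl https://<api-id>.execute-api.us-west-2.amazonaws.com/Prod/hello/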

Deployment output

To update and run the integration test:

  1. Open the hello-world/tests/integ/test-integ-api.js file.
  2. Update the assert statement in line 32 to include <your name> from the previous step:
    it("verifies if response contains my username", async () => {
      assert.include(apiResponse.data.message, "<your name>");
    });
  3. Open package.json and add the integ-test line shown below:
    {
      ...
      "scripts": {
        "test": "mocha tests/unit/",
        "integ-test": "mocha tests/integ/"
      }
      ...
    }
  4. From the terminal, run the following commands:
    cd hello-world
    npm install
    AWS_REGION=us-west-2 STACK_NAME=sam-app-user1-dev-stack npm run integ-test
    If you are using Microsoft Windows, instead run:
    cd hello-world
    npm install
    set AWS_REGION=us-west-2
    set STACK_NAME=sam-app-user1-dev-stack
    npm run integ-test

    Test results

You have deployed a fully configured development stack with working integration tests. To push the code to GitHub:

  1. Create a new repository in GitHub.
    1. From the GitHub account homepage, choose New.
    2. Enter a repository name and choose Create Repository.
    3. Copy the repository URL.
  2. From the root directory of the AWS SAM project, run:
    git init
    git commit -am "first commit"
    git remote add origin <your-repository-url>
    git push -u origin main

Creating an IAM user for Jenkins

To create an IAM user for the Jenkins deployment:

  1. Sign in to the AWS Management Console and navigate to IAM.
  2. Select Users from side navigation and choose Add user.
  3. Enter the User name as sam-jenkins-demo-credentials and grant Programmatic access to this user.
  4. On the next page, select Attach existing policies directly and choose Create Policy.
  5. Select the JSON tab and enter the following policy. Replace <YOUR_ACCOUNT_ID> with your AWS account ID:
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "CloudFormationTemplate",
                "Effect": "Allow",
                "Action": [
                    "cloudformation:CreateChangeSet"
                ],
                "Resource": [
                    "arn:aws:cloudformation:*:aws:transform/Serverless-2016-10-31"
                ]
            },
            {
                "Sid": "CloudFormationStack",
                "Effect": "Allow",
                "Action": [
                    "cloudformation:CreateChangeSet",
                    "cloudformation:DeleteStack",
                    "cloudformation:DescribeChangeSet",
                    "cloudformation:DescribeStackEvents",
                    "cloudformation:DescribeStacks",
                    "cloudformation:ExecuteChangeSet",
                    "cloudformation:GetTemplateSummary"
                ],
                "Resource": [
                    "arn:aws:cloudformation:*:<YOUR_ACCOUNT_ID>:stack/*"
                ]
            },
            {
                "Sid": "S3",
                "Effect": "Allow",
                "Action": [
                    "s3:CreateBucket",
                    "s3:GetObject",
                    "s3:PutObject"
                ],
                "Resource": [
                    "arn:aws:s3:::*/*"
                ]
            },
            {
                "Sid": "Lambda",
                "Effect": "Allow",
                "Action": [
                    "lambda:AddPermission",
                    "lambda:CreateFunction",
                    "lambda:DeleteFunction",
                    "lambda:GetFunction",
                    "lambda:GetFunctionConfiguration",
                    "lambda:ListTags",
                    "lambda:RemovePermission",
                    "lambda:TagResource",
                    "lambda:UntagResource",
                    "lambda:UpdateFunctionCode",
                    "lambda:UpdateFunctionConfiguration"
                ],
                "Resource": [
                    "arn:aws:lambda:*:<YOUR_ACCOUNT_ID>:function:*"
                ]
            },
            {
                "Sid": "IAM",
                "Effect": "Allow",
                "Action": [
                    "iam:AttachRolePolicy",
                    "iam:CreateRole",
                    "iam:DeleteRole",
                    "iam:DetachRolePolicy",
                    "iam:GetRole",
                    "iam:PassRole",
                    "iam:TagRole"
                ],
                "Resource": [
                    "arn:aws:iam::<YOUR_ACCOUNT_ID>:role/*"
                ]
            },
            {
                "Sid": "APIGateway",
                "Effect": "Allow",
                "Action": [
                    "apigateway:DELETE",
                    "apigateway:GET",
                    "apigateway:PATCH",
                    "apigateway:POST",
                    "apigateway:PUT"
                ],
                "Resource": [
                    "arn:aws:apigateway:*::*"
                ]
            }
        ]
    }
  6. Choose Review Policy and add a policy name on the next page.
  7. Choose the Create Policy button.
  8. Return to the previous tab to continue creating the IAM user. Choose Refresh and search for the policy name you created. Select the policy.
  9. Choose Next: Tags and then Next: Review.
  10. Choose Create user and save the Access key ID and Secret access key.

Configuring Jenkins

To configure AWS credentials in Jenkins:

  1. On the Jenkins dashboard, go to Manage Jenkins > Manage Plugins in the Available tab. Search for the Pipeline: AWS Steps plugin and choose Install without restart.
  2. Navigate to Manage Jenkins > Manage Credentials > Jenkins (global) > Global Credentials > Add Credentials.
  3. Select Kind as AWS credentials and use the ID sam-jenkins-demo-credentials.
  4. Enter the access key ID and secret access key and choose OK.

    Jenkins credential configuration

  5. Create Amazon S3 buckets for each Region in the pipeline. S3 bucket names must be unique within a partition:
    aws s3 mb s3://sam-jenkins-demo-us-west-2-<your_name> --region us-west-2
    aws s3 mb s3://sam-jenkins-demo-us-east-1-<your_name> --region us-east-1
  6. Create a file named Jenkinsfile at the root of the project and add:
    pipeline {
      agent any
     
      stages {
        stage('Install sam-cli') {
          steps {
            sh 'python3 -m venv venv && venv/bin/pip install aws-sam-cli'
            stash includes: '**/venv/**/*', name: 'venv'
          }
        }
        stage('Build') {
          steps {
            unstash 'venv'
            sh 'venv/bin/sam build'
            stash includes: '**/.aws-sam/**/*', name: 'aws-sam'
          }
        }
        stage('beta') {
          environment {
            STACK_NAME = 'sam-app-beta-stage'
            S3_BUCKET = 'sam-jenkins-demo-us-west-2-user1'
          }
          steps {
            withAWS(credentials: 'sam-jenkins-demo-credentials', region: 'us-west-2') {
              unstash 'venv'
              unstash 'aws-sam'
              sh 'venv/bin/sam deploy --stack-name $STACK_NAME -t template.yaml --s3-bucket $S3_BUCKET --capabilities CAPABILITY_IAM'
              dir ('hello-world') {
                sh 'npm ci'
                sh 'npm run integ-test'
              }
            }
          }
        }
        stage('prod') {
          environment {
            STACK_NAME = 'sam-app-prod-stage'
            S3_BUCKET = 'sam-jenkins-demo-us-east-1-user1'
          }
          steps {
            withAWS(credentials: 'sam-jenkins-demo-credentials', region: 'us-east-1') {
              unstash 'venv'
              unstash 'aws-sam'
              sh 'venv/bin/sam deploy --stack-name $STACK_NAME -t template.yaml --s3-bucket $S3_BUCKET --capabilities CAPABILITY_IAM'
            }
          }
        }
      }
    }
  7. Commit and push the code to the GitHub repository by running following commands:
    git commit -am "Adding Jenkins pipeline config."
    git push origin -u main

Next, create a Jenkins Pipeline project:

  1. From the Jenkins dashboard, choose New Item, select Pipeline, and enter the project name sam-jenkins-demo-pipeline.

    Jenkins Pipeline creation wizard

  2. Under Build Triggers, select Poll SCM and enter * * * * *. This polls the repository for changes every minute.

    Jenkins build triggers configuration

  3. Under the Pipeline section, select Definition as Pipeline script from SCM.
    • Select GIT under SCM and enter the repository URL.
    • Set Branches to build to */main.
    • Set the Script Path to Jenkinsfile.

      Jenkins pipeline configuration

  4. Save the project.

After the build finishes, you see the pipeline:

Jenkins pipeline stages

Review the logs for the beta stage to check that the integration test is completed successfully.

Jenkins stage logs

Conclusion

This tutorial uses a Jenkins Pipeline to add an automated CI/CD pipeline to an AWS SAM-generated example application. Jenkins automatically builds, tests, and deploys the changes after each commit to the repository.

Using Jenkins, developers can gain the benefits of continuous integration and continuous deployment of serverless applications to the AWS Cloud with minimal configuration.

For more information, see the Jenkins Pipeline and AWS Serverless Application Model documentation.

We want to hear your feedback! Is this approach useful for your organization? Do you want to see another implementation? Contact us on Twitter @edjgeek or via comments!