All posts by Sascha Moellering

Field Notes: Optimize your Java application for Amazon ECS with Quarkus

Post Syndicated from Sascha Moellering original

In this blog post, I show you an interesting approach to implementing a Java-based application and compiling it to a native image using Quarkus. This native image is the main application, which is containerized and runs in an Amazon Elastic Container Service (Amazon ECS) or Amazon Elastic Kubernetes Service (Amazon EKS) cluster on AWS Fargate.

Amazon ECS is a fully managed container orchestration service, and Amazon EKS is a fully managed Kubernetes service; both support AWS Fargate to provide serverless compute for containers. Fargate removes the need to provision and manage servers, lets you specify and pay for resources per application, and improves security through application isolation by design. AWS Lambda is a serverless compute service that runs your code in response to events and automatically manages the underlying compute resources for you.

Quarkus is a Supersonic Subatomic Java framework that uses OpenJDK HotSpot as well as GraalVM and over fifty libraries like RESTEasy, Vert.x, Hibernate, and Netty. In a previous blog post, I demonstrated how GraalVM can be used to optimize the size of Docker images. GraalVM is an open source, high-performance polyglot virtual machine from Oracle. I use it to compile native images ahead of time to improve startup performance and to reduce the memory consumption and file size of Java Virtual Machine (JVM)-based applications. The framework that enables this ahead-of-time (AOT) compilation is called Substrate VM.

Application Architecture

First, review the GitHub repository containing the demo application.

Our application is a simple REST-based Create Read Update Delete (CRUD) service that implements basic user management functionalities. All data is persisted in an Amazon DynamoDB table. Quarkus offers an extension for Amazon DynamoDB that is based on AWS SDK for Java V2. This Quarkus extension supports two different programming models: blocking access and asynchronous programming. For local development, DynamoDB Local is also supported. DynamoDB Local is the downloadable version of DynamoDB that lets you write and test applications without accessing the DynamoDB service. Instead, the database is self-contained on your computer. When you are ready to deploy your application in production, you can make a few minor changes to the code so that it uses the DynamoDB service.
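To make the switch between DynamoDB Local and the DynamoDB service more concrete, here is a minimal sketch using the AWS SDK for Java V2. The environment variable name and the factory class are illustrative and not part of the demo application.

import java.net.URI;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.dynamodb.DynamoDbClient;

public class DynamoDbClientFactory {

    // Hypothetical helper: points the SDK at DynamoDB Local during development,
    // and at the DynamoDB service otherwise.
    public static DynamoDbClient create() {
        String localEndpoint = System.getenv("DYNAMODB_LOCAL_ENDPOINT"); // e.g. http://localhost:8000
        if (localEndpoint != null) {
            return DynamoDbClient.builder()
                    .endpointOverride(URI.create(localEndpoint))
                    .region(Region.US_EAST_1)
                    .build();
        }
        return DynamoDbClient.builder().build();
    }
}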

The REST functionality is located in the class UserResource, which uses the JAX-RS implementation RESTEasy. This class invokes the UserService, which implements the functionality to access a DynamoDB table with the AWS SDK for Java. All user-related information is stored in a Plain Old Java Object (POJO) called User.
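As a rough sketch (not the exact code from the repository), a JAX-RS resource in Quarkus could look like the following; the path and the service methods findAll and save are assumptions made for illustration:

import java.util.List;
import javax.inject.Inject;
import javax.ws.rs.Consumes;
import javax.ws.rs.GET;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

@Path("/users")
@Produces(MediaType.APPLICATION_JSON)
@Consumes(MediaType.APPLICATION_JSON)
public class UserResource {

    @Inject
    UserService userService;

    @GET
    public List<User> list() {
        return userService.findAll();   // hypothetical service method
    }

    @POST
    public User create(User user) {
        return userService.save(user);  // hypothetical service method
    }
}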

Building the application

To create a Docker container image that can be used in the task definition of my ECS cluster, I follow three steps: build the application, create the Docker container image, and push the created image to my Docker image registry.

To build the application, I use Maven with different profiles. The first (default) profile uses a standard build to create an uber JAR, a self-contained application with all dependencies. This is very useful if you want to run local tests with your application, because the build time is much shorter compared to the native-image build. When you run the package command, it also executes all tests, which means you need DynamoDB Local running on your workstation.

$ docker run -p 8000:8000 amazon/dynamodb-local -jar DynamoDBLocal.jar -inMemory -sharedDb

$ mvn package

The second profile uses GraalVM to compile the application into a native image. In this case, you use the native image as base for a Docker container. The Dockerfile can be found under src/main/docker/Dockerfile.native and uses a build-pattern called multi-stage build.

$ mvn package -Pnative -Dquarkus.native.container-build=true

An interesting aspect of multi-stage builds is that you can use multiple FROM statements in your Dockerfile. Each FROM instruction can use a different base image, and begins a new stage of the build. You can pick the necessary files and copy them from one stage to another, thereby limiting the number of files you have to copy. Use this feature to build your application in one stage and copy your compiled artifact and additional files to your target image. In this case, you use ubi-quarkus-native-image:20.1.0-java11 as base image and copy the necessary TLS-files (SunEC library and the certificates) and point your application to the necessary files with JVM properties.

# Build stage: GraalVM native-image builder, provides the certificates and the SunEC library
FROM quay.io/quarkus/ubi-quarkus-native-image:20.1.0-java11 as nativebuilder
RUN mkdir -p /tmp/ssl-libs/lib \
  && cp /opt/graalvm/lib/security/cacerts /tmp/ssl-libs \
  && cp /opt/graalvm/lib/libsunec.so /tmp/ssl-libs/lib/

# Runtime stage (a minimal UBI base image is assumed here)
FROM registry.access.redhat.com/ubi8/ubi-minimal
WORKDIR /work/
COPY target/*-runner /work/application
COPY --from=nativebuilder /tmp/ssl-libs/ /work/
RUN chmod 775 /work
CMD ["./application", "-Dquarkus.http.host=0.0.0.0", "-Djava.library.path=/work/lib", "-Djavax.net.ssl.trustStore=/work/cacerts"]

In the second and third steps, I build and push the Docker image to a Docker registry of my choice, which is straightforward:

$ docker build -f src/main/docker/Dockerfile.native -t <repo/image:tag> .

$ docker push <repo/image:tag>

Setting up the infrastructure

You’ve compiled the application to a native image and built a Docker image. Now, you set up the basic infrastructure consisting of an Amazon Virtual Private Cloud (VPC), an Amazon ECS or Amazon EKS cluster with the AWS Fargate launch type, an Amazon DynamoDB table, and an Application Load Balancer.

Figure 1: Architecture of the infrastructure (for Amazon ECS)

Codifying your infrastructure allows you to treat your infrastructure just as code. In this case, you use the AWS Cloud Development Kit (AWS CDK), an open source software development framework, to model and provision your cloud application resources using familiar programming languages. The code for the CDK application can be found in the demo application’s code repository under eks_cdk/lib/ecs_cdk-stack.ts or ecs_cdk/lib/ecs_cdk-stack.ts. Set up the infrastructure in the AWS Region us-east-1:

$ npm install -g aws-cdk // Install the CDK
$ cd ecs_cdk
$ npm install // retrieves dependencies for the CDK stack
$ npm run build // compiles the TypeScript files to JavaScript
$ cdk deploy  // Deploys the CloudFormation stack

The output of the AWS CloudFormation stack is the load balancer’s DNS record. The heart of our infrastructure is an Amazon ECS or Amazon EKS cluster with AWS Fargate launch type. The Amazon ECS cluster is set up as follows:

const cluster = new ecs.Cluster(this, "quarkus-demo-cluster", {
      vpc: vpc
    });

    const logging = new ecs.AwsLogDriver({
      streamPrefix: "quarkus-demo"
    });

    const taskRole = new iam.Role(this, 'quarkus-demo-taskRole', {
      roleName: 'quarkus-demo-taskRole',
      assumedBy: new iam.ServicePrincipal('ecs-tasks.amazonaws.com')
    });

    const taskDef = new ecs.FargateTaskDefinition(this, "quarkus-demo-taskdef", {
      taskRole: taskRole
    });

    const container = taskDef.addContainer('quarkus-demo-web', {
      image: ecs.ContainerImage.fromRegistry("<repo/image:tag>"),
      memoryLimitMiB: 256,
      cpu: 256,
      logging
    });

    container.addPortMappings({
      containerPort: 8080,
      hostPort: 8080,
      protocol: ecs.Protocol.TCP
    });

    const fargateService = new ecs_patterns.ApplicationLoadBalancedFargateService(this, "quarkus-demo-service", {
      cluster: cluster,
      taskDefinition: taskDef,
      publicLoadBalancer: true,
      desiredCount: 3,
      listenerPort: 8080
    });

Cleaning up

After you are finished, you can easily destroy all of these resources with a single command to save costs.

$ cdk destroy


In this post, I described how Java applications can be implemented using Quarkus, compiled to a native image, and run using Amazon ECS or Amazon EKS on AWS Fargate. I also showed how AWS CDK can be used to set up the basic infrastructure. I hope I’ve given you some ideas on how you can optimize your existing Java application to reduce startup time and memory consumption. Feel free to submit enhancements to the sample template in the source repository or provide feedback in the comments.

We also encourage you to explore how to optimize your Java application for AWS Lambda with Quarkus.

Field Notes provides hands-on technical guidance from AWS Solutions Architects, consultants, and technical account managers, based on their experiences in the field solving real-world business problems for customers.

Field Notes: Optimize your Java application for AWS Lambda with Quarkus

Post Syndicated from Sascha Moellering original

This blog post is a continuation of an existing article about optimizing your Java application for Amazon ECS with Quarkus. In this blog post, we examine the benefits of Quarkus in the context of AWS Lambda. Quarkus is a framework that uses the Open Java Development Kit (OpenJDK) with GraalVM and over 50 libraries like RESTEasy, Vert.x, Hibernate, and Netty. This blog post shows you an effective approach for implementing a Java-based application and compiling it into a native image through Quarkus. You can find the demo application code on GitHub.

Getting started

To build and deploy this application, you will need the AWS CLI, the AWS Serverless Application Model (AWS SAM), Git, Maven, OpenJDK 11, and Docker. AWS Cloud9 makes the setup easy. AWS Cloud9 is a cloud-based integrated development environment (IDE) that lets you write, run, and debug your code with just a browser. It comes with the AWS tools, Git, and Docker installed.

Create a new AWS Cloud9 EC2 environment based on Amazon Linux. Because the compilation process is very memory intensive, it is recommended to select an instance with at least 8 GiB of RAM (for example, m5.large).

Figure 1 – AWS Cloud9 environment

Launching the AWS Cloud9 environment from the AWS Management Console, you select the instance type. Pick an instance type with at least 8 GiB of RAM.

After creation, you are redirected automatically to your AWS Cloud9 environment’s IDE. You can navigate back to your IDE at any time through the AWS Cloud9 console.

All code blocks in this blog post refer to commands you enter into the terminal provided by the AWS Cloud9 IDE. AWS Cloud9 executes the commands on an underlying EC2 instance. If necessary, you can open a new Terminal in AWS Cloud9 by selecting Window → New Terminal.

Modify the EBS volume of the AWS Cloud9 EC2 instance to at least 20 GB to have enough space for the compilation of your application. Then, reboot the instance using the following command in the AWS Cloud9 IDE terminal, and wait for the AWS Cloud9 IDE to automatically reconnect to your instance.

sudo reboot

To satisfy the OpenJDK 11 requirement, run the following commands in the AWS Cloud9 IDE terminal to install Amazon Corretto 11. Amazon Corretto is a no-cost, multiplatform, production-ready distribution of the OpenJDK.

sudo curl -L -o /etc/yum.repos.d/corretto.repo
sudo yum install -y java-11-amazon-corretto-devel

You will build this application using Apache Maven. You must install it via the AWS Cloud9 IDE terminal by executing the following code.

sudo wget \
    -O /etc/yum.repos.d/epel-apache-maven.repo
sudo sed -i s/\$releasever/6/g /etc/yum.repos.d/epel-apache-maven.repo
sudo yum install -y apache-maven

After you clone the demo application code, you can build the application. Compiling the application to a self-contained JAR is straightforward. Navigate to the aws-quarkus-demo/lambda directory and kick off the Apache Maven build.

git clone
cd aws-quarkus-demo/lambda/
mvn clean install

To compile the application to a native binary, you must add the parameter that tells the Apache Maven build to run the necessary native-image steps.

mvn clean install -Dnative-image.docker-build=true

AWS Lambda layers and custom runtimes

Create an AWS Lambda custom runtime from the application. A runtime is a program that runs an AWS Lambda function’s handler method when the function is invoked. Include a runtime in your function’s deployment package as an executable file named bootstrap.

A runtime is responsible for running the function’s setup code, reading the handler name from an environment variable, and reading invocation events from the AWS Lambda runtime API. The runtime passes the event data to the function handler, and posts the response from the handler back to AWS Lambda. Your custom runtime can be a shell script, a script in a language that’s included in Amazon Linux, or a binary executable file that’s compiled in Amazon Linux.
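To make this runtime contract more concrete, the following is a minimal sketch of the processing loop a custom runtime performs, written against the documented Lambda runtime API. It is illustrative only; it is not the bootstrap that Quarkus generates, and the handle method is a placeholder.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class MinimalRuntimeLoop {

    public static void main(String[] args) throws Exception {
        String api = System.getenv("AWS_LAMBDA_RUNTIME_API");
        HttpClient http = HttpClient.newHttpClient();

        while (true) {
            // 1. Fetch the next invocation event from the runtime API.
            HttpResponse<String> next = http.send(
                    HttpRequest.newBuilder(URI.create("http://" + api + "/2018-06-01/runtime/invocation/next"))
                            .GET().build(),
                    HttpResponse.BodyHandlers.ofString());
            String requestId = next.headers().firstValue("Lambda-Runtime-Aws-Request-Id").orElseThrow();

            // 2. Pass the event data to the handler and collect the result.
            String result = handle(next.body());

            // 3. Post the response back to the runtime API.
            http.send(
                    HttpRequest.newBuilder(URI.create("http://" + api + "/2018-06-01/runtime/invocation/" + requestId + "/response"))
                            .POST(HttpRequest.BodyPublishers.ofString(result)).build(),
                    HttpResponse.BodyHandlers.ofString());
        }
    }

    private static String handle(String event) {
        return event; // placeholder: echo the event
    }
}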

Application architecture

The application architecture is similar to the architecture we described in “Optimize your Java application for Amazon ECS with Quarkus”.

Figure 2 – Architecture of the application

The architecture of the application is simple and consists of a few classes that implement a REST service which stores all information in an Amazon DynamoDB table. Quarkus offers an extension for Amazon DynamoDB that is based on the AWS SDK for Java V2.

Setting up the infrastructure

After you build the AWS Lambda function (as a regular build or native build), package your application and deploy it. The command sam deploy creates a zip file of your code and dependencies, uploads it to an Amazon S3 bucket, creates an AWS CloudFormation template, and deploys its resources.

The following command guides you through all necessary steps for packaging and deployment.

sam deploy --template-file sam.jvm.yaml \
    --stack-name APIGatewayQuarkusDemo --capabilities CAPABILITY_IAM


If you want to deploy the native version of the application, you must use a different AWS SAM template.

sam deploy --template-file sam.native.yaml \
    --stack-name APIGatewayQuarkusDemo --capabilities CAPABILITY_IAM

During deployment, the AWS CloudFormation template creates the AWS Lambda function, an Amazon DynamoDB table, an Amazon API Gateway REST-API, and all necessary IAM roles. The output of the AWS CloudFormation stack is the API Gateway’s DNS record.

aws cloudformation describe-stacks \
  --stack-name APIGatewayQuarkusDemo \
  --query "Stacks[].Outputs[?OutputKey=='ApiUrl'].OutputValue" \
  --output text

The command returns the invoke URL of the deployed API, which you use in the following tests.


Testing the application

After the resources have been created successfully, you can start testing.

1. Create a user:

curl -d '{"userName":"jdoe", "firstName":"John", "lastName":"Doe", "age":"35"}' \
    -H "Content-Type: application/json" \
    -X POST https://<your-api-gateway-url>/prod/users

2. List all the users that you created:

curl https://<your-api-gateway-url>/prod/users

3. You can get a specific user by userId:

curl -X GET 'https://<your-api-gateway-url>/prod/users/<userId>'

4. If you want to delete the user that you’ve created recently, send a DELETE request to the specific userId:

curl -X DELETE 'https://<your-api-gateway-url>/prod/users/<userId>'

Performance considerations

Let’s investigate the impact of using a native build in comparison to the regular build of our sample Java application. In this benchmark, we focus on the performance of the application. We want to get the AWS services and architecture out of the equation as much as possible, so we measure the duration of the AWS Lambda function executions of a function including its downstream calls to Amazon DynamoDB. This duration is provided in the Amazon CloudWatch Logs of the function.

The following two charts illustrate 40 Create (POST), Read (GET), and Delete (DELETE) call iterations for a user, with the execution durations plotted on the vertical axis. The first graph shows how the duration develops over time for a single JVM instance. The second graph exclusively reports on the performance of iterations that each hit a fresh JVM.

This is an example application to demonstrate the use of a native build. When you start optimizing your own application, make sure to read the best practices for working with AWS Lambda functions first, and verify the effect of all your optimizations.

The “single cold call” graph starts with a cold call, and each consecutive call hits the same AWS Lambda function container and thus the same JVM. This graph shows both the regular build and the native build (denoted with *) of our application as an AWS Lambda function with 256 MB of memory on Java 11. The native build has been compiled with Quarkus version 1.2.1.

Figure 3 – Single cold call, followed by warm calls only

The first executions of the regular build have long durations (the vertical axis has a logarithmic scale) but quickly drop below 100 ms. Still, you can observe an ongoing fluctuation between 10 and 100 ms. For the native build you can observe a consistent execution duration, except for the first call and one outlier in iteration 20. The first call is slow because it is a cold call. The outlier occurs because the Substrate VM still needs garbage collector pauses. Only the first call is slower than the calls to a warmed up regular build.

Let’s dive deeper into the cold calls of the application and their duration. The following chart shows 40 cold calls for both the regular build and the native build.

Figure 4 – Each of the 40 Create-Read-Delete iterations starts with one cold call

You can observe a consistent and predictable duration of the first calls. In this example, the execution of the AWS Lambda function of the native build takes just 0.6–5% of the duration of the regular build.


GraalVM assumes that all code is known at build time of the image, which means that no new code will be loaded at runtime. This means that not all applications can be optimized using GraalVM. Read a list of limitations in the GraalVM documentation. If the native image build for your application fails, a fallback image is created that requires a full JVM for execution.
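A common example of such a limitation is code that is only reachable through reflection, which the closed-world analysis cannot see. Quarkus provides the @RegisterForReflection annotation to register such classes explicitly. The following is a minimal sketch; the fields shown are illustrative, not the exact POJO from the demo application.

import io.quarkus.runtime.annotations.RegisterForReflection;

// Registers the class so that the native image keeps its constructors, methods,
// and fields available for reflection (for example, for JSON serialization).
@RegisterForReflection
public class User {

    private String userId;
    private String userName;

    public String getUserId() { return userId; }
    public void setUserId(String userId) { this.userId = userId; }
    public String getUserName() { return userName; }
    public void setUserName(String userName) { this.userName = userName; }
}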

Cleaning up

After you are finished, you can easily destroy all of these resources with a single command to save costs.

aws cloudformation delete-stack --stack-name <your_stack_name>

Also delete your AWS Cloud9 IDE from the AWS Cloud9 console.


In this post, we described how Java applications are compiled to a native image through Quarkus and run using AWS Lambda. Testing the demo application, we’ve seen a performance improvement of more than 95% compared to a regular build. Keep in mind that the potential performance benefits vary depending on your application and its downstream calls to other services. Due to the limitations of GraalVM, your application may not be a candidate for optimization.

We also demonstrated how AWS SAM deploys the native image as an AWS Lambda function with a custom runtime behind an Amazon API Gateway. We hope we’ve given you some ideas on how you can optimize your existing Java application to reduce startup time and memory consumption. Feel free to submit enhancements to the sample application in the source repository.

Field Notes provides hands-on technical guidance from AWS Solutions Architects, consultants, and technical account managers, based on their experiences in the field solving real-world business problems for customers.

Reactive Microservices Architecture on AWS

Post Syndicated from Sascha Moellering original

Microservice-application requirements have changed dramatically in recent years. These days, applications operate with petabytes of data, need almost 100% uptime, and end users expect sub-second response times. Typical N-tier applications can’t deliver on these requirements.

The Reactive Manifesto, published in 2014, describes the essential characteristics of reactive systems: responsiveness, resiliency, elasticity, and being message driven.

Being message driven is perhaps the most important characteristic of reactive systems. Asynchronous messaging helps in the design of loosely coupled systems, which is a key factor for scalability. In order to build a highly decoupled system, it is important to isolate services from each other. As already described, isolation is an important aspect of the microservices pattern. Indeed, reactive systems and microservices are a natural fit.

Implemented Use Case
This reference architecture illustrates a typical ad-tracking implementation.

Many ad-tracking companies collect massive amounts of data in near-real-time. In many cases, these workloads are very spiky and heavily depend on the success of the ad-tech companies’ customers. Typically, an ad-tracking-data use case can be separated into a real-time part and a non-real-time part. In the real-time part, it is important to collect data as fast as possible and ask several questions, including: “Is this a valid combination of parameters?”, “Does this program exist?”, “Is this program still valid?”

Because response time has a huge impact on conversion rate in advertising, it is important for advertisers to respond as fast as possible. This information should be kept in memory to reduce communication overhead with the caching infrastructure. The tracking application itself should be as lightweight and scalable as possible. For example, the application shouldn’t have any shared mutable state and it should use reactive paradigms. In our implementation, one main application is responsible for this real-time part. It collects and validates data, responds to the client as fast as possible, and asynchronously sends events to backend systems.

The non-real-time part of the application consumes the generated events and persists them in a NoSQL database. In a typical tracking implementation, clicks, cookie information, and transactions are matched asynchronously and persisted in a data store. The matching part is not implemented in this reference architecture. Many ad-tech architectures use frameworks like Hadoop for the matching implementation.

The system can be logically divided into the data collection part and the core data update part. The data collection part is responsible for collecting, validating, and persisting the data. In the core data update part, the data that is used for validation gets updated and all subscribers are notified of new data.

Components and Services

Main Application
The main application is implemented using Java 8 and uses Vert.x as the main framework. Vert.x is an event-driven, reactive, non-blocking, polyglot framework for implementing microservices. It runs on the Java Virtual Machine (JVM) and uses the low-level I/O library Netty. You can write applications in Java, JavaScript, Groovy, Ruby, Kotlin, Scala, and Ceylon. The framework offers a simple and scalable actor-like concurrency model. Vert.x calls handlers by using a thread known as an event loop. To use this model, you have to write code known as “verticles.” Verticles share certain similarities with actors in the actor model; to use them, you implement the verticle interface. Verticles communicate with each other by generating messages on a single event bus. Those messages are sent on the event bus to a specific address, and verticles can register to this address by using handlers.
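As a rough sketch (not taken from the reference implementation), a verticle that registers a handler on an event bus address and replies to messages could look like this; the address name is illustrative:

import io.vertx.core.AbstractVerticle;
import io.vertx.core.json.JsonObject;

public class EchoVerticle extends AbstractVerticle {

    @Override
    public void start() {
        // Register a handler on a (hypothetical) event bus address.
        vertx.eventBus().<JsonObject>consumer("echo.address", message -> {
            JsonObject body = message.body();
            // Reply to the sender without blocking the event loop.
            message.reply(body.put("handled", true));
        });
    }
}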

With only a few exceptions, none of the APIs in Vert.x block the calling thread. Similar to Node.js, Vert.x uses the reactor pattern. However, in contrast to Node.js, Vert.x uses several event loops. Unfortunately, not all APIs in the Java ecosystem are written asynchronously, for example, the JDBC API. Vert.x offers the possibility to run these blocking APIs without blocking the event loop. These special verticles are called worker verticles. You don’t execute worker verticles by using the standard Vert.x event loops, but by using a dedicated thread from a worker pool. This way, the worker verticles don’t block the event loop.
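A minimal sketch of deploying such a worker verticle (the verticle class name is illustrative):

import io.vertx.core.DeploymentOptions;
import io.vertx.core.Vertx;

public class WorkerDeployment {

    public static void main(String[] args) {
        Vertx vertx = Vertx.vertx();

        // Deploy the verticle on the worker pool so that blocking calls
        // (for example, JDBC) do not stall the event loop.
        vertx.deployVerticle("com.example.BlockingJdbcVerticle",
                new DeploymentOptions().setWorker(true));
    }
}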

Our application consists of five different verticles covering different aspects of the business logic. The main entry point for our application is the HttpVerticle, which exposes an HTTP-endpoint to consume HTTP-requests and for proper health checking. Data from HTTP requests such as parameters and user-agent information are collected and transformed into a JSON message. In order to validate the input data (to ensure that the program exists and is still valid), the message is sent to the CacheVerticle.

This verticle implements an LRU-cache with a TTL of 10 minutes and a capacity of 100,000 entries. Instead of adding additional functionality to a standard JDK map implementation, we use Google Guava, which has all the features we need. If the data is not in the L1 cache, the message is sent to the RedisVerticle. This verticle is responsible for data residing in Amazon ElastiCache and uses the Vert.x-redis-client to read data from Redis. In our example, Redis is the central data store. However, in a typical production implementation, Redis would just be the L2 cache with a central data store like Amazon DynamoDB. One of the most important paradigms of a reactive system is to switch from a pull- to a push-based model. To achieve this and reduce network overhead, we’ll use Redis pub/sub to push core data changes to our main application.
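A minimal sketch of such a Guava-based L1 cache with the capacity and TTL mentioned above follows; whether the TTL counts from write or from last access is an assumption here, as is the wrapper class.

import java.util.concurrent.TimeUnit;
import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;
import io.vertx.core.json.JsonObject;

public class ProgramCache {

    // Bounded cache: at most 100,000 entries, each expiring 10 minutes after write.
    private final Cache<String, JsonObject> cache = CacheBuilder.newBuilder()
            .maximumSize(100_000)
            .expireAfterWrite(10, TimeUnit.MINUTES)
            .build();

    public void put(String programId, JsonObject data) {
        cache.put(programId, data);
    }

    public JsonObject get(String programId) {
        return cache.getIfPresent(programId); // null means a miss, so fall back to Redis
    }
}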

Vert.x also supports direct Redis pub/sub integration. The following code shows our subscriber implementation:

vertx.eventBus().<JsonObject>consumer(REDIS_PUBSUB_CHANNEL_VERTX, received -> {
    JsonObject value = received.body().getJsonObject("value");
    String message = value.getString("message");
    JsonObject jsonObject = new JsonObject(message);
    // ... forward the extracted payload to the cache verticle
});

redis.subscribe(Constants.REDIS_PUBSUB_CHANNEL, res -> {
    if (res.succeeded()) {
        LOGGER.info("Subscribed to " + Constants.REDIS_PUBSUB_CHANNEL);
    } else {
        LOGGER.info("Failed to subscribe: " + res.cause());
    }
});

The verticle subscribes to the appropriate Redis pub/sub-channel. If a message is sent over this channel, the payload is extracted and forwarded to the cache-verticle that stores the data in the L1-cache. After storing and enriching data, a response is sent back to the HttpVerticle, which responds to the HTTP request that initially hit this verticle. In addition, the message is converted to ByteBuffer, wrapped in protocol buffers, and sent to an Amazon Kinesis Data Stream.

The following example shows a stripped-down version of the KinesisVerticle:

public class KinesisVerticle extends AbstractVerticle {

    private static final Logger LOGGER = LoggerFactory.getLogger(KinesisVerticle.class);

    private AmazonKinesisAsync kinesisAsyncClient;

    private String eventStream = "EventStream";

    @Override
    public void start() throws Exception {

        EventBus eb = vertx.eventBus();

        kinesisAsyncClient = createClient();

        eventStream = System.getenv(STREAM_NAME) == null ? "EventStream" : System.getenv(STREAM_NAME);

        eb.consumer(Constants.KINESIS_EVENTBUS_ADDRESS, message -> {
            try {
                TrackingMessage trackingMessage = Json.decodeValue((String) message.body(), TrackingMessage.class);
                String partitionKey = trackingMessage.getMessageId();

                byte[] byteMessage = createMessage(trackingMessage);
                ByteBuffer buf = ByteBuffer.wrap(byteMessage);

                sendMessageToKinesis(buf, partitionKey);
            }
            catch (KinesisException exc) {
                LOGGER.error(exc.getMessage(), exc);
            }
        });
    }

    // createClient(), createMessage(), and sendMessageToKinesis() omitted for brevity
}

Kinesis Consumer
This AWS Lambda function consumes data from an Amazon Kinesis Data Stream and persists the data in an Amazon DynamoDB table. In order to improve testability, the invocation code is separated from the business logic. The invocation code is implemented in the class KinesisConsumerHandler and iterates over the Kinesis events pulled from the Kinesis stream by AWS Lambda. Each Kinesis event is unwrapped and transformed from ByteBuffer to protocol buffers and converted into a Java object. Those Java objects are passed to the business logic, which persists the data in a DynamoDB table. In order to improve the duration of successive Lambda calls, the DynamoDB client is instantiated lazily and reused if possible.
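A minimal sketch of how such a handler could iterate over the Kinesis records and lazily reuse the DynamoDB client follows; the persist method is a placeholder for the business logic and is not the exact code from the repository.

import java.nio.ByteBuffer;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.KinesisEvent;

public class KinesisConsumerHandler implements RequestHandler<KinesisEvent, Void> {

    // Instantiated once per container and reused across invocations.
    private static AmazonDynamoDB dynamoDbClient;

    @Override
    public Void handleRequest(KinesisEvent event, Context context) {
        if (dynamoDbClient == null) {
            dynamoDbClient = AmazonDynamoDBClientBuilder.defaultClient();
        }
        for (KinesisEvent.KinesisEventRecord record : event.getRecords()) {
            ByteBuffer data = record.getKinesis().getData();
            // Unwrap the protocol buffers payload and persist it (business logic omitted).
            persist(data);
        }
        return null;
    }

    private void persist(ByteBuffer data) {
        // placeholder for the business logic that writes to the DynamoDB table
    }
}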

Redis Updater
From time to time, it is necessary to update core data in Redis. A very efficient implementation for this requirement is to use AWS Lambda and Amazon Kinesis. New core data is sent over the Amazon Kinesis stream using JSON as the data format and consumed by a Lambda function. This function iterates over the Kinesis events pulled from the Kinesis stream by AWS Lambda. Each Kinesis event is unwrapped and transformed from ByteBuffer to String and converted into a Java object. The Java object is passed to the business logic and stored in Redis. In addition, the new core data is also sent to the main application using Redis pub/sub in order to reduce network overhead and convert from a pull- to a push-based model.

The following example shows the source code to store data in Redis and notify all subscribers:

public void updateRedisData(final TrackingMessage trackingMessage, final Jedis jedis, final LambdaLogger logger) {
    try {
        ObjectMapper mapper = new ObjectMapper();
        String jsonString = mapper.writeValueAsString(trackingMessage);
        Map<String, String> map = marshal(jsonString);
        String statusCode = jedis.hmset(trackingMessage.getProgramId(), map);
    }
    catch (Exception exc) {
        if (null == logger)
            exc.printStackTrace();
        else
            log(exc.getMessage(), logger);
    }
}

public void notifySubscribers(final TrackingMessage trackingMessage, final Jedis jedis, final LambdaLogger logger) {
    try {
        ObjectMapper mapper = new ObjectMapper();
        String jsonString = mapper.writeValueAsString(trackingMessage);
        jedis.publish(Constants.REDIS_PUBSUB_CHANNEL, jsonString);
    }
    catch (final IOException e) {
        log(e.getMessage(), logger);
    }
}
Similar to our Kinesis consumer, the Redis client is instantiated lazily and reused where possible.

Infrastructure as Code
As already outlined, latency and response time are a very critical part of any ad-tracking solution because response time has a huge impact on conversion rate. In order to reduce latency for customers worldwide, it is common practice to roll out the infrastructure in different AWS Regions around the world to be as close to the end customer as possible. AWS CloudFormation can help you model and set up your AWS resources so that you can spend less time managing those resources and more time focusing on your applications that run in AWS.

You create a template that describes all the AWS resources that you want (for example, Amazon EC2 instances or Amazon RDS DB instances), and AWS CloudFormation takes care of provisioning and configuring those resources for you. Our reference architecture can be rolled out in different Regions using an AWS CloudFormation template, which sets up the complete infrastructure (for example, Amazon Virtual Private Cloud (Amazon VPC), Amazon Elastic Container Service (Amazon ECS) cluster, Lambda functions, DynamoDB table, Amazon ElastiCache cluster, etc.).

In this blog post we described reactive principles and an example architecture with a common use case. We leveraged the capabilities of different frameworks in combination with several AWS services in order to implement reactive principles—not only at the application-level but also at the system-level. I hope I’ve given you ideas for creating your own reactive applications and systems on AWS.

About the Author

Sascha Moellering is a Senior Solutions Architect. Sascha is primarily interested in automation, infrastructure as code, distributed computing, containers, and the JVM. He can be reached at [email protected]