Tag Archives: ruby

Announcing Ruby Support for AWS Lambda

Post Syndicated from Chris Munns original https://aws.amazon.com/blogs/compute/announcing-ruby-support-for-aws-lambda/

This post is courtesy of Xiang Shen, Senior AWS Solutions Architect, and Alex Wood, Software Development Engineer, AWS SDKs and Tools.

Ruby remains a popular programming language for AWS customers. In the summer of 2011, AWS introduced the initial release of the AWS SDK for Ruby, which has helped Ruby developers integrate with and use AWS resources. The SDK is now in its third major version, and it continues to improve and deliver AWS API updates.

Today, AWS is excited to announce Ruby as a supported language for AWS Lambda.

Now it’s possible to write Lambda functions as idiomatic Ruby code and run them on AWS. The AWS SDK for Ruby is included in the Lambda execution environment by default, which makes it easy to interact with AWS resources directly from your functions. In this post, we walk you through how it works, using examples:

  • Creating a Hello World example
  • Including dependencies
  • Migrating a Sinatra application

Creating a Hello World example

If you are new to Lambda, it’s simple to create a function using the console.

  1. Open the Lambda console.
  2. Choose Create function.
  3. Select Author from scratch.
  4. Name your function something like hello_ruby.
  5. For Runtime, choose Ruby 2.5.
  6. For Role, choose Create a new role from one or more templates.
  7. Name your role something like hello_ruby_role.
  8. Choose Create function.

Your function is created and you are directed to your function’s console page.

You can modify all aspects of your function, such as editing the function’s code, assigning one or more triggering services, or configuring additional services that your function can interact with. From the Monitoring tab, you can view various metrics about your function’s usage, as well as a link to CloudWatch Logs.

As you can see in the code editor, the Ruby code for this Hello World example is basic. It has a single handler method named lambda_handler and returns an HTTP status code of 200 and the text “Hello from Lambda!” in a JSON structure. You can learn more about the programming model for Lambda functions in the AWS Lambda Developer Guide.
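
For reference, the generated code looks close to the following (a sketch of the console’s default handler; the exact scaffold can vary by runtime version):

    # lambda_function.rb -- sketch of the console-generated Hello World handler
    require 'json'

    def lambda_handler(event:, context:)
      # Return a 200 status code and a JSON-encoded greeting
      { statusCode: 200, body: JSON.generate('Hello from Lambda!') }
    end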

Next, test this Lambda function and confirm that it is working.

  1. On your function console page, choose Test.
  2. Name the test HelloRubyTest and clear out the data in the brackets. This function takes no input.
  3. Choose Save.
  4. Choose Test.

You should now see the results of a successful invocation of your Ruby Lambda function.

Including dependencies

When developing Lambda functions with Ruby, you probably need to include other dependencies in your code. To achieve this, use the bundle tool to download the needed RubyGems to a local directory and create a deployable application package. All dependencies need to be included either in this package or in a Lambda layer.

To see how, build a Lambda function that uses the aws-record gem to save data into an Amazon DynamoDB table.

  1. Create a directory for your new Ruby application in your development environment:
    mkdir hello_ruby
    cd hello_ruby
  2. Inside of this directory, create a file Gemfile and add aws-record to it:
    source 'https://rubygems.org'
    gem 'aws-record', '~> 2'
  3. Create a hello_ruby_record.rb file with the following code. In the code, put_item is the handler method, which expects an event object with a body attribute. When invoked, it saves the value of the body attribute along with a UUID to the table.
    # hello_ruby_record.rb
    require 'aws-record'
    require 'securerandom'
    
    class DemoTable
      include Aws::Record
      set_table_name ENV['DDB_TABLE']
      string_attr :id, hash_key: true
      string_attr :body
    end
    
    def put_item(event:, context:)
      body = event["body"]
      item = DemoTable.new(id: SecureRandom.uuid, body: body)
      item.save! # raise an exception if save fails
      item.to_h
    end
  4. Next, bring in the dependencies for this application. Bundler is a tool used to manage RubyGems. From your application directory, run the following two commands. They create the Gemfile.lock file and download the gems to a local directory within your project instead of to the system’s Ruby directory. This ensures that all your dependencies are included in the function deployment package.
    bundle install
    bundle install --deployment
  5. AWS SAM is a templating tool that you can use to create and manage serverless applications. With it, you can define the structure of your Lambda application, define security policies and invocation sources, and manage or create almost any AWS resource. Use it now to define the function and its policy, create your DynamoDB table, and then deploy the application.
    Create a new file in your hello_ruby directory named template.yaml with the following contents:

    AWSTemplateFormatVersion: '2010-09-09'
    Transform: AWS::Serverless-2016-10-31
    Description: 'sample ruby application'
    
    Resources:
      HelloRubyRecordFunction:
        Type: AWS::Serverless::Function
        Properties:
          Handler: hello_ruby_record.put_item
          Runtime: ruby2.5
          Policies:
          - DynamoDBCrudPolicy:
              TableName: !Ref RubyExampleDDBTable 
          Environment:
            Variables:
              DDB_TABLE: !Ref RubyExampleDDBTable
    
      RubyExampleDDBTable:
        Type: AWS::Serverless::SimpleTable
        Properties:
          PrimaryKey:
            Name: id
            Type: String
    
    Outputs:
      HelloRubyRecordFunction:
        Description: Hello Ruby Record Lambda Function ARN
        Value:
          Fn::GetAtt:
          - HelloRubyRecordFunction
          - Arn

    In this template file, you define Serverless::Function and Serverless::SimpleTable as resources, which correspond to a Lambda function and a DynamoDB table.

    The line Policies in the function and the following line DynamoDBCrudPolicy refer to an AWS SAM policy template, which greatly simplifies granting permissions to Lambda functions. The DynamoDBCrudPolicy allows you to create, read, update, and delete DynamoDB resources and items in tables.

    In this example, you limit permissions by specifying TableName and passing a reference to the Serverless::SimpleTable resource that the template creates. In the Environment section of the template, you create a variable named DDB_TABLE and also pass it a reference to the Serverless::SimpleTable resource. Lastly, the Outputs section of the template allows you to easily find the function that was created.

    The directory structure now should look like the following:

    $ tree -L 2 -a
    .
    ├── .bundle
    │   └── config
    ├── Gemfile
    ├── Gemfile.lock
    ├── hello_ruby_record.rb
    ├── template.yaml
    └── vendor
        └── bundle
  6. Now use the template file to package and deploy your application. An AWS SAM template can be deployed using the AWS CloudFormation console, the AWS CLI, or the AWS SAM CLI. The AWS SAM CLI is a tool that simplifies serverless development across the lifecycle of your application, from the initial creation of a serverless project, through local testing and debugging, to deployment to AWS. Follow the steps for your platform to get the AWS SAM CLI installed.
  7. Create an Amazon S3 bucket to store your application code. Run the following AWS CLI command to create an S3 bucket with a custom name:
    aws s3 mb s3://<bucketname>
  8. Use the AWS SAM CLI to package your application:
    sam package --template-file template.yaml \
    --output-template-file packaged-template.yaml \
    --s3-bucket <bucketname>

    This creates a new template file named packaged-template.yaml.

  9. Use the AWS SAM CLI to deploy your application. Use any stack-name value:
    sam deploy --template-file packaged-template.yaml \
    --stack-name helloRubyRecord \
    --capabilities CAPABILITY_IAM
    
    Waiting for changeset to be created...
    Waiting for stack create/update to complete
    Successfully created/updated stack - helloRubyRecord

    This can take a few moments while all of the resources in the template are created. After you see the output “Successfully created/updated stack,” the deployment has completed.
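
    Optionally, you can confirm the deployment and retrieve the function ARN from the stack outputs with the AWS CLI:

    aws cloudformation describe-stacks \
    --stack-name helloRubyRecord \
    --query "Stacks[0].Outputs"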

  10. In the Lambda Serverless Applications console, you should see your application listed.

  11. To see the application dashboard, choose the name of your application stack. Because this application was deployed with AWS SAM or AWS CloudFormation, the dashboard allows you to manage the resources as a single group with a number of features. You can view the stack resources, its template, and recent deployments, as well as metrics, including any custom dashboards you might create.

Now test the Lambda function and confirm that it is working.

  1. Choose Overview. Under Resources, select the Lambda function created in this application.

  2. In the Lambda function console, configure a test as you did earlier. Use the following JSON:
    {"body": "hello lambda"}
  3. Execute the test and you should see a success message:
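
    The function returns the saved item as a hash, so the execution result should look something like the following (this id is an illustrative UUID; yours is generated at runtime):

    {"id": "f1b0fb44-8b9c-4c27-a2d8-7f2e9c3a1a5e", "body": "hello lambda"}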

  4. In the Lambda Serverless Applications console for this stack, select the DynamoDB table created.

  5. Choose Items.

The id and body should match the output from the Lambda function test run.

You just created a Ruby-based Lambda application using AWS SAM!

Migrating a Sinatra application

Sinatra is a popular open source web framework for Ruby that launched over a decade ago. It allows you to quickly create powerful web applications with minimal effort. Until today, you would still have needed servers to run those applications. Now you can deploy a Sinatra app to Lambda and move to a serverless world!

Thanks to Rack, a Ruby web server interface, you only need to create a simple Lambda function to bridge the gap between HTTP requests and the serverless Sinatra application. You don’t need to make any changes to the other Sinatra files. Paired with Amazon API Gateway and DynamoDB, your Sinatra application runs completely serverless!

For this post, take an existing Sinatra application and make it function in Lambda.

  1. Clone the serverless-sinatra-sample GitHub repository into your local environment.
    Under the app directory, find the Sinatra application files. The server.rb file specifies routes that return either JSON or HTML generated from ERB templates.

    ├── app
    │   ├── config.ru
    │   ├── server.rb
    │   └── views
    │       ├── feedback.erb
    │       ├── index.erb
    │       └── layout.erb

    In the root of the directory, you also find the template.yaml and lambda.rb files. The template.yaml includes four resources:

    • Serverless::Function
    • Serverless::API
    • Serverless::SimpleTable
    • Lambda::Permission

    In the lambda.rb file, you find the main handler for this function, which calls Rack to interface with the Sinatra application.
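
    A minimal sketch of that bridge is shown below (the actual handler in the repository is more complete; the config.ru path and the subset of Rack environment keys here are illustrative):

    # lambda.rb -- sketch: bridge an API Gateway proxy event to a Rack app
    require 'json'
    require 'rack'
    require 'stringio'

    # Build the Rack app from the Sinatra config.ru once per container
    # (in Rack 2.x, parse_file returns an [app, options] pair)
    $app ||= Rack::Builder.parse_file('app/config.ru').first

    def handler(event:, context:)
      # Map the API Gateway proxy event onto a minimal Rack environment
      env = {
        'REQUEST_METHOD'  => event['httpMethod'],
        'PATH_INFO'       => event['path'] || '/',
        'QUERY_STRING'    => Rack::Utils.build_nested_query(event['queryStringParameters'] || {}),
        'SERVER_NAME'     => 'localhost',
        'SERVER_PORT'     => '443',
        'rack.url_scheme' => 'https',
        'rack.input'      => StringIO.new(event['body'] || ''),
        'rack.errors'     => $stderr
      }

      status, headers, body = $app.call(env)

      # Flatten the Rack body into a single string for API Gateway
      response_body = +''
      body.each { |chunk| response_body << chunk }
      body.close if body.respond_to?(:close)

      { 'statusCode' => status, 'headers' => headers, 'body' => response_body }
    end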

  2. This application has several dependencies, so use Bundler to install them:
    bundle install
    bundle install --deployment
  3. Package this Lambda function and the related application components using the AWS SAM CLI:
    sam package --template-file template.yaml \
    --output-template-file packaged-template.yaml \
    --s3-bucket <bucketname>
  4. Next, deploy the application:
    sam deploy --template-file packaged-template.yaml \
    --stack-name LambdaSinatra \
    --capabilities CAPABILITY_IAM                                                                                                        
    
    Waiting for changeset to be created..
    Waiting for stack create/update to complete
    Successfully created/updated stack - LambdaSinatra
  5. In the Lambda Serverless Applications console, select your application.

  6. Choose Overview. Under Resources, find the ApiGateway RestApi entry and select its Logical ID, SinatraAPI.

  7. In the API Gateway console, in the left navigation pane, choose Dashboard. Copy the URL from Invoke this API and paste it in another browser tab.

  8. Append a route from the Sinatra application to the URL, as defined in server.rb. For example, try the hello-world GET route or the /feedback route.

Congratulations, you’ve just successfully deployed a Sinatra-based Ruby application inside of a Lambda function!

Conclusion

As you’ve seen in this post, getting started with Ruby on Lambda is made easy via either the AWS Management Console or the AWS SAM CLI.

You might even be able to easily port existing applications to Lambda without needing to change your code base. The new support for Ruby allows you to benefit from the greatly reduced operational overhead, scalability, availability, and pay-per-use pricing of Lambda.

If you are excited about this feature too, you can find more information about writing Lambda functions in Ruby in the AWS Lambda Developer Guide.

Happy coding!

XXEinjector – Automatic XXE Injection Tool For Exploitation

Post Syndicated from Darknet original https://www.darknet.org.uk/2018/05/xxeinjector-automatic-xxe-injection-tool-for-exploitation/?utm_source=rss&utm_medium=social&utm_campaign=darknetfeed

XXEinjector – Automatic XXE Injection Tool For Exploitation

XXEinjector is a Ruby-based XXE injection tool that automates retrieving files using direct and out-of-band methods. Directory listing only works in Java applications, and the brute-forcing method needs to be used for other applications.

Usage of XXEinjector XXE Injection Tool

XXEinjector actually has a LOT of options, so do have a look through them to see how you can best leverage this type of attack. Obviously, Ruby is a prerequisite to run the tool.

Read the rest of XXEinjector – Automatic XXE Injection Tool For Exploitation now! Only available at Darknet.

Security updates for Monday

Post Syndicated from ris original https://lwn.net/Articles/752544/rss

Security updates have been issued by Debian (gunicorn, libreoffice, libsdl2-image, ruby1.8, and ruby1.9.1), Fedora (java-1.8.0-openjdk, jgraphx, memcached, nghttp2, perl, perl-Module-CoreList, and roundcubemail), Gentoo (clamav, librelp, mbedtls, quagga, tenshi, and unadf), Mageia (freeplane, libcdio, libtiff, thunderbird, and zsh), openSUSE (cfitsio, chromium, mbedtls, and nextcloud), and Red Hat (chromium-browser, kernel, and rh-perl524-perl).

Security updates for Wednesday

Post Syndicated from ris original https://lwn.net/Articles/752183/rss

Security updates have been issued by Debian (freeplane and jruby), Fedora (kernel and python-bleach), Gentoo (evince, gdk-pixbuf, and ncurses), openSUSE (kernel), Oracle (gcc, glibc, kernel, krb5, ntp, openssh, openssl, policycoreutils, qemu-kvm, and xdg-user-dirs), Red Hat (corosync, glusterfs, kernel, and kernel-rt), SUSE (openssl), and Ubuntu (openssl and perl).

Security updates for Monday

Post Syndicated from ris original https://lwn.net/Articles/751947/rss

Security updates have been issued by Arch Linux (lib32-openssl and zsh), Debian (patch, perl, ruby-loofah, squirrelmail, tiff, and tiff3), Fedora (gnupg2), Gentoo (go), Mageia (firefox, flash-player-plugin, nxagent, puppet, python-paramiko, samba, and thunderbird), Red Hat (flash-plugin), Scientific Linux (python-paramiko), and Ubuntu (patch, perl, and ruby).

Security updates for Friday

Post Syndicated from jake original https://lwn.net/Articles/751146/rss

Security updates have been issued by Debian (sharutils), Fedora (firefox, httpd, and mod_http2), openSUSE (docker-distribution, graphite2, libidn, and postgresql94), Oracle (libvorbis and thunderbird), Red Hat (libvorbis, python-paramiko, and thunderbird), Scientific Linux (libvorbis and thunderbird), SUSE (apache2), and Ubuntu (firefox, linux-lts-xenial, linux-aws, and ruby1.9.1, ruby2.0, ruby2.3).

Security updates for Monday

Post Syndicated from ris original https://lwn.net/Articles/750759/rss

Security updates have been issued by Debian (dovecot, irssi, libevt, libvncserver, mercurial, mosquitto, openssl, python-django, remctl, rubygems, and zsh), Fedora (acpica-tools, dovecot, firefox, ImageMagick, mariadb, mosquitto, openssl, python-paramiko, rubygem-rmagick, and thunderbird), Mageia (flash-player-plugin and squirrelmail), Slackware (php), and Ubuntu (dovecot).

Security updates for Friday

Post Syndicated from jake original https://lwn.net/Articles/750573/rss

Security updates have been issued by Debian (memcached, openssl, openssl1.0, php5, thunderbird, and xerces-c), Fedora (python-notebook, slf4j, and unboundid-ldapsdk), Mageia (kernel, libvirt, mailman, and net-snmp), openSUSE (aubio, cacti, cacti-spine, firefox, krb5, LibVNCServer, links, memcached, and tomcat), Slackware (ruby), SUSE (kernel and python-paramiko), and Ubuntu (intel-microcode).

Security updates for Monday

Post Syndicated from ris original https://lwn.net/Articles/750150/rss

Security updates have been issued by Arch Linux (bchunk, thunderbird, and xerces-c), Debian (freeplane, icu, libvirt, and net-snmp), Fedora (monitorix, php-simplesamlphp-saml2, php-simplesamlphp-saml2_1, php-simplesamlphp-saml2_3, puppet, and qt5-qtwebengine), openSUSE (curl, libmodplug, libvorbis, mailman, nginx, opera, python-paramiko, and samba, talloc, tevent), Red Hat (python-paramiko, rh-maven35-slf4j, rh-mysql56-mysql, rh-mysql57-mysql, rh-ruby22-ruby, rh-ruby23-ruby, and rh-ruby24-ruby), Slackware (thunderbird), SUSE (clamav, kernel, memcached, and php53), and Ubuntu (samba and tiff).

Security updates for Monday

Post Syndicated from ris original https://lwn.net/Articles/749087/rss

Security updates have been issued by CentOS (389-ds-base, dhcp, kernel, libreoffice, php, quagga, and ruby), Debian (ming, util-linux, vips, and zsh), Fedora (community-mysql, php, ruby, and transmission), Gentoo (newsbeuter), Mageia (libraw and mbedtls), openSUSE (php7 and python-Django), Red Hat (MRG Realtime 2.5), and SUSE (kernel).

Security updates for Wednesday

Post Syndicated from ris original https://lwn.net/Articles/747711/rss

Security updates have been issued by Arch Linux (libmspack), Debian (zziplib), Fedora (ca-certificates, firefox, freetype, golang, krb5, libreoffice, monit, patch, plasma-workspace, ruby, sox, tomcat, and zziplib), openSUSE (dovecot22, glibc, GraphicsMagick, libXcursor, mbedtls, p7zip, SDL_image, SDL2_image, sox, and transfig), Red Hat (chromium-browser), and Ubuntu (cups, libvirt, and qemu).

Security updates for Monday

Post Syndicated from ris original https://lwn.net/Articles/747120/rss

Security updates have been issued by Arch Linux (go, go-pie, and plasma-workspace), Debian (audacity, exim4, libreoffice, librsvg, ruby-omniauth, tomcat-native, and uwsgi), Fedora (tomcat-native), Gentoo (virtualbox), Mageia (kernel), openSUSE (freetype2, ghostscript, jhead, and libxml2), and SUSE (freetype2 and kernel).

Reactive Microservices Architecture on AWS

Post Syndicated from Sascha Moellering original https://aws.amazon.com/blogs/architecture/reactive-microservices-architecture-on-aws/

Microservice-application requirements have changed dramatically in recent years. These days, applications operate with petabytes of data, need almost 100% uptime, and end users expect sub-second response times. Typical N-tier applications can’t deliver on these requirements.

The Reactive Manifesto, published in 2014, describes the essential characteristics of reactive systems: responsiveness, resiliency, elasticity, and being message driven.

Being message driven is perhaps the most important characteristic of reactive systems. Asynchronous messaging helps in the design of loosely coupled systems, which is a key factor for scalability. In order to build a highly decoupled system, it is important to isolate services from each other. As already described, isolation is an important aspect of the microservices pattern. Indeed, reactive systems and microservices are a natural fit.

Implemented Use Case
This reference architecture illustrates a typical ad-tracking implementation.

Many ad-tracking companies collect massive amounts of data in near real time. In many cases, these workloads are very spiky and depend heavily on the success of the ad-tech companies’ customers. Typically, an ad-tracking-data use case can be separated into a real-time part and a non-real-time part. In the real-time part, it is important to collect data as fast as possible and answer several questions, including: “Is this a valid combination of parameters?” “Does this program exist?” “Is this program still valid?”

Because response time has a huge impact on conversion rate in advertising, it is important for advertisers to respond as fast as possible. This information should be kept in memory to reduce communication overhead with the caching infrastructure. The tracking application itself should be as lightweight and scalable as possible. For example, the application shouldn’t have any shared mutable state and it should use reactive paradigms. In our implementation, one main application is responsible for this real-time part. It collects and validates data, responds to the client as fast as possible, and asynchronously sends events to backend systems.

The non-real-time part of the application consumes the generated events and persists them in a NoSQL database. In a typical tracking implementation, clicks, cookie information, and transactions are matched asynchronously and persisted in a data store. The matching part is not implemented in this reference architecture. Many ad-tech architectures use frameworks like Hadoop for the matching implementation.

The system can be logically divided into the data collection part and the core data update part. The data collection part is responsible for collecting, validating, and persisting the data. In the core data update part, the data that is used for validation gets updated and all subscribers are notified of new data.

Components and Services

Main Application
The main application is implemented in Java 8 and uses Vert.x as the main framework. Vert.x is an event-driven, reactive, non-blocking, polyglot framework for implementing microservices. It runs on the Java virtual machine (JVM) using the low-level IO library Netty. You can write applications in Java, JavaScript, Groovy, Ruby, Kotlin, Scala, and Ceylon. The framework offers a simple and scalable actor-like concurrency model. Vert.x calls handlers by using a thread known as an event loop. To use this model, you write code known as “verticles.” Verticles share certain similarities with actors in the actor model. To use them, you implement the verticle interface. Verticles communicate with each other by sending messages on a single event bus. Those messages are sent to a specific address on the event bus, and verticles can register at that address by using handlers.
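
A minimal verticle that registers such a handler might look like the following sketch (the address name is illustrative):

import io.vertx.core.AbstractVerticle;

public class GreetingVerticle extends AbstractVerticle {

    @Override
    public void start() {
        // Register a handler at a specific event-bus address; the reply
        // is sent asynchronously, so the event loop is never blocked
        vertx.eventBus().consumer("greetings.address", message ->
            message.reply("Hello, " + message.body()));
    }
}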

With only a few exceptions, none of the APIs in Vert.x block the calling thread. Similar to Node.js, Vert.x uses the reactor pattern. However, in contrast to Node.js, Vert.x uses several event loops. Unfortunately, not all APIs in the Java ecosystem are written asynchronously; the JDBC API is one example. Vert.x offers a way to run these blocking APIs without blocking the event loop, through special verticles called worker verticles. Worker verticles are not executed on the standard Vert.x event loops but on a dedicated thread from a worker pool, so they don’t block the event loop.
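
Deploying a verticle on the worker pool is a small configuration change, roughly as in this sketch (JdbcVerticle stands in for any verticle that wraps blocking calls):

import io.vertx.core.DeploymentOptions;

// Run the verticle on a dedicated worker thread instead of an event loop
DeploymentOptions options = new DeploymentOptions().setWorker(true);
vertx.deployVerticle(new JdbcVerticle(), options);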

Our application consists of five different verticles covering different aspects of the business logic. The main entry point is the HttpVerticle, which exposes an HTTP endpoint to consume HTTP requests and to support proper health checking. Data from HTTP requests, such as parameters and user-agent information, is collected and transformed into a JSON message. In order to validate the input data (to ensure that the program exists and is still valid), the message is sent to the CacheVerticle.

This verticle implements an LRU cache with a TTL of 10 minutes and a capacity of 100,000 entries. Instead of adding additional functionality to a standard JDK map implementation, we use Google Guava, which has all the features we need. If the data is not in the L1 cache, the message is sent to the RedisVerticle. This verticle is responsible for data residing in Amazon ElastiCache and uses the Vert.x-redis-client to read data from Redis. In our example, Redis is the central data store. However, in a typical production implementation, Redis would just be the L2 cache, with a central data store such as Amazon DynamoDB. One of the most important paradigms of a reactive system is to switch from a pull-based to a push-based model. To achieve this and reduce network overhead, we use Redis pub/sub to push core data changes to our main application.
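
Building such a cache with Guava takes only a few lines; here is a sketch with the TTL and capacity mentioned above:

import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;
import java.util.concurrent.TimeUnit;

// L1 cache: at most 100,000 entries, each expiring 10 minutes after write
Cache<String, JsonObject> l1Cache = CacheBuilder.newBuilder()
    .maximumSize(100_000)
    .expireAfterWrite(10, TimeUnit.MINUTES)
    .build();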

Vert.x also supports direct Redis pub/sub integration. The following code shows our subscriber implementation:

vertx.eventBus().<JsonObject>consumer(REDIS_PUBSUB_CHANNEL_VERTX, received -> {
    JsonObject value = received.body().getJsonObject("value");
    String message = value.getString("message");
    JsonObject jsonObject = new JsonObject(message);
    eb.send(CACHE_REDIS_EVENTBUS_ADDRESS, jsonObject);
});

redis.subscribe(Constants.REDIS_PUBSUB_CHANNEL, res -> {
    if (res.succeeded()) {
        LOGGER.info("Subscribed to " + Constants.REDIS_PUBSUB_CHANNEL);
    } else {
        LOGGER.info(res.cause());
    }
});

The verticle subscribes to the appropriate Redis pub/sub channel. If a message is sent over this channel, the payload is extracted and forwarded to the cache verticle, which stores the data in the L1 cache. After storing and enriching data, a response is sent back to the HttpVerticle, which responds to the HTTP request that initially hit this verticle. In addition, the message is converted to a ByteBuffer, wrapped in protocol buffers, and sent to an Amazon Kinesis Data Stream.

The following example shows a stripped-down version of the KinesisVerticle:

public class KinesisVerticle extends AbstractVerticle {

    private static final Logger LOGGER = LoggerFactory.getLogger(KinesisVerticle.class);

    private AmazonKinesisAsync kinesisAsyncClient;
    private String eventStream = "EventStream";

    @Override
    public void start() throws Exception {
        EventBus eb = vertx.eventBus();

        kinesisAsyncClient = createClient();
        eventStream = System.getenv(STREAM_NAME) == null ? "EventStream" : System.getenv(STREAM_NAME);

        eb.consumer(Constants.KINESIS_EVENTBUS_ADDRESS, message -> {
            try {
                TrackingMessage trackingMessage = Json.decodeValue((String) message.body(), TrackingMessage.class);
                String partitionKey = trackingMessage.getMessageId();

                byte[] byteMessage = createMessage(trackingMessage);
                ByteBuffer buf = ByteBuffer.wrap(byteMessage);

                sendMessageToKinesis(buf, partitionKey);
                message.reply("OK");
            }
            catch (KinesisException exc) {
                LOGGER.error(exc);
            }
        });
    }
}

Kinesis Consumer
This AWS Lambda function consumes data from an Amazon Kinesis Data Stream and persists the data in an Amazon DynamoDB table. In order to improve testability, the invocation code is separated from the business logic. The invocation code is implemented in the class KinesisConsumerHandler and iterates over the Kinesis events pulled from the Kinesis stream by AWS Lambda. Each Kinesis event is unwrapped, transformed from a ByteBuffer into protocol buffers, and converted into a Java object. Those Java objects are passed to the business logic, which persists the data in a DynamoDB table. To reduce the duration of successive Lambda calls, the DynamoDB client is instantiated lazily and reused if possible.
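
Lazy instantiation can be as simple as the following sketch (the field and helper are illustrative; static state survives between invocations of a warm Lambda container):

import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;

public class KinesisConsumerHandler {

    // Created once per container, then reused by subsequent invocations
    private static AmazonDynamoDB dynamoDbClient;

    private static AmazonDynamoDB client() {
        if (dynamoDbClient == null) {
            dynamoDbClient = AmazonDynamoDBClientBuilder.defaultClient();
        }
        return dynamoDbClient;
    }
}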

Redis Updater
From time to time, it is necessary to update core data in Redis. A very efficient implementation for this requirement uses AWS Lambda and Amazon Kinesis. New core data is sent over the Kinesis stream using JSON as the data format and consumed by a Lambda function. This function iterates over the Kinesis events pulled from the Kinesis stream by AWS Lambda. Each Kinesis event is unwrapped, transformed from a ByteBuffer into a String, and converted into a Java object. The Java object is passed to the business logic and stored in Redis. In addition, the new core data is also sent to the main application using Redis pub/sub, in order to reduce network overhead and convert from a pull-based to a push-based model.

The following example shows the source code to store data in Redis and notify all subscribers:

public void updateRedisData(final TrackingMessage trackingMessage, final Jedis jedis, final LambdaLogger logger) {
    try {
        ObjectMapper mapper = new ObjectMapper();
        String jsonString = mapper.writeValueAsString(trackingMessage);
        Map<String, String> map = marshal(jsonString);
        String statusCode = jedis.hmset(trackingMessage.getProgramId(), map);
    }
    catch (Exception exc) {
        if (null == logger)
            exc.printStackTrace();
        else
            logger.log(exc.getMessage());
    }
}

public void notifySubscribers(final TrackingMessage trackingMessage, final Jedis jedis, final LambdaLogger logger) {
    try {
        ObjectMapper mapper = new ObjectMapper();
        String jsonString = mapper.writeValueAsString(trackingMessage);
        jedis.publish(Constants.REDIS_PUBSUB_CHANNEL, jsonString);
    }
    catch (final IOException e) {
        log(e.getMessage(), logger);
    }
}

As in our Kinesis consumer, the Redis client is instantiated lazily.

Infrastructure as Code
As already outlined, latency and response time are a critical part of any ad-tracking solution because response time has a huge impact on conversion rate. In order to reduce latency for customers worldwide, it is common practice to roll out the infrastructure in different AWS Regions, to be as close to the end customer as possible. AWS CloudFormation can help you model and set up your AWS resources so that you can spend less time managing those resources and more time focusing on your applications that run in AWS.

You create a template that describes all the AWS resources that you want (for example, Amazon EC2 instances or Amazon RDS DB instances), and AWS CloudFormation takes care of provisioning and configuring those resources for you. Our reference architecture can be rolled out in different Regions using an AWS CloudFormation template, which sets up the complete infrastructure (for example, Amazon Virtual Private Cloud (Amazon VPC), Amazon Elastic Container Service (Amazon ECS) cluster, Lambda functions, DynamoDB table, Amazon ElastiCache cluster, etc.).
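
Rolling the same template out to several Regions then comes down to one deploy call per Region; here is a sketch with an illustrative template filename and stack name:

aws cloudformation deploy --template-file reference-architecture.yaml \
    --stack-name ad-tracking --capabilities CAPABILITY_IAM --region us-east-1

aws cloudformation deploy --template-file reference-architecture.yaml \
    --stack-name ad-tracking --capabilities CAPABILITY_IAM --region eu-west-1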

Conclusion
In this blog post, we described reactive principles and an example architecture with a common use case. We leveraged the capabilities of different frameworks in combination with several AWS services in order to implement reactive principles, not only at the application level but also at the system level. I hope I’ve given you ideas for creating your own reactive applications and systems on AWS.

About the Author

Sascha Moellering is a Senior Solution Architect. Sascha is primarily interested in automation, infrastructure as code, distributed computing, containers, and the JVM. He can be reached at [email protected]