All posts by Chris Munns

The attendee’s guide to the AWS re:Invent 2023 Compute track

Post Syndicated from Chris Munns original https://aws.amazon.com/blogs/compute/the-attendees-guide-to-the-aws-reinvent-2023-compute-track/

This post is written by Art Baudo, Principal Product Marketing Manager, AWS EC2, and Pranaya Anshu, Product Marketing Manager, AWS EC2

We are just a few weeks away from AWS re:Invent 2023, AWS’s biggest cloud computing event of the year. This event will be a great opportunity for you to meet other cloud enthusiasts, find productive solutions that can transform your company, and learn new skills through 2000+ learning sessions.

Even if you are not able to join in person, you can catch up with many of the sessions on demand and even watch the keynote and innovation sessions live.

If you’re able to join us, just a reminder that we offer several types of sessions which can help maximize your learning across a variety of AWS topics. Breakout sessions are lecture-style 60-minute informative sessions presented by AWS experts, customers, or partners. These sessions are recorded and uploaded to the AWS Events YouTube channel a few days after the event.

re:Invent attendees can also choose to attend chalk talks, builder sessions, workshops, or code talk sessions. Each of these is a live, non-recorded, interactive session.

  • Chalk-talk sessions: Attendees will interact with presenters, asking questions and using a whiteboard during the session.
  • Builder Sessions: Attendees participate in a one-hour session and build something.
  • Workshop sessions: Attendees join a two-hour interactive session where they work in a small team to solve a real problem using AWS services.
  • Code talk sessions: Attendees participate in engaging code-focused sessions where an expert leads a live coding session.

To start planning your re:Invent week, check out some of the Compute track sessions below. If you find a session you’re interested in, be sure to reserve your seat for it through the AWS attendee portal.

Explore the latest compute innovations

This year, AWS compute services have launched numerous innovations: from over 100 new Amazon EC2 instances, to the general availability of Amazon EC2 Trn1n instances powered by AWS Trainium and Amazon EC2 Inf2 instances powered by AWS Inferentia2, to a new way to reserve GPU capacity with Amazon EC2 Capacity Blocks for ML. There are a lot of exciting launches to take in.

Explore some of these latest and greatest innovations in the following sessions:

  • CMP102 | What’s new with Amazon EC2
    Provides an overview of the latest Amazon EC2 innovations. Hear about recent Amazon EC2 launches, learn about the differences between Amazon EC2 instance families, and discover how you can use a mix of instances to deliver on your cost, performance, and sustainability goals.
  • CMP217 | Select and launch the right instance for your workload and budget
    Learn how to select the right instance for your workload and budget. This session will focus on innovations including Amazon EC2 Flex instances and the new generation of Intel, AMD, and AWS Graviton instances.
  • CMP219-INT | Compute innovation for any application, anywhere
    Provides you with an understanding of the breadth and depth of AWS compute offerings and innovation. Discover how you can run any application, including enterprise applications, HPC, generative artificial intelligence (AI), containers, databases, and games, on AWS.

Customer experiences and applications with machine learning

Machine learning (ML) has been evolving for decades and has reached an inflection point, with generative AI applications capturing widespread attention and imagination. More customers, across a diverse set of industries, choose AWS than any other major cloud provider to build, train, and deploy their ML applications. Learn about the generative AI infrastructure at Amazon or get hands-on experience building ML applications through our ML focused sessions, such as the following:

Discover what powers AWS compute

AWS has invested years designing custom silicon optimized for the cloud to deliver the best price performance for a wide range of applications and workloads using AWS services. Learn more about the AWS Nitro System, processors at AWS, and ML chips.

Optimize your compute costs

At AWS, we focus on delivering the best possible cost structure for our customers. Frugality is one of our founding leadership principles. Cost effective design continues to shape everything we do, from how we develop products to how we run our operations. Come learn of new ways to optimize your compute costs through AWS services, tools, and optimization strategies in the following sessions:

Check out workload-specific sessions

Amazon EC2 offers the broadest and deepest compute platform to help you best match the needs of your workload. More SAP, high performance computing (HPC), ML, and Windows workloads run on AWS than any other cloud. Join sessions focused on your specific workload to learn about how you can leverage AWS solutions to accelerate your innovations.

Hear from AWS customers

AWS serves millions of customers of all sizes across thousands of use cases, every industry, and around the world. Hear customers dive into how AWS compute solutions have helped them transform their businesses.

Ready to unlock new possibilities?

The AWS Compute team looks forward to seeing you in Las Vegas. Come meet us at the Compute Booth in the Expo. And if you’re looking for more session recommendations, check out additional re:Invent attendee guides curated by experts.

It’s About Time: Microsecond-Accurate Clocks on Amazon EC2 Instances

Post Syndicated from Chris Munns original https://aws.amazon.com/blogs/compute/its-about-time-microsecond-accurate-clocks-on-amazon-ec2-instances/

This post is written by Josh Levinson, AWS Principal Product Manager and Julien Ridoux, AWS Principal Software Engineer

Today, we announced that we improved the Amazon Time Sync Service to microsecond-level clock accuracy on supported Amazon EC2 instances. This new capability adds a local reference clock to your EC2 instance and is designed to deliver clock accuracy in the low double-digit microsecond range within your instance’s guest OS software. This post shows you how to connect to the improved clocks on your EC2 instances. This post also demonstrates how you can measure your clock accuracy and easily generate and compare timestamps from your EC2 instances with ClockBound, an open source daemon and library.

In general, it’s hard to achieve high-fidelity clock synchronization due to hardware limitations and network variability. While customers have depended on the Amazon Time Sync Service to provide one millisecond clock accuracy, workloads that need microsecond-range accuracy, such as financial trading and broadcasting, required customers to maintain their own time infrastructure, which is a significant operational burden and an added expense. Other clock-sensitive applications that run on the cloud, including distributed databases and storage, have to incorporate message exchange delays with wait periods, data locks, or transaction journaling to maintain consistency at scale.

With global and reliable microsecond-range clock accuracy, you can now migrate and modernize your most time-sensitive applications in the cloud and retire your burdensome on-premises time infrastructure. You can also simplify your applications and increase their throughput by leveraging the high-accuracy timestamps to determine the ordering of events and transactions on workloads across instances, Availability Zones, and Regions. Additionally, you can audit the improved Amazon Time Sync Service to measure and monitor the expected microsecond-range accuracy.

New improvements to Amazon Time Sync Service

The new local clock source can be accessed over the existing Amazon Time Sync Service’s Network Time Protocol (NTP) IPv4 and IPv6 endpoints, or by configuring a new Precision Time Protocol (PTP) reference clock device, to get the best accuracy possible. It’s important to note that both NTP and the new PTP Hardware Clock (PHC) device share the same highly accurate source of time. The new PHC device is part of the AWS Nitro System, so it is directly accessible on supported bare metal and virtualized Amazon EC2 instances without using any customer resources.

A quick note about Leap Seconds

Leap seconds, introduced in 1972, are occasional one-second adjustments to UTC that account for irregularities in Earth’s rotation and accommodate the difference between International Atomic Time (TAI) and solar time (UT1). To manage leap seconds on behalf of customers, we designed leap second smearing within the Amazon Time Sync Service (details on smearing time in “Look Before You Leap”).

Leap seconds are going away, and we are in full support of the decision made at the 27th General Conference on Weights and Measures to abandon leap seconds by or before 2035.

To support this transition, we still plan on smearing time when accessing the Amazon Time Sync Service over the local NTP connection or our Public NTP pools (time.aws.com). The new PHC device, however, will not provide a smeared time option. In the event of a leap second, the PHC would apply the leap second following UTC standards. Leap-smeared and leap second time sources are the same in most cases. But, because they differ during a leap second event, we do not recommend mixing smeared and non-smeared time sources in your time client configuration during a leap second event.

Connect using NTP (automatic for most customers)

You can connect to the new, microsecond-accurate clocks over NTP the same way you use the Amazon Time Sync Service today at the 169.254.169.123 IPv4 address or the fd00:ec2::123 IPv6 address. This is already the default configuration on all Amazon AMIs and many partner AMIs, including RHEL, Ubuntu, and SUSE. You can verify this connection in your NTP daemon. The below example, using the chrony daemon, verifies that chrony is using the 169.254.169.123 IPv4 address of the Amazon Time Sync Service to synchronize the time:

[ec2-user@ ~]$ chronyc sources
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^- pacific.latt.net              3  10   377    69  +5630us[+5632us] +/-   90ms
^- edge-lax.txryan.com           2   8   377   224   -691us[ -689us] +/-   33ms
^* 169.254.169.123               1   4   377     2  -4487ns[-5914ns] +/-   85us
^- blotch.image1tech.net         2   9   377   327  -1710us[-1720us] +/-   64ms
^- 44.190.40.123                 2   9   377   161  +3057us[+3060us] +/-   84ms

The 169.254.169.123 IPv4 address of the Amazon Time Sync Service is designated with a *, showing it is the source of synchronization on this instance. See the EC2 User Guide for more details on configuring the Amazon Time Sync Service if it is not already configured by default.

Connect using the PTP Hardware Clock

First, you need to install the latest Elastic Network Adapter (ENA) driver. This driver will allow you to connect directly to the PHC. Connect to your instance and install the Linux kernel driver for Elastic Network Adapter (ENA) version 2.10.0 or later. For the installation instructions, see Linux kernel driver for Elastic Network Adapter (ENA) family on GitHub. To enable PTP support in the driver follow the instructions in the section “PTP Hardware Clock (PHC)“.

Once the driver is installed, you need to configure your NTP daemon to connect to the PHC. Below is an example of how to change the configuration in chrony by adding the PHC to your chrony configuration file and then restarting chrony for the change to take effect:

[ec2-user ~]$ sudo sh -c 'echo "refclock PHC /dev/ptp0 poll 0 delay 0.000010 prefer" >> /etc/chrony.conf'
[ec2-user ~]$ sudo systemctl restart chronyd

This example uses a +/-5 microsecond range in receiving the reference signal from the PHC. These 10 microseconds are needed to account for operating system latency.

After changing your configuration, you can validate your daemon is correctly syncing to the PHC. Below is an example of output from the chronyc command. An asterisk will appear next to the PHC0 source indicating that you are now syncing to the PHC:

[ec2-user@ ~]$ chronyc sources
MS Name/IP address         Stratum Poll Reach LastRx Last sample
=============================================================================
#* PHC0                           0   0   377     1   +18ns[  +20ns] +/- 5032ns

The PHC0 device of the Amazon Time Sync Service is designated with a *, showing it is the source of synchronization on this instance.

Your chrony tracking information will also show that you are syncing to the PHC:

[ec2-user@ ~]$ chronyc tracking
Reference ID    : 50484330 (PHC0)
Stratum         : 1
Ref time (UTC)  : Mon Nov 13 18:43:09 2023
System time     : 0.000000004 seconds fast of NTP time
Last offset     : -0.000000010 seconds
RMS offset      : 0.000000012 seconds
Frequency       : 7.094 ppm fast
Residual freq   : -0.000 ppm
Skew            : 0.004 ppm
Root delay      : 0.000010000 seconds
Root dispersion : 0.000001912 seconds
Update interval : 1.0 seconds
Leap status     : Normal

See the EC2 User Guide for more details on configuring the PHC.

Measuring your clock accuracy

Clock accuracy is a measure of clock error, typically defined as the offset to UTC. This clock error is the difference between the observed time on the computer and the reference time (also known as true time). If your instance is configured to use the Amazon Time Sync Service where the microsecond-accurate enhancement is available, you will typically see a clock error bound of under 100us using the NTP connection. When configured and synchronized correctly with the new PHC connection, you will typically see a clock error bound of under 40us.

We previously published a blog on measuring and monitoring clock accuracy over NTP, which still applies to the improved NTP connection.

If you are connected to the PHC, your time daemon, such as chronyd, will underestimate the clock error bound. This is because a PTP hardware clock device in Linux does not inherently pass any “error bound” information to chrony the way NTP does. As a result, your clock synchronization daemon assumes the clock itself is accurate to UTC and thus has an “error bound” of 0. To get around this issue, the Nitro System calculates the error bound of the PTP Hardware Clock itself and exposes it to your EC2 instance over the ENA driver sysfs filesystem. You can read this value, in nanoseconds, with the command cat /sys/devices/pci0000:00/0000:00:05.0/phc_error_bound. To get your clock error bound at a given instant, add the clock error bound reported by chrony or ClockBound at the time chronyd polls the PTP Hardware Clock to this phc_error_bound value.

Below is how you would calculate the clock error incorporating the PHC clock error to get your true clock error bound:

Clock Error Bound = System Time + (0.5 * Root Delay) + Root Dispersion + PHC Error Bound

For the values in the example:

PHC Error Bound = cat /sys/devices/pci0000:00/0000:00:05.0/phc_error_bound

The System Time, Root Delay, and Root Dispersion are values taken from the chrony tracking information.
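
To put the formula into practice, here is a minimal, illustrative sketch (not from the original post) that parses the chronyc tracking output and adds the PHC error bound read from sysfs. The sysfs path matches the example above but may differ on your instance, so treat it as an assumption.

import subprocess

# Path exposed by the ENA driver, as in the example above; the PCI address may differ.
PHC_ERROR_BOUND_PATH = "/sys/devices/pci0000:00/0000:00:05.0/phc_error_bound"

def chrony_tracking_values():
    """Return (system time, root delay, root dispersion) in seconds from chronyc tracking."""
    output = subprocess.check_output(["chronyc", "tracking"], text=True)
    fields = {}
    for line in output.splitlines():
        key, _, rest = line.partition(":")
        fields[key.strip()] = rest.strip()
    system_time = abs(float(fields["System time"].split()[0]))
    root_delay = float(fields["Root delay"].split()[0])
    root_dispersion = float(fields["Root dispersion"].split()[0])
    return system_time, root_delay, root_dispersion

def clock_error_bound_seconds():
    system_time, root_delay, root_dispersion = chrony_tracking_values()
    with open(PHC_ERROR_BOUND_PATH) as f:
        phc_error_bound = int(f.read().strip()) / 1e9  # nanoseconds to seconds
    # Clock Error Bound = System Time + (0.5 * Root Delay) + Root Dispersion + PHC Error Bound
    return system_time + 0.5 * root_delay + root_dispersion + phc_error_bound

print(f"Clock error bound: {clock_error_bound_seconds() * 1e6:.1f} microseconds")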

ClockBound

However accurate, a clock is never perfect. Instead of providing a single estimate of the clock error, ClockBound automatically performs the calculation above to provide a reliable confidence interval within which the reference time (true time) exists. The open source ClockBound daemon provides a convenient way to retrieve this confidence interval, and work is continuing to make it easier to integrate into high performance workloads.

Conclusion

The Amazon Time Sync Service’s new microsecond-accurate clocks can be leveraged to migrate and modernize your most clock-sensitive applications in the cloud. In this post, we showed you how to connect to the improved clocks on supported Amazon EC2 instances, how to measure your clock accuracy, and how to easily generate and compare timestamps from your Amazon EC2 instances with ClockBound. Launch a supported instance and get started today to build using this new capability.

To learn more about the Amazon Time Sync Service, see the EC2 User Guide for Linux and Windows.

If you have questions about this post, start a new thread on the AWS Compute re:Post or contact AWS Support.

Hear about the Amazon Time Sync Service at re:Invent

We will speak in more detail about the Amazon Time Sync Service during re:Invent 2023. Look for Session ID CMP220 in the AWS re:Invent session catalog to register.

An attendee’s guide to hybrid cloud and edge computing at AWS re:Invent 2023

Post Syndicated from Chris Munns original https://aws.amazon.com/blogs/compute/an-attendees-guide-to-hybrid-cloud-and-edge-computing-at-aws-reinvent-2023/

This post is written by Savitha Swaminathan, AWS Sr. Product Marketing Manager

AWS re:Invent 2023 starts on Nov 27th in Las Vegas, Nevada. The event brings technology business leaders, AWS partners, developers, and IT practitioners together to learn about the latest innovations, meet AWS experts, and network among their peer attendees.

This year, AWS re:Invent will once again have a dedicated track for hybrid cloud and edge computing. The sessions in this track will feature the latest innovations from AWS to help you build and run applications securely in the cloud, on premises, and at the edge – wherever you need to. You will hear how AWS customers are using our cloud services to innovate on premises and at the edge. You will also be able to immerse yourself in hands-on experiences with AWS hybrid and edge services through innovative demos and workshops.

At re:Invent there are several session types, each designed to provide you with a way to learn however fits you best:

  • Innovation Talks provide a comprehensive overview of how AWS is working with customers to solve their most important problems.
  • Breakout sessions are lecture-style presentations focused on a topic or area of interest and are well liked by business leaders and IT practitioners alike.
  • Chalk talks dive deep into customer reference architectures and invite audience members to actively participate in the whiteboarding exercise.
  • Workshops and builder sessions, popular with developers and architects, provide the most hands-on experience, where attendees can build real-time solutions with AWS experts.

The hybrid cloud and edge track will include one leadership overview session and 15 other sessions (4 breakouts, 6 chalk talks, and 5 workshops). The sessions are organized around four key themes: low latency, data residency, migration and modernization, and AWS at the far edge.

Hybrid Cloud & Edge Overview

HYB201 | AWS wherever you need it

Join Jan Hofmeyr, Vice President, Amazon EC2, in this leadership session where he presents a comprehensive overview of AWS hybrid cloud and edge computing services, and how we are helping customers innovate on AWS wherever they need it – from Regions, to metro centers, 5G networks, on premises, and at the far edge. Jun Shi, CEO and President of Accton, will also join Jan on stage to discuss how Accton enables smart manufacturing across its global manufacturing sites using AWS hybrid, IoT, and machine learning (ML) services.

Low latency

Many customer workloads require single-digit millisecond latencies for optimal performance. Customers in every industry are looking for ways to run these latency sensitive portions of their applications in the cloud while simplifying operations and optimizing for costs. You will hear about customer use cases and how AWS edge infrastructure is helping companies like Riot Games meet their application performance goals and innovate at the edge.

Breakout session

HYB305 | Delivering low-latency applications at the edge

Chalk talk

HYB308 | Architecting for low latency and performance at the edge with AWS

Workshops

HYB302 | Architecting and deploying applications at the edge

HYB303 | Deploying a low-latency computer vision application at the edge

Data residency

As cloud has become mainstream, governments and standards bodies continue to develop security, data protection, and privacy regulations. Having control over digital assets and meeting data residency regulations is becoming increasingly important for public sector customers and organizations operating in regulated industries. The data residency sessions deep dive into the challenges, solutions, and innovations that customers are addressing with AWS to meet their data residency requirements.

Breakout session

HYB309 | Navigating data residency and protecting sensitive data

Chalk talk

HYB307 | Architecting for data residency and data protection at the edge

Workshops

HYB301 | Addressing data residency requirements with AWS edge services

Migration and modernization

Migration and modernization in industries that have traditionally operated with on-premises infrastructure or self-managed data centers is helping customers achieve scale, flexibility, cost savings, and performance. We will dive into customer stories and real-world deployments, and share best practices for hybrid cloud migrations.

Breakout session

HYB203 | A migration strategy for edge and on-premises workloads

Chalk talk

HYB313 | Real-world analysis of successful hybrid cloud migrations

AWS at the far edge

Some customers operate in what we call the far edge: remote oil rigs, military and defense territories, and even space! In these sessions we cover customer use cases and explore how AWS brings cloud services to the far edge and helps customers gain the benefits of the cloud regardless of where they operate.

Breakout session

HYB306 | Bringing AWS to remote edge locations

Chalk talk

HYB312 | Deploying cloud-enabled applications starting at the edge

Workshops

HYB304 | Generative AI for robotics: Race for the best drone control assistant

In addition to the sessions across the four themes listed above, the track includes two additional chalk talks covering topics that are applicable more broadly to customers operating hybrid workloads. These chalk talks were chosen based on customer interest and will have repeat sessions due to high demand.

HYB310 | Building highly available and fault-tolerant edge applications

HYB311 | AWS hybrid and edge networking architectures

Learn through interactive demos

In addition to breakout sessions, chalk talks, and workshops, make sure you check out our interactive demos to see the benefits of hybrid cloud and edge in action:

Drone Inspector: Generative AI at the Edge

Location: AWS Village | Venetian Level 2, Expo Hall, Booth 852 | AWS for Every App activation

Embark on a competitive adventure where generative artificial intelligence (AI) intersects with edge computing. Experience how drones can swiftly respond to chat instructions for a time-sensitive object detection mission. Learn how you can deploy foundation models and computer vision (CV) models at the edge using AWS hybrid and edge services for real-time insights and actions.

AWS Hybrid Cloud & Edge kiosk

Location: AWS Village | Venetian Level 2, Expo Hall, Booth 852 | Kiosk #9 & 10

Stop by and chat with our experts about AWS Local Zones, AWS Outposts, AWS Snow Family, AWS Wavelength, AWS Private 5G, AWS Telco Network Builder, and Integrated Private Wireless on AWS. Check out the hardware innovations inside an AWS Outposts rack up close and in person. Learn how you can set up a reliable private 5G network within days and live stream video content with minimal latency.

AWS Next Gen Infrastructure Experience

Location: AWS Village | Venetian Level 2, Expo Hall, Booth 852

Check out demos across Global Infrastructure, AWS for Hybrid Cloud & Edge, Compute, Storage, and Networking kiosks, share on social, and win prizes!

The Future of Connected Mobility

Location: Venetian Level 4, EBC Lounge, wall outside of Lando 4201B

Step into the driver’s seat and experience high fidelity 3D terrain driving simulation with AWS Local Zones. Gain real-time insights from vehicle telemetry with AWS IoT Greengrass running on AWS Snowcone and a broader set of AWS IoT services and Amazon Managed Grafana in the Region. Learn how to combine local data processing with cloud analytics for enhanced safety, performance, and operational efficiency. Explore how you can rapidly deliver the same experience to global users in 75+ countries with minimal application changes using AWS Outposts.

Immersive tourism experience powered by 5G and AR/VR

Location: Venetian, Level 2 | Expo Hall | Telco demo area

Explore and travel to Chichen Itza with an augmented reality (AR) application running on a private network fully built on AWS, which includes the Radio Access Network (RAN), the core, security, and applications, combined with services for deployment and operations. This demo features AWS Outposts.

AWS unplugged: A real time remote music collaboration session using 5G and MEC

Location: Venetian, Level 2 | Expo Hall | Telco demo area

We will demonstrate how musicians in Los Angeles and Las Vegas can collaborate in real time with AWS Wavelength. You will witness songwriters and musicians in Los Angeles and Las Vegas in a live jam session.

Disaster relief with AWS Snowball Edge and AWS Wickr

Location: AWS for National Security & Defense | Venetian, Casanova 606

The hurricane has passed, leaving you with no cell coverage and a slim chance of getting on the internet. You need to set up a situational awareness and communications network for your team, fast. Using Wickr on Snowball Edge Compute, you can rapidly deploy a platform that provides secure communications with rich collaboration functionality, as well as real-time situational awareness with the Wickr ATAK integration, allowing you to get on with what’s important.


We hope this guide to the Hybrid Cloud and Edge track at AWS re:Invent 2023 helps you plan for the event and we hope to see you there!

Coming soon: Expansion of AWS Lambda states to all functions

Post Syndicated from Chris Munns original https://aws.amazon.com/blogs/compute/coming-soon-expansion-of-aws-lambda-states-to-all-functions/

In November of 2019, we announced AWS Lambda function state attributes, a capability to track the current “state” of a function throughout its lifecycle.

Since launch, states have been used in two primary use cases. First, to move the blocking setup of VPC resources out of the path of function invocation. Second, to allow the Lambda service to optimize new or updated container images for container image-based functions, also before invocation. By moving this additional work out of the path of the invocation, customers see lower latency and better consistency in their function performance. Soon, we will be expanding states to apply to all Lambda functions.

This post outlines the upcoming change, any impact, and actions to take during the roll out of function states to all Lambda functions. Most customers experience no impact from this change.

As functions are created or updated, or potentially fall idle due to low usage, they can transition to a state associated with that lifecycle event. Previously, any function that was zip-file based and not attached to a VPC would only show an Active state. Updates to the application code and modifications of the function configuration would always show the Successful value for the LastUpdateStatus attribute. Now all functions will follow the same function state lifecycles described in the initial announcement post and in the documentation for Monitoring the state of a function with the Lambda API.

All AWS CLIs and SDKs have supported monitoring Lambda function states transitions since the original announcement in 2019. Infrastructure as code tools such as AWS CloudFormation, AWS SAM, Serverless Framework, and Hashicorp Terraform also already support states. Customers using these tools do not need to take any action as part of this, except for one recommended service role policy change for AWS CloudFormation customers (see Updating CloudFormation’s service role below).

However, there are some customers using SDK-based automation workflows, or calling Lambda’s service APIs directly, that must update those workflows for this change. To allow time for testing this change, we are rolling it out in a phased model, much like the initial rollout for VPC attached functions. We encourage all customers to take this opportunity to move to the latest SDKs and tools available.

Change details

Nothing is changing about how functions are created, updated, or operate as part of this. However, this change may impact certain workflows that attempt to invoke or modify a function shortly after a create or an update action. Before making API calls to a function that was recently created or modified, confirm it is first in the Active state, and that the LastUpdateStatus is Successful.

For a full explanation of both the create and update lifecycles, see Tracking the state of AWS Lambda functions.
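
If your workflow calls the Lambda APIs directly, a simple readiness check before invoking is enough to account for the new lifecycle. The following is an illustrative sketch using boto3 (the function name is hypothetical), polling GetFunctionConfiguration until the function reports an Active state and a Successful last update:

import time

import boto3

lambda_client = boto3.client("lambda")

def wait_until_ready(function_name, timeout=300, interval=2):
    """Poll until State is Active and LastUpdateStatus is Successful."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        config = lambda_client.get_function_configuration(FunctionName=function_name)
        state = config.get("State")
        last_update = config.get("LastUpdateStatus")
        if state == "Active" and last_update == "Successful":
            return config
        if state == "Failed" or last_update == "Failed":
            raise RuntimeError(f"{function_name} failed: {config.get('StateReason')}")
        time.sleep(interval)
    raise TimeoutError(f"{function_name} was not ready within {timeout} seconds")

# Hypothetical usage after a create or update call:
wait_until_ready("MY_FUNCTION_NAME")
lambda_client.invoke(FunctionName="MY_FUNCTION_NAME", Payload=b"{}")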

Create function state lifecycle

Update function state lifecycle

Change timeframe

We are rolling out this change in multiple phases, starting with the Begin Testing phase today, July 12, 2021. The phases allow you to update tooling for deploying and managing Lambda functions to account for this change. By the end of the update timeline, all accounts will have transitioned to using the create/update Lambda lifecycle.

July 12, 2021 – Begin Testing: You can now begin testing and updating any deployment or management tools you have to account for the upcoming lifecycle change. You can also use this time to update your function configuration to delay the change until the End of Delayed Update.

September 6, 2021 – General Update (with optional delayed update configuration): All customers without the delayed update configuration begin seeing functions transition through the lifecycles for create and update. Customers that have used the delayed update configuration as described below will not see any change.

October 1, 2021 – End of Delayed Update: The delay mechanism expires and customers now see the Lambda states lifecycle applied during function create or update.

Opt-in and delayed update configurations

Starting today, we are providing a mechanism for an opt-in. This allows you to update and test your tools and developer workflow processes for this change. We are also providing a mechanism to delay this change until the End of Delayed Update date. After the End of Delayed Update date, all functions will begin using the Lambda states lifecycle.

This mechanism operates on a function-by-function basis, so you can test and experiment individually without impacting your whole account. Once the General Update phase begins, all functions in an account that do not have the delayed update mechanism in place see the new lifecycle for their functions.

Both mechanisms work by adding a special string in the “Description” parameter of Lambda functions. You can add this string anywhere in this parameter. You can opt to add it to the prefix or suffix, or set the entire contents of the field. This parameter is processed at create or update in accordance with the requested action.

To opt in:

aws:states:opt-in

To delay the update:

aws:states:opt-out

NOTE: The delay configuration mechanism has no impact after the End of Delayed Update date.

Here is how this looks in the console:

I add the opt-in configuration to my function’s Description. You can find this under Configuration -> General Configuration in the Lambda console. Choose Edit to change the value.

Edit basic settings

After choosing Save, you can see the value in the console:

Opt-in flag set

Once the opt-in is set for a function, updates on that function go through the preceding update flow.
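
You can also set the flag programmatically instead of through the console. Here is an illustrative boto3 sketch (the function name is hypothetical) that appends the opt-in string to whatever Description a function already has; the same approach works for the opt-out string:

import boto3

lambda_client = boto3.client("lambda")

FLAG = "aws:states:opt-in"  # use "aws:states:opt-out" to delay the update instead

def set_states_flag(function_name):
    """Append the states flag to the function's Description, keeping existing text."""
    config = lambda_client.get_function_configuration(FunctionName=function_name)
    description = config.get("Description", "")
    if FLAG not in description:
        lambda_client.update_function_configuration(
            FunctionName=function_name,
            Description=f"{description} {FLAG}".strip(),
        )

# Hypothetical usage:
set_states_flag("MY_FUNCTION_NAME")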

Checking a function’s state

With this in place, you can now test your development workflow ahead of the General Update phase. Download the latest AWS CLI (version 2.2.18 or greater) or SDKs to see function state and related attribute information.

You can confirm the current state of a function using the AWS APIs or AWS CLI to perform the GetFunction or GetFunctionConfiguration API or command for a specified function:

$ aws lambda get-function --function-name MY_FUNCTION_NAME --query 'Configuration.[State, LastUpdateStatus]'
[
    "Active",
    "Successful"
]

This returns the State and LastUpdateStatus in order for a function.

Updating CloudFormation’s service role

CloudFormation allows customers to create an AWS Identity and Access Management (IAM) service role to make calls to resources in a stack on your behalf. Customers can use service roles to allow or deny the ability to create, update, or delete resources in a stack.

As part of the rollout of function states for all functions, we recommend that customers configure CloudFormation service roles with an Allow for the “lambda:GetFunction” API. This API allows CloudFormation to get the current state of a function, which is required to assist in the creation and deployment of functions.
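
As an illustrative sketch only (the role and policy names here are hypothetical, and you may prefer to scope the resource down to specific function ARNs), the permission could be added to an existing service role as an inline policy with boto3:

import json

import boto3

iam = boto3.client("iam")

# Hypothetical name of your CloudFormation service role.
ROLE_NAME = "my-cloudformation-service-role"

policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "lambda:GetFunction",
            "Resource": "*",  # scope down to specific function ARNs where possible
        }
    ],
}

# Attach the statement as an inline policy on the service role.
iam.put_role_policy(
    RoleName=ROLE_NAME,
    PolicyName="AllowLambdaGetFunction",
    PolicyDocument=json.dumps(policy_document),
)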

Conclusion

With function states, you can have better clarity on how the resources required by your Lambda function are being created. This change does not impact the way that functions are invoked or how your code is run. While this is a minor change to when resources are created for your Lambda function, the result is even better consistency when working with the service.

For more serverless learning resources, visit Serverless Land.

Hosting Hugging Face models on AWS Lambda for serverless inference

Post Syndicated from Chris Munns original https://aws.amazon.com/blogs/compute/hosting-hugging-face-models-on-aws-lambda/

This post is written by Eddie Pick, AWS Senior Solutions Architect – Startups, and Scott Perry, AWS Senior Specialist Solutions Architect – AI/ML

Hugging Face Transformers is a popular open-source project that provides pre-trained, natural language processing (NLP) models for a wide variety of use cases. Customers with minimal machine learning experience can use pre-trained models to enhance their applications quickly using NLP. This includes tasks such as text classification, language translation, summarization, and question answering – to name a few.

First introduced in 2017, the Transformer is a modern neural network architecture that has quickly become the most popular type of machine learning model applied to NLP tasks. It outperforms previous techniques based on convolutional neural networks (CNNs) or recurrent neural networks (RNNs). The Transformer also offers significant improvements in computational efficiency. Notably, Transformers are more conducive to parallel computation. This means that Transformer-based models can be trained more quickly, and on larger datasets than their predecessors.

The computational efficiency of Transformers provides the opportunity to experiment and improve on the original architecture. Over the past few years, the industry has seen the introduction of larger and more powerful Transformer models. For example, BERT was first published in 2018 and was able to get better benchmark scores on 11 natural language processing tasks using between 110M-340M neural network parameters. In 2019, the T5 model using 11B parameters achieved better results on benchmarks such as summarization, question answering, and text classification. More recently, the GPT-3 model was introduced in 2020 with 175B parameters and in 2021 the Switch Transformers are scaling to over 1T parameters.

One consequence of this trend toward larger and more powerful models is an increased barrier to entry. As the number of model parameters increases, so does the computational infrastructure that is necessary to train such a model. This is where the open-source Hugging Face Transformers project helps.

Hugging Face Transformers provides over 30 pretrained Transformer-based models available via a straightforward Python package. Additionally, there are over 10,000 community-developed models available for download from Hugging Face. This allows users to use modern Transformer models within their applications without requiring model training from scratch.

The Hugging Face Transformers project directly addresses challenges associated with training modern Transformer-based models. Many customers want a zero administration ML inference solution that allows Hugging Face Transformers models to be hosted in AWS easily. This post introduces a low touch, cost effective, and scalable mechanism for hosting Hugging Face models for real-time inference using AWS Lambda.

Overview

Our solution consists of an AWS Cloud Development Kit (AWS CDK) script that automatically provisions container image-based Lambda functions that perform ML inference using pre-trained Hugging Face models. This solution also includes Amazon Elastic File System (EFS) storage that is attached to the Lambda functions to cache the pre-trained models and reduce inference latency.

Solution architecture

In this architectural diagram:

  1. Serverless inference is achieved by using Lambda functions that are based on container image
  2. The container image is stored in an Amazon Elastic Container Registry (ECR) repository within your account
  3. Pre-trained models are automatically downloaded from Hugging Face the first time the function is invoked
  4. Pre-trained models are cached within Amazon Elastic File System storage in order to improve inference latency

The solution includes Python scripts for two common NLP use cases:

  • Sentiment analysis: Identifying if a sentence indicates positive or negative sentiment. It uses a fine-tuned model on sst2, which is a GLUE task.
  • Summarization: Summarizing a body of text into a shorter, representative text. It uses a Bart model that was fine-tuned on the CNN / Daily Mail dataset.

For simplicity, both of these use cases are implemented using Hugging Face pipelines.
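
To give a sense of what these pipelines return when called directly, here is a small local sketch. The pipelines download whatever default models the transformers library selects, and the example outputs in the comments are indicative rather than exact:

from transformers import pipeline

# Sentiment analysis: returns a label and confidence score per input.
sentiment = pipeline("sentiment-analysis")
print(sentiment("I'm so happy I could cry!"))
# e.g. [{'label': 'POSITIVE', 'score': 0.999...}]

# Summarization: returns a 'summary_text' entry per input.
summarizer = pipeline("summarization")
article = ("Hugging Face Transformers provides pre-trained models for tasks such as "
           "text classification, translation, summarization, and question answering, "
           "so applications can use modern NLP without training models from scratch.")
print(summarizer(article, max_length=30, min_length=5))
# e.g. [{'summary_text': '...'}]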

Prerequisites

The following is required to run this example:

Deploying the example application

  1. Clone the project to your development environment:
    git clone https://github.com/aws-samples/zero-administration-inference-with-aws-lambda-for-hugging-face.git
  2. Install the required dependencies:
    pip install -r requirements.txt
  3. Bootstrap the CDK. This command provisions the initial resources needed by the CDK to perform deployments:
    cdk bootstrap
  4. This command deploys the CDK application to its environment. During the deployment, the toolkit outputs progress indications:
    $ cdk deploy

Testing the application

After deployment, navigate to the AWS Management Console to find and test the Lambda functions. There is one for sentiment analysis and one for summarization.

To test:

  1. Enter “Lambda” in the search bar of the AWS Management Console.
  2. Filter the functions by entering “ServerlessHuggingFace”.
  3. Select the ServerlessHuggingFaceStack-sentimentXXXXX function.
  4. In the Test event, enter the following snippet and then choose Test:
{
   "text": "I'm so happy I could cry!"
}

The first invocation takes approximately one minute to complete. The initial Lambda function environment must be allocated and the pre-trained model must be downloaded from Hugging Face. Subsequent invocations are faster, as the Lambda function is already prepared and the pre-trained model is cached in EFS.

Function test results

The JSON response shows the result of the sentiment analysis:

{
  "statusCode": 200,
  "body": {
    "label": "POSITIVE",
    "score": 0.9997532367706299
  }
}

Understanding the code structure

The code is organized using the following structure:

├── inference
│ ├── Dockerfile
│ ├── sentiment.py
│ └── summarization.py
├── app.py
└── ...

The inference directory contains:

  • The Dockerfile used to build a custom image to be able to run PyTorch Hugging Face inference using Lambda functions
  • The Python scripts that perform the actual ML inference

The sentiment.py script shows how to use a Hugging Face Transformers model:

import json
from transformers import pipeline

nlp = pipeline("sentiment-analysis")

def handler(event, context):
    response = {
        "statusCode": 200,
        "body": nlp(event['text'])[0]
    }
    return response

For each Python script in the inference directory, the CDK generates a Lambda function backed by a container image and a Python inference script.

CDK script

The CDK script is named app.py in the solution’s repository. The beginning of the script creates a virtual private cloud (VPC).

vpc = ec2.Vpc(self, 'Vpc', max_azs=2)

Next, it creates the EFS file system and an access point in EFS for the cached models:

        fs = efs.FileSystem(self, 'FileSystem',
                            vpc=vpc,
                            removal_policy=cdk.RemovalPolicy.DESTROY)
        access_point = fs.add_access_point('MLAccessPoint',
                                           create_acl=efs.Acl(
                                               owner_gid='1001', owner_uid='1001', permissions='750'),
                                           path="/export/models",
                                           posix_user=efs.PosixUser(gid="1001", uid="1001"))

It iterates through the Python files in the inference directory:

docker_folder = os.path.dirname(os.path.realpath(__file__)) + "/inference"
pathlist = Path(docker_folder).rglob('*.py')
for path in pathlist:

And then creates the Lambda function that serves the inference requests:

            base = os.path.basename(path)
            filename = os.path.splitext(base)[0]
            # Lambda Function from docker image
            function = lambda_.DockerImageFunction(
                self, filename,
                code=lambda_.DockerImageCode.from_image_asset(docker_folder,
                                                              cmd=[
                                                                  filename+".handler"]
                                                              ),
                memory_size=8096,
                timeout=cdk.Duration.seconds(600),
                vpc=vpc,
                filesystem=lambda_.FileSystem.from_efs_access_point(
                    access_point, '/mnt/hf_models_cache'),
                environment={
                    "TRANSFORMERS_CACHE": "/mnt/hf_models_cache"},
            )

Adding a translator

Optionally, you can add more models by adding Python scripts in the inference directory. For example, add the following code in a file called translate-en2fr.py:

import json
from transformers import pipeline

en_fr_translator = pipeline('translation_en_to_fr')

def handler(event, context):
    response = {
        "statusCode": 200,
        "body": en_fr_translator(event['text'])[0]
    }
    return response

Then run:

$ cdk synth
$ cdk deploy

This creates a new endpoint to perform English to French translation.

Cleaning up

After you are finished experimenting with this project, run “cdk destroy” to remove all of the associated infrastructure.

Conclusion

This post shows how to perform ML inference for pre-trained Hugging Face models by using Lambda functions. To avoid repeatedly downloading the pre-trained models, this solution uses an EFS-based approach to model caching. This helps to achieve low-latency, near real-time inference. The solution is provided as infrastructure as code using Python and the AWS CDK.

We hope this blog post allows you to prototype quickly and include modern NLP techniques in your own products.

Improved failure recovery for Amazon EventBridge

Post Syndicated from Chris Munns original https://aws.amazon.com/blogs/compute/improved-failure-recovery-for-amazon-eventbridge/

Today we’re announcing two new capabilities for Amazon EventBridge: dead letter queues and custom retry policies. Both of these give you greater flexibility in how to handle any failures in the processing of events with EventBridge. You can easily enable them on a per-target basis and configure them uniquely for each.

Dead letter queues (DLQs) are a common capability in queuing and messaging systems that allow you to handle failures in event or message receiving systems. They provide a way for failed events or messages to be captured and sent to another system, which can store them for future processing. With DLQs, you can have greater resiliency and improved recovery from any failure that happens.

You can also now configure a custom retry policy that can be set on your event bus targets. Today, there are two attributes that can control how events are retried: maximum number of retries and maximum event age. With these two settings, you could send events to a DLQ sooner and reduce the retries attempted.

For example, this could allow you to recover more quickly if an event bus target is overwhelmed by the number of events received, causing throttling to occur. The events are placed in a DLQ and then processed later.

Failures in event processing

Currently, EventBridge can fail to deliver an event to a target in certain scenarios. Events that fail to be delivered to a target due to client-side errors are dropped immediately. Examples of this are when EventBridge does not have permission to a target AWS service or if the target no longer exists. This can happen if the target resource is misconfigured or is deleted by the resource owner.

For service-side issues, EventBridge retries delivery of events for up to 24 hours. This can happen if the target service is unavailable or the target resource is not provisioned to handle the incoming event traffic and the target service is throttling the requests.

EventBridge failures

Previously, when all attempts to deliver an event to the target were exhausted, EventBridge published a CloudWatch metric indicating a failed target invocation. However, this provides no visibility into which events failed to be delivered and there was no way to recover the event that failed.

Dead letter queues

EventBridge’s DLQs are made possible today with Amazon Simple Queue Service (SQS) standard queues. With SQS, you get all of the benefits of a fully serverless queuing service: no servers to manage, automatic scalability, pay for what you consume, and high availability and security built in. You can configure the DLQs for your EventBridge bus and pay nothing until it is used, if and when a target experiences an issue. This makes it a great practice to follow and standardize on, and provides you with a safety net that’s active only when needed.

Optionally, you could later configure an AWS Lambda function to consume from that DLQ. The function is only invoked when messages exist in the queue, allowing you to maintain a serverless stack to recover from a potential failure.

DLQ configured per target

With a DLQ configured, the queue receives the failed event in a message, along with important metadata that you can use to troubleshoot the issue. This can include: Error Code, Error Message, Exhausted Retry Condition, Retry Attempts, Rule ARN, and the Target ARN.

You can use this data to more easily troubleshoot what went wrong with the original delivery attempt and take action to resolve or prevent such failures in the future. You could also use the information such as Exhausted Retry Condition and Retry Attempts to further tweak your custom retry policy.

You can configure a DLQ when creating or updating rules via the AWS Management Console and AWS Command Line Interface (AWS CLI). You can also use infrastructure as code (IaC) tools such as AWS CloudFormation.

In the console, select the queue to be used for your DLQ configuration from the drop-down as shown here:

DLQ configuration

When configured via API, AWS CLI, or IaC tools, you must specify the ARN of the queue:

arn:aws:sqs:us-east-1:123456789012:orders-bus-shipping-service-dlq

When you configure a DLQ, the target SQS queue requires a resource-based policy that grants EventBridge access. One is created and applied automatically via the console when you create or update an EventBridge rule with a DLQ that exists in your own account.

For any queues created in other accounts, or via API, AWS CLI, or IaC tools, you must add a resource-based policy to the queue that grants the EventBridge service the sqs:SendMessage permission, scoped to your EventBridge rule ARN, as shown below:

{
  "Sid": "Dead-letter queue permissions",
  "Effect": "Allow",
  "Principal": {
     "Service": "events.amazonaws.com"
  },
  "Action": "sqs:SendMessage",
  "Resource": "arn:aws:sqs:us-east-1:123456789012:orders-bus-shipping-service-dlq",
  "Condition": {
    "ArnEquals": {
      "aws:SourceArn": "arn:aws:events:us-east-1:123456789012:rule/MyTestRule"
    }
  }
}
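
If you prefer to apply this permission programmatically, here is an illustrative boto3 sketch that wraps the statement above in a policy document and sets it as the queue's Policy attribute (the queue URL is a placeholder):

import json

import boto3

sqs = boto3.client("sqs")

queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/orders-bus-shipping-service-dlq"

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Dead-letter queue permissions",
            "Effect": "Allow",
            "Principal": {"Service": "events.amazonaws.com"},
            "Action": "sqs:SendMessage",
            "Resource": "arn:aws:sqs:us-east-1:123456789012:orders-bus-shipping-service-dlq",
            "Condition": {
                "ArnEquals": {
                    "aws:SourceArn": "arn:aws:events:us-east-1:123456789012:rule/MyTestRule"
                }
            },
        }
    ],
}

# Apply the resource-based policy so EventBridge can deliver failed events to the queue.
sqs.set_queue_attributes(QueueUrl=queue_url, Attributes={"Policy": json.dumps(policy)})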

You can read more about setting permissions for the DLQ in the documentation for “Granting permissions to the dead-letter queue”.

Once configured, you can monitor CloudWatch metrics for the DLQ. These show both the successful delivery of messages via the InvocationsSentToDLQ metric and any failures via the InvocationsFailedToBeSentToDLQ metric. Note that these metrics do not exist if your queue is not considered “active”.
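
As a sketch of pulling one of these metrics with boto3, the following assumes the metrics are published in the AWS/Events namespace with a RuleName dimension; confirm both in the CloudWatch console for your rule before relying on them:

from datetime import datetime, timedelta

import boto3

cloudwatch = boto3.client("cloudwatch")

# Assumed namespace and dimension; verify against your rule's metrics in CloudWatch.
response = cloudwatch.get_metric_statistics(
    Namespace="AWS/Events",
    MetricName="InvocationsSentToDLQ",
    Dimensions=[{"Name": "RuleName", "Value": "MyTestRule"}],
    StartTime=datetime.utcnow() - timedelta(hours=1),
    EndTime=datetime.utcnow(),
    Period=300,
    Statistics=["Sum"],
)

for point in sorted(response["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], point["Sum"])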

Retry policies

By default, EventBridge retries delivery of an event to a target so long as it does not receive a client-side error as described earlier. Retries occur with a back-off, for up to 185 attempts or for up to 24 hours, after which the event is dropped or sent to a DLQ, if configured. Due to the jitter of the back-off and retry process, you may reach the 24-hour limit before reaching 185 retries.

For many workloads, this provides an acceptable way to handle momentary service issues or throttling that might occur. For some, however, this model of back-off and retry can cause increased and ongoing traffic to an already overloaded target system.

For example, consider an Amazon API Gateway target that has a resource constrained backend service behind it.

Constrained target service

Under a consistently high load, the bus could end up generating too many API requests, tripping the API Gateway’s throttling configuration. This would cause API Gateway to respond with throttling errors back to EventBridge.

Throttled API reply

You may decide that allowing the failed events to retry for 24 hours puts too much load into this system and it may not properly recover from the load. This could lead to potential data loss unless a DLQ was configured.

Added DLQ

With a DLQ, you could choose to process these events later, once the overwhelmed target service has recovered.

DLQ drained back to API

Or the events in question may no longer have the same value as they did previously. This can occur in systems where data loss is tolerated but the timeliness of data processing matters. In these situations, the DLQ would have less value and dropping the message is acceptable.

For either of these situations, configuring the maximum number of retries or the maximum age of the event could be useful.

Now with retry policies, you can configure per target the following two attributes:

  • MaximumEventAgeInSeconds: between 60 and 86400 seconds (86400 seconds, or 24 hours, is the default)
  • MaximumRetryAttempts: between 0 and 185 (185 is the default)

When either condition is met, the event fails. It’s then either dropped, which generates an increase to the FailedInvocations CloudWatch metric, or sent to a configured DLQ.

You can configure retry policy attributes when creating or updating rules via the AWS Management Console and AWS Command Line Interface (AWS CLI). You can also use infrastructure as code (IaC) tools such as AWS CloudFormation.
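
For reference, here is an illustrative boto3 sketch that sets both a retry policy and a DLQ on a target; the rule name, target ARN, and queue ARN are placeholders to replace with your own:

import boto3

events = boto3.client("events")

events.put_targets(
    Rule="MyTestRule",
    Targets=[
        {
            "Id": "shipping-service",
            "Arn": "arn:aws:lambda:us-east-1:123456789012:function:shipping-service",
            "RetryPolicy": {
                "MaximumRetryAttempts": 20,        # default is 185
                "MaximumEventAgeInSeconds": 3600,  # default is 86400 (24 hours)
            },
            "DeadLetterConfig": {
                "Arn": "arn:aws:sqs:us-east-1:123456789012:orders-bus-shipping-service-dlq"
            },
        }
    ],
)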

Retry policy

There is no additional cost for configuring either of these new capabilities. You only pay for the usage of the SQS standard queue configured as the dead letter queue during a failure and any application that handles the failed events. SQS pricing can be found here.

Conclusion

With dead letter queues and custom retry policies, you have improved handling and control over failure in distributed systems built with EventBridge. With DLQs you can capture failed events and then process them later, potentially saving yourself from data loss. With custom retry policies, you gain improved control over how many times and for how long event delivery is retried.

I encourage you to explore how both of these new capabilities can help make your applications more resilient to failures, and to standardize on using them both in your infrastructure.

For more serverless learning resources, visit https://serverlessland.com.

Implementing FIFO message ordering with Amazon MQ for Apache ActiveMQ

Post Syndicated from Chris Munns original https://aws.amazon.com/blogs/compute/implementing-fifo-message-ordering-with-amazon-mq-for-apache-activemq/

This post is contributed by Ravi Itha, Sr. Big Data Consultant

Messaging plays an important role in building distributed enterprise applications. Amazon MQ is a key offering within the AWS messaging solution stack, focused on enabling messaging for modern application architectures. Amazon MQ is a managed message broker service for Apache ActiveMQ that simplifies setting up and operating message brokers in the cloud. Amazon MQ uses open standard APIs and protocols such as JMS, NMS, AMQP, STOMP, MQTT, and WebSocket. Using standards means that, in most cases, there’s no need to rewrite any messaging code when you migrate to AWS. This allows you to focus on your business logic and application architecture.

Message ordering via Message Groups

Sometimes it’s important to guarantee the order in which messages are processed. In ActiveMQ, there is no explicit distinction between a standard queue and a FIFO queue. However, a queue can be used to route messages in FIFO order. This ordering can be achieved via two different ActiveMQ features: either by implementing an Exclusive Consumer or by using Message Groups. This blog focuses on Message Groups, an enhancement to Exclusive Consumers. Message groups provide:

  • Guaranteed ordering of the processing of related messages across a single queue
  • Load balancing of the processing of messages across multiple consumers
  • High availability with automatic failover to other consumers if a JVM goes down

This is achieved programmatically as follows:

Sample producer code snippet:

TextMessage tMsg = session.createTextMessage("SampleMessage");
tMsg.setStringProperty("JMSXGroupID", "Group-A");
producer.send(tMsg);

Sample consumer code snippet:

Message consumerMessage = consumer.receive(50);
TextMessage txtMessage = (TextMessage) consumerMessage;
String msgBody = txtMessage.getText();
String msgGroup = txtMessage.getStringProperty("JMSXGroupID");

This sample code highlights:

  • A message group is set by the producer during message ingestion
  • A consumer determines the message group once a message is consumed

Additionally, if a queue has messages for multiple message groups, then it’s possible for a consumer to receive messages from multiple message groups. This depends on various factors, such as the number of consumers of a queue and consumer start time.

Scenarios: Multiple producers and consumers with Message Groups

A FIFO queue in ActiveMQ supports multiple ordered message groups. Due to this, it’s common that a queue is used to exchange messages between multiple producer and consumer applications. By running multiple consumers to process messages from a queue, the message broker is able to partition messages across consumers. This improves the scalability and performance of your application.

In terms of scalability, commonly asked questions center on the ideal number of consumers and how messages are distributed across all consumers. To provide more clarity in this area, we provisioned an Amazon MQ broker and ran various test scenarios.

Scenario 1: All consumers started at the same time

Test setup

  • All producers and consumers have the same start time
  • Each test uses a different combination of number of producers, message groups, and consumers

Setup for Tests 1 to 5

Test results

| Test # | # producers | # message groups | # messages sent by each producer | # consumers | Total messages | # messages received: C1 | C2 | C3 | C4 |
|---|---|---|---|---|---|---|---|---|---|
| 1 | 3 | 3 | 5000 | 1 | 15000 | 15000 (all groups) | NA | NA | NA |
| 2 | 3 | 3 | 5000 | 2 | 15000 | 5000 (Group-C) | 10000 (Group-A and Group-B) | NA | NA |
| 3 | 3 | 3 | 5000 | 3 | 15000 | 5000 (Group-A) | 5000 (Group-B) | 5000 (Group-C) | NA |
| 4 | 3 | 3 | 5000 | 4 | 15000 | 5000 (Group-C) | 5000 (Group-B) | 5000 (Group-A) | 0 |
| 5 | 4 | 4 | 5000 | 3 | 20000 | 5000 (Group-A) | 5000 (Group-B) | 10000 (Group-C and Group-D) | NA |

Test conclusions

  • Test 3 – illustrates even message distribution across consumers when there is a one-to-one relationship between message groups and the number of consumers
  • Test 4 – illustrates that one of the four consumers did not receive any messages. This highlights that running more consumers than the number of available message groups provides no additional benefit
  • Tests 1, 2, 5 – indicate that a consumer can receive messages belonging to multiple message groups. The following table provides additional detail on the messages received by consumer C2 in test #2. As you can see, these messages belong to the Group-A and Group-B message groups, and FIFO ordering is maintained at the message group level
| consumer_id | msg_id | msg_group |
|---|---|---|
| Consumer C2 | A-1 | Group-A |
| Consumer C2 | B-1 | Group-B |
| Consumer C2 | A-2 | Group-A |
| Consumer C2 | B-2 | Group-B |
| Consumer C2 | A-3 | Group-A |
| Consumer C2 | B-3 | Group-B |
| … | … | … |
| Consumer C2 | A-4999 | Group-A |
| Consumer C2 | B-4999 | Group-B |
| Consumer C2 | A-5000 | Group-A |
| Consumer C2 | B-5000 | Group-B |
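One way to confirm this behavior in your own tests is to track the last sequence number seen per group on the consumer side. The following sketch assumes each message body ends with its per-group sequence number (as the test messages in the preceding table appear to be labelled); it is not part of the original test harness:

import java.util.HashMap;
import java.util.Map;
import javax.jms.JMSException;
import javax.jms.TextMessage;

public class GroupOrderChecker {
    // Last sequence number observed for each message group
    private final Map<String, Integer> lastSeqPerGroup = new HashMap<>();

    public void check(TextMessage message) throws JMSException {
        String group = message.getStringProperty("JMSXGroupID");
        String body = message.getText();
        // Assumes a body such as "A-42", where 42 is the per-group sequence
        int seq = Integer.parseInt(body.substring(body.lastIndexOf('-') + 1));
        int previous = lastSeqPerGroup.getOrDefault(group, 0);
        if (seq <= previous) {
            throw new IllegalStateException(
                "Out-of-order message in " + group + ": " + seq + " after " + previous);
        }
        lastSeqPerGroup.put(group, seq);
    }
}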

Scenario 2a: All consumers not started at the same time

Test setup

  • Three producers and one consumer started at the same time
  • The second and third consumers started after 30 seconds and 60 seconds respectively
  • 15,000 messages sent in total across three message groups
Setup for Test 6

Test results

| Test # | # producers | # message groups | # messages sent by each producer | # consumers | Total messages | Received by C1 | Received by C2 | Received by C3 |
|---|---|---|---|---|---|---|---|---|
| 6 | 3 | 3 | 5000 | 3 | 15000 | 15000 | 0 | 0 |

Test conclusion

Consumer C1 received all of the messages, while consumers C2 and C3 sat idle and received none. The key takeaway is that message distribution can be inefficient in real-world scenarios where consumers start at different times.

The next scenario (2b) repeats this test while optimizing message distribution so that all consumers are used.

Scenario 2b: Utilization of all consumers when not started at the same time

Test setup

  • Three producers and one consumer started at the same time
  • The second and third consumers started after 30 seconds and 60 seconds respectively
  • 15,000 messages sent in total across three message groups
  • After each producer sends the 2,501st message in its message group, it closes the group and then resumes sending the remaining messages so that the broker redistributes the group. A message group is closed as in the following code example (specifically, the -1 value set for the JMSXGroupSeq property); a fuller sketch of this pattern follows the snippet:
TextMessage tMsg = session.createTextMessage("<foo>hey</foo>");
tMsg.setStringProperty("JMSXGroupID", "Group-A");
tMsg.setIntProperty("JMSXGroupSeq", -1);
producer.send(tMsg);
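For illustration, the close-and-resume pattern used in this test might look like the following sketch on the producer side. The 2,501-message threshold comes from the test setup above; the queue, group name, and helper method itself are illustrative:

import javax.jms.JMSException;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.TextMessage;

public class GroupRebalancingProducer {
    static void sendWithRebalance(Session session, MessageProducer producer,
                                  String group, int totalMessages) throws JMSException {
        for (int i = 1; i <= totalMessages; i++) {
            TextMessage msg = session.createTextMessage(group + "-" + i);
            msg.setStringProperty("JMSXGroupID", group);
            if (i == 2501) {
                // JMSXGroupSeq = -1 closes the group; the next message with
                // this JMSXGroupID may be assigned to a different consumer
                msg.setIntProperty("JMSXGroupSeq", -1);
            }
            producer.send(msg);
        }
    }
}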
Setup for Test 7

Test results

| Test # | # producers | # message groups | # messages sent by each producer | # consumers | Total messages | Received by C1 | Received by C2 | Received by C3 |
|---|---|---|---|---|---|---|---|---|
| 7 | 3 | 3 | 5001 | 3 | 15003 | 10003 | 2500 | 2500 |

Distribution of messages received by message group

| Consumer | Group-A | Group-B | Group-C | Consumer-wise total |
|---|---|---|---|---|
| Consumer 1 | 2501 | 2501 | 5001 | 10003 |
| Consumer 2 | 2500 | 0 | 0 | 2500 |
| Consumer 3 | 0 | 2500 | 0 | 2500 |
| Group total | 5001 | 5001 | 5001 | NA |

Total messages received: 15003

Test conclusions

Message distribution is optimized by closing and reopening a message group when all consumers are not started at the same time. This mitigation results in all consumers receiving messages.

  • After Group-A was closed, the broker assigned subsequent Group-A messages to consumer C2
  • After Group-B was closed, the broker assigned subsequent Group-B messages to consumer C3
  • After Group-C was closed, the broker continued to send Group-C messages to consumer C1. The assignment did not change because there was no other available consumer
Test 7 – Message distribution among consumers

Scalability techniques

Now that we understand how to use Message Groups to implement FIFO use cases within Amazon MQ, let's look at how they scale. By default, a message queue supports a maximum of 1024 message groups. This means that if you use more than 1024 message groups per queue, message ordering is lost for the oldest message group. This is further explained in the ActiveMQ Message Groups documentation. It can be problematic for complex use cases, such as stock exchanges or financial trading scenarios, where thousands of ordered message groups are required. The following table lists a couple of techniques to address this issue.

| Scalability technique | Details |
|---|---|
| Verify that the appropriate Amazon MQ broker instance type is used | Select the appropriate broker instance type according to your use case. To learn more, refer to the Amazon MQ broker instance types documentation. |
| Increase the number of message groups per message queue | The default maximum number of message groups can be increased via a custom configuration file at the time of launching a new broker, or by modifying an existing broker. Refer to the next section for an example (requirement #2). |
| Recycle message groups when they are no longer needed | A message group can be closed programmatically by a producer once it has finished sending all of its messages to a queue, as in the following code snippet. |

TextMessage tMsg = session.createTextMessage("<foo>hey</foo>");
tMsg.setStringProperty("JMSXGroupID", "GroupA");
tMsg.setIntProperty("JMSXGroupSeq", -1);
producer.send(tMsg);

In the preceding scenario 2b, we used this technique to improve the message distribution across consumers.

Customize message broker configuration

In the previous section, to improve scalability we suggested increasing the number of message groups per queue by updating the broker configuration. A broker configuration is essentially an XML file that contains all of the ActiveMQ settings for a given message broker. Let's look at the following broker configuration settings, each of which addresses a specific requirement. For your reference, we've placed a copy of a broker configuration file with these settings in a GitHub repository.

Requirement 1: Change the message group implementation from the default CachedMessageGroupMap to MessageGroupHashBucket

<!-- valid values: simple, bucket, cached; the default is cached -->
<!-- simple represents SimpleMessageGroupMap -->
<!-- bucket represents MessageGroupHashBucket -->
<!-- cached represents CachedMessageGroupMap -->
<policyEntry messageGroupMapFactoryType="bucket" queue=">"/>

Requirement 2: Increase the number of message groups per queue from 1024 to 2048 and increase the cache size from 64 to 128

<!-- default value for bucketCount is 1024 and for cacheSize is 64 -->
<policyEntry queue=">">
  <messageGroupMapFactory>
    <messageGroupHashBucketFactory bucketCount="2048" cacheSize="128"/>
  </messageGroupMapFactory>
</policyEntry>

Requirement 3: Wait for three consumers or 30 seconds before the broker begins sending messages

<policyEntry queue=">" consumersBeforeDispatchStarts="3" timeBeforeDispatchStarts="30000"/>

When must the default broker configuration be updated? Only in scenarios where the default settings do not meet your requirements. Additional information on how to update your broker configuration file can be found here.

Amazon MQ starter kit

Want to get up and running with Amazon MQ quickly? Start building with Amazon MQ by cloning the starter kit available on GitHub. The starter kit includes a CloudFormation template to provision a message broker, sample broker configuration file, and source code related to the consumer scenarios in this blog.

Conclusion

In this blog post, you learned how Amazon MQ simplifies the setup and operation of Apache ActiveMQ in the cloud. You also learned how the Message Groups feature can be used to implement FIFO ordering. Lastly, you walked through real scenarios demonstrating how ActiveMQ distributes messages across consumers when a queue is used to exchange messages between multiple producers and consumers.