Tag Archives: messaging

Strategies for list management with Amazon Pinpoint and Amazon Simple Email Service

Post Syndicated from Heidi Gloudemans original https://aws.amazon.com/blogs/messaging-and-targeting/strategies-for-list-management-with-amazon-pinpoint-and-amazon-simple-email-service/

Managing customer lists is a large part of any outbound customer communication program. From customer acquisition to ongoing engagement, locating the best sources for subscribers and respecting their contact preferences is key to maintaining healthy customer lists. This article will discuss recommendations for list building using Amazon Pinpoint and Amazon Simple Email Service (SES). We will provide recommendations for a subscription process, what information to ask for, how to manage opt-outs, and optimize lists over time.

Customer acquisition

Customer acquisition is the first part of any list activity. There are a few guidelines that all outbound marketers should follow during the list-building process. First, do not use third-party, leased, or purchased lists. Using third-party lists for email risks spam complaints, damage to your sender reputation, and hitting monitoring mechanisms like spam traps. Email service providers like Amazon Pinpoint and Amazon SES will discontinue service for accounts with the poor sending behavior that results from using third-party lists.

Second, if you plan on contacting customers across channels, make sure you acquire permission to contact users on each channel. There are many places you can acquire customer contacts, ranging from your website, social media presence, or a QR code on a physical sign. Amazon Pinpoint also has a solution called the Amazon Pinpoint Preference Center that you can deploy to gather and manage customer contact preferences across channels.

There are a few items that you want to include in any customer acquisition form. First, tell the customer how often they can expect communication from you. Is it a weekly newsletter? Monthly? Even better if you give them the option to select how often you communicate with them. Next, tell them what value they can expect from registration. For example, special deals, early access to sales, or even just product and industry news. While you can provide some incentives, avoid providing high-value incentives for registration. Over-the-top enticements like free products will always cause low-quality registrations and resulting low-quality lists.

In addition, make your sign-up forms as concise as possible. Only put high-value content behind registration, and minimize the amount of information customers must provide to register. A full customer profile makes it easier for you, as a marketer, to segment your customers. However, it potentially adds friction to the sign-up process, which can result in lost customers.

If you can, allow the customer to indicate their content preferences later or during onboarding communications. If you use Amazon SES, its list management feature supports up to 20 topics per account. For example, if you are a sportswear company, topics could include hiking, biking, or running. You should then send customers emails only about the specific topics they are interested in receiving. Make sure you retain preference data: requirements vary by country, and some countries require you to prove that you received permission to contact a customer.
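If you manage topics programmatically, the SES v2 API exposes this capability directly. The following is a minimal sketch using the AWS SDK for Python (Boto3); the list name and topic names are hypothetical placeholders for the sportswear example:

import boto3

sesv2 = boto3.client('sesv2')

# Create a contact list with topics (SES list management supports up to 20 topics).
sesv2.create_contact_list(
    ContactListName='SportswearNewsletter',  # hypothetical list name
    Topics=[
        {'TopicName': 'hiking', 'DisplayName': 'Hiking',
         'DefaultSubscriptionStatus': 'OPT_OUT'},
        {'TopicName': 'biking', 'DisplayName': 'Biking',
         'DefaultSubscriptionStatus': 'OPT_OUT'},
        {'TopicName': 'running', 'DisplayName': 'Running',
         'DefaultSubscriptionStatus': 'OPT_OUT'},
    ],
)

Defaulting new topics to OPT_OUT means customers receive only the content they explicitly asked for, which aligns with the permission-first guidance above.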

Managing your customer contacts

Onboarding communication is an essential first step once a customer has submitted their initial registration. Some organizations use the first communication as a registration confirmation step, called a “double opt-in.” In addition to driving engagement and initial calls-to-action (“Confirm your email”), double opt-in emails have the added benefit of verifying that the registration was not submitted by a bot.
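As an illustration, a double opt-in confirmation email could be sent with the SES v2 API. This is a hedged sketch: the addresses and the confirmation URL are hypothetical placeholders that your registration backend would supply.

import boto3

sesv2 = boto3.client('sesv2')

# Hypothetical confirmation link generated by your registration backend.
confirm_url = 'https://example.com/confirm?token=...'

sesv2.send_email(
    FromEmailAddress='newsletter@example.com',
    Destination={'ToAddresses': ['new-subscriber@example.com']},
    Content={'Simple': {
        'Subject': {'Data': 'Confirm your email'},
        'Body': {'Text': {'Data': 'Please confirm your subscription: ' + confirm_url}},
    }},
)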

From the first message you send to a new customer to the last, you should always include an unsubscribe option. Every country has different requirements, which you should research and educate your organization about. Amazon SES now supports subscription management, with unsubscribe links in the footers of your emails, and a custom landing page where customers can adjust their contact list preferences.

Removing unengaged users

Both Amazon Pinpoint and Amazon SES provide visibility into the success of outbound communications through open rates and click-through rates at the account or campaign level. If time passes with limited engagement from an individual customer (i.e., they do not open your emails or engage with the content), there is a risk that the recipient's mailbox provider will start marking your messages as spam. Work with your business to determine the period of time after which you should automatically remove unengaged users from your contact list. Many organizations remove customers from their contact lists after 60 to 90 days of non-engagement. If you need the ability to quickly query customers that are not engaging with your communications from your data store of choice, enable event stream data in Amazon Pinpoint or Amazon SES using Amazon Kinesis. Amazon Pinpoint also has a solution, the Digital User Engagement Events Stream Database, that creates a data store for that purpose.
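Enabling the event stream is a single API call in Amazon Pinpoint. The sketch below is illustrative; the stream and role ARNs are hypothetical placeholders you would create first:

import boto3

pinpoint = boto3.client('pinpoint')

# Stream engagement events (opens, clicks, and so on) to Kinesis so you can
# later query for unengaged users in your data store of choice.
pinpoint.put_event_stream(
    ApplicationId='[PINPOINT_PROJECT_ID]',
    WriteEventStream={
        'DestinationStreamArn': 'arn:aws:kinesis:us-east-1:123456789012:stream/pinpoint-events',
        'RoleArn': 'arn:aws:iam::123456789012:role/pinpoint-kinesis-role',
    },
)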

Amazon SES and Amazon Pinpoint both also have the concept of global and account-level suppression lists. The global suppression list is managed across AWS accounts, while account-level suppression lists are associated with your AWS account. If a customer explicitly unsubscribes from your list using Amazon SES list management, or complains through their inbox provider, they are automatically added to the respective account suppression list. Customers on a suppression list are no longer sent messages from your account. Respecting contact preferences like unsubscribe is an opportunity to earn trust with that specific customer, the market, and the recipient mailbox provider.
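You can also manage the account-level suppression list yourself through the SES v2 API. A brief sketch, with a placeholder address:

import boto3

sesv2 = boto3.client('sesv2')

# Add an address to the account-level suppression list after a complaint.
sesv2.put_suppressed_destination(
    EmailAddress='recipient@example.com',
    Reason='COMPLAINT',  # or 'BOUNCE'
)

# Remove it again if the customer later re-subscribes.
sesv2.delete_suppressed_destination(EmailAddress='recipient@example.com')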

Conclusion

There are a number of additional best practices to drive customer engagement across communications channels. They include message headlines, copy, graphics/images, and adaptive design across various endpoints and clients. However, nothing is as important as maintaining the trust of end customers with their contact information. Sourcing contact information with customer permission, respecting contact preferences, and maintaining list hygiene are the cornerstones of a successful customer communications program. Learn more today about Amazon Pinpoint, Amazon SES, and customer communications.

How Amazon Simple Email Service supported the growth of email in 2020

Post Syndicated from Simon Poile original https://aws.amazon.com/blogs/messaging-and-targeting/how-amazon-simple-email-service-supported-the-growth-of-email-in-2020/

Over the last 12 months, organizations of all types have increasingly needed to stay connected to their customers. With the move to virtual interactions accelerating across industries, email has remained a trusted channel for customer communications. Amazon Simple Email Service (SES) has seen record outbound email traffic in 2020, supporting critical customer communications during COVID and commercial moments like Black Friday and Cyber Monday.

The importance of email during COVID

Unlike real-time communications like voice or live chat, email is asynchronous. It can be read and consumed at the customer's leisure. In some geographies like North America, an email address also represents an individual's unique identity, persisting longer than mobile phone numbers or social networking accounts. Even with email's importance established before 2020, most organizations were careful to send only the right messages during the COVID crisis.

Many organizations voluntarily chose to decrease promotional or marketing email during the pandemic. This decrease in sending recognized the increased stress most individuals were facing in their personal lives. However, even with the drop in marketing emails across organizational types, there was an increased need to communicate and maintain customer engagement. Most organizations went through three distinct customer communication phases with email in 2020: React, Respond, and Reimagine.

  • React – These were the initial emails sent to acknowledge the COVID crisis, occurring early in 2020. These emails included messages reinforcing commitment to customer health, employee safety, or communicating new cleaning protocols.
  • Respond – These messages often included communication on the status of the business or event. Most businesses needed to communicate their transition to remote work, temporary closures, and the cancelation of in-person events.
  • Reimagine – Throughout the crisis, organizations were reimagining how to do business. Healthcare providers started operating video consultations, and restaurants shifted to pick-up/take-out only. Email communication was vital to take customers on the journey into this “new normal,” even as some businesses started to reopen.

To send these customer communications at scale, many organizations worked with Amazon SES.

How Amazon SES scaled and supported customers in 2020

Amazon SES saw several sending spikes that aligned with organizations working to communicate with their customers during COVID. Nine times in 2020, transactions per second (TPS) in Amazon SES exceeded 150% of the previous record, set on Black Friday 2019. TPS spiked above 150% of that record again on Black Friday and Cyber Monday 2020.

In addition to supporting those upsurges in throughput, the Amazon SES team also responded to customer feedback by increasing the global footprint of Amazon SES. Since January, Amazon SES has increased the total number of supported regions from 7 to 14, including the US government cloud. These additional regions were deployed during 2020 while the team worked remotely. This regional expansion enabled customers to adhere to local data sovereignty requirements for email sending while also improving performance.

Customers also told us they needed tools to help them manage compliance with important governance laws like CAN-SPAM and GDPR. Amazon SES released list management to help organizations manage their customers' contact information and preferences.

Looking forward

As we move into 2021, email will remain at the forefront of customer communication channels. Enterprise customers like Netflix and Duolingo rely on Amazon SES to deliver their email at scale. For more information on how you can use Amazon SES, visit our website.

Automate phone number validation with Amazon Pinpoint

Post Syndicated from Ilya Pupko original https://aws.amazon.com/blogs/messaging-and-targeting/automate-phone-number-validation-with-amazon-pinpoint/

Amazon Pinpoint allows you to engage with your customers across multiple messaging channels, such as SMS text, email, and voice messages. While planning and executing standard text (SMS) and voice-based campaigns, one of the challenges developers often run into is the need to verify whether the phone numbers in their internal database are valid and conform to the standard E.164 format. You can attempt to verify phone numbers manually, one at a time, but that is tedious. To overcome this issue, Amazon Pinpoint provides a phone number validation service that you can use to determine whether a phone number is valid, have it automatically formatted, and obtain additional information about the number itself. For example, when you use the phone number validation service, it returns the following information:

  • The phone number in E.164 format.
  • The phone number type (such as mobile, landline, or VoIP).
  • The city and country where the phone number is based.
  • The service provider that is associated with the phone number.

This blog post aims to provide a step-by-step implementation guide and the necessary code to enable an integrated solution for number verification.
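Before walking through the automated solution, it helps to see the underlying API. The following is a small sketch of a one-off validation call using the AWS SDK for Python (Boto3); the input number is a hypothetical placeholder:

import boto3

pinpoint = boto3.client('pinpoint')

response = pinpoint.phone_number_validate(
    NumberValidateRequest={
        'IsoCountryCode': 'US',         # optional hint for nationally formatted numbers
        'PhoneNumber': '206-555-0142',  # hypothetical input number
    }
)

result = response['NumberValidateResponse']
print(result.get('CleansedPhoneNumberE164'))  # e.g. +12065550142
print(result.get('PhoneType'))                # MOBILE, LANDLINE, VOIP, ...
print(result.get('City'), result.get('Country'))
print(result.get('Carrier'))

The solution below wraps this same call in an automated, file-driven workflow.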

Process flows and architecture


This solution uses Amazon Simple Storage Service (Amazon S3), Amazon Pinpoint, AWS Step Functions, Amazon Simple Notification Service (SNS), and AWS Lambda. To initiate the process, you upload your source contact file, in CSV format, to a dedicated Amazon S3 bucket. When the CSV file is uploaded, S3 triggers the processing workflow. Based on the optional configuration rules, the application code either runs the phone validation logic first or imports the contact information as-is into Amazon Pinpoint as a new imported segment, updating the overall Amazon Pinpoint audience information. If phone validation is enabled, the system first generates and saves a new output file to Amazon S3 with the validated phone numbers, metadata, and so on, and uses this updated contact information during the import. Optionally, the system can also kick off a scheduled campaign to all imported contacts.

This CloudFormation template will automatically create the following new resources on your first deploy:

  • AWS Lambda functions: These functions contain the application code that validates the phone numbers and creates the segment for the uploaded contacts.
  • S3 event notification: When the CSV file is uploaded to the S3 bucket, the S3 event notification triggers the AWS Lambda function, which initiates the AWS Step Functions state machine. To learn more about S3 event notifications, check the documentation.
  • AWS Step Functions: This solution will set up an infrastructure to automatically trigger when a new file is placed in an S3 bucket. The process, managed by an AWS Step Functions state machine, will start a Pinpoint import process, wait for it to complete, and send notifications that the job started, successfully finished, or failed.
  • IAM role: The IAM role is used to make Amazon Pinpoint calls, to access S3, and interact with AWS Step Functions and Amazon SNS. You can check the IAM documentation to learn more about IAM roles.

Prerequisites and deployment steps

Step 1: Set up the Amazon Pinpoint project and the S3 bucket

In Amazon Pinpoint, a project (also sometimes referred to as an “application”) is a collection of settings, customer information, segments, and campaigns. Setting up a Pinpoint project is the first step in deploying our solution; the project holds the segment we will use in later steps.

  1. Navigate to Amazon Pinpoint from the services tab in the AWS Management Console and create a new Amazon Pinpoint project.
  2. Copy the Project ID from the Amazon Pinpoint console and save it in notepad. You will need it later.

In Amazon S3, create a new bucket to upload the files to. Make sure it is set up according to your company's security practices. If you have an existing bucket you want to use instead, note that this solution requires the source bucket to be in the same region as the solution itself, and it will overwrite any event notifications already in place on the bucket.

Step 2: Deploy code and services

AWS CloudFormation is a service that gives developers and businesses an easy way to create a collection of related AWS and third-party resources and provision them in an orderly and predictable fashion.

  1. Download the latest version of the solution from https://github.com/aws-samples/digital-user-engagement-reference-architectures/blob/master/cloudformation/S3_triggered_import.yaml
  2. Log in to your AWS account and navigate to AWS CloudFormation from the services tab in the AWS Management Console: https://console.aws.amazon.com/cloudformation/home
  3. Click on the Create Stack button and choose to provision New Resources. Then select Upload a template file and choose the file you just downloaded in the first step.
  4. On the Specify stack details screen, most of the information is pre-populated. Parameters:
    · Replace the PinpointProjectID field with the value you saved in Step 1
    · ValidatePhone: Choose true if you wish to validate the numbers via the Pinpoint API before importing the segment.
    · AssumeUS: Choose true if you want to assume a US (+1) phone number for any phone number 10 digits long, or false if you want to import numbers as-is.
    · AutoCreateCampaign: Choose true if you want to automatically create a campaign based on the imported file or false if you want to just import into the system without automatically scheduling any campaigns. This setting will be saved as an ImportSegment Lambda environment variable so you can adjust it later.
    · CampaignDelay: Number of minutes from the time of import to the start of the campaign (if AutoCreateCampaign is set to true). Allows for a last-minute double check and/or pause as needed. This will be saved as a CreateCampaign Lambda environment variable.
    · FileDropS3Bucket: Name of the existing Amazon S3 Bucket where new import files will be placed. Note that it has to be in the same region as you are running this template and the bucket should not have any existing notification configurations or they will be overwritten.
    · FileDropS3Prefix: Prefix (sub-folder name) of the Amazon S3 Bucket where you will be uploading new files to be imported.
  5. Settings on the Configure stack options page are optional; click Next.

Select all acknowledgment boxes and click Create Stack. It takes a couple of minutes for AWS CloudFormation to deploy all the resources.

The solution is now deployed, and you can test it by uploading the sample CSV file to the Amazon S3 bucket. If you have validation enabled, you will notice that the output CSV file is created in the “results” folder of the same S3 bucket. You can also navigate to the Amazon Pinpoint console to check the Amazon Pinpoint segment. Once the deployment is complete and the segment is created, you can leverage Amazon Pinpoint campaigns to reach out to your customers.

Conclusion and Next Steps

Enabling solutions such as this provides an efficient and integrated mechanism to validate phone numbers and import customer contacts into Amazon Pinpoint. It saves time so that you can focus on creating effective campaigns to engage with your customers.

As the potential next steps, you can look into further expanding the solution by:

  1. Adjusting the default security of the Amazon S3 bucket by limiting who has access to new files. You can also adjust its encryption and the expiration of the files.
  2. Building out the lookup AWS Lambda function to fetch additional information about the contact from your other systems of record or third-party tools. You can also add business logic, such as blocking numbers from certain countries (or, conversely, allowing only certain countries).
  3. Adding more dynamic segments and new endpoint (or user) attributes to more easily track contacts based on their upload dates, type of phone number, and so on.
  4. Creating an interface your users can interact with when they need to upload files, instead of using the S3 console directly. This “interface” may even be a backend flow that integrates directly with your system of record, so users don't have to deal with any uploads in the first place.

For this, and some other reference architectures you could consider, see https://github.com/aws-samples/digital-user-engagement-reference-architectures.

References

Amazon Pinpoint

https://aws.amazon.com/pinpoint/

Validating phone numbers in Amazon Pinpoint

https://docs.aws.amazon.com/pinpoint/latest/developerguide/validate-phone-numbers.html

Amazon Pinpoint Campaigns

https://docs.aws.amazon.com/pinpoint/latest/userguide/campaigns.html

Pinpoint Segment

https://docs.aws.amazon.com/pinpoint/latest/userguide/tutorials-create-a-segment.html


Send voice appointment reminders using Amazon Pinpoint custom channels and Amazon Connect

Post Syndicated from Ryan Lowe original https://aws.amazon.com/blogs/messaging-and-targeting/send-voice-appointment-reminders-using-amazon-pinpoint-custom-channels-and-amazon-connect/

Introduction

In this post, we will walk through setting up an always-on appointment reminder campaign in Amazon Pinpoint. No-show rates are a constant challenge for service providers. Industries such as hospitality estimate that 20% of diners in big cities miss reservations (1), while salons average five missed appointments per week (2). Professional services such as financial institutions and sales teams have similar challenges in ensuring clients do not miss meetings. To these businesses, a missed appointment represents lost revenue. As a result, the no-show rate is a key metric to improve. An outbound voice message provides another way to reach customers beyond email or SMS, and voice reminders give customers a choice of channels based on personal preference.

Overview

Amazon Pinpoint is a multichannel communications service enabling customers to send both promotional and transactional messages across email, SMS, push notifications, voice, and custom channels. Amazon Connect is an easy to use omnichannel cloud contact center that helps companies provide superior customer service at a lower cost.

There are benefits of using these services together. Amazon Pinpoint allows you to build a segment of users which can be used within a campaign. Amazon Connect can enable customers to send outbound voice messages at scale should your user audience be large and require a high number of transactions per second (TPS).

To use these services together, you set up a custom channel in Amazon Pinpoint, which is implemented via an AWS Lambda function. These functions enable you to call APIs to trigger message sends as part of Amazon Pinpoint campaigns. The Amazon Pinpoint team has developed a new AWS Lambda function that can be used to send outbound voice messages via Amazon Connect. This configuration allows you to define the voice message to be sent, define the segment of users you would like to target, and send voice messages at scale through Amazon Connect via the Amazon Pinpoint custom channel.
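To make the mechanism concrete, here is a hedged sketch of the shape such a custom channel Lambda function might take. The packaged function deployed later in this post is more complete, and the environment variable names here are illustrative assumptions:

import os
import boto3

connect = boto3.client('connect')

def lambda_handler(event, context):
    # Amazon Pinpoint invokes a custom channel Lambda function with the
    # campaign's targeted endpoints in event['Endpoints'].
    for endpoint_id, endpoint in event.get('Endpoints', {}).items():
        connect.start_outbound_voice_contact(
            DestinationPhoneNumber=endpoint['Address'],
            ContactFlowId=os.environ['CONNECT_CONTACT_FLOW_ID'],
            InstanceId=os.environ['CONNECT_INSTANCE_ID'],
            QueueId=os.environ['CONNECT_QUEUE_ID'],
            # Contact attributes can be read by the contact flow to build
            # the spoken message.
            Attributes={'Message': 'This is a reminder of your upcoming appointment.'},
        )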

The audience for this solution is technical customers who are used to working with multiple AWS services and are familiar with AWS Lambda functions. The solution relies on the Amazon Pinpoint custom channel feature and targeting, along with the Amazon Connect outbound voice API, called via a prepared AWS Lambda function. Once completed, you will be able to create an evergreen campaign that sends outbound voice messages to your patients who have an appointment the following day.

The costs associated with this solution will be:

  1. Amazon Connect outbound voice calls per minute
  2. Amazon Connect claimed phone number(s)
  3. Amazon Pinpoint Monthly Targeted Audience (MTA) costs.

Consider an outbound voice reminder system that sends 10,000 messages per day, with an average length of 20 seconds per call, to a total monthly audience of 300,000 in the US; you can estimate its cost from the pricing for each of the items above. Note that prices vary for other countries. Complete Amazon Connect outbound call pricing can be found here.

Solution

Prerequisites:

For this walkthrough, the article assumes you have:

  • An AWS account
  • Basic understanding of IAM and the privileges required to create the following: IAM identity providers, roles, policies, and users
  • Basic understanding of Amazon Pinpoint and how to create a project
  • Basic understanding of Amazon Connect and experience in creating contact flows. More information on setup of Amazon Connect can be found here.

Step 1: Create an Appointment Reminder custom event

The first step in setting up this solution is to create and report a custom event to Amazon Pinpoint. There are multiple ways to report events in your application. For demonstration purposes, below is an example event call using the AWS SDK for Python (Boto3) from inside an AWS Lambda function.

It is important to note that the Amazon Pinpoint events API can also be used to update endpoints when the event is registered. In the example below, the API call updates the endpoint attributes AppointmentDate and AppointmentTime with the details of the upcoming appointment. These attributes will be used in the outgoing message to the end user.

Sample Event: Appointment Coming Up

import datetime
import time

import boto3

client = boto3.client('pinpoint')
app_id = '[PINPOINT_PROJECT_ID]'
endpoint_id = '[ENDPOINT_ID]'
address = '[PHONE_NUMBER]'

def lambda_handler(event, context):
    client.put_events(
        ApplicationId=app_id,
        EventsRequest={
            'BatchItem': {
                endpoint_id: {
                    'Endpoint': {
                        'ChannelType': 'CUSTOM',
                        'Address': address,
                        'Attributes': {
                            'AppointmentDate': ['December 15th, 2020'],
                            'AppointmentTime': ['2:15pm']
                        }
                    },
                    'Events': {
                        'appointment-event': {
                            'Attributes': {},
                            'EventType': 'AppointmentReminder',
                            'Timestamp': datetime.datetime.fromtimestamp(time.time()).isoformat()
                        }
                    }
                }
            }
        }
    )

NOTE: The following steps assume that the AppointmentReminder event is being reported to Amazon Pinpoint. If you are unable to integrate the above API call into your application, you can manually create an AWS Lambda function using a Python runtime with the above code to trigger sample events.

Step 2: Create an Amazon Connect contact flow for outbound calls

This article assumes that you have an Amazon Connect contact center already set up and working. In this step, we will set up our Amazon Connect contact flow to dial our recipients and read the message before hanging up.

  1. Log in to your Amazon Connect instance using your access URL (https://<alias>.awsapps.com/connect/login).
    Note: Replace alias with your instance’s alias.
  2. In the left navigation bar, pause on Routing, and then choose Contact flows.
  3. Under Contact flows, choose a template, or choose Create contact flow to design a contact flow from scratch. For more information, see Create a New Contact Flow.
  4. Download the sample JSON contact flow configuration file Outbound_calling.json.
  5. Choose the dropdown menu under Save and choose Import flow (beta).
  6. Select the Outbound_calling.json file in the Import flow (beta) dialog and choose Save.
  7. Choose Save to open the Save flow dialog. Then choose Save to close the dialog.
  8. Choose Publish to open the Publish dialog. Then choose Publish to close the dialog.
  9. In the contact flow designer, expand Show additional flow information.
  10. Under ARN, copy the Amazon Resource Name (ARN) of the contact flow. It looks like the following:
    arn:aws:connect:region:123456789012:instance/[ConnectInstanceId]/contact-flow/[ConnectContactFlowId]
    Note the ConnectInstanceId and ConnectContactFlowId from the ARN; they will be used in the next step.
  11. In the left navigation bar, pause on Routing and then choose Queues.
  12. Choose the queue you wish to use for the outbound calls.
  13. In the Edit queue screen, expand Show additional queue information.
  14. Under ARN, copy the Amazon Resource Name (ARN) of the queue. It looks like the following:
    arn:aws:connect:region:123456789012:instance/[ConnectInstanceId]/queue/[ConnectQueueId]
    Note the ConnectQueueId from the ARN. It will be used in the next step.

Step 3: Deploy and modify the Amazon Pinpoint to Amazon Connect custom channel AWS Lambda function

Next, we will need to deploy an Amazon Pinpoint custom channel. Custom channels in Amazon Pinpoint allow you to send messages through any service with an API, including Amazon Connect. The AWS Serverless Application Repository contains an open-sourced AWS Lambda function that we will use for our custom channel. After deploying the AWS Lambda function, we will customize it to match our requirements.

  1. Navigate to the AWS Lambda Console, then choose Create function.
  2. Under Create function, choose Browse serverless app repository.
  3. Under Public applications, choose the checkbox next to Show apps that create custom IAM roles or resource policies and enter amazon-pinpoint-connect-channel in the search box.
  4. Choose the amazon-pinpoint-connect-channel card from the list and review the Application details.
  5. Under Application settings enter the details for ConnectContactFlowId, ConnectInstanceId, and ConnectQueueId from the previous step.
  6. After reviewing all the details, choose the checkbox next to I acknowledge that this app creates custom IAM roles and resource policies and choose Deploy.
  7. Wait a couple of minutes for the application to deploy two AWS Lambda functions and an Amazon Simple Queue Service (SQS) queue.
  8. Under Resources, choose the PinpointConnectQueuerFunction resource to open the AWS Lambda function configuration. This is the AWS Lambda function that Amazon Pinpoint will call when the message is crafted.
  9. Under Function code, scroll down to line 31 and replace
    message = "Hello World! -Pinpoint Connect Channel"
    with
    message = "This is a reminder of your upcoming appointment on {0} at {1}".format(endpoint_profile["Attributes"]["AppointmentDate"][0], endpoint_profile["Attributes"]["AppointmentTime"][0])
  10. Choose Deploy.

Step 4: (Optional) Modify the custom channel AWS Lambda function to change the rate of outgoing calls

By default, the custom channel we deployed in the previous step places outbound calls through Amazon Connect at a rate of 1 call every 3 seconds. This throttling lets you control how many outbound calls are active at once so that you avoid running into service limits. Review your current Amazon Connect service limits for more details.

  1. Navigate to the AWS Lambda console, then choose the AmazonPinpointConnectChannel-backgroundprocessor function.
  2. Under Function code, scroll down to line 73 and adjust the sleep timer, currently set to 3 seconds, to match your requirements.
  3. Choose Deploy.

Step 5: Create a Pinpoint custom campaign with your Lambda function and segment

  1. Create a CSV file to import endpoints with the attributes of AppointmentDate and AppointmentTime.
    Example:
    Id,Address,ChannelType,Attributes.AppointmentDate,Attributes.AppointmentTime
    1,+1[PHONE_NUMBER],SMS,November 30 2020,9:00am
    2,+1[PHONE_NUMBER2],SMS,November 30 2020,10:00am
  2. Navigate to the Amazon Pinpoint console.
  3. In the All Projects list, select your project.
  4. In the navigation pane, choose Segments.
  5. Choose Create a Segment.
  6. Choose Import a segment and upload your CSV file and choose Create segment.
  7. In the navigation pane, choose Campaigns.
  8. Choose Create campaign.
  9. In the Create a campaign wizard, enter a name for the campaign.
  10. Under Channel choose Custom.
  11. Choose Next.
  12. On the Choose a segment screen, choose the segment created above, and choose Next.
  13. On the Create your message screen, do the following:
    a) For Lambda function, choose the AmazonPinpointConnectChannel function that we deployed in Step 3 above.
    b) For Endpoint options, choose SMS.
    c) Choose Next.
  14. On the Choose when to send the campaign screen, do the following:
    a) Choose When an event occurs.
    b) Under Events, choose the AppointmentReminder event.
    c) Under campaign dates, choose a Start date and time and an End date and time to be used as the campaign’s duration.
  15. Choose Next.
  16. Review the campaign details and choose Launch campaign.

Cleanup:

To avoid incurring further charges, remove the two AWS Lambda functions and the Amazon Simple Queue Service queue provisioned in the steps above by following these steps:

  1. Navigate to the AWS CloudFormation console.
  2. Choose serverlessrepo-amazon-pinpoint-connect-channel and choose Delete.
  3. Choose Delete stack in the delete confirmation window.


Next Steps:

You can continue to iterate on this experience using Amazon Pinpoint and Amazon Connect to create a custom user experience.

To learn more about these services, please visit the Amazon Pinpoint or Amazon Connect web pages.

(1) https://www.scisolutions.com/uploads/news/Missed-Appts-Cost-HMT-Article-042617.pdf

(2) https://blog.carbonfreedining.org/the-ultimate-guide-to-restaurant-no-shows

Auto-reply to incoming emails using Amazon Simple Email Service (SES)

Post Syndicated from Ilya Pupko original https://aws.amazon.com/blogs/messaging-and-targeting/auto-reply-to-incoming-emails-using-amazon-simple-email-service-ses/

Both Amazon Pinpoint and Amazon Simple Email Service (SES) are known for their ability to send transactional and promotional emails at scale and with ease. However, both are often not set up to receive email replies. Owners often assume that the “no-reply” addresses they use do not require much consideration. This means that if a customer does reply, they get an unhelpful server rejection indicating that the address is invalid. They also cannot unsubscribe via a simple reply, an otherwise established common practice. Automated guidance that the address is not monitored, and whom to reach for assistance, is never provided. In summary: a very unprofessional experience.

If you have full control over the DNS and are not already receiving email at the subdomain used for sending, you can follow this short guide. It walks you through all the setup needed to send automated, templated responses to any address at the domain, including the address you use to send emails. Follow this post to ensure that your Amazon SES and Amazon Pinpoint are set up in accordance with common configuration and business best practices, with a professional auto-reply for emails sent to your configured sending addresses.

Solution overview

The proposed solution does not rely on any additional services and adds no charges beyond the costs directly associated with receiving and sending the emails and the minimal AWS Lambda function providing the automated logic. It relies on the built-in capability of SES to receive emails, native Amazon Pinpoint templates, and Lambda for basic orchestration.

Figure: Lambda-based auto-reply flow

Note: in this walkthrough and related code, we use Amazon Pinpoint templates because they can be managed and maintained directly in the console, but you can choose to use SES templates (via the CreateTemplate API) or, if it makes better sense in your scenario, even hardcode the template into the AWS Lambda function itself.
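For orientation, the core of such a Lambda function might look like the following sketch: look up the Amazon Pinpoint template, then reply from the address the original email was sent to. The template name and the helper's parameters are illustrative assumptions, not the exact code from the repository:

import boto3

pinpoint = boto3.client('pinpoint')
ses = boto3.client('ses')

def send_auto_reply(original_from, original_to, original_subject):
    # Look up the Amazon Pinpoint email template by name (placeholder name).
    template = pinpoint.get_email_template(TemplateName='AutoReplyTemplate')
    html = template['EmailTemplateResponse']['HtmlPart']

    ses.send_email(
        Source=original_to,  # reply from the address the email was sent to
        Destination={'ToAddresses': [original_from]},
        Message={
            'Subject': {'Data': 'Re: ' + original_subject},
            'Body': {'Html': {'Data': html}},
        },
    )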

To complete the setup, all you must do is follow these steps:

      1. Confirm (Sub-) Domain setup in SES (even if you use Amazon Pinpoint to send your emails out, the SES portion of the console should show the validated domain as well). See SES Developer Guide.
      2. Ensure that your SES domain is verified and you are out of the sandbox. If still in the sandbox, you can only send emails to the Amazon SES mailbox simulator addresses and email addresses/domains that you have pre-verified. See Moving out of the Amazon SES sandbox.
      3. Configure SES to receive incoming emails. Please note that this must be done on the whole subdomain you use, not just a single email address. See Setting up Amazon SES email receiving.
      4. Create/add the new template you want to use via Amazon Pinpoint. Simply switch the console over to Amazon Pinpoint, select Message templates, click Create, select Email, and fill out the rest of the self-explanatory fields.
        1. The plaintext portion is optional – you can either skip it or fill it out and enable it in the Lambda function we are deploying in the next step.
        2. Similarly, if you prefer to use an SES template, you can; just use the associated line in that same code.
        3. Same with a hardcoded template, if you prefer that for some reason.
      5. Use this pre-defined CloudFormation template to create the required SES receipt rule and Lambda function. The function processes the incoming email and sends back the response, all using the code shared in the dedicated portion of our GitHub AWS Digital User Engagement Reference Architectures repository. Specifically:
          1. Download the YAML from SES_Auto_Reply.yaml.
          2. Go to CloudFormation in the AWS Management Console. (Remember to choose the region you want it deployed in.)
          3. Click Create Stack and then choose With new resources.
          4. Leave the default “Template is ready”, but switch to “Upload a template file” and choose the file you just downloaded.
          5. Follow the wizard to give the “stack” a name and enter the name of the template you created in step 4.
          6. Optionally, you can also set the default response address, the addresses and/or domains you want to limit the auto-response to, and the incoming email rule set it should be stored under (the default should be fine unless you have manually adjusted it in the past).
      6. Once deployed, the behavior is immediately active and you can further adjust any of these elements.


Conclusion and what’s next?

This architecture, once deployed, sends out the templated auto-response using the SES/Pinpoint domain/email address it received the original email on.

The new rule is added to the SES email receiving rule set to allow further customization:

  1. The rule can be limited to a specific email address or a specific domain, or be set to apply across all domains.
  2. It can also have the default response address set, or reuse the address that the original rejected email was sent to.
  3. It can be moved down in priority, with other rules taking precedence and possibly even overriding it.
  4. It can have other actions added to it, like notifying SNS for additional tracking.

The Lambda function looks up the chosen Amazon Pinpoint template and uses it to reply. Here are some of the customizations you may want to consider within this function and the template:

  1. When sending the automated reply, by default the original email's subject is appended to the template's configured subject. You can adjust this to fit your company's brand better.
  2. By default, the function supports two optional template tags, %%NAME%% and %%ID%%. If %%NAME%% appears in the template, it is automatically replaced with the original email's FROM address. If %%ID%% appears in the template, it is replaced with the SES message ID of the original email, to help with any required audits.
  3. It is assumed that no additional tracking and actions are needed on such rejected and auto-replied emails, but you can further modify the flow by moving the rule around and adding more actions (as mentioned above), and even specify a particular/different SES configuration set for the outgoing emails.

Are you using this flow as a baseline for a more complex business flow, or have other questions about it? We want to hear back – please comment here or file an issue in the GitHub repository. If you want to file a pull request to make it even more useful for others, please do so; we appreciate community participation.

If you liked this article, note that we are continually expanding our Amazon Pinpoint and SES architecture references and publishing new solutions for these and other services. For the most recent SES documentation, see the official SES documentation site, and for Amazon Pinpoint, see the Amazon Pinpoint documentation site.


Application integration patterns for microservices: Running distributed RFQs

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/application-integration-patterns-running-distributed-rfqs/

This post is courtesy of Dirk Fröhner, Principal Solutions Architect.

The first blog in this series introduces asynchronous messaging for building loosely coupled systems that can scale, operate, and evolve individually. It considers messaging as a communications model for microservices architectures. Part 2 dives into fan-out strategies and applies the respective patterns to a concrete use case.

In this post, I look at how to apply messaging patterns to help coordinate distributed requests and responses. Specifically, I focus on a composite pattern called scatter-gather, as presented in the book “Enterprise Integration Patterns: Designing, Building, and Deploying Messaging Solutions” (Hohpe and Woolf, 2004).

I also show how a client can communicate with a backend via synchronous REST API operations while asynchronous messaging is applied internally for processing.

Overview

The use case is for Wild Rydes, a fictional application that replaces traditional taxis with unicorns. It’s used in several hands-on AWS workshops that illustrate serverless development concepts.

Wild Rydes wants to allow customers to initiate requests for quotation (RFQs) for their rides. This allows unicorns to make special offers to potential customers within a defined schedule. A customer can send their ride details and ask for quotations from all unicorns that are within a certain vicinity. The customer can then choose the best offer.

Wild Rydes

The scatter-gather pattern

The scatter-gather pattern can be used to implement this use case on the server side. This pattern is ideal for requesting responses from multiple parties, then aggregating and processing that data.

As presented by Hohpe and Woolf, the scatter-gather pattern is a composite pattern that illustrates how to “broadcast a message to multiple recipients and re-aggregate the responses back into a single message”. The pattern is illustrated in the following diagram.

Scatter-gather architecture

The flow starts with the Requester to initiate the broadcast to all potential Responders. This can be architected in a loosely coupled manner using pub-sub messaging with Amazon SNS or Amazon MQ, as shown in this blog post.

All responders must send their answers somewhere for aggregation and processing. This can also be architected in a loosely coupled manner using a message queue with Amazon SQS or Amazon MQ, as described in this blog post.

The Aggregator component consumes the individual responses from the response queue. It forwards the aggregate to the Processor component for final processing. Both Aggregator and Processor can be part of the same application or process. If separated, they can be decoupled through messaging. The Requester can also be part of the same application or process as Aggregator and Processor.

Explaining the architecture and API

In this section, I walk through the use case and explain how it can be architected and implemented. I show how the scatter-gather pattern works in the backend, and the client-to-backend communication.

Submit instant ride RFQ

To initiate such an RFQ, the customer app communicates with the ride booking service on the backend. The ride booking service exposes a REST API. By default, an RFQ runs for five minutes, but Wild Rydes is working on a feature to let a customer individually set that value.

A request to submit an instant-ride RFQ contains start and destination locations for the ride and the customer ID:

POST /<submit-instant-ride-rfq-resource-path> HTTP/1.1
...

{
    "from": "...",
    "to": "...",
    "customer": "..."
}

The RFQ is a lengthy process so the client app should not expect an immediate response. Instead, the API accepts the RFQ, creates an RFQ task resource, and returns to the client. The response contains a URL to request an update for the status. It also provides an estimated time for the end of the RFQ:

HTTP/1.1 202 Accepted
...

{
    "links": {
        "self": "http://.../<rfq-task-resource-path>",
        "...": "..."
    },
    "status": "running",
    "eta": "..."
}

The following architecture shows this interaction, excluding the process after a new RFQ is submitted.

Client app interaction

Processing the RFQ

The backend uses the scatter-gather pattern to publish the RFQ to unicorns and collect responses for aggregation and processing.

Backend architecture

1. The ride booking service acts as the requester in the scatter-gather pattern. Following a new RFQ from the client app, it publishes the details into an SNS topic. This topic is related to the location of the ride’s starting point since customers need quotes from unicorns within the vicinity. These messages are the green request messages.

2. The unicorn management service maintains instances of unicorn management resources and subscribes them to RFQ topics related to their current location. These resources receive the RFQ request messages and handle the interaction with the Wild Rydes unicorn app.

3. The unicorns in the vicinity are notified through the Wild Rydes unicorn app about the new RFQ and can react if they are available. Notification options between the unicorn management service and the Wild Rydes unicorn app include push notifications and web sockets.

4. Every addressed unicorn can now submit their quote. All quotes go back through the unicorn management resources and the unicorn management service into the RFQ response queue. They act as the responders in the sense of the scatter-gather pattern.

5. The ride booking service also acts as aggregator and processor in the sense of the scatter-gather pattern. It uses SQS to consume messages from an RFQ response queue that eventually contains the RFQ responses from the involved unicorns. It starts doing so immediately after it publishes the details of a new RFQ into the RFQ topic. The messages from the RFQ response queue relate to the blue response messages.

The ride booking service consumes all incoming responses from that queue. This continues until the deadline passes or all participating unicorns have answered, whichever occurs first. The aggregator responsibility can be as simple as persisting the details of each incoming RFQ response into an Amazon DynamoDB table.

To match incoming responses to the right RFQ, it uses a fundamental integration pattern, correlation ID. In this pattern, a requester adds a unique ID to an outgoing message and each responder is asked to forward this ID in their response.

Also, responders must know where to send their responses to. To keep this dynamic, there is another fundamental integration pattern: return address. It suggests that a requester adds meta information into outgoing messages that indicate the address for their responses. In this architecture, this is the ARN of the SQS queue that acts as the RFQ response queue. This supports an option to simplify the response management: the RFQ response queue is a dedicated queue per customer.
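A hedged sketch of how the requester side might apply both patterns when publishing an RFQ: the correlation ID and return address travel as SNS message attributes. The topic and queue ARNs, attribute names, and message fields here are illustrative assumptions:

import json
import uuid
import boto3

sns = boto3.client('sns')

rfq_id = str(uuid.uuid4())  # correlation ID for this RFQ

sns.publish(
    TopicArn='arn:aws:sns:us-east-1:123456789012:rfq-location-topic',  # placeholder
    Message=json.dumps({'from': '...', 'to': '...', 'customer': '...'}),
    MessageAttributes={
        # Correlation ID: responders echo this back with every quote.
        'CorrelationId': {'DataType': 'String', 'StringValue': rfq_id},
        # Return address: where responders should send their quotes.
        'ReturnAddress': {
            'DataType': 'String',
            'StringValue': 'arn:aws:sqs:us-east-1:123456789012:rfq-responses',  # placeholder
        },
    },
)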

Lastly, the processor responsibility in the ride booking service reads the RFQ responses from the DynamoDB table. It converts the data to JSON for the Wild Rydes customer app.

Check RFQ status

During the RFQ processing, a customer may want to know how many responses have already arrived, or if the results are already available. After submitting an instant ride RFQ, the client receives a representation of the running task. It can use the self-link to request an update:

GET /<rfq-task-resource-path> HTTP/1.1

While the task is running, a response from the ride booking service comes back with the respective status value and the count of responses that have already arrived:

HTTP/1.1 200 OK
...

{
    "links": {
        "self": "http://.../<rfq-task-resource-path>",
        "...": "..."
    },
    "status": "running",
    "responses-received": 2,
    "eta": "..."
}

After the RFQ is completed

An RFQ is completed if either the time is up or all unicorns have answered. The result of the RFQ is then available to the customer. If the client requests an update to the task representation, the response indicates this by redirecting to the RFQ result:

HTTP/1.1 303 See Other
Location: <url-of-rfq-result-resource>

Requesting a representation of the results resource, the client receives the quotes of all the participating unicorns. The frontend customer app can visualize these accordingly:

HTTP/1.1 200 OK
...

{
    "links": { ... },
    "from": "...",
    "to": "...",
    "customer": "...",
    "quotes": [ ... ]
}

The ride booking service can also use means of active notifications to make the customer app aware once the RFQ result is ready, including the link to the RFQ result. Examples for this include push notifications and web sockets.

Conclusion

In this blog, I present the scatter-gather pattern, which is a composite pattern based on pub-sub and point-to-point messaging channels. It also employs correlation ID and return address. I show how this is implemented in the Wild Rydes example application. You can use this integration pattern for communication in your microservices.

I cover how synchronous API communication between end user client and backend can work along with asynchronous messaging for request processing internally.

For more serverless learning resources, visit https://serverlessland.com.

Introducing Amazon SNS FIFO – First-In-First-Out Pub/Sub Messaging

Post Syndicated from Danilo Poccia original https://aws.amazon.com/blogs/aws/introducing-amazon-sns-fifo-first-in-first-out-pub-sub-messaging/

When designing a distributed software architecture, it is important to define how services exchange information. For example, the use of asynchronous communication decouples components and simplifies scaling, reducing the impact of changes and making it easier to release new features.

The two most common forms of asynchronous service-to-service communication are message queues and publish/subscribe messaging:

  • With message queues, messages are stored on the queue until they are processed and deleted by a consumer. On AWS, Amazon Simple Queue Service (SQS) provides a fully managed message queuing service with no administrative overhead.
  • With pub/sub messaging, a message published to a topic is delivered to all subscribers to the topic. On AWS, Amazon Simple Notification Service (SNS) is a fully managed pub/sub messaging service that enables message delivery to a large number of subscribers. Each subscriber can also set a filter policy to receive only the messages that it cares about.

You can use topics when you want to fan out messages to multiple applications, and queues when you want to send messages to one application. Using topics and queues together, you can decouple microservices, distributed systems, and serverless applications.

With SQS, you can use FIFO (First-In-First-Out) queues to preserve the order in which messages are sent and received, and to avoid processing a message more than once.

Introducing SNS FIFO Topics
Today, we are adding similar capabilities for pub/sub messaging with the introduction of SNS FIFO topics, providing strict message ordering and deduplicated message delivery to one or more subscribers.

FIFO topics manage ordering and deduplication similar to FIFO queues:

Ordering – You configure a message group by including a message group ID when publishing a message to a FIFO topic. For each message group ID, all messages are sent and delivered in order of their arrival. For example, to ensure the delivery of messages related to the same customer in order, you can publish these messages to the topic using the customer's account number as the message group ID. There is no limit on the number of message groups with FIFO topics and queues. You don't need to declare the message group ID in advance; any value works. If you don't have a logical distinction between messages, you can simply use the same message group ID for all of them and have a single group of ordered messages. The message group ID is passed to any subscribed FIFO queue.

Deduplication – Distributed systems (like SNS) and client applications sometimes generate duplicate messages. You can avoid duplicated message deliveries from the topic in two ways: either by enabling content-based deduplication on the topic, or by adding a deduplication ID to the messages that you publish. With message content-based deduplication, SNS uses a SHA-256 hash to generate the message deduplication ID using the body of the message. After a message with a specific deduplication ID is published successfully, there is a 5-minute interval during which any message with the same deduplication ID is accepted but not delivered. If you subscribe a FIFO queue to a FIFO topic, the deduplication ID is passed to the queue and it is used by SQS to avoid duplicate messages being received.
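The console walkthrough below uses content-based deduplication, but both settings can also be supplied per message when publishing with the AWS SDK. A short sketch with Boto3; the topic ARN matches the updates.fifo topic created later in this post, and the group and deduplication IDs are illustrative:

import boto3

sns = boto3.client('sns')

sns.publish(
    TopicArn='arn:aws:sns:us-east-2:123412341234:updates.fifo',
    Message='Update One',
    MessageGroupId='customer-0001',        # ordering is preserved per message group
    MessageDeduplicationId='update-0001',  # optional if content-based deduplication is enabled
)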

You can use FIFO topics and queues together to simplify the implementation of applications where the order of operations and events is critical, or when you cannot tolerate duplicates. For example, to process financial operations and inventory updates, or to asynchronously apply commands that you receive from a client device. FIFO queues can use message filtering in FIFO topics to selectively receive only a subset of messages rather than every message published to the topic.

How to Use SNS FIFO Topics
A common scenario where FIFO topics can help is when you receive updates that need to be processed in order. For example, I can use a FIFO topic to receive updates from an application where my customers edit their account profiles. Then, I subscribe an SQS FIFO queue to the FIFO topic, and use the queue as a trigger for a Lambda function that applies the account updates to an Amazon DynamoDB table used by my Customer management system, which needs to be kept in sync.

The decoupling introduced by the FIFO topic makes it easier to add new functionality with minimal impact to existing applications. For example, to reward my loyal customers with additional promotions, I add a new Loyalty application that is storing information in a relational database managed by Amazon Aurora. To keep the customer’s information stored in the Loyalty database in sync with my other applications, I can subscribe a new FIFO queue to the same FIFO topic, and add a new Lambda function that receives customer updates in the same order as they are generated, and applies them to the Loyalty database. In this way, I don’t need to change code and configuration of other applications to integrate the new Loyalty app.

First, I create two FIFO queues in the SQS console, leaving all options to their defaults:

  • The customer.fifo queue to process updates in my Customer management system.
  • The loyalty.fifo queue to help me collect and store customer updates for the Loyalty application.

In the SNS console, I create the updates.fifo topic. I select FIFO as the type, and enable Content-based message deduplication.

Then, I subscribe the customer.fifo and loyalty.fifo queues to the topic.

To be able to receive messages, I add a statement to the access policy of both queues granting the updates.fifo topic permissions to send messages to the queues. For example, for the customer.fifo queue the statement is:

{
  "Effect": "Allow",
  "Principal": {
    "Service": "sns.amazonaws.com"
  },
  "Action": "SQS:SendMessage",
  "Resource": "arn:aws:sqs:us-east-2:123412341234:customer.fifo",
  "Condition": {
    "ArnLike": {
      "aws:SourceArn": "arn:aws:sns:us-east-2:123412341234:updates.fifo"
    }
  }
}

Now, I use the SNS console to publish 4 messages in sequence. For all messages, I use the same message group ID. In this way, they are all in the same message group. The only part that is different is the message body, where I use in order:

  • Update One
  • Update Two
  • Update Three
  • Update One

In the SQS console, I see that only 3 messages have been delivered to the FIFO queues:

Why is that? When I created the FIFO topic, I enabled content-based deduplication. The 4 messages were sent within the 5-minute deduplication window. The last message was recognized as a duplicate of the first one and was not delivered to the subscribed queues.

Let’s see the actual messages in the queues. I use the AWS Command Line Interface (CLI) to receive the messages from SQS, and the jq command-line JSON processor to format the output and get only the Message in the Body.

Here are the messages in the customer.fifo queue:

$ aws sqs receive-message --queue-url https://sqs.us-east-2.amazonaws.com/123412341234/customer.fifo --max-number-of-messages 10 | jq '.Messages[].Body | fromjson | .Message'

"Update One"
"Update Two"
"Update Three"

And these are the messages in the loyalty.fifo queue:

$ aws sqs receive-message --queue-url https://sqs.us-east-2.amazonaws.com/123412341234/loyalty.fifo --max-number-of-messages 10 | jq '.Messages[].Body | fromjson | .Message'

"Update One"
"Update Two"
"Update Three"

As expected, the 3 messages with unique content have been delivered to both queues in the same order as they were sent.

Available Now
You can use SNS FIFO topics in all commercial regions. You can process up to 300 transactions per second (TPS) per FIFO topic or FIFO queue. With SNS, you pay only for what you use; you can find more information on the pricing page.

To learn more, please see the documentation.

Danilo

Analyze and improve email campaigns with Amazon Simple Email Service and Amazon QuickSight

Post Syndicated from Apoorv Gakhar original https://aws.amazon.com/blogs/messaging-and-targeting/analyze-and-improve-email-campaigns-with-amazon-simple-email-service-and-amazon-quicksight/

Email is a popular channel for applications, used in both marketing campaigns and other outbound customer communications. The challenge is that email can become increasingly complex to manage for companies that send large quantities of messages each month. This is especially true when companies also need to measure detailed email engagement metrics to track campaign success.

As a marketer, you want to monitor several metrics, including open rates, click-through rates, bounce rates, and delivery rates. If you do not track your email results, you could potentially be wasting your campaign resources. Monitoring and interpreting your sending results can help you deliver the best content possible to your subscribers’ inboxes, and it can also ensure that your IP reputation stays high. Mailbox providers prioritize inbox placement for senders that deliver relevant content. As a business professional, tracking your emails can also help you stay on top of hot leads and important clients. For example, if someone has opened your email multiple times in one day, it might be a good idea to send out another follow-up email to touch base.

Building a large-scale email solution is a complex and expensive challenge for any business: you would need to build infrastructure, assemble your network, and warm up your IP addresses. Alternatively, working with some third-party email solutions requires contract negotiations and upfront costs.

Fortunately, Amazon Simple Email Service (SES) has a highly scalable and reliable backend infrastructure that reduces these challenges. It has improved content filtering techniques, reputation management features, and a vast array of analytics and reporting functions. These features help email senders reach their audiences and make it easier to manage email channels across applications. Amazon SES also provides API operations for monitoring your sending activity through simple API calls. You can publish these events to Amazon CloudWatch or Amazon Kinesis Data Firehose, or receive them by using Amazon Simple Notification Service (SNS).

In this post, you learn how to build and automate a serverless architecture that analyzes email events. We explore how to track important metrics such as open and click rate of the emails.

Solution overview


The metrics that you can measure using Amazon SES are referred to as email sending events. You can use Amazon CloudWatch to retrieve Amazon SES event data, or use Amazon SNS to receive notifications about it. However, in this post, we use Amazon Kinesis Data Firehose to monitor our sending activity.

First, you enable an Amazon SES configuration set with open and click metrics, and publish the email sending events to Amazon Kinesis Data Firehose as JSON records. A Lambda function parses the JSON records and publishes the content to an Amazon S3 bucket.

Ingested data lands in an Amazon S3 bucket that we refer to as the raw zone. To make that data available for queries, you have to catalog its schema in the AWS Glue Data Catalog. You create and run an AWS Glue crawler that crawls your data sources and constructs your Data Catalog. The Data Catalog uses pre-built classifiers for many popular source formats and data types, including JSON, CSV, and Parquet.

When the crawler has finished creating the table definition and schema, you analyze the data using Amazon Athena, an interactive query service that makes it easy to analyze data in Amazon S3 using SQL. Point Athena to your data in Amazon S3, define the schema, and start querying using standard SQL, with most results delivered in seconds.

Now you can build visualizations, perform ad hoc analysis, and quickly get business insights from the Amazon SES event data using Amazon QuickSight. You can easily run SQL queries using Amazon Athena on data stored in Amazon S3, and build business dashboards within Amazon QuickSight.


Deploying the architecture:

Configuring Amazon Kinesis Data Firehose to write to Amazon S3:

  1. Navigate to Amazon Kinesis in the AWS Management Console. Choose Kinesis Data Firehose and create a delivery stream.
  2. Enter the delivery stream name as “SES_Firehose_Demo”.
  3. Under the source category, select “Direct Put or other sources”.
  4. On the next page, make sure to enable Data Transformation of source records with AWS Lambda. We use AWS Lambda to parse the notification contents so that we process only the information required for the use case.
  5. Click “Create New” to create a new Lambda function.
  6. Click on the “General Kinesis Data Firehose Processing” Lambda blueprint; this opens the Lambda console. Enter the following values:
    • Name: SES-Firehose-Json-Parser
    • Execution role: Create a new role with basic Lambda permissions.
  7. Click “Create Function”. Now replace the Lambda code with the following code and save the function.
    • 'use strict';
      console.log('Loading function');

      exports.handler = (event, context, callback) => {
          /* Process the list of records and transform them */
          const output = event.records.map((record) => {
              // Each record's data is the base64-encoded SES event JSON
              const payload = JSON.parse(Buffer.from(record.data, 'base64').toString());

              // Keep only the fields needed for this use case; click events carry
              // the IP address in payload.click, open events in payload.open
              const source = (payload.eventType === 'Click') ? payload.click : payload.open;
              const result = {
                  eventType: payload.eventType,
                  destinationEmailId: payload.mail.destination[0],
                  sourceIp: source.ipAddress,
              };

              // Return the transformed record, re-encoded as base64 for Firehose
              return {
                  recordId: record.recordId,
                  result: 'Ok',
                  data: Buffer.from(JSON.stringify(result)).toString('base64'),
              };
          });
          console.log(`Processing completed. Successful records: ${output.length}.`);
          callback(null, { records: output });
      };

      Please note:

      For this blog, we extract only three fields: eventType, destinationEmailId, and sourceIp. If you want to store other parameters, you can modify the code accordingly. For the list of information that we receive in event notifications, see the following document:

      https://docs.aws.amazon.com/ses/latest/DeveloperGuide/event-publishing-retrieving-firehose-examples.html
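
      If you want to test the function in isolation before connecting the delivery stream, you can use a hand-built test event. The following is a sketch, not official sample data: it builds a minimal SES open event (fields assumed for illustration) and wraps it in the records structure that Kinesis Data Firehose passes to transformation functions.

      const sesEvent = {
          eventType: 'Open',
          mail: { destination: ['recipient@example.com'] },
          open: { ipAddress: '192.0.2.1' }
      };
      // Firehose delivers each record's payload base64-encoded in a "data" field
      const testEvent = {
          records: [{
              recordId: '1',
              data: Buffer.from(JSON.stringify(sesEvent)).toString('base64')
          }]
      };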

  8. Now, navigate back to your Amazon Kinesis Data Firehose console and choose the newly created Lambda function.
  9. Keep record format conversion disabled and click “Next”.
  10. In the destination, choose Amazon S3 and select a target Amazon S3 bucket. Create a new bucket if you do not want to use the existing bucket.
  11. Enter the following values for the Amazon S3 prefix and error prefix. These prefixes determine where event data is written when it is published.
    • Prefix:
      fhbase/year=!{timestamp:yyyy}/month=!{timestamp:MM}/day=!{timestamp:dd}/hour=!{timestamp:HH}/
    • Error Prefix:
      fherroroutputbase/!{firehose:random-string}/!{firehose:error-output-type}/!{timestamp:yyyy/MM/dd}/
  12. You may use the above values for the Amazon S3 prefix and error prefix. If you use your own prefixes, make sure to update the target values accordingly in AWS Glue, which you configure later in this process.
  13. Keep the Amazon S3 backup option disabled and click “Next”.
  14. On the next page, under the Permissions section, select “Create a new role”. This opens a new tab; click “Allow” to create the role.
  15. Navigate back to the Amazon Kinesis Data Firehose console and click “Next”.
  16. Review the changes and click on “Create delivery stream”.

Configure Amazon SES to publish event data to Kinesis Data Firehose:

  1. Navigate to the Amazon SES console and select “Email Addresses” in the left panel.
  2. Click on “Verify a New Email Address” at the top, and enter the email address from which you will send the test email.
  3. Go to your email inbox and click on the verification link. Navigate back to the Amazon SES console, and you will see a verified status on the email address you provided.
  4. Open the Amazon SES console and select “Configuration Sets” in the left panel.
  5. Create a new configuration set. Enter “SES_Firehose_Demo” as the configuration set name and click “Create”.
  6. Choose Kinesis Data Firehose as the destination and provide the following details.
    • Name: OpenClick
    • Event Types: Open and Click
  7. In the IAM Role field, select ‘Let SES make a new role’. This allows SES to create a new role and add sufficient permissions for this use case in that role.
  8. Click “Save”.

Sending a Test email:

  1. Navigate to Amazon SES console, click on “Email Addresses” on the left side.
  2. Select your verified email address and click on “Send a Test email”.
  3. Make sure you select the raw email format. You may use the following format to send out a test email from the console. Make sure you send this email to a recipient inbox to which you have access.
    • X-SES-CONFIGURATION-SET: SES_Firehose_Demo
      X-SES-MESSAGE-TAGS: Email=NULL
      From: [email protected]
      To: [email protected]
      Subject: Test email
      Content-Type: multipart/alternative;
          		boundary="----=_boundary"
      
      ------=_boundary
      Content-Type: text/html; charset=UTF-8
      Content-Transfer-Encoding: 7bit
      This is a test email.
      
      <a href="https://aws.amazon.com/">Amazon Web Services</a>
      ------=_boundary
  4. Once the email is received in the recipient’s inbox, open it and click the link in the message. This generates open and click events and sends the responses back to SES.

Creating Glue Crawler:

  1. Navigate to the AWS Glue console, select “Crawlers” in the left panel, and then click on “Add crawler” at the top.
  2. Enter the crawler name as “SES_Firehose_Crawler” and click “Next”.
  3. Under Crawler source type, select “Data stores” and click “Next”.
  4. Select Amazon S3 as the data source and provide the required path. Include the path up to the “fhbase” folder.
  5. Select “No” under the Add another data source section.
  6. In the IAM role section, select the option to “Create an IAM role”. Enter the name as “SES_Firehose-Crawler”. This automatically grants the necessary permissions to the newly created role.
  7. In the frequency section, select “Run on demand” and click “Next”. You may choose a different frequency based on your use case.
  8. Click on “Add database” and provide the name “ses_firehose_glue_db”. Click on “Create” and then click “Next”.
  9. Review your Glue crawler settings and click on “Finish”.
  10. Run the crawler you just created. It crawls the data from the specified Amazon S3 bucket and creates a catalog and table definition.
  11. Now navigate to “Tables” on the left, and verify that a “fhbase” table was created after the crawler ran.

If you want to analyze the data stored so far, you can use Amazon Athena and test some queries. If not, you can move on to Amazon QuickSight directly.

Analyzing the data using Amazon Athena:

  1. Open the Athena console and select the database created using AWS Glue.
  2. Click on “set up a query result location in Amazon S3”.
  3. Navigate to the Amazon S3 bucket created in the earlier steps and create a folder called “AthenaQueryResult”. We store our Athena query results in this bucket.
  4. Now navigate back to Amazon Athena, select the Amazon S3 bucket with the folder location, and click “Save”.
  5. Run the following query to test the sample output, then modify your SQL query to get the desired output.
    • SELECT * FROM "ses_firehose_glue_db"."fhbase"

Note: If you want to track opened emails by unique IP addresses, you can modify your SQL query accordingly. This is because every time an email is opened you receive a notification, even if the same email was opened before.
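
For example, the following query counts distinct opener IP addresses per recipient. This is a sketch that assumes the crawler lowercased the JSON field names (eventType becomes eventtype, and so on); adjust the column names to match your table schema.

SELECT destinationemailid, COUNT(DISTINCT sourceip) AS unique_open_ips
FROM "ses_firehose_glue_db"."fhbase"
WHERE eventtype = 'Open'
GROUP BY destinationemailid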


Visualizing the data in Amazon QuickSight dashboards:

  1. Now, let’s analyze this data using Amazon Athena via Amazon QuickSight.
  2. Log in to Amazon QuickSight and choose Manage data, New dataset. Choose Amazon Athena as the new data source.
  3. Enter the data source name as “SES-Demo” and click on “Create the data source”.
  4. Select your database “ses_firehose_glue_db” from the drop-down and the “fhbase” table that you created in AWS Glue.
  5. Add a custom SQL query based on your use case and click on “Confirm query”.
  6. You can perform ad hoc analysis and modify your query according to your business needs. Click “Save & Visualize”.
  7. You can now visualize your event data on an Amazon QuickSight dashboard. You can use various graphs to represent your data. For this demo, the default graph is used, with two fields selected to populate the graph.


Conclusion:

This architecture shows how to track your email sending activity at a granular level. You set up Amazon SES to publish event data to Amazon Kinesis Data Firehose based on fine-grained email characteristics that you define. You can also track several types of email sending events, including sends, deliveries, bounces, complaints, rejections, rendering failures, and delivery delays. This information can be useful for operational and analytical purposes.

To get started with Amazon SES, follow this quick start guide and you can learn more about monitoring sending activity here.

About the Authors

Chirag Oswal is a solutions architect and AR/VR specialist working with the public sector India. He works with AWS customers to help them adopt the cloud operating model on a large scale.

Apoorv Gakhar is a Cloud Support Engineer and an Amazon SES Expert. He is working with AWS to help the customers integrate their applications with various AWS Services.

 

Additional Resources:

Amazon SES Dedicated IP Pools

Amazon Personalize optimizer using Amazon Pinpoint events

Template Personalization using Amazon Pinpoint

 

 

Building resilient serverless patterns by combining messaging services

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/building-resilient-no-code-serverless-patterns-by-combining-messaging-services/

In “Choosing between messaging services for serverless applications”, I explain the features and differences between the core AWS messaging services. Amazon SQS, Amazon SNS, and Amazon EventBridge provide queues, publish/subscribe, and event bus functionality for your applications. Individually, these are robust, scalable services that are fundamental building blocks of serverless architectures.

However, you can also combine these services to solve specific challenges in distributed architectures. By doing this, you can use specific features of each service to build sophisticated patterns with little code. These combinations can make your applications more resilient and scalable, and reduce the amount of custom logic and architecture in your workload.

In this blog post, I highlight several important patterns for serverless developers. I also show how you use and deploy these integrations with the AWS Serverless Application Model (AWS SAM).

Examples in this post refer to code that can be downloaded from this GitHub repo. The README.md file explains how to deploy and run each example.

SNS to SQS: Adding resilience and throttling to message throughput

SNS has a robust retry policy that results in up to 100,010 delivery attempts over 23 days. If a downstream service is unavailable, it may be overwhelmed by retries when it comes back online. You can solve this issue by adding an SQS queue.

Adding an SQS queue between the SNS topic and its subscriber has two benefits. First, it adds resilience to message delivery, since the messages are durably stored in a queue. Second, it throttles the rate of messages to the consumer, helping smooth out traffic bursts caused by the service catching up with missed messages.

To build this in an AWS SAM template, you first define the two resources, and the SNS subscription:

  MySqsQueue:
    Type: AWS::SQS::Queue

  MySnsTopic:
    Type: AWS::SNS::Topic
    Properties:
      Subscription:
        - Protocol: sqs
          Endpoint: !GetAtt MySqsQueue.Arn

Finally, you provide permission to the SNS topic to publish to the queue, using the AWS::SQS::QueuePolicy resource:

  SnsToSqsPolicy:
    Type: AWS::SQS::QueuePolicy
    Properties:
      PolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Sid: "Allow SNS publish to SQS"
            Effect: Allow
            Principal: "*"
            Resource: !GetAtt MySqsQueue.Arn
            Action: SQS:SendMessage
            Condition:
              ArnEquals:
                aws:SourceArn: !Ref MySnsTopic
      Queues:
        - Ref: MySqsQueue

To test this, you can publish a message to the SNS topic and then inspect the SQS queue length using the AWS CLI:

aws sns publish --topic-arn "arn:aws:sns:us-east-1:123456789012:sns-sqs-MySnsTopic-ABC123ABC" --message "Test message"
aws sqs get-queue-attributes --queue-url "https://sqs.us-east-1.amazonaws.com/123456789012/sns-sqs-MySqsQueue-ABC123ABC" --attribute-names ApproximateNumberOfMessages

This results in the following output:

CLI output

Another usage of this pattern is when you want to filter messages in architectures using an SQS queue. By placing the SNS topic in front of the queue, you can use the message filtering capabilities of SNS. This ensures that only the messages you need are published to the queue. To use message filtering in AWS SAM, use the AWS::SNS::Subscription resource:

  QueueSubscription:
    Type: 'AWS::SNS::Subscription'
    Properties:
      TopicArn: !Ref MySnsTopic
      Endpoint: !GetAtt MySqsQueue.Arn
      Protocol: sqs
      FilterPolicy:
        type:
        - orders
        - payments 
      RawMessageDelivery: 'true'
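
With this filter policy in place, only messages published with a matching type attribute reach the queue. A hedged CLI sketch of a publish that would pass the filter (the topic ARN is a placeholder):

aws sns publish --topic-arn arn:aws:sns:us-east-1:123456789012:MySnsTopic --message "New order received" --message-attributes '{"type":{"DataType":"String","StringValue":"orders"}}'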

EventBridge to SNS: combining features of both services

SNS and EventBridge have different characteristics in terms of targets and integration with broader AWS features. This table compares the major differences between the two services:

Feature | Amazon SNS | Amazon EventBridge
Number of targets | 10 million (soft) | 5
Limits | 100,000 topics. 12,500,000 subscriptions per topic. | 100 event buses. 300 rules per event bus.
Input transformation | No | Yes – see details.
Message filtering | Yes – see details. | Yes, including IP address matching – see details.
Format | Raw or JSON | JSON
Receive events from AWS CloudTrail | No | Yes
Targets | HTTP(S), SMS, SNS Mobile Push, Email/Email-JSON, SQS, Lambda functions | 15 targets, including AWS Lambda, Amazon SQS, Amazon SNS, AWS Step Functions, Amazon Kinesis Data Streams, and Amazon Kinesis Data Firehose
SaaS integration | No | Yes – see integration partners.
Schema Registry integration | No | Yes – see details.
Dead-letter queues supported | Yes | No
Public visibility | Can create public topics | Cannot create public buses
Cross-Region | You can subscribe your AWS Lambda functions to an Amazon SNS topic in any Region. | Targets must be in the same Region. You can publish across Regions to another event bus.

In this pattern, you configure an SNS topic as a target of an EventBridge rule:

SNS topic as a target for an EventBridge rule

In the AWS SAM template, you declare the resources in the preceding diagram as follows:

Resources:
  MySnsTopic:
    Type: AWS::SNS::Topic

  EventRule: 
    Type: AWS::Events::Rule
    Properties: 
      Description: "EventRule"
      EventPattern: 
        account: 
          - !Sub '${AWS::AccountId}'
        source:
          - "demo.cli"
      Targets: 
        - Arn: !Ref MySnsTopic
          Id: "SNStopic"

The default bus already exists in every AWS account, so there is no need to declare it. For the event bus to publish matching events to the SNS topic, you define permissions using the AWS::SNS::TopicPolicy resource:

  EventBridgeToSnsPolicy:
    Type: AWS::SNS::TopicPolicy
    Properties: 
      PolicyDocument:
        Statement:
        - Effect: Allow
          Principal:
            Service: events.amazonaws.com
          Action: sns:Publish
          Resource: !Ref MySnsTopic
      Topics: 
        - !Ref MySnsTopic       
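
To test the rule, you can put a matching event on the default event bus from the AWS CLI. This sketch assumes the demo.cli source defined in the rule above:

aws events put-events --entries '[{"Source":"demo.cli","DetailType":"Test","Detail":"{\"message\":\"hello\"}"}]'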

EventBridge has a limit of five targets per rule. In cases where you must send events to hundreds or thousands of targets, publishing to SNS first and then subscribing those targets to the topic works around this limit. The two services also support different target types, and this pattern allows you to deliver EventBridge events to SMS, HTTP(S), email, and SNS mobile push.

You can transform and filter the message using these services, often without needing an AWS Lambda function. SNS does not support input transformation but you can do this in an EventBridge rule. Message filtering is possible in both services but EventBridge provides richer content filtering capabilities.

AWS CloudTrail can log and monitor activity across services in your AWS account. It can be a useful source for events, allowing you to respond dynamically to objects in Amazon S3 or react to changes in your environment, for example. This natively integrates with EventBridge, allowing you to ingest events at scale from dozens of services.

Using EventBridge enables you to source events from outside your AWS account, offering integrations with a list of software as a service (SaaS) providers. This capability allows you to receive events from your accounts with SaaS providers like Zendesk, PagerDuty, and Auth0. These events are delivered to a partner event bus in your account, and can then be filtered and routed to an SNS topic.

Additionally, this pattern allows you to deliver events to Lambda functions in other AWS accounts and in other AWS Regions. You can invoke Lambda from SNS topics in other Regions and accounts. It’s also possible to make SNS topics publicly read-only, making them extensible endpoints that other third parties can consume from. SNS has comprehensive access control, which you can incorporate into this pattern.

Cross-account publishing

EventBridge to SQS: Building fault-tolerant microservices

EventBridge can route events to targets such as microservices. In the case of downstream failures, the service retries events for up to 24 hours. For workloads where you need a longer period of time to store and retry messages, you can deliver the events to an SQS queue in each microservice. This durably stores those events until the downstream service recovers. Additionally, this pattern protects the microservice from large bursts of traffic by throttling the delivery of messages.

Fault-tolerant microservices architecture

The resources declared in the AWS SAM template are similar to the previous examples, but it uses the AWS::SQS::QueuePolicy resource to grant the appropriate permission to EventBridge:

  EventBridgeToSqsPolicy:
    Type: AWS::SQS::QueuePolicy
    Properties:
      PolicyDocument:
        Statement:
        - Effect: Allow
          Principal:
            Service: events.amazonaws.com
          Action: SQS:SendMessage
          Resource:  !GetAtt MySqsQueue.Arn
      Queues:
        - Ref: MySqsQueue
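
The rule itself is declared as in the earlier example, with the queue ARN as a target. A minimal sketch, assuming the same demo.cli source:

  EventRule:
    Type: AWS::Events::Rule
    Properties:
      EventPattern:
        source:
          - "demo.cli"
      Targets:
        - Arn: !GetAtt MySqsQueue.Arn
          Id: "SQSTarget"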

Conclusion

You can combine these services in your architectures to implement patterns that solve complex challenges, often with little code required. This blog post shows three examples that implement message throttling and queuing, integrate SNS and EventBridge, and build fault-tolerant microservices.

To learn more about building decoupled architectures, see this Learning Path series on EventBridge. For more serverless learning resources, visit https://serverlessland.com.

Choosing between messaging services for serverless applications

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/choosing-between-messaging-services-for-serverless-applications/

Most serverless application architectures use a combination of different AWS services, microservices, and AWS Lambda functions. Messaging services are important in allowing distributed applications to communicate with each other, and are fundamental to most production serverless workloads.

Messaging services can improve the resilience, availability, and scalability of applications, when used appropriately. They can also enable your applications to communicate beyond your workload or even the AWS Cloud, and provide extensibility for future service features and versions.

In this blog post, I compare the primary messaging services offered by AWS and how you can use these in your serverless application architectures. I also show how you use and deploy these integrations with the AWS Serverless Application Model (AWS SAM).

Examples in this post refer to code that can be downloaded from this GitHub repository. The README.md file explains how to deploy and run each example.

Overview

Three of the most useful messaging patterns for serverless developers are queues, publish/subscribe, and event buses. In AWS, these are provided by Amazon SQS, Amazon SNS, and Amazon EventBridge respectively. All of these services are fully managed and highly available, so there is no infrastructure to manage. All three integrate with Lambda, allowing you to publish messages via the AWS SDK and invoke functions as targets. Each of these services has an important role to play in serverless architectures.

SNS enables you to send messages reliably between parts of your infrastructure. It uses a robust retry mechanism for when downstream targets are unavailable. When the delivery policy is exhausted, it can optionally send those messages to a dead-letter queue for further processing. SNS uses topics to logically separate messages into channels, and your Lambda functions interact with these topics.

SQS provides queues for your serverless applications. You can use a queue to send, store, and receive messages between different services in your workload. Queues are an important mechanism for providing fault tolerance in distributed systems, and help decouple different parts of your application. SQS scales elastically, and there is no limit to the number of messages per queue. The service durably persists messages until they are processed by a downstream consumer.

EventBridge is a serverless event bus service, simplifying routing events between AWS services, software as a service (SaaS) providers, and your own applications. It logically separates routing using event buses, and you implement the routing logic using rules. You can filter and transform incoming messages at the service level, and route events to multiple targets, including Lambda functions.

Integrating an SQS queue with AWS SAM

The first example shows an AWS SAM template defining a serverless application with two Lambda functions and an SQS queue:

Producer-consumer example

You can declare an SQS queue in an AWS SAM template with the AWS::SQS::Queue resource:

  MySqsQueue:
    Type: AWS::SQS::Queue

To publish to the queue, the publisher function must have permission to send messages. Using an AWS SAM policy template, you can apply a policy that enables sending messages to one specific queue:

      Policies:
        - SQSSendMessagePolicy:
            QueueName: !GetAtt MySqsQueue.QueueName

The AWS SAM template passes the queue URL into the Lambda function as an environment variable (named SQSqueueName in this example). The function uses the sendMessage method of the AWS.SQS class to publish the message:

const AWS = require('aws-sdk')
AWS.config.region = process.env.AWS_REGION 
const sqs = new AWS.SQS({apiVersion: '2012-11-05'})

// The Lambda handler
exports.handler = async (event) => {
  // Params object for SQS
  const params = {
    MessageBody: `Message at ${Date()}`,
    QueueUrl: process.env.SQSqueueName
  }
  
  // Send to SQS
  const result = await sqs.sendMessage(params).promise()
  console.log(result)
}

When the SQS queue receives the message, it publishes to the consuming Lambda function. To configure this integration in AWS SAM, the consumer function is granted the SQSPollerPolicy policy. The function’s event source is set to receive messages from the queue in batches of 10:

  QueueConsumerFunction:
    Type: AWS::Serverless::Function 
    Properties:
      CodeUri: code/
      Handler: consumer.handler
      Runtime: nodejs12.x
      Timeout: 3
      MemorySize: 128
      Policies:  
        - SQSPollerPolicy:
            QueueName: !GetAtt MySqsQueue.QueueName
      Events:
        MySQSEvent:
          Type: SQS
          Properties:
            Queue: !GetAtt MySqsQueue.Arn
            BatchSize: 10

The payload for the consumer function is the message from SQS. This is an array of messages up to the batch size, containing a body attribute with the publishing function’s MessageBody. You can see this in the CloudWatch log for the function:

CloudWatch log result
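
For reference, a minimal consumer handler that logs each message body might look like the following sketch (not the repo's exact code):

exports.handler = async (event) => {
  // Each invocation receives up to BatchSize records from the queue
  event.Records.forEach((record) => console.log('Received:', record.body))
}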

Integrating an SNS topic with AWS SAM

The second example shows an AWS SAM template defining a serverless application with three Lambda functions and an SNS topic:

SNS fanout to Lambda functions

You declare an SNS topic and the subscribing Lambda functions with the AWS::SNS::Topic resource:

  MySnsTopic:
    Type: AWS::SNS::Topic
    Properties:
      Subscription:
        - Protocol: lambda
          Endpoint: !GetAtt TopicConsumerFunction1.Arn    
        - Protocol: lambda
          Endpoint: !GetAtt TopicConsumerFunction2.Arn

You provide the SNS service with permission to invoke the Lambda functions by defining an AWS::Lambda::Permission for each:

  TopicConsumerFunction1Permission:
    Type: 'AWS::Lambda::Permission'
    Properties:
      Action: 'lambda:InvokeFunction'
      FunctionName: !Ref TopicConsumerFunction1
      Principal: sns.amazonaws.com
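
As written, this permission allows any SNS topic to invoke the function. To scope it to this specific topic, the AWS::Lambda::Permission resource also accepts a SourceArn property; a sketch:

  TopicConsumerFunction1Permission:
    Type: 'AWS::Lambda::Permission'
    Properties:
      Action: 'lambda:InvokeFunction'
      FunctionName: !Ref TopicConsumerFunction1
      Principal: sns.amazonaws.com
      SourceArn: !Ref MySnsTopic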

The SNSPublishMessagePolicy policy template grants permission to the publishing function to send messages to the topic. In the function, the publish method of the AWS.SNS class handles publishing:

const AWS = require('aws-sdk')
AWS.config.region = process.env.AWS_REGION 
const sns = new AWS.SNS({apiVersion: '2010-03-31'})

// The Lambda handler
exports.handler = async (event) => {
  // Params object for SNS
  const params = {
    Message: `Message at ${Date()}`,
    Subject: 'New message from publisher',
    TopicArn: process.env.SNStopic
  }
  
  // Send to SNS
  const result = await sns.publish(params).promise()
  console.log(result)
}

The payload for the consumer functions is the message from SNS. This is an array of messages, containing subject and message attributes from the publishing function. You can see this in the CloudWatch log for the function:

CloudWatch log result
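
A minimal subscriber handler, as a sketch (not the repo's exact code); SNS wraps each delivery in an Sns attribute:

exports.handler = async (event) => {
  // SNS delivers a single record per invocation
  event.Records.forEach((record) => console.log(record.Sns.Subject, '-', record.Sns.Message))
}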

Differences between SQS and SNS configurations

SQS queues and SNS topics offer different functionality, though both can publish to downstream Lambda functions.

An SQS message is stored on the queue for up to 14 days until it is successfully processed by a subscriber. SNS does not retain messages so if there are no subscribers for a topic, the message is discarded.

SNS topics may broadcast to multiple targets. This behavior is called fan-out. It can be used to parallelize work across Lambda functions or send messages to multiple environments (such as test or development). An SNS topic can have up to 12,500,000 subscribers, providing highly scalable fan-out capabilities. The targets may include HTTP/S endpoints, SMS text messaging, SNS mobile push, email, SQS, and Lambda functions.

In AWS SAM templates, you can retrieve properties such as ARNs and names of queues and topics, using the following intrinsic functions:

Property | Amazon SQS | Amazon SNS
Channel type | Queue | Topic
Get ARN | !GetAtt MySqsQueue.Arn | !Ref MySnsTopic
Get name | !GetAtt MySqsQueue.QueueName | !GetAtt MySnsTopic.TopicName

Integrating with EventBridge in AWS SAM

The third example shows the AWS SAM template defining a serverless application with two Lambda functions and an EventBridge rule:

EventBridge integration with AWS SAM

The default event bus already exists in every AWS account. You declare a rule that filters events in the event bus using the AWS::Events::Rule resource:

  EventRule: 
    Type: AWS::Events::Rule
    Properties: 
      Description: "EventRule"
      EventPattern: 
        source: 
          - "demo.event"
        detail: 
          state: 
            - "new"
      State: "ENABLED"
      Targets: 
        - Arn: !GetAtt EventConsumerFunction.Arn
          Id: "ConsumerTarget"

The rule describes an event pattern specifying matching JSON attributes. Events that match this pattern are routed to the list of targets. You provide the EventBridge service with permission to invoke the Lambda functions in the target list:

  PermissionForEventsToInvokeLambda: 
    Type: AWS::Lambda::Permission
    Properties: 
      FunctionName: 
        Ref: "EventConsumerFunction"
      Action: "lambda:InvokeFunction"
      Principal: "events.amazonaws.com"
      SourceArn: !GetAtt EventRule.Arn

The AWS SAM template uses an IAM policy statement to grant permission to the publishing function to put events on the event bus:

  EventPublisherFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: code/
      Handler: publisher.handler
      Timeout: 3
      Runtime: nodejs12.x
      Policies:
        - Statement:
          - Effect: Allow
            Resource: '*'
            Action:
              - events:PutEvents      

The publishing function then uses the putEvents method of the AWS.EventBridge class, which returns after the events have been durably stored in EventBridge:

const AWS = require('aws-sdk')
AWS.config.region = process.env.AWS_REGION
const eventbridge = new AWS.EventBridge()

exports.handler = async (event) => {
  const params = {
    Entries: [ 
      {
        Detail: JSON.stringify({
          "message": "Hello from publisher",
          "state": "new"
        }),
        DetailType: 'Message',
        EventBusName: 'default',
        Source: 'demo.event',
        Time: new Date 
      }
    ]
  }
  const result = await eventbridge.putEvents(params).promise()
  console.log(result)
}
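
One detail worth noting: putEvents is not all-or-nothing, so a sketch of production code would also check the response for partially failed entries:

// After calling putEvents: entries that failed carry an ErrorCode
if (result.FailedEntryCount > 0) {
  const failed = result.Entries.filter((entry) => entry.ErrorCode)
  console.error('Failed entries:', JSON.stringify(failed))
}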

The payload for the consumer function is the event from EventBridge. Unlike the SQS and SNS examples, this is a single event object, containing the detail attributes from the publishing function. You can see this in the CloudWatch log for the function:

CloudWatch log result
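
A minimal consumer handler, as a sketch (not the repo's exact code):

exports.handler = async (event) => {
  // source, detail-type, and detail come from the publishing function
  console.log(event.source, event['detail-type'], event.detail.message)
}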

Comparing SNS with EventBridge

SNS and EventBridge have many similarities. Both can be used to decouple publishers and subscribers, filter messages or events, and provide fan-in or fan-out capabilities. However, there are differences in the list of targets and features for each service, and your choice of service depends on the needs of your use case.

EventBridge offers two newer capabilities that are not available in SNS. The first is software as a service (SaaS) integration. This enables you to authorize supported SaaS providers to send events directly from their EventBridge event bus to partner event buses in your account. This replaces the need for polling or webhook configuration, and creates a highly scalable way to ingest SaaS events directly into your AWS account.

The second feature is the Schema Registry, which makes it easier to discover and manage OpenAPI schemas for events. EventBridge can infer schemas based on events routed through an event bus by using schema discovery. This can be used to generate code bindings directly to your IDE for type-safe languages like Python, Java, and TypeScript. This can help accelerate development by automating the generation of classes and code directly from events.

This table compares the major features of both services:

Feature | Amazon SNS | Amazon EventBridge
Number of targets | 10 million (soft) | 5
Availability SLA | 99.9% | 99.99%
Limits | 100,000 topics. 12,500,000 subscriptions per topic. | 100 event buses. 300 rules per event bus.
Publish throughput | Varies by Region. Soft limits. | Varies by Region. Soft limits.
Input transformation | No | Yes – see details.
Message filtering | Yes – see details. | Yes, including IP address matching – see details.
Message size maximum | 256 KB | 256 KB
Billing | Per 64 KB | Per 64 KB
Format | Raw or JSON | JSON
Receive events from AWS CloudTrail | No | Yes
Targets | HTTP(S), SMS, SNS Mobile Push, Email/Email-JSON, SQS, Lambda functions. | 15 targets, including AWS Lambda, Amazon SQS, Amazon SNS, AWS Step Functions, Amazon Kinesis Data Streams, and Amazon Kinesis Data Firehose.
SaaS integration | No | Yes – see integrations.
Schema Registry integration | No | Yes – see details.
Dead-letter queues supported | Yes | No
FIFO ordering available | No | No
Public visibility | Can create public topics | Cannot create public buses
Pricing | $0.50/million requests + variable delivery cost + data transfer out cost. SMS varies. | $1.00/million events. Free for AWS events. No charge for delivery.
Billable request size | 1 request = 64 KB | 1 event = 64 KB
AWS Free Tier eligible | Yes | No
Cross-Region | You can subscribe your AWS Lambda functions to an Amazon SNS topic in any Region. | Targets must be in the same Region. You can publish across Regions to another event bus.
Retry policy | For SQS/Lambda, exponential backoff over 23 days. For SMTP, SMS, and mobile push, exponential backoff over 6 hours. | At-least-once event delivery to targets, including retry with exponential backoff for up to 24 hours.

Conclusion

Messaging is an important part of serverless applications and AWS services provide queues, publish/subscribe, and event routing capabilities. This post reviews the main features of SNS, SQS, and EventBridge and how they provide different capabilities for your workloads.

I show three example applications that publish and consume events from the three services. I walk through AWS SAM syntax for deploying these resources in your applications. Finally, I compare differences between the services.

To learn more about building decoupled architectures, see this Learning Path series on EventBridge. For more serverless learning resources, visit https://serverlessland.com.

Send real-time alerts using Amazon Pinpoint

Post Syndicated from Dhiraj Thakur original https://aws.amazon.com/blogs/messaging-and-targeting/send-real-time-alerts-using-amazon-pinpoint/

Businesses need to send real-time notifications in order to take action when alerted of a critical situation. Examples could include anomaly detection, healthcare emergencies, operations failures, and fraud transactions. Email, SMS, and push notifications are often used to notify stakeholders in real-time. However, building a large-scale, real-time notification solution can be a complex and costly challenge for a business.

Amazon Pinpoint enables you to engage with your stakeholders in real-time by sending email, SMS and push notifications. Your app can use the Amazon Pinpoint API and the AWS SDKs to send direct messages. With transactional messages, you send alerts to specific recipients, as opposed to messages that you send to segments. There is no minimum fee, no setup cost, and no fixed monthly cost with Amazon Pinpoint.

In this blog, we explore a solution that notifies stakeholders of a large customer transaction. This requires immediate attention, as our stakeholders want to ensure that there is enough inventory available to deliver goods without delay.

Solution Overview

The solution that we build to handle this use case can be deployed in one hour. The following diagram illustrates the AWS services integrated in this solution:

At a high level, the solution uses the following workflow:

1. Define the large-value transaction threshold in a rule table in Amazon DynamoDB.
2. Set up Amazon Pinpoint and configure it to send email and SMS.
3. Set up AWS Lambda and implement the logic to send SMS and email when a customer places a large-value order transaction.
4. Create a test transaction in an order table in Amazon DynamoDB.
5. Check the details of the SMS and email received.

Setting up the solution

Step 1: Set up Amazon Pinpoint

The first step in setting up this solution is to create a new Amazon Pinpoint project and configure the SMS and Email channel.
1. Navigate to Services -> Pinpoint.
2. Click on “Create a project”.
3. Provide a name and click on the “Create” button.
4. Select Email in the left panel and click on “Edit”.
5. Select “Enable the email channel for this project”. Select “Verify a new email address” and provide a default sender address. Click on “Verify email address”, then click on “Save”. You should receive a verification email; verify the address by clicking the verification link.

Once your email is verified, it should look like this:

6. Repeat the same steps to verify the receiver email address as well.

Please note: By enabling the email channel, we can send up to 200 emails per day in sandbox mode. We must verify an email address or domain identity before we can send any email. While we remain in the sandbox, we must also verify all recipient email addresses before we send to them; this does not apply in production.

7. Select “SMS and voice” in the left panel and select “Enable the SMS channel for this project”. Make sure “Transactional” is selected for critical or time-sensitive messages; Amazon Pinpoint optimizes the delivery of these messages for highest reliability.

Please note: We do not need to verify any phone numbers for this channel. However, we can request dedicated codes (either short or long) for our own exclusive use; otherwise, AWS automatically allocates codes when we send SMS.

8. Note down the project ID.

We have done all the configurations in Amazon Pinpoint. Now, it’s time to setup Amazon DynamoDB tables.

Step 2: Set up Amazon DynamoDB table

1. Navigate to Services -> DynamoDB.
2. Click “Create table” and create a table named “transaction_alert_rule” (this is the rule table referenced later during testing).
3. Create a record similar to this:

This table stores your definition of a large-value order transaction.

{
    "transaction_type": "premium",
    "min_transaction_value": 10000,
    "max_transaction_value": 50000,
    "send_notification": "",
    "unit": "$",
    "email": "[email protected]",
    "phone": "+91123456789"
}

4. Create the order_detail table. Provide order_id as the partition key. You can select “String” as the data type.
5. Now we must enable a stream on this table. Once we enable a stream, Amazon DynamoDB captures information about every modification to data items in the table. We will integrate this table with AWS Lambda to validate the order value. Click on “Manage Stream”.
6. Select the appropriate view type and click on “Enable”.

Step 3: Set up AWS Lambda

In this step, we create an AWS Lambda function and then integrate it with the order_detail table. After, we will check the order and send a notification if large enough.

1. Navigate to Services -> Lambda.
2. Click on “Create function”.
3. Select Python 3.6 as the runtime, and select an execution role. Make sure that your role grants read access to the Amazon DynamoDB table.
4. Click on “Add Trigger”.
5. Select DynamoDB from the drop-down, then select the order_detail table. Keep everything else at its default. This integrates the order_detail stream with our AWS Lambda function.

Our Lambda function is invoked every time a transaction happens in order_detail table.

6. Now copy the source code.

The AWS Lambda code is available on GitHub here: https://github.com/aws-samples/send-real-time-notification-using-amazon-pinpoint-samples 
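
As a rough illustration of what the function does (the repository has the actual implementation), the following sketch sends a transactional SMS through Amazon Pinpoint with boto3. The project ID, phone number, and message body are placeholders:

import boto3

pinpoint = boto3.client('pinpoint')

def send_alert(project_id, phone_number, body):
    # Send a transactional SMS through the Amazon Pinpoint project
    pinpoint.send_messages(
        ApplicationId=project_id,
        MessageRequest={
            'Addresses': {phone_number: {'ChannelType': 'SMS'}},
            'MessageConfiguration': {
                'SMSMessage': {'Body': body, 'MessageType': 'TRANSACTIONAL'}
            }
        }
    )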

Step 4: Testing

1. That’s it! We have built the solution. Now it’s time to do an end-to-end test by creating an order transaction in the order_detail table.

2. Run the Amazon DynamoDB put-item command to create a new item that exceeds our threshold, or replace an old item with a new one. In our case, it is a new order transaction in the order_detail table. Make sure you have configured the AWS CLI; you can refer to the quickstart guide to learn more.

aws dynamodb put-item --table-name order_detail --item '{"order_id":{"S":"O0001"},"customer_id":{"S":"C0001"},"customer_name":{"S":"JOHN MILLER"},"order_date":{"S":"2020-06-13T17:42:34Z"},"item_id":{"S":"P0001"},"item_quantity":{"N":"12"},"order_value":{"N":"20000"},"unit":{"S":"$"},"delivery_date":{"S":"2020-06-20"}}' --return-consumed-capacity TOTAL

3. It returns a success message similar to this:

4. The order value of $20,000 falls within the range of our large-value order transaction definition (between $10,000 and $50,000). You should receive an SMS on the mobile number you provided in the transaction_alert_rule table.

5. You will also receive an email as per configuration in the transaction_alert_rule table.

Step 5: Clean up

You’ve now successfully built our real-time notification solution using Amazon Pinpoint. Delete the resources you created in this blog, such as the Amazon DynamoDB tables, to avoid ongoing charges. You can use the AWS CLI, the AWS Management Console, or the AWS APIs to perform the cleanup.

Conclusion

Customers can use Amazon Pinpoint to help scale communications across use cases, including real-time notifications. Amazon Pinpoint is a flexible and scalable outbound and inbound marketing communications service. You can connect with customers and stakeholders over channels like email, SMS, push, or voice.

 

Serverless Stream-Based Processing for Real-Time Insights

Post Syndicated from Justin Pirtle original https://aws.amazon.com/blogs/architecture/serverless-stream-based-processing-for-real-time-insights/

Building on our previous posts regarding messaging patterns and queue-based processing, we now explore stream-based processing and how it helps you achieve low-latency, near real-time data processing in your applications. AWS offers two managed services for streaming, Amazon Kinesis and Amazon Managed Streaming for Apache Kafka (Amazon MSK).

What is streaming data?

At AWS, we define streaming data as data that is emitted at high volume in a continuous, incremental manner with the goal of low-latency processing. Whereas traditional batch-oriented business intelligence would offer insights in retrospect after months, days, or hours have passed, stream-based processing can offer actionable insights in real time. Stream-based processing is commonly used to respond to clickstream events, rapidly ingest various types of logs, and extract, transform, and load (ETL) data in real-time into data lakes and data warehouses.

Amazon Kinesis is the AWS service that makes it easy to collect, process, and analyze such real-time, streaming data with four different capabilities:

  • Kinesis Data Streams
  • Kinesis Data Firehose
  • Kinesis Data Analytics
  • Kinesis Video Streams

For this blog post, we focus on Kinesis Data Streams and Kinesis Data Firehose, since both of these services are foundational for streaming, ingestion, buffering, and processing in your streaming data pipeline.

Kinesis Data Streams

Amazon Kinesis Data Streams is a massively scalable service that can continuously capture gigabytes of data per second from hundreds of thousands of sources. Like many distributed systems, Kinesis Data Streams achieves this level of scalability by partitioning or sharding your data where records are simultaneously written to and read from different shards in parallel. All Kinesis Data Streams require allocation of at least one shard and you choose how many shards you want to allocate to a given stream.

When writing to a shard in a Kinesis Data Stream, each shard supports ingestion of up to 1 MB of data per second or 1,000 records written per second. When reading from a shard, each shard supports output of 2 MB of data per second. You choose an initial number of shards to allocate for your Kinesis Data Stream, then can update your shard allocation over time. Increasing your shard allocation enables your application to easily scale from thousands of records to millions of records written per second.

Producing streaming data

Streaming data producers are processes that put records onto a Kinesis stream by calling the putRecord API to write a single record, or the putRecords API to write multiple records in a single invocation. Common approaches for producing messages include direct use of AWS tools (a producer sketch follows this list):

  • AWS SDK, which simplifies authentication and other semantics of invoking AWS service APIs
  • Amazon Kinesis Agent, which enables local file/log monitoring and rotation sending in real time
  • Amazon Kinesis Producer Library, which simplifies aggregating records into larger payloads to improve throughput.
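
As a quick illustration of the putRecord API, the following Node.js sketch writes a single record; the stream name and payload are assumptions for the example:

const AWS = require('aws-sdk')
const kinesis = new AWS.Kinesis()

exports.handler = async () => {
  const result = await kinesis.putRecord({
    StreamName: 'my-data-stream',
    PartitionKey: 'customer-123', // determines the destination shard
    Data: JSON.stringify({ action: 'click', at: Date.now() })
  }).promise()
  console.log(result.ShardId, result.SequenceNumber)
}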

Additionally, several AWS services natively integrate with Amazon Kinesis as a data producer:

There are also several third-party services that offer native integration as data producers, including:

Regardless of the producer service or tool of choice, all data producers put records onto a stream by providing a partition key, a stream name, and the data itself, which altogether must not exceed 1 MB in size. The partition key is used to determine which shard the data is written to on the stream. Amazon Kinesis Data Streams offers ordering guarantees and maintains message ordering within a given shard in a stream, using sequence numbers to track the unique position of each message sent.

Consuming streaming data

Once records are written to a Kinesis Data Stream, they are buffered in their respective shards for consumption. Unlike queue-based processing, the records are buffered until the data retention period set on the stream elapses, enabling one or more consumers to replay all the messages in the shards of the stream. If your application must deliver your records to a data lake, data warehouse, Elasticsearch Service cluster, or Splunk, Kinesis Data Firehose can natively deliver your records to the following destinations without needing to write any custom code:

  • Amazon S3
  • Amazon Redshift
  • Amazon Elasticsearch Service
  • Splunk

You simply indicate the desired delivery destination and configure how to batch and deliver the messages. Kinesis Data Firehose can also apply your desired Amazon S3 object naming, Amazon Redshift table name, Amazon Elasticsearch index name, and more.

For custom processing or destinations outside of the Amazon Kinesis Data Firehose supported services above, you need to write and run custom code to consume data from the stream. Though you can use the Kinesis Client Library (KCL) to run your own custom processing application on persistent virtual machines or container instances, AWS Lambda offers serverless computing with native event source integration with Amazon Kinesis Data Streams. As a stream consumer, AWS Lambda takes care of the operational overhead of reading shards, maintaining record order, checkpointing as records are processed, and parallelizing processing.

Serverless stream processing with AWS Lambda

When configured with a Kinesis Stream as its event source, AWS Lambda continuously polls every shard in your stream at no extra charge and only invokes your Lambda code if and when there are messages in the stream. It additionally scales up the number of concurrent executions to parallelize reading all shards of a stream at the same time (and can have multiple executions reading the same shard simultaneously for a higher parallelization factor, if desired). AWS Lambda automatically checkpoints which records were successfully processed and handles retries and any failures automatically according to your desired configuration.
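
In an AWS SAM template, this event source mapping is a few lines of configuration. A minimal sketch, assuming a stream declared elsewhere in the template as MyDataStream:

  StreamConsumerFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: code/
      Handler: consumer.handler
      Runtime: nodejs12.x
      Events:
        MyKinesisEvent:
          Type: Kinesis
          Properties:
            Stream: !GetAtt MyDataStream.Arn
            StartingPosition: LATEST
            BatchSize: 100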

Best of all, there is no additional cost of the Lambda service handling all of these operational needs for you. You only pay for compute time when your function is invoked and messages are available on the stream for processing. You’re able to focus on processing your data with your business logic directly in your code since your records are sent as an array to your Lambda code. There is no additional code to author/manage regarding checkpointing, shard splits/merges, or other complexities.

Conclusion

In this blog, we defined streaming data and explored the Amazon Kinesis service and its various capabilities. We then reviewed the various options available for producing and consuming real-time streaming data with Amazon Kinesis, including using AWS Lambda for serverless streaming data processing. Please refer to the following resources for further learning on AWS streaming data processing:

Application Integration Using Queues and Messages

Post Syndicated from Mithun Mallick original https://aws.amazon.com/blogs/architecture/application-integration-using-queues-and-messages/

In previous blog posts in this messaging series, we provided an overview of messaging and explained the common characteristics to consider when evaluating messaging channel technologies. In this post, we explain some of the semantics of queue-based processing, its use in designing flexible systems, and how to apply it to your use cases. AWS offers two queue-based services: Amazon Simple Queue Service (SQS) and Amazon MQ. We focus on SQS in this blog.

Building Blocks

In the digital world, even the most basic design of web-based systems requires the use of queues to integrate applications. SQS is a secure, serverless, durable, and highly available message queue service. It provides a simple REST API to create a queue as well as send, receive, and delete messages.

Producing Messages

Message producers are processes that call the SendMessage APIs of SQS. SQS supports two types of queues: standard and first-in-first-out (FIFO). Standard queues provide best-effort ordering, while FIFO queues provide first-in-first-out ordering. Messages can be sent either as single messages or in batches. Standard queues can support unlimited throughput by adding as many concurrent producers as needed, whereas FIFO queues can support up to 300 TPS without batching and 3,000 TPS with batching.

app integration- producing messages

In terms of message delivery, SQS standard supports “at-least-once” delivery, meaning that messages will be delivered at least once but occasionally, more than one copy of the message will be delivered. SQS FIFO provides “exactly once” delivery, meaning it can detect duplicate messages.

Consuming Messages

Message consumers are processes that make the ReceiveMessage API call on SQS. Messages from queues can be processed either in batch or as a single message at a time. Each approach has its advantages and disadvantages.

  • Batch processing: Use this approach where each message can be processed independently, so that an error on a single message does not need to disrupt the entire batch. Batch processing provides the most throughput, and also makes the best use of the resources involved in reading messages.
  • Single message processing: This approach is commonly used in scenarios where each message may trigger multiple processes within the consumer. In case of errors, the retry is confined to the single message.

To maintain the order of processing, FIFO queues are typically consumed by a single process. Messages across message groups can still be processed in parallel. However, the overall throughput limit will still apply for FIFO queues.

SQS supports long and short polling on queues. Short polling samples a subset of servers and returns any messages found, or an empty response if those servers hold no messages. Long polling queries all servers and returns as soon as there are messages. Long polling can reduce cost by reducing the number of calls that return empty responses, as in the example below. Get more details and sample code for sending and receiving messages.
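
For example, the following CLI call long polls a queue, waiting up to 20 seconds for a message to arrive before returning (the queue URL is a placeholder):

aws sqs receive-message --queue-url https://sqs.us-east-1.amazonaws.com/123456789012/my-queue --wait-time-seconds 20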

It’s simple to poll for messages, but it carries the overhead of running a process continuously, and in some situations such processes can be hard to monitor and troubleshoot when errors occur. SQS integration with AWS Lambda moves the undifferentiated heavy lifting of the polling logic into a Lambda-managed agent.

app integration- consuming messages

Get more details for SQS as an event source for Lambda in this blog post: AWS Lambda Adds Amazon Simple Queue Service to Supported Event Sources. This is supported for both standard as well as FIFO queues.

Benefits

Queue-based processing can protect backend systems, like relational databases, from unexpected surges in front-end traffic. You can decouple the processing logic by using an intermediate queue.

Consider a scenario that involves a web application to order products related to a TV show. It’s possible that the ordering and payment systems rely on a traditional relational database. During peak show seasons, queues can be used to control the traffic to the ordering and payment systems. In the following diagram, the web application’s requests are staged in the queue, while the backend consumes them at a rate the database can sustain.

Error Handling

Asynchronous message processing presents unique challenges for error handling, because errors often manifest only as log entries on the producer or consumer and as growing backlogs in processing. To handle errors gracefully, first determine the nature of the error. If the error is transient, it may help to retry after a brief delay, and eventually move the message to a dead-letter queue if repeated attempts fail. The other scenario is when the data itself is bad and the error will not resolve, even after repeated attempts. In such cases, the consumer process must make this determination and move the message to an error queue.
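
As a sketch of the dead-letter queue wiring described above, the following boto3 snippet attaches a redrive policy to a queue; the ARN, URL, and maxReceiveCount value are illustrative assumptions:

import json
import boto3

sqs = boto3.client("sqs")
dlq_arn = "arn:aws:sqs:us-east-1:123456789012:orders-dlq"  # placeholder
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/orders-queue"  # placeholder

# After 5 failed receives, SQS moves the message to the dead-letter queue
sqs.set_queue_attributes(
    QueueUrl=queue_url,
    Attributes={
        "RedrivePolicy": json.dumps(
            {"deadLetterTargetArn": dlq_arn, "maxReceiveCount": "5"}
        )
    },
)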

In some cases, due to its highly distributed architecture, a standard SQS queue can deliver a duplicate message, which can result in duplicate-key errors for the consumer. The best way to mitigate this is to make the consumers idempotent, so that processing the same message multiple times produces the same result.
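
One possible way to make a consumer idempotent is to record processed message IDs with a conditional write, sketched here with boto3 and a hypothetical DynamoDB table:

import boto3
from botocore.exceptions import ClientError

dynamodb = boto3.client("dynamodb")
TABLE = "processed-messages"  # hypothetical table keyed on message_id

def process_once(message_id, body):
    try:
        # The conditional put fails if this message ID was already recorded
        dynamodb.put_item(
            TableName=TABLE,
            Item={"message_id": {"S": message_id}},
            ConditionExpression="attribute_not_exists(message_id)",
        )
    except ClientError as err:
        if err.response["Error"]["Code"] == "ConditionalCheckFailedException":
            return  # duplicate delivery; skip reprocessing
        raise
    handle_business_logic(body)  # hypothetical downstream processing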

Conclusion

In this blog, we explained the importance of messaging in building distributed applications, and covered various aspects of queue-based processing with SQS, such as sending and receiving messages, FIFO versus standard queues, and common error handling scenarios. We also covered using SQS as an event source for AWS Lambda. Please refer to the following blog posts for using messaging in different integration patterns:

Introduction to Messaging for Modern Cloud Architecture

Post Syndicated from Sam Dengler original https://aws.amazon.com/blogs/architecture/introduction-to-messaging-for-modern-cloud-architecture/

We hope you’ve enjoyed reading our posts on best practices for your serverless applications. The posts in the following series will focus on best practices when introducing messaging patterns into your applications. Let’s review some core messaging concepts and see how they can be used to address challenges when designing modern cloud architectures.

Introduction

Applications can communicate information with each other using messages, a mechanism for packaging a data payload and associated metadata. The application that sends a message is called the producer and the application that receives the message is called the consumer. Producers and consumers exchange messages using a variety of transportation channels, for example point-to-point requests, message queues, subscription topics, or event buses. These transportation channels have different characteristics that make them useful when implementing message communication patterns. The dependencies that emerge when producers and consumers exchange messages are called coupling.

Synchronous Communication

synchronous communication

Message communication is called synchronous when the producer sends a message to the consumer and waits for a response before the producer continues its processing logic. An example of synchronous communication over a point-to-point channel is when an HTTP client makes a request to an HTTP service, waits for the service to process the request, and then applies logic to the HTTP response to determine how to proceed.

Synchronous communication patterns are more straightforward to implement; however, they create tight coupling between producers and consumers. Tight coupling can cause problems, because traffic spikes and failures propagate directly throughout the application. For example, in a three-tier architecture, when the application experiences a spike in client traffic, the web tier directly translates the traffic spike as pressure on downstream resources (the logic and data tiers), which may not scale to meet the sudden demand. Likewise, a downstream resource failure in the logic or data tier directly impacts the web tier’s ability to respond to client requests. Applications can mimic a synchronous experience, for example a status spinner, using asynchronous communication with a polling or push notification strategy.

Asynchronous Communication

Asynchronous communication

Message communication is called asynchronous when the producer sends a message to the consumer and proceeds without waiting for the response. An example of asynchronous communication over a message queue channel is when a client publishes a message to a queue, and after the queue acknowledges receipt of the message, the publisher proceeds without waiting for the consumer to process the message.

Asynchronous communication patterns are implemented using transportation channels such as queues, topics, and event buses to create loose coupling between producers and consumers. Loose coupling increases an architecture’s resiliency to failure and ability to handle traffic spikes because it creates an indirection between producer and consumer communication, enabling them to operate independently of each other. Using the three-tier architecture example, a message queue can be introduced between the web, logic, and data tiers to enable each to scale independently of each other. When the application experiences a spike in client traffic, the web tier translates the traffic spike as more messages to the queue for processing, however the logic tier may continue to process messages off the queue without being directly impacted.

Considerations and Next Steps

Although asynchronous communication patterns can benefit modern cloud architectures, there are tradeoffs to consider. Asynchronous messaging adds latency to end-to-end processing time due to the addition of middleware. Producers and consumers take a dependency on the middleware stack, which must also scale to meet demand and be resilient to failure. Care must be taken to appropriately configure producers, consumers, and middleware to handle errors so that messages are not lost, more monitoring is required to ensure proper operations, and multiple logs must be correlated to troubleshoot and diagnose problems.

Amazon MQ, Amazon Kinesis, Amazon Simple Queue Service (SQS), Amazon Simple Notification Service (SNS), and Amazon EventBridge are highly available, large-scale, failure-resistant managed services that can be used to implement asynchronous messaging patterns. You can explore these services at the AWS Messaging page and their integration into Serverless Architectures in the free new digital course, Architecting Serverless Solutions. You can also visit the AWS Event-Driven Architecture page to learn how to apply messaging patterns to build event-driven solutions. The upcoming posts in this series will explore these AWS services to help ensure message patterns are implemented using best practices when applied to modern cloud architecture.

The value of using an Email Service Provider (ESP) for end customer communications

Post Syndicated from Heidi Gloudemans original https://aws.amazon.com/blogs/messaging-and-targeting/the-value-of-using-an-email-service-provider/

In the world of pervasive consumer chat and messaging applications, email has retained its place as a ubiquitous channel for end customer communications. The Radicati Group reports that email usage will continue to grow from an estimate of 3.9 billion users in 2019 to 4.4 billion in 2023.1 Email is a welcome (and preferred) conduit for many consumers to stay connected to their favorite brands.2 Marketers and developers have found that email has the highest ROI of any end customer communications channel because of its low barrier to entry, affordability, and ability to target specific recipients.

However, the very strength of email may also be its biggest weakness. The ubiquity of email as a communication channel means that many consumer mailboxes are inundated with potentially malicious or unwanted messages. Brands must then take action to ensure that their marketing and transactional messages reach the end customer in a timely manner. While personalized content is one of the fastest growing avenues for deeper customer engagement, organizations should first understand the value of partnering with a mature and trusted email service provider (ESP).

Many organizations have built internal competencies around business email. Some businesses use an on-premises email server instance or subscribe to a managed email cloud offering (like Amazon WorkMail), while others use email servers on cloud-hosted infrastructure like Amazon EC2. While the fundamentals of email transport are shared between business and marketing email platforms, marketing and transactional email have unique requirements that call for additional consideration. ESPs have built specialized expertise in delivering email at scale.

Security and Scale

ESPs have the ability to send email on behalf of an organization’s domain or subdomains. ESPs facilitate secure mail delivery through both DomainKeys Identified Mail (DKIM) records and Sender Policy Framework (SPF) records. SPF lets the receiving mail server validate that the sending IP address is on the domain’s list of approved senders. DKIM adds a cryptographic signature to each message, using a key pair, so the receiver can verify that the purported sending domain actually sent it. The combination of both security methods is the first step in authenticating a message from an ESP to the mailbox provider or recipient domain. An additional, optional authentication and reporting mechanism is Domain-based Message Authentication, Reporting and Conformance (DMARC), which can report attempts by non-authenticated parties to spoof sending domains.
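
To illustrate the sender-side DKIM setup, here is a hedged boto3 sketch that requests Easy DKIM tokens from Amazon SES; example.com is a placeholder, and the printed CNAME records would still need to be published in your DNS to complete verification:

import boto3

ses = boto3.client("ses")

# Request DKIM tokens for a domain you own
response = ses.verify_domain_dkim(Domain="example.com")
for token in response["DkimTokens"]:
    # Each token becomes a CNAME record that proves domain ownership
    print(f"{token}._domainkey.example.com CNAME {token}.dkim.amazonses.com")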

In addition to security, the best ESPs can also deliver mail at very large scale. Scale is not just the compute power to deliver hundreds of millions of emails a day; it also includes the ability to sustain significant sending volumes over time while processing delivery feedback from receiving endpoints. Beyond infrastructure, however, there are other insights that ESPs can provide around the deliverability of a message.

Deliverability

ESPs typically give mail senders the ability to manage the reputation assigned to them by major internet service providers (ISPs) such as Google Mail and Outlook.com. Reputation is assigned to sending IP addresses and domains based on a combination of influencing factors. Evaluation criteria include, but are not limited to, the following:

  • Type of message and content – Marketing vs. Transactional
  • Valid email addresses and hard bounce rates
  • Feedback from end users (e.g., spam complaints and unsubscribes)
  • Open rates and engagement from end users
  • Time of delivery and cadence of delivery
  • Listings on major blacklists, like Spamhaus

Reputation impacts deliverability, which determines whether a message ends up in an end customer’s mailbox. ISPs typically treat transactional and marketing messages differently. Transactional messages (like order confirmations, password resets, etc.) need to be sent and received within a short period of time after a transaction, and have less strict rules when it comes to content and scheduling. Marketing messages must adhere to specific regulatory rules per country.

A poor deliverability designation may cause your messages to be throttled, routed to the junk folder, or blocked altogether. Top ESPs monitor deliverability and provide feedback, ensuring mail senders have the ability to take action. However, ESPs also have a responsibility to keep bad actors off their platforms, and will block bad senders that attempt to use them as an ESP resource.

Most ESPs give mail senders the flexibility to manage their sender reputation through both deployment options and delivery statistics. Common deployment options include shared and dedicated IP addresses and pools. Shared IP pools comprise many individual senders all using the same IP space. The IP reputation is shared across the pool, and can both positively and negatively influence senders in the pool based on the majority of traffic. Dedicated IPs are managed directly by a single customer, and can even be dedicated to specific functions. For example, a set of IPs can be dedicated to transactional emails vs. pure marketing communications. Many customers prefer dedicated environments because of the ability to directly influence deliverability and prevent external influence on their sending reputation.

Statistics that may prove to be leading indicators of deliverability problems include:

  • Mailbox placement (Inbox vs. Junk folder)
  • Detailed email deliverability statistics (bounce rates, etc.)
  • Blacklist monitoring

However, the most important factor impacting mailbox placement is having explicit permission from the end user, with agreed upon content and frequency. End users should know when they are opting into email communications with senders. It is not recommended to use purchased or rented lists, and doing so is typically in violation of the use policy of most ESPs. To stay compliant, it is important to keep your contact preferences up to date and quickly react to unsubscribes and complaints.

Email Analytics

In addition to measuring your sender reputation, metrics are also an important part of measuring the success of your email campaign. Part of calculating the ROI of your outbound email effort is understanding how many emails were opened and whether the end user completed any calls to action (CTAs) in the mail, like clicking a link.

Email analytics from your ESP should include the end-to-end lifecycle of the mail, including:

  • Deliverability rates
  • Delivery rates per ISP
  • Unique open rates
  • Unique click-through rates
  • Unsubscribes
  • Complaints

Ultimately, in combination with web commerce integration, these metrics can help you determine conversion rates of your campaign to a paying customer.
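
As a small illustration, the following boto3 sketch pulls recent sending statistics from Amazon SES and derives bounce and complaint rates; which thresholds warrant action is up to you:

import boto3

ses = boto3.client("ses")

# Returns sending activity for the last two weeks in 15-minute intervals
stats = ses.get_send_statistics()
for point in stats["SendDataPoints"]:
    attempts = point["DeliveryAttempts"]
    if attempts:
        print(
            point["Timestamp"],
            "bounce rate:", point["Bounces"] / attempts,
            "complaint rate:", point["Complaints"] / attempts,
        )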

Conclusion

Connecting to end customers and driving engagement is the goal of every brand. But to make each connection count, brands must ensure that their messages are being delivered through purpose-built infrastructure. Email Service Providers (ESPs) should be part of every organization’s successful email marketing strategy.

Amazon SES has been the trustworthy, flexible and affordable email service provider for developers and marketers since 2012. Learn more here: https://aws.amazon.com/ses/

1https://www.statista.com/statistics/255080/number-of-e-mail-users-worldwide/

2https://www.statista.com/statistics/984615/consumer-brand-communications-channels/


About the Author
Heidi Gloudemans is a Senior PMM on Amazon SES and Amazon Pinpoint.

Integrating B2B using event notifications with Amazon SNS

Post Syndicated from Rachel Richardson original https://aws.amazon.com/blogs/compute/integrating-b2b-using-event-notifications-with-amazon-sns/

This post is courtesy of Murat Balkan, AWS Solutions Architect

Event notification patterns are popular among B2B integrations. Their scalable and decoupled structure helps implement complex integration scenarios in a variety of enterprises.

This post introduces a generic serverless architecture that applies to external integrations that use event notifications with Amazon SNS and Event Fork Pipelines. Some business scenarios involving B2B integrations include:

  • Inventory information sourcing to customers
  • Catalog sourcing to suppliers or partners
  • Real-time event sourcing to customers (for example, in an online auction)

External integration use cases vary, but a fundamental fact unifies them: External integration is difficult because target systems are not under your control. For example, your IT capabilities may differ. You may not have an internal development team and might rely on tools that can read data from a specific source type.

Alternatively, you may have a large development team and therefore have more data-processing needs and capabilities. These systems may replicate the information from the source, run complex machine learning algorithms against historical data, and must act upon real-time data.

Overview of event notification

Event notification is the sharing of state changes that occur in an application or domain with other applications or domains. You can relate events to any domain object such as orders, products, shipments, or financial transactions. The owner of these entities publishes changes to subscribers. The subscribers subscribe to different events and receive notifications accordingly when a new event is available.

After receiving an event, the integrating party application must determine what to do with the event. The application can store the event, enrich it with additional information, relay it to another party, or ignore it.

To ensure scalability, the publishing application must write or publish to a durable and scalable destination, for you to read from there. These destinations can be message queues (such as Amazon MQ and Amazon SQS) or data streams (such as Amazon Kinesis, Apache Kafka, and Amazon MSK, the managed AWS offering for Apache Kafka). To choose between streams and queues, evaluate the traffic characteristics and business use cases. However, the main principles are the same for both.

While creating your B2B external integration architecture, consider different needs, and introduce a mechanism to subscribe to only events of interest. In the example of an online auction site, those that perform active and automated bidding might be interested in real-time bidding events. Others, such as shippers, are only interested in tomorrow’s auction inventory. For the latter, an InventoryItemCreated event can be enough.

Events reflect the nature of a business environment, which can be unpredictable. If a worldwide event affects the markets, event counts might soar dramatically. A marketing event can also cause order events to rise. You need a scalable infrastructure to support your architecture. Serverless is a perfect fit for these kinds of scenarios, and this post’s architecture leverages several AWS serverless components.

In this architecture, you interact with a self-subscription application that exposes a REST API. To start interacting with the system, you also select one or more integration channels for receiving the events. You may prefer SFTP, while others prefer webhooks or multiple channels at the same time. Your IT and development capabilities play an essential role in this selection.

After you determine the integration channels, optionally select the types of events for those channels. The self-subscription application knows all possible event types, and provisions them as part of its development process each time a new event type appears.

The architecture’s notification channels are as follows:

  • Submission of real-time updates using webhook integration
  • Direct S3 access or SFTP integration
  • Access to Kinesis directly from other AWS accounts

Main data flow

The data flow begins when the publisher applications publish all of their events to a single Amazon SNS topic. Amazon SNS follows the publish and subscribe pattern to fan out a published message to all subscribers of that message topic.

It is worth mentioning Amazon’s new serverless service offering for event-based integrations, Amazon EventBridge. Amazon EventBridge is an event bus that makes it easy to connect applications together using data from your own applications, Software-as-a-Service (SaaS) applications, and AWS services. Amazon EventBridge comes with a powerful rules engine which allows you to put the business logic onto the bus. It can manipulate the payload of the messages and deliver specific payloads to specific consumer applications. Native event integration capabilities with AWS services make it a good candidate for event-based integrations.

For this architecture, I used SNS because AWS offers a quick deployment option through Event Fork Pipelines, a collection of open-source event handling pipelines, based on the AWS Serverless Application Model (AWS SAM).

You can deploy Event Fork Pipelines directly from the AWS Serverless Application Repository into your AWS account. The proposed SNS based architecture also allows the use of custom message payloads in any JSON format, including raw.

SNS has a powerful feature called subscription filter policies. These policies serve as intercepting filters and pass only the desired types of messages to subscribers. Because SNS performs the filtering, you don’t have to implement filtering logic in each consumer, which decreases their complexity. The policies look for specific attributes and their values in the message. You can use the message attribute Event_Type for filtering, as the sketch below shows.
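
A hedged boto3 sketch of this filtering setup follows; the topic and subscription ARNs are placeholders:

import json
import boto3

sns = boto3.client("sns")

# The producer publishes with a message attribute that policies can match on
sns.publish(
    TopicArn="arn:aws:sns:us-east-1:123456789012:events-topic",  # placeholder
    Message=json.dumps({"itemId": "42"}),
    MessageAttributes={
        "Event_Type": {"DataType": "String", "StringValue": "InventoryItemCreated"}
    },
)

# Each subscriber receives only the event types named in its filter policy
sns.set_subscription_attributes(
    SubscriptionArn="arn:aws:sns:us-east-1:123456789012:events-topic:sub-id",  # placeholder
    AttributeName="FilterPolicy",
    AttributeValue=json.dumps({"Event_Type": ["InventoryItemCreated"]}),
)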

After filtering the events, route them to the previously selected notification channels. The events land in a queue at each channel before the channel’s logic processes. SNS has built-in integration with SQS, a powerful serverless queueing service. SQS holds your events and acts as a buffer. Every delivery channel’s characteristics and handlers are different. You need a different SQS queue per delivery channel type.

Lambda functions subscribed to the webhook queue handle the webhook notification mechanism. You can convert the polled events to external HTTPS calls against your web servers. Internet delivery over the HTTP protocol is always slower than internal message propagation.

To keep up with the constant flow of events and increase message throughput, webhooks are sent in parallel. SQS provides different features for handling errors that might occur on the subscribing side. For example, the visibility timeout mechanism causes messages to become available again after a specified time period, which serves as an auto-retry mechanism for consumed but not properly processed messages. You can also reject messages with functional errors, which causes SQS to move them to dead-letter queues (DLQs) for further troubleshooting.

Amazon S3 handles the file-based notification mechanism. In this integration, a Lambda function polls a dedicated queue that integrates with S3. This function forwards the events to Amazon Kinesis Data Firehose. Kinesis Data Firehose acts as a buffer to consolidate individual messages into bigger files. SQS provides up to 10 messages in a single batch.

After reaching the Kinesis Data Firehose batch size or batch interval, Kinesis Data Firehose delivers the files to an S3 bucket. You can share this bucket with accounts using cross-account access. If you rely on SFTP for file transfers, AWS Transfer for SFTP can expose the objects over SFTP.

Kinesis Data Firehose also lets you define Lambda functions for the transformation of data before your Lambda function writes it to S3. You can use this part of the process to cleanse, filter, or enrich your data. For direct system integration, SQS cross-account access is always an option if the integrating party has an AWS account. For more information, see Basic Examples of Amazon SQS Policies.

Figure 1: Example architecture that uses different event types for different delivery channels

Self-subscription application

A self-subscription application collects channel and event type selections. You can use a single-page application that interacts with a REST API that Amazon API Gateway hosts. API Gateway uses AWS Lambda for backend processing and Amazon DynamoDB for user profile persistence. After collecting integration channel selections and optionally event type filters for these channels, the self-subscription application also orchestrates cloud provisioning tasks.

As subscriptions occur, the self-subscription application’s backend Lambda function triggers AWS CloudFormation to update the subscriptions, subscription filters, and other notification infrastructure components. A different AWS CloudFormation stack manages every integrating party.

Because the whole architecture is serverless, you can use AWS SAM during your provisioning and let AWS SAM interact with AWS CloudFormation. AWS SAM aims to simplify infrastructure as code practices for serverless resources. AWS SAM also allows you to inject application resources from AWS Serverless Application Repository.

AWS provides a set of serverless applications via AWS Serverless Application Repository to cover common integration scenarios for event-driven architectures. These off-the-shelf applications speed up the development and delivery of common event-driven mechanisms such as Command Query Responsibility Segregation (CQRS), Event Replay, and Event Storage or Backup. You can reference Event Fork Pipelines applications within AWS SAM templates to use in your applications.

The provisioning pipeline of the proposed architecture reuses two of the Event Fork Pipelines applications: Event Replay Pipeline (fork-event-replay-pipeline) and Event Storage and Backup Pipeline (fork-event-storage-backup-pipeline). Your webhooks use case is a custom Event Fork Pipeline application.

Because a single AWS SAM template contains these applications, you can deploy and manage the subscription filters and Event Fork Pipelines as a single self-subscription application stack for each integrating party.

Figure 2: CI/CD Pipeline that deploys the serverless application via AWS SAM

In this architecture, Lambda converts the user input into an AWS SAM template and puts it into an S3 bucket. This PUT action triggers an AWS CodePipeline pipeline. The pipeline’s build phase downloads, packages, and deploys the provided AWS SAM template. You can also enhance the pipeline with features such as notifications, manual approvals, or external integrations.

Conclusion

Architectures such as this help you share your business or data events with suppliers, partners, and customers while minimizing integration time and streamlining your business processes. You can try out the existing Event Fork Pipelines published by AWS, create custom pipelines for your internal use, or share them with other AWS users in the AWS Serverless Application Repository.

Migrating from IBM MQ to Amazon MQ using a phased approach

Post Syndicated from Rachel Richardson original https://aws.amazon.com/blogs/compute/migrating-from-ibm-mq-to-amazon-mq-using-a-phased-approach/

This post is contributed by Mithun Mallick, Solutions Architect and Christian Mueller, Solutions Architect

Message-oriented middleware (MOM), or message brokers, are the backbone that integrates business critical applications in many industries. MOMs are used to integrate systems like inventory management, payment systems, and CRM systems. They are also used to orchestrate order-processing workflows across multiple systems, or integrate modern web applications with legacy backend applications.

Some of the most commonly used MOMs in the market are IBM MQ, TIBCO EMS, Rabbit MQ, and Apache ActiveMQ. These systems are often costly to maintain and can have high licensing costs.

Our customers often tell us they want to migrate their applications to the cloud, but are hindered by the heavy lifting involved in migrating their MOM systems. They are concerned that migrating their MOM is difficult, and that they must migrate all interconnected systems in one step.

In this post, we show how to build a bridge solution from on-premises IBM MQ to Amazon MQ, to migrate your applications in a step-by-step manner. Amazon MQ is a managed message broker service from AWS that makes it easy to set up and operate message brokers in the cloud.

We explain the steps to move the producers (senders) and consumers (receivers) in phases from your on-premises to the cloud. This process uses Amazon MQ as the message broker, and decommissions IBM MQ once all producers/consumers have been successfully migrated.

You can migrate applications without disrupting existing business systems, and take advantage of the agility, flexibility, and reliability of the cloud. The same pattern can be used if you are using another MOM system, like TIBCO EMS or Rabbit MQ (which supports JMS/NMS, AMQP, or MQTT).

In this post, we show how the phased migration approach helps you mitigate the risk involved in a ‘big bang’ cutover. The goal of the solution is to enable an incremental migration of your on-premises applications and MOM to the AWS Cloud. It also replaces your message broker with Amazon MQ. The solution is shown in the diagram below:

Migrating from IBM MQ to Amazon MQ

You can follow the step-by-step instructions in the GitHub’s README file for your own migration.

Initial state and prerequisites

In the initial state, you are running producers and consumers connected to an on-premises IBM MQ broker.

On-premises IBM MQ broker.

The first step is to facilitate a bridge from IBM MQ to Amazon MQ. Given that IBM MQ is running on-premises, the bridge solution must either exchange messages over the internet by setting up an AWS VPN tunnel, or by using AWS Direct Connect to establish connectivity with the AWS Cloud.

To ensure confidentiality and security for your message exchange, we recommend using AWS VPN. If you have low-latency requirements for forwarding your messages, AWS Direct Connect is the recommended option.

To complete the hands-on portion of this blog, you must have access to an IBM MQ broker. If you don’t have access, you can provision an IBM MQ broker, running on a Docker container in AWS Fargate.

Once the connectivity is established, you must set up an Amazon MQ broker from the AWS Management Console, or using an AWS CloudFormation template. There is a template provided as part of this post.

Step – 1

Begin by deploying some consumer applications on AWS. These consumers can be new applications or additional instances of applications already running on-premises. You configure these consumer applications to receive messages from Amazon MQ. At this stage, message producers are still on-premises sending messages to the IBM MQ broker.

Next, bridge from IBM MQ to Amazon MQ using a proxy pattern. The proxy pattern is technology-agnostic, and you implement the pattern using Apache Camel to build a JMS bridge. Apache Camel is an open source integration framework for implementing Enterprise Integration Patterns. Apache Camel includes JMS components that easily connect with IBM MQ and Amazon MQ.

In this step, you build an Apache Camel route to consume messages from IBM MQ, and forward to Amazon MQ. Here is an example from the camel-context.xml file, which defines the configuration:

<route id="ibmMQ-to-amazonMQ">
    <description>Camel Route from IBM MQ to Amazon MQ</description>
    <from uri="ibmMQ:queue:DEV.QUEUE.2?concurrentConsumers=5"/>
    <inOnly uri="amazonMQ:queue:DEV.QUEUE.2?preserveMessageQos=true"/>
</route>

This Apache Camel route defines how messages from the producer applications connected to IBM MQ move to Amazon MQ. In this example, there is one sample route but you may have many routes in your production use-case.

Apache Camel is deployed as a Docker container running on AWS Fargate, the serverless compute engine for Amazon Elastic Container Service (ECS). ECS is a container orchestration service that manages the deployment of the containers, and runs them in a highly scalable manner.

AWS Fargate eliminates the heavy lifting of scaling the underlying virtual machines for your ECS cluster. By defining the desired capacity of AWS Fargate tasks, it introduces self-healing capabilities to the JMS bridge. AWS Fargate tracks the number of healthy tasks, and creates new tasks automatically if an old one is no longer available.

Now the JMS bridge and the on-premises consumers are listening on the same queue and waiting for messages. Messages sent to IBM MQ are consumed by the on-premises consumers as well as by the JMS bridge. The JMS bridge forwards messages to Amazon MQ, to be consumed by the applications already migrated to the cloud. You can now validate that messages are processed by applications on AWS and on-premises.

Now phase 1 of the migration is validated. You can continue to move more consumers as you get more comfortable with the availability and scalability of the bridge solution. The goal of this phase is to reach a state where messages are still produced on-premises, with some consumers running on AWS.

Migrate IBM MQ to Amazon MQ Step 1

Step – 2

Several consumer applications are now migrated to AWS. The goal now is to move the producers to AWS. Start the migration by running a few producer applications on AWS and connect them to Amazon MQ. The Apache Camel bridge is also updated to facilitate bidirectional flow of messages.

The following configuration code shows the route, which moves messages from Amazon MQ to IBM MQ:

<route id="amazonMQ-to-ibmMQ">
    <description>Camel Route from Amazon MQ to IBM MQ</description>
    <from uri="amazonMQ:queue:DEV.QUEUE.2?concurrentConsumers=5"/>
    <inOnly uri="ibmMQ:queue:DEV.QUEUE.2?preserveMessageQos=true"/>
</route>

At this point, messages originating from the producers on AWS have consumers both on-premises and on AWS. You can validate the processing of messages on AWS as well as on-premises. This state is shown in the diagram below:

Migrate IBM MQ to Amazon MQ Step 2

Move more producer applications to AWS as you validate test results, and are comfortable with the bridge solution. Step 2 of the migration is validated once you have confirmed the results of the testing on AWS.

Step – 3

To make this JMS bridge resilient, it must scale in and out automatically, based on your current load. You can configure this by using Amazon CloudWatch metrics and CloudWatch alarms. These alarms can trigger scaling activities to scale in or out, with a fixed number of instances or a percentage-based scaling.

You can also scale out your AWS resources based on the utilization of your on-premises broker by defining custom CloudWatch metrics, for example by running a cron script on the on-premises broker machine that periodically reports metrics such as queue depth.
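
A minimal sketch of such a reporting script, using boto3 and a hypothetical helper that reads the broker’s queue depth:

import boto3

cloudwatch = boto3.client("cloudwatch")

# Publish the on-premises queue depth as a custom metric; a CloudWatch alarm
# on this metric can then drive scaling of the bridge's ECS service.
cloudwatch.put_metric_data(
    Namespace="OnPremBroker",  # hypothetical namespace
    MetricData=[
        {
            "MetricName": "QueueDepth",
            "Dimensions": [{"Name": "QueueName", "Value": "DEV.QUEUE.2"}],
            "Value": get_queue_depth("DEV.QUEUE.2"),  # hypothetical helper
            "Unit": "Count",
        }
    ],
)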

Migrate IBM MQ to Amazon MQ Step 3

This step shows the advantages that the cloud offers for running services efficiently, with high availability and resiliency. The automatic scaling capabilities of Amazon ECS launch additional Apache Camel containers as load increases and the queues fill up, and scale them in again as the load decreases.

You have now established a stable and scalable bridge solution. The next step is to move all the remaining producer/consumer applications to the AWS Cloud. If you have applications that cannot move to the cloud, such as mainframe applications, these can remain connected through your on-premises IBM MQ broker. All other applications can be migrated.

Step – 4

All producers and consumer applications have now been moved to AWS. All the messages that are sent to Amazon MQ broker are processed directly by the consumers running on AWS. The Apache Camel route to move messages from Amazon MQ to IBM MQ and vice versa is disabled.

Migrate IBM MQ to Amazon MQ Step 4

Step – 5

The final goal is to move all application from on-premises to the AWS Cloud. Once all applications are migrated, you can decommission the Apache Camel bridge solution. All the resources deployed in the Apache Camel bridge solution are deleted, along with the automatic scaling and Amazon CloudWatch alarm configuration.

All producers and consumers are now migrated to running on AWS with Amazon MQ as their message broker.

Migrate IBM MQ to Amazon MQ Step 5

Conclusion

In this blog, we showed how to migrate on-premises applications that are dependent on commercial message brokers to the AWS Cloud. The approach relies on a bridge solution, which is based on the proxy pattern, and is technology-independent.

The bridge provides a low risk migration of the applications in phases so that you can validate the migration and avoid any disruption to your business.

For more information on migrating to Amazon MQ and using Apache Camel, please see Migrating from RabbitMQ to Amazon MQ and Integrating Amazon MQ with other AWS services via Apache Camel.

Application integration patterns for microservices: Fan-out strategies

Post Syndicated from Rachel Richardson original https://aws.amazon.com/blogs/compute/application-integration-patterns-for-microservices-fan-out-strategies/

This post is courtesy of Dirk Fröhner

The first blog in this series introduced asynchronous messaging for building loosely coupled systems that can scale, operate, and evolve individually. It considered messaging as a communications model for microservices architectures. This post covers concrete architectural considerations, focusing on the messaging architecture.

Wild Rydes

Wild Rydes is a fictional technology start-up. You may have heard of it – it disrupts individual transportation by replacing traditional taxis with unicorns. We use the Wild Rydes storyline in several hands-on AWS workshops. It illustrates concepts such as serverless development, event-driven design, API management, and messaging in microservices.

This blog post explores the decision-making process in building the Wild Rydes workshop, with a goal of helping you apply these concepts to your applications.

In the workshop, a customer requests a ‘unicorn’ ride using the Wild Rydes customer application. Registered unicorn drivers can use the application to manage their rides. Unicorn drivers submit a ride completion message after they have successfully delivered a customer to their destination.

Wild Rydes app image

Submit a ride completion

API exposed by the unicorn management service

At Wild Rydes, end-user clients are implemented as mobile applications and communicate via REST APIs (also known as hypermedia APIs) with the backend services.

For this use case, the application interacts with the API exposed by the unicorn management service. It uses the submit-ride-completion resource that it discovered from the API’s home document to send the relevant details of a ride to the backend. In response, the backend persists these details and creates a new completed-ride resource, returning the respective status code, the location, and a representation of the new resource to the client. The API details are shown below.

Request from client to submit the details of a completed ride:

POST /<submit-ride-completion-resource-path> HTTP/1.1
Content-Type: application/json;charset=UTF-8
...

{
    "from": "...",
    "to": "...",
    "duration": "...",
    "distance": "...",
    "customer": "...",
    "fare": "..."
}

Response from the unicorn management service:

HTTP/1.1 201 Created
Date: Sat, 31 Aug 2019 12:00:00 GMT
Location: <url-of-newly-created-completed-ride-resource>
Content-Location: <url-of-newly-created-completed-ride-resource>
Content-Type: application/json;charset=UTF-8
...

{
    "links": {
        "self": {
            "href": "https://..."
        }
    },
    <completed-ride-resource-representation-properties>
}

Schematic architecture for the use case

The schematic architecture for the use case is shown in diagram 1 below:

Diagram 1: Multiple microservices need information about ride completion

There are other microservices in Wild Rydes that are also interested in a new completed ride. The examples from the diagram are:

  • Customer notification service: Customers should receive a notification in the app about their latest completed ride.
  • Customer accounting service: After all, Wild Rydes is a business, so this service is responsible for collecting the fare from the customer.
  • Customer loyalty service: Everybody wants to collect miles and would like to receive benefits for being a loyal customer.
  • Data lake ingestion service: Wild Rydes is a data-driven company and they want to ingest all data generated from any process into their data lake for arbitrary analytics.
  • Extraordinary rides service: This special service is interested in rides with fares or distances above certain thresholds for preparing insights for business managers.

Based on this scenario, let’s review the integration options.

Integration options

Integration via database

The unicorn management service stores the details of a completed ride in a database. It could share the database with the other services directly, but that creates tight coupling. Sharing the database also restricts your flexibility to scale and evolve your services.

Integration via REST APIs

What about using REST APIs for the integration? The HTTP-based implementation of the REST architectural style uses the distributed architecture concepts of the web. However, what does this mean for the implementation?

Diagram 2: Using REST APIs to communicate to microservices

As shown in diagram 2 above:

  • Effectively, all interested services on the right-hand side would have to expose an API resource. These would be called by the unicorn management service for each newly completed ride.
  • To enable elasticity behind a single resource URL, you may need a load balancer in front of each interested service.
  • The unicorn management service would have to know about all these interested services and their respective APIs. Hopefully, each service uses a streamlined API resource.
  • Lastly, the unicorn management service must store, retry, and track all request attempts in case an interested service is not available. This ensures durability so we don’t lose any of these notifications.

One approach is to manage a recipient list in the unicorn management service. This adds additional complexity to the unicorn management service and coupling on both sides. Although there are self-registration and discovery approaches, managing a recipient list is not the core use case of the unicorn management service.

Diagram 3: Using a separate service to manage the fan-out to other services

A better approach would be to externalize the recipient list into a separate request distribution service, as diagram 3 shows. This decouples both sides, but binds each side to the new service. However, the unicorn management service is still responsible for the delivery of the ride data to all the recipients. Again, this heavy lifting is not the main task of this service.

Diagram 4: Filtering information for extraordinary rides

In diagram 4, the information filtering for the Extraordinary Rides Service is self-managed. This means that there is code on one side to either not send or to discard irrelevant ride data.

For this use case, integration via REST APIs potentially adds coupling to the services. And it adds heavy lifting to the services that is beyond their actual domain.

Integration via messaging

A third option could use messaging for the integration.

Publish-subscribe pattern

Both Amazon SNS and Amazon EventBridge can be used to implement the publish-subscribe pattern. In this use case, we recommend Amazon SNS, which scales to support high throughput and fan-out applications. Amazon EventBridge includes direct integrations with software as a service (SaaS) applications and other AWS services. It’s ideal for publish-subscribe use cases involving these types of integrations.

Diagram 5: Using Amazon SNS to implement a publish-subscribe pattern

Diagram 5 shows an SNS topic called Ride Completion Topic. The unicorn management service can now send the details about a completed ride into that topic. All interested services on the right-hand side can subscribe to this topic.

Using a message topic to publish the details of a completed ride frees us from managing the recipient list, as well as from ensuring reliable delivery of the messages. It also decouples both sides as much as possible. Services on the right-hand side can autonomously subscribe to the topic, and the unicorn management service does not know anything about the topic’s subscribers.

Message filter pattern

Looking at the Extraordinary Rides Service, the message filter functionality of Amazon SNS can autonomously and individually discard irrelevant messages. The Extraordinary Rides Service can specify the threshold values for the fare and distance.
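
A possible filter policy for this subscription, sketched with boto3 against a placeholder ARN, uses the SNS numeric matching syntax on the fare attribute (a distance condition could be added the same way, noting that multiple attributes in one policy must all match):

import json
import boto3

sns = boto3.client("sns")

# Only rides with a fare of 50 or more are delivered to this subscriber
sns.set_subscription_attributes(
    SubscriptionArn="arn:aws:sns:us-east-1:123456789012:RideCompletionTopic:sub-id",  # placeholder
    AttributeName="FilterPolicy",
    AttributeValue=json.dumps({"fare": [{"numeric": [">=", 50]}]}),
)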

Diagram 6: Filtering extraordinary rides using Amazon SNS

Topic-queue-chaining pattern

Consider the publish-subscribe channel between the Unicorn Management Service, and the subscribing services on the right-hand side.

One of the consuming services may go offline for maintenance. Or the code that processes messages from the ride completion topic could run into an exception. These are two examples where a subscriber service could potentially miss topic messages.

A good pattern to apply here is topic-queue-chaining. That means that you add a queue, in our case an SQS queue, between the ride completion topic and each of the subscriber services. Because messages are buffered persistently in an SQS queue, no messages are lost if a subscriber process runs into problems for many hours or days.
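
As a sketch of the chaining step with boto3 (all ARNs are placeholders, and the queue’s access policy must also allow the topic to send messages to it):

import boto3

sns = boto3.client("sns")

# Subscribe an SQS queue to the ride completion topic
sns.subscribe(
    TopicArn="arn:aws:sns:us-east-1:123456789012:RideCompletionTopic",  # placeholder
    Protocol="sqs",
    Endpoint="arn:aws:sqs:us-east-1:123456789012:accounting-queue",  # placeholder
    Attributes={"RawMessageDelivery": "true"},  # deliver the payload without the SNS envelope
)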

Diagram 7: Chaining topics and queues to buffer messages persistently

Queues as buffering load balancers

An SQS queue in front of each subscriber service also acts as a buffering load balancer.

Since every message is delivered to one of potentially many consumer processes, you can scale out the subscriber services, and the message load is distributed over the available consumer processes.

As messages are buffered in the queue, they are preserved during a scaling event, such as when you must wait until an additional consumer process becomes operational.

Lastly, these queue characteristics help flatten peak loads for your consumer processes, buffering messages until consumers are available. This allows you to process messages at a pace decoupled from the message source.

Conclusion

The Wild Rydes example shows how messaging can provide decoupling and greater flexibility for your microservices landscape.

In contrast to REST APIs, a messaging system takes care of message delivery outside of your service code. Using a publish-subscribe channel provides simple fan-out capability. And message filters allow for selective message reception without the effort of implementing that logic in your code.

With the topic-queue-chaining pattern, you can add queue characteristics to a fan-out scenario, so that you can easily scale out on the consumer side and flatten peak loads.

For a deeper dive into queues and topics and how to use them in your microservices architecture, please use the following resources:

  1. AWS whitepaper: Implementing Microservices on AWS
  2. AWS blog: Implementing enterprise integration patterns with AWS messaging services: point-to-point channels
  3. AWS blog: Implementing enterprise integration patterns with AWS messaging services: publish-subscribe channels
  4. AWS blog: Building Scalable Applications and Microservices: Adding Messaging to Your Toolbox

Understanding asynchronous messaging for microservices

Post Syndicated from Rachel Richardson original https://aws.amazon.com/blogs/compute/understanding-asynchronous-messaging-for-microservices/

This post is courtesy of Dirk Fröhner

One of the implications of applying the microservices architectural style is that much communication between components happens over the network. After all, your microservices landscape is a distributed system. To achieve the promises of microservices, such as being able to individually scale, operate, and evolve each service, this communication must happen in a loosely coupled and reliable manner.

A common way to loosely couple services is to expose an API following the REST architectural style. REST APIs are based on the architecture of the web and provide loose coupling between communicating parties. REST APIs offer a great way to decouple interfaces from concrete implementations, and to advise clients about what they can do next, by the use of links and link relations.

While REST APIs are common and useful in microservices design, REST APIs tend to be designed with synchronous communications, where a response is required. A request coming from an end-user client can trigger a complex communications path within your services landscape, which can effectively add coupling between the services at runtime. After all, this is why there are mitigation patterns like circuit-breaker in the first place. REST APIs can also add some heavy lifting to your infrastructure that we will discuss further below.

Asynchronous messaging

If loose-coupling is important, especially in a system that requires high resilience and has unpredictable scale, another option is asynchronous messaging.

Asynchronous messaging is a fundamental approach for integrating independent systems, or building up a set of loosely coupled systems that can operate, scale, and evolve independently and flexibly. As our colleague Tim Bray said, “If your application is cloud-native, or large-scale, or distributed, and doesn’t include a messaging component, that’s probably a bug.” In this blog post, we will outline some fundamental benefits of asynchronous messaging for the communications between microservices.

For a refresher on the fundamental messaging patterns and their implementations with Amazon SQS, Amazon SNS, and Amazon MQ, please read our previous blog posts.

For a summary of the semantics of queues and topics:

  • A queue is like a buffer. You can put messages into a queue, and you can retrieve messages from a queue. Message queues operate so that any given message is only consumed by one receiver, although multiple receivers can be connected to the queue.
  • A topic is like a broadcasting station. You can publish messages to a topic, and anyone interested in these messages can subscribe to the topic. In this model, any message published to a topic is immediately received by all of the subscribers of the topic (unless you have applied the message filter pattern).

Use-case

Consider a typical scenario illustrated in the diagram below. An end-user client (EUC) addresses an API resource of one of our services, through Amazon API Gateway in this example. From there, the request can potentially follow a path across the microservices landscape to get completely processed.

To provide the final result, there will be potentially cascading subsequent requests sent between other microservices. This example illustrates the complexity involved in processing a single end user request.

End User Client accessing a service using an API

Diagram 1: End-User Client accessing a service using an API

End-user clients (EUCs) often communicate with services via REST APIs in a synchronous manner. However, the communication can also be designed using an asynchronous approach. For instance, if an EUC submits a request that takes some time to process, the respective API resource can respond with HTTP status 202 Accepted, and a link to a resource that provides the current processing status. Downstream, the communication between the service that receives that request, and other services that are involved in processing the request, can happen asynchronously using messaging services.
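
A rough sketch of such an asynchronous API front end, written as an AWS Lambda handler behind Amazon API Gateway using boto3; the queue URL and response shape are illustrative assumptions:

import json
import uuid
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/requests-queue"  # placeholder

def handler(event, context):
    request_id = str(uuid.uuid4())
    # Enqueue the work for asynchronous processing by a downstream consumer
    sqs.send_message(
        QueueUrl=QUEUE_URL,
        MessageBody=json.dumps({"requestId": request_id, "payload": event["body"]}),
    )
    # Point the client at a status resource it can poll
    return {
        "statusCode": 202,
        "headers": {"Location": f"/requests/{request_id}"},
        "body": json.dumps({"requestId": request_id, "status": "ACCEPTED"}),
    }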

There are situations where a communications model using asynchronous messaging can make your life easier than using REST APIs.

Infrastructure complexity

Start with looking at the infrastructure complexity for the backends of your services. Depending on your implementation paradigm, you have to include different components in your infrastructure that you don’t have to deal with when using messaging.

Imagine your services each expose a REST API. Typically, this means you add a load balancer in front of your compute layer, and your backend implementation includes an HTTP server. It is usually a good idea to decouple your services APIs from their concrete implementations, so you could also consider adding Amazon API Gateway in front of your load balancer.

For a serverless approach, you don’t need to worry about load balancing and scaling out infrastructure. Amazon API Gateway with AWS Lambda integration provides a fully managed solution for removing complexities around infrastructure management.

Using Amazon SQS as a cloud-native messaging service for queues, you don’t employ any of the above mentioned components. As described in a prior post, an SQS queue can act as a load balancer in itself. The consumers, or target services, don’t need an HTTP server, but simply ask a queue for available messages. If you use AWS Lambda for your consumers, this process is even simpler, as the Lambda functions are automatically invoked when messages appear in an SQS queue. See Using AWS Lambda with Amazon SQS to learn more.

The same applies to serverless architectures implementing a publish-subscribe pattern. Lambda function executions can be directly triggered by SNS messages. Without AWS Lambda, you need load balancers and web servers in your backend implementations to receive SNS notifications, as those are injected via webhooks into your services. SNS also provides the fan-out functionality that you would otherwise have to build using an intermediary component to implement a recipient list of subscribers.

Reliability, resilience

For synchronous systems, if a service crashes while it processes the payload of an API request, the information is lost. A good way to prevent this on a microservice is to explicitly persist an incoming request immediately after receiving it. Then process and reprocess, until the request is finally marked as resolved.

This approach requires additional work, and it requires the microservice to not crash while persisting an incoming API request. The microservice sending a request must also resend if the target service doesn’t acknowledge receipt. For example, it doesn’t respond with a successful HTTP status code, or the connection drops.

When sending messages to a queue, this additional work is addressed by the messaging infrastructure. A message will remain in a queue unless a consumer explicitly states that processing is finished by acknowledging the message reception. As long as message reception is not acknowledged by a consumer, it will stay in the queue. Messages can be retained in an SQS queue for a maximum of 14 days.

Scale out latency

Under increased load, your services must scale out to process the requests. You must then consider scale-out latency, which may be managed for you with serverless implementations. It takes a few moments from when an Auto Scaling group triggers the launch of additional instances until these are ready to operate. Also launching new container tasks takes time. When your scaling threshold is not optimal and the scaling event occurs late, your available resources may be unable to serve all incoming requests. These requests may be lost or answered with HTTP status code 5xx.

Using message queues that buffer messages during a scaling event help prevent this. Even in use cases where the EUC is waiting for an immediate response, this is the more reliable architecture. If your infrastructure needs time to scale out and you are not able to process all requests in time, the requests are persisted.

When messaging is your only choice

What happens when your services must respond to peak loads at scale?

For many applications, the scale-out latency, including load balancer pre-warming, will eventually become too large to handle steeply ascending loads fast enough. With a serverless architecture, exposing your Lambda functions with API Gateway can handle steeply ascending loads. But you must still consider downstream systems, which may be easily overwhelmed.

In these scenarios, where rapid scaling without overwhelming downstream systems is important, messaging may be your best choice. Message queues help protect your downstream services by buffering incoming payloads for consumption at the pace of the consuming service. This helps not only for the communications between microservices, but also when peak loads flood your client-facing API. Often, the most important goal is to accept an incoming request, while the actual processing of that request can happen later. You decouple these steps from each other by using queues.

Serverless messaging services like Amazon SQS and Amazon SNS can respond quickly to support high scale, and are often the best solution when scale is unpredictable. While the instance-based messaging service, Amazon MQ, provides compatibility with open standards, it requires manual scaling for large workloads, unlike serverless messaging services.

Conclusion

We hope this post inspires you to employ asynchronous messaging in your microservices communication architecture. In the next post in this series, we provide concrete examples of these patterns. For more information, see the following resources:

  1. AWS whitepaper: Implementing Microservices on AWS
  2. AWS blog: Implementing enterprise integration patterns with AWS messaging services: point-to-point channels
  3. AWS blog: Implementing enterprise integration patterns with AWS messaging services: publish-subscribe channels
  4. AWS blog: Building Scalable Applications and Microservices: Adding Messaging to Your Toolbox

Read the next blog in the series, Application Integration Patterns for Microservices: Fan-out Strategies.

Designing durable serverless apps with DLQs for Amazon SNS, Amazon SQS, AWS Lambda

Post Syndicated from Rachel Richardson original https://aws.amazon.com/blogs/compute/designing-durable-serverless-apps-with-dlqs-for-amazon-sns-amazon-sqs-aws-lambda/

This post is courtesy of Otavio Ferreira, Sr Manager, SNS.

In a postal system, a dead-letter office is a facility for processing undeliverable mail. In pub/sub messaging, a dead-letter queue (DLQ) is a queue to which messages published to a topic can be sent, in case those messages cannot be delivered to a subscribed endpoint.

Amazon SNS supports DLQs, making your applications more resilient and durable in the face of delivery failures.

Understanding message delivery failures and retries

The delivery of a message fails when it’s not possible for Amazon SNS to access the subscribed endpoint. There are two reasons why this might happen:

  • Client errors, where the client is SNS (the message sender).
  • Server errors, where the server is the system that hosts the subscription endpoint (the message receiver), such as Amazon SQS or AWS Lambda.

Client errors

Client errors happen when SNS has stale subscription metadata. One common cause of client errors is when you (the endpoint owner) delete the endpoint. For example, you might delete the SQS queue that is subscribed to your SNS topic, without also deleting the SNS subscription corresponding to the queue. Another common cause is when you change the resource policy attached to your endpoint in a way that prevents SNS from delivering messages to that endpoint.

These errors are considered client errors because the client has attempted the delivery of a message to a destination that, from the client’s perspective, is no longer accessible. SNS does not retry the delivery of messages that failed as the result of client errors.

Server errors

Server errors happen when the system that powers the subscribed endpoint is unavailable, or when it returns an exception response indicating that it failed to process a valid request from SNS.

When server errors occur, SNS retries the failed deliveries according to a backoff function, which can be either linear or exponential. When a server error occurs for an AWS managed endpoint, backed by either SQS or Lambda, SNS retries the delivery up to 100,015 times, over 23 days.

Server errors can also happen with customer managed endpoints, namely HTTP, SMS, email, and mobile push endpoints. SNS also retries the delivery for these types of endpoints. HTTP endpoints support customer-defined retry policies, while SNS sets an internal delivery retry policy of 50 attempts over 6 hours for SMS, email, and mobile push endpoints.

Delivery retries

SNS may receive a client error, or continue to receive a server error for a message beyond the number of retries defined by the corresponding retry policy. In either event, SNS discards the message. Attaching a DLQ to your SNS subscription lets you keep the message, regardless of the type of error, either client or server. DLQs give you more control over messages that cannot be delivered.

For more information on the delivery retry policy for each delivery protocol supported by SNS, see Amazon SNS Message Delivery Retry.

Using DLQs for AWS services

SNS, SQS, and Lambda support DLQs, addressing different failure modes. All DLQs are regular queues powered by SQS.

In SNS, DLQs store the messages that failed to be delivered to subscribed endpoints. For more information, see Amazon SNS Dead-Letter Queues.

In SQS, DLQs store the messages that failed to be processed by your consumer application. This failure mode can happen when producers and consumers fail to interpret aspects of the protocol that they use to communicate. In that case, the consumer receives the message from the queue, but fails to process it, as the message doesn’t have the structure or content that the consumer expects. The consumer can’t delete the message from the queue either. After exhausting the receive count in the redrive policy, SQS can sideline the message to the DLQ. For more information, see Amazon SQS Dead-Letter Queues.
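
As a sketch of how that redrive policy is wired up (queue URL and ARN are hypothetical; the maxReceiveCount of 5 is an example value), you attach the DLQ to the source queue through the queue's RedrivePolicy attribute:

import com.amazonaws.services.sqs.AmazonSQS;
import com.amazonaws.services.sqs.AmazonSQSClientBuilder;
import com.amazonaws.services.sqs.model.SetQueueAttributesRequest;

public class ConfigureRedrive {
    public static void main(String[] args) {
        AmazonSQS sqs = AmazonSQSClientBuilder.defaultClient();

        // After 5 failed receive attempts, SQS sidelines the message to the DLQ.
        String redrivePolicy = "{\"deadLetterTargetArn\":"
            + "\"arn:aws:sqs:us-east-1:123456789012:Rental-Fulfilment-DLQ\","
            + "\"maxReceiveCount\":\"5\"}";

        sqs.setQueueAttributes(new SetQueueAttributesRequest()
            .withQueueUrl("https://sqs.us-east-1.amazonaws.com/123456789012/Rental-Fulfilment")
            .addAttributesEntry("RedrivePolicy", redrivePolicy));
    }
}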

In Lambda, DLQs store the messages that resulted in failed asynchronous executions of your Lambda function. An execution can result in an error for several reasons. Your code might raise an exception, time out, or run out of memory. The runtime executing your code might encounter an error and stop. Your function might hit its concurrency limit and be throttled. Regardless of the error type, when the error occurs, your code might have run completely, partially, or not at all. By default, Lambda retries an asynchronous execution twice. After exhausting the retries, Lambda can sideline the message to the DLQ. For more information, see AWS Lambda Dead-Letter Queues.
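
A sketch of the equivalent configuration for a function (function name and queue ARN are hypothetical), using the function's dead-letter config:

import com.amazonaws.services.lambda.AWSLambda;
import com.amazonaws.services.lambda.AWSLambdaClientBuilder;
import com.amazonaws.services.lambda.model.DeadLetterConfig;
import com.amazonaws.services.lambda.model.UpdateFunctionConfigurationRequest;

public class ConfigureLambdaDlq {
    public static void main(String[] args) {
        AWSLambda lambda = AWSLambdaClientBuilder.defaultClient();

        // Failed asynchronous invocations are sidelined to this queue
        // after Lambda exhausts its two automatic retries.
        lambda.updateFunctionConfiguration(new UpdateFunctionConfigurationRequest()
            .withFunctionName("Rental-Billing")
            .withDeadLetterConfig(new DeadLetterConfig()
                .withTargetArn("arn:aws:sqs:us-east-1:123456789012:Rental-Billing-DLQ")));
    }
}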

When you have a fan-out architecture, with SQS queues and Lambda functions subscribed to an SNS topic, we recommend that you set DLQs to your SNS subscriptions, and to your destination queues and functions as well. This approach gives your application resilience against message delivery failures, message processing failures, and function execution failures too.

Applying DLQs in a use case

Here’s how everything comes together. The following diagram shows a serverless backend architecture that supports a car rental application. This is a durable serverless architecture based on DLQs for SNS, SQS, and Lambda.

[Architecture diagram: DLQ use case for SNS, SQS, and Lambda in the car rental application]

When a customer places an order to rent a car, the application sends that request to an API, which is powered by Amazon API Gateway. The REST API is backed by an SNS topic named Rental-Orders, and deployed onto an Amazon VPC subnet. The topic then fans out that order to the following two subscribed endpoints, for parallel processing:

  • An SQS queue, named Rental-Fulfilment, which feeds the integration with an internal fulfilment system hosted on Amazon EC2.
  • A Lambda function, named Rental-Billing, which processes and loads the customer order into a third-party billing system, also hosted on Amazon EC2.

To increase the durability of this serverless backend API, the following DLQs have been set up:

  • Two SNS DLQs, namely Rental-Fulfilment-Fanout-DLQ and Rental-Billing-Fanout-DLQ, which store the order in case either the subscribed SQS queue or Lambda function ever becomes unreachable.
  • An SQS DLQ, named Rental-Fulfilment-DLQ, which stores the order when the fulfilment system fails to process the order.
  • A Lambda DLQ, named Rental-Billing-DLQ, which stores the order when the function fails to process and load the order into the billing system.

When the DLQ captures the message, you can inspect the message for troubleshooting purposes. After you address the error at hand, you can poll the DLQ to retry the processing of the message.
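
A minimal sketch of such a recovery step (queue URLs are hypothetical): poll the DLQ, re-send each message to the original queue, and delete it from the DLQ only after the re-send succeeds:

import com.amazonaws.services.sqs.AmazonSQS;
import com.amazonaws.services.sqs.AmazonSQSClientBuilder;
import com.amazonaws.services.sqs.model.Message;

public class RedriveFromDlq {
    public static void main(String[] args) {
        AmazonSQS sqs = AmazonSQSClientBuilder.defaultClient();
        String dlqUrl = "https://sqs.us-east-1.amazonaws.com/123456789012/Rental-Fulfilment-DLQ";
        String sourceUrl = "https://sqs.us-east-1.amazonaws.com/123456789012/Rental-Fulfilment";

        for (Message message : sqs.receiveMessage(dlqUrl).getMessages()) {
            // Re-send first, delete second: a crash in between duplicates the
            // message rather than losing it, so consumers should be idempotent.
            sqs.sendMessage(sourceUrl, message.getBody());
            sqs.deleteMessage(dlqUrl, message.getReceiptHandle());
        }
    }
}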

Setting up DLQs for subscriptions, queues, and functions can be done using the AWS Management Console, SDK, CLI, API, or AWS CloudFormation. You can use the SDK, CLI, and API for polling the DLQs as well.

Configuring DLQs for subscriptions

You can attach a DLQ to an SNS subscription by setting the subscription’s RedrivePolicy parameter. The policy is a JSON object that refers to the DLQ ARN. The ARN must point to an SQS queue in the same AWS account as that of the SNS subscription. Also, both the DLQ and the subscription must be in the same AWS Region.

Here’s how you can configure one of the SNS DLQs applied in the car rental application example, presented earlier.

The following JSON object is a CloudFormation template that subscribes the SQS queue Rental-Fulfilment to the SNS topic Rental-Orders. The template also sets a RedrivePolicy that targets Rental-Fulfilment-Fanout-DLQ as a DLQ.

Lastly, the template sets a FilterPolicy value. It makes SNS deliver a message to the subscribed queue only if the published message carries an attribute named order-status with value set to either confirmed or canceled. As Amazon SNS Message Filtering happens before message delivery, messages that are filtered out aren’t sent to that subscription’s DLQ.

Internally, the CloudFormation template uses the SNS Subscribe API action for deploying the subscription and setting both policies, all part of the same API request.

{  
   "Resources": {
      "mySubscription": {
         "Type" : "AWS::SNS::Subscription",
         "Properties" : {
            "Protocol": "sqs",
            "Endpoint": "arn:aws:sqs:us-east-1:123456789012:Rental-Fulfilment",
            "TopicArn": "arn:aws:sns:us-east-1:123456789012:Rental-Orders",
            "RedrivePolicy": {
               "deadLetterTargetArn": 
                  "arn:aws:sqs:us-east-1:123456789012:Rental-Fulfilment-Fanout-DLQ"
            },
            "FilterPolicy": { 
               "order-status": [ "confirmed", "canceled" ]
            }
         }
      }
   }
}

If the SNS topic and subscription are already deployed, you can use the SNS SetSubscriptionAttributes API action to set the RedrivePolicy, as shown in the following code examples, using the AWS CLI and the AWS SDK for Java.

$ aws sns set-subscription-attributes \
   --region us-east-1 \
   --subscription-arn arn:aws:sns:us-east-1:123456789012:Rental-Orders:44019880-ffa0-4067-9cb4-b974443bcck2 \
   --attribute-name RedrivePolicy \
   --attribute-value '{"deadLetterTargetArn":"arn:aws:sqs:us-east-1:123456789012:Rental-Fulfilment-Fanout-DLQ"}'

The equivalent configuration with the AWS SDK for Java:
AmazonSNS sns = AmazonSNSClientBuilder.defaultClient();

String subscriptionArn = "arn:aws:sns:us-east-1:123456789012:Rental-Orders:44019880-ffa0-4067-9cb4-b974443bcck2";

// The RedrivePolicy is a JSON string that points at the DLQ's ARN.
String redrivePolicy = "{\"deadLetterTargetArn\":\"arn:aws:sqs:us-east-1:123456789012:Rental-Fulfilment-Fanout-DLQ\"}";

// Attach the DLQ to the existing subscription.
SetSubscriptionAttributesRequest request = new SetSubscriptionAttributesRequest(
  subscriptionArn,
  "RedrivePolicy",
  redrivePolicy
);

sns.setSubscriptionAttributes(request);

Monitoring DLQs

You can use Amazon CloudWatch metrics and alarms to monitor the DLQs associated with your SNS subscriptions. In the car rental example, you can monitor the DLQs to be notified when the API fails to distribute any car rental order to the fulfilment or billing systems.

Like regular SQS queues, the DLQs in SNS emit a number of metrics to CloudWatch, in 5-minute data points, such as NumberOfMessagesSent, NumberOfMessagesReceived, and NumberOfMessagesDeleted. You can use these SQS metrics to be notified of activity in your DLQs in SNS, so that you can trigger a message recovery protocol.

You might have a case where you expect the DLQ to always be empty. In that case, create a CloudWatch alarm on NumberOfMessagesSent, set the alarm threshold to zero, and provide a separate SNS topic to be notified when the alarm goes off. That SNS topic can, in turn, deliver your alarm notification to any endpoint type that you choose, such as an email address, phone number, or mobile pager app.
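
A sketch of that alarm (the alarm name and notification topic ARN are hypothetical), using the SQS NumberOfMessagesSent metric that the DLQ emits:

import com.amazonaws.services.cloudwatch.AmazonCloudWatch;
import com.amazonaws.services.cloudwatch.AmazonCloudWatchClientBuilder;
import com.amazonaws.services.cloudwatch.model.ComparisonOperator;
import com.amazonaws.services.cloudwatch.model.Dimension;
import com.amazonaws.services.cloudwatch.model.PutMetricAlarmRequest;
import com.amazonaws.services.cloudwatch.model.Statistic;

public class DlqAlarm {
    public static void main(String[] args) {
        AmazonCloudWatch cloudWatch = AmazonCloudWatchClientBuilder.defaultClient();

        // Fire as soon as any message lands in the DLQ (threshold 0, > comparison).
        cloudWatch.putMetricAlarm(new PutMetricAlarmRequest()
            .withAlarmName("Rental-Fulfilment-Fanout-DLQ-NotEmpty")
            .withNamespace("AWS/SQS")
            .withMetricName("NumberOfMessagesSent")
            .withDimensions(new Dimension()
                .withName("QueueName")
                .withValue("Rental-Fulfilment-Fanout-DLQ"))
            .withStatistic(Statistic.Sum)
            .withPeriod(300) // SQS emits data points in 5-minute intervals
            .withEvaluationPeriods(1)
            .withThreshold(0.0)
            .withComparisonOperator(ComparisonOperator.GreaterThanThreshold)
            .withAlarmActions("arn:aws:sns:us-east-1:123456789012:Dlq-Alerts"));
    }
}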

Additionally, SNS itself provides its own set of metrics that are relevant to DLQs. Specifically, SNS metrics include the following:

  • NumberOfNotificationsRedrivenToDlq – Used when sending the message to the DLQ succeeds.
  • NumberOfNotificationsFailedToRedriveToDlq – Used when sending the message to the DLQ fails. This can happen because the DLQ either doesn’t exist anymore or doesn’t have the required access permissions to allow SNS to send messages to it. For more information about setting up the required access policy, see Giving Permissions for Amazon SNS to Send Messages to Amazon SQS.

Debugging with DLQs

Use CloudWatch Logs to see the exceptions that caused your SNS deliveries to fail and your messages to be sidelined to DLQs. In the car rental example, you can inspect the rental orders in the DLQs, as well as the logs associated with these queues. Then you can understand why those orders failed to be fanned out to the fulfilment or billing systems.

SNS can log both successful and failed deliveries in CloudWatch. You can enable Amazon SNS Delivery Status Logging by setting three SNS topic attributes, which are delivery protocol-specific. As an example, for SNS deliveries to SQS queues, you must set the following topic attributes: SQSSuccessFeedbackRoleArn, SQSFailureFeedbackRoleArn, and SQSSuccessFeedbackSampleRate.
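
As a sketch (the IAM role ARN is hypothetical and must allow SNS to write to CloudWatch Logs; a sample rate of 100 logs every successful delivery), these attributes can be set with the SetTopicAttributes API action:

import com.amazonaws.services.sns.AmazonSNS;
import com.amazonaws.services.sns.AmazonSNSClientBuilder;

public class EnableDeliveryLogging {
    public static void main(String[] args) {
        AmazonSNS sns = AmazonSNSClientBuilder.defaultClient();
        String topicArn = "arn:aws:sns:us-east-1:123456789012:Rental-Orders";
        String roleArn = "arn:aws:iam::123456789012:role/SnsDeliveryStatusLogging";

        // Roles that allow SNS to write delivery status entries to CloudWatch Logs.
        sns.setTopicAttributes(topicArn, "SQSSuccessFeedbackRoleArn", roleArn);
        sns.setTopicAttributes(topicArn, "SQSFailureFeedbackRoleArn", roleArn);
        // Log 100% of successful deliveries.
        sns.setTopicAttributes(topicArn, "SQSSuccessFeedbackSampleRate", "100");
    }
}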

The following JSON object represents a successful SNS delivery in a CloudWatch Logs entry. The status code logged is 200 (SUCCESS). The RedrivePolicy attribute shows that the SNS subscription in question had a DLQ set.

{
  "notification": {
    "messageMD5Sum": "7bb3327ac55e49485bad42e159ca4d4b",
    "messageId": "e8c2bb09-235c-5f5d-b583-efd8df0f7d74",
    "topicArn": "arn:aws:sns:us-east-1:123456789012:Rental-Orders",
    "timestamp": "2019-10-04 05:13:55.876"
  },
  "delivery": {
    "deliveryId": "6adf232e-fb12-5062-a564-27ff3741051f",
    "redrivePolicy": "{\"deadLetterTargetArn\": \"arn:aws:sqs:us-east-1:123456789012:Rental-Fulfilment-Fanout-DLQ\"}",
    "destination": "arn:aws:sqs:us-east-1:123456789012:Rental-Fulfilment",
    "providerResponse": "{\"sqsRequestId\":\"b2608a46-ccc4-51cc-003d-de972097debc\",\"sqsMessageId\":\"05fecd22-60a1-4d7d-bb79-026d49700b5a\"}",
    "dwellTimeMs": 58,
    "attempts": 1,
    "statusCode": 200
  },
  "status": "SUCCESS"
}

The following JSON object represents a failed SNS delivery in CloudWatch Logs. In this example, the subscribed queue doesn't exist. Because this is a client error, the status code logged is 400 (FAILURE). Again, the RedrivePolicy attribute refers to a DLQ.

{
  "notification": {
    "messageMD5Sum": "81c395cbd350da6bedfe3b24db9517b0",
    "messageId": "9959db9d-25c8-57a6-9439-8e5be8f71a1f",
    "topicArn": "arn:aws:sns:us-east-1:123456789012:Rental-Orders",
    "timestamp": "2019-10-04 05:16:51.116"
  },
  "delivery": {
    "deliveryId": "be743821-4c2c-5acc-a586-6cf0807f6fb1",
    "redrivePolicy": "{\"deadLetterTargetArn\": \"arn:aws:sqs:us-east-1:123456789012:Rental-Fulfilment-Fanout-DLQ\"}",
    "destination": "arn:aws:sqs:us-east-1:123456789012:Rental-Fulfilment",
    "providerResponse": "{\"ErrorCode\":\"AWS.SimpleQueueService.NonExistentQueue\", \"ErrorMessage\":\"The specified queue does not exist or you do not have access to it.\",\"sqsRequestId\":\"Unrecoverable\"}",
    "dwellTimeMs": 53,
    "attempts": 1,
    "statusCode": 400
  },
  "status": "FAILURE"
}

When the message delivery fails and there is a DLQ attached to the subscription, the message is sent to the DLQ and an additional entry is logged in CloudWatch. This new entry is specific to the delivery to the DLQ and refers to the DLQ ARN as the destination, as shown in the following JSON object.

{
  "notification": {
    "messageMD5Sum": "81c395cbd350da6bedfe3b24db9517b0",
    "messageId": "8959db9d-25c8-57a6-9439-8e5be8f71a1f",
    "topicArn": "arn:aws:sns:us-east-1:123456789012:Rental-Orders",
    "timestamp": "2019-10-04 05:16:52.876"
  },
  "delivery": {
    "deliveryId": "a877c79f-a3ee-5105-9bbd-92596eae0232",
    "destination":"arn:aws:sqs:us-east-1:123456789012:Rental-Fulfilment-Fanout-DLQ",
    "providerResponse": "{\"sqsRequestId\":\"8cef1af5-e86a-519e-ad36-4f33252aa5ec\",\"sqsMessageId\":\"2b742c5c-0750-4ec5-a717-b95897adda8e\"}",
    "dwellTimeMs": 51,
    "attempts": 1,
    "statusCode": 200
  },
  "status": "SUCCESS"
}

By analyzing Amazon CloudWatch Logs entries, you can understand why an SNS message was moved to a DLQ, and then take the required set of steps to recover the message. When you enable delivery status logging in SNS, you can configure the sample rate in which deliveries are logged, from 0% to 100%.

Encrypting DLQs

When your SNS subscription targets an SQS encrypted queue, you probably want your DLQ to be an SQS encrypted queue as well. This configuration keeps things consistent: your messages remain encrypted at rest whether they reach the subscribed endpoint or the DLQ.

To follow this security recommendation, give the CMK you used to encrypt your DLQ a key policy that grants the SNS service principal access to AWS KMS API actions. For example, see the following sample key policy:

{
    "Sid": "GrantSnsAccessToKms",
    "Effect": "Allow",
    "Principal": { "Service": "sns.amazonaws.com" },
    "Action": [ "kms:Decrypt", "kms:GenerateDataKey*" ],
    "Resource": "*"
}
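
Putting this together (a sketch, assuming a CMK with the alias rental-dlq-key exists and carries the key policy above), the DLQ itself is encrypted by pointing its KmsMasterKeyId attribute at that CMK:

import com.amazonaws.services.sqs.AmazonSQS;
import com.amazonaws.services.sqs.AmazonSQSClientBuilder;
import com.amazonaws.services.sqs.model.CreateQueueRequest;

public class CreateEncryptedDlq {
    public static void main(String[] args) {
        AmazonSQS sqs = AmazonSQSClientBuilder.defaultClient();

        // Server-side encryption with the CMK whose key policy grants SNS access.
        sqs.createQueue(new CreateQueueRequest()
            .withQueueName("Rental-Fulfilment-Fanout-DLQ")
            .addAttributesEntry("KmsMasterKeyId", "alias/rental-dlq-key"));
    }
}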

If you have an SNS encrypted topic, but a subscription in this topic points to a DLQ that isn’t an SQS encrypted queue, then messages sidelined to the DLQ aren’t encrypted at rest.

For more information, see Enabling Server-Side Encryption (SSE) for an Amazon SNS Topic with an Amazon SQS Encrypted Queue Subscribed.

Summary

DLQs for SNS, SQS, and Lambda increase the resiliency and durability of your applications. These DLQs address different failure modes, and can be used together.

  • SNS DLQs store messages that failed to be delivered to subscribed endpoints.
  • SQS DLQs store messages that the consumer system failed to process.
  • Lambda DLQs store the messages that resulted in failed asynchronous executions of your functions.

Setting up DLQs for subscriptions, queues, and functions can be done using the AWS Management Console, SDK, CLI, API, or CloudFormation. DLQs are available in all AWS Regions. Start today by running the tutorials: