Tag Archives: messaging

Send SMS messages at scale using 10DLC and Amazon Pinpoint

Post Syndicated from Brent Meyer original https://aws.amazon.com/blogs/messaging-and-targeting/send-sms-messages-at-scale-using-10dlc-and-amazon-pinpoint/

This week, we’re adding support for 10DLC phone numbers to Amazon Pinpoint. You can use 10DLC phone numbers to send SMS text messages at scale quickly and affordably.

What is 10DLC?

The abbreviation 10DLC stands for Ten-Digit Long Code. 10DLC phone numbers are intended specifically for sending Application-to-Person (A2P) messages—that is, messages that are sent from applications like Amazon Pinpoint to individual recipients. 10DLC is a concept that’s unique to the SMS industry in the United States. If you don’t send text messages to recipients in the US, then 10DLC doesn’t apply to you.

Before the launch of 10DLC, you could purchase unregistered US long codes instantly through the Amazon Pinpoint console. These long codes didn’t require a registration process—anyone could purchase them for $1 per month. However, the mobile carriers never intended for senders to use them to send A2P messages. For these reasons, their capabilities were limited. To prevent bad actors from sending spam and other malicious content, unregistered long codes could only send one message per second, and about 100 messages in a 24-hour period. Carriers applied heavy filtering to these phone numbers and blocked them for sending high volumes of messages, or as a penalty for sending unsolicited messages.

The alternative to using unregistered long codes is to use a short code. Short codes are a premium SMS product. They offer high rates of deliverability and high throughput (starting at 100 messages per second and going up to thousands of messages per second). The mobile carriers apply a rigorous approval process to short code applications. This process takes several weeks to complete. Short codes cost $995 per month, plus a one-time setup fee of $650. We continue to offer and support short codes in Amazon Pinpoint. Short codes are the right solution for many of our customers, and will continue to be part of the US SMS landscape well into the future.

For many customers though, the ideal solution is somewhere in the middle. 10DLC was designed to cover that middle ground. With 10DLC, senders are required to register both their company and their campaign. This registration information is added to The Campaign Registry (TCR), an industry-wide database of companies and use cases that are authorized to send messages using 10DLC phone numbers. Some use cases, such as one-time passwords and other authentication systems, can be approved within a week. Other use cases, such as promotional messaging, are subject to additional scrutiny, but can still be approved in a few weeks. While 10DLC phone numbers don’t offer the high throughput rates that short codes do, they can exceed the one message per second limit of unregistered long codes while offering higher deliverability rates. And importantly for many customers, they don’t come with the price tag associated with short codes. You pay a one-time fee of $4 to register your company, and a $10 monthly fee for each 10DLC campaign that you register. You also pay a $1 monthly charge for each 10DLC long code that you lease.

Note: On March 1, 2021, T-Mobile will begin to charge a one-time, $50 fee for registering your company. This fee will be charged in addition to the $4 company registration fee. No other carriers have announced similar fees.

The following table compares the costs associated with obtaining and using a short code against the costs of obtaining and using a 10DLC phone number. This table assumes that you only register one 10DLC company and campaign. It also assumes that you only use a single long code with your 10DLC campaign.

                 Short code   10DLC
One-time fees    $650         $54 ($4 company registration + $50 T-Mobile registration fee)
Monthly fees     $995         $11 ($1 phone number lease + $10 campaign registration fee)

Senders with very low throughput and volume requirements can register a “low-volume” 10DLC campaign for $2 per month, as opposed to the standard campaign fee of $10 per month. This option is a good choice for test and proof-of-concept use cases.

Drawbacks of using 10DLC phone numbers

For users of Amazon Pinpoint, 10DLC phone numbers offer several benefits. However, they also come with a few drawbacks. One drawback is the different ways that the US carriers support 10DLC. As I mentioned earlier, when you apply for a 10DLC phone number, you have to provide information about your company or brand, and information about your specific messaging use case. The carriers use this information to calculate a trust score. They then use this trust score to determine the capabilities of your 10DLC phone number. On T-Mobile and Sprint, your trust score determines the maximum number of messages that you can send each day through your 10DLC phone number. But for AT&T, your trust score determines the number of messages that you can send each minute, with no limit on the daily number of messages that you can send. (As of this writing, Verizon hasn’t announced their throughput plan.) These differences mean that you must carefully manage your messaging program to stay within the daily and per-minute limits imposed by the different carriers.

Another drawback to using 10DLC phone numbers is throughput. If your use case requires you to send a large number of text messages in a short amount of time (100 messages per second or more), you need a short code.

10DLC Capabilities

10DLC phone numbers typically have higher throughput and daily sending limits than unregistered long codes. The actual performance of your 10DLC phone number is based on the trust score for the company that you registered. The following table shows the trust score tiers and their associated limits.

Tier          Message parts per minute (AT&T)   Maximum daily messages (T-Mobile & Sprint)
High          1,800                              200,000
Medium-High   300                                40,000
Medium-Low    30                                 10,000
Basic         12                                 2,000

Setting up 10DLC

To set up 10DLC, you have to do three things. First, you must register your company. Second, you must register your use case. And third, you must add a phone number to your 10DLC campaign.

Important: When you complete the steps in this section, you are charged for registering both your company and your use case. These registration charges can’t be reversed. Only complete these steps if you agree to pay these charges.

Step 1: Register your company

When you register your company, you provide your company details to The Campaign Registry (TCR). The mobile carriers use this data to determine the trustworthiness of your use cases. Company approvals are usually granted instantly.

To register your company:

  1. Sign in to the Amazon Pinpoint console at https://console.aws.amazon.com/pinpoint.
  2. In the navigation pane, under Settings, choose SMS and voice.
  3. On the 10DLC campaigns tab, choose Register company, as shown in the following image.
    Shows the location of the Create 10DLC Company button on the SMS and voice settings page of the Amazon Pinpoint console.
  4. On the Register your company page, fill out the form completely. There are a few things to note in this process:
    • The Doing business as (DBA) or brand name field is mandatory. The value that you provide can be the same as your company name.
    • The Support email and Support phone number are the email address and phone number that your customers can use to contact you when they have questions.
  5. When you finish, choose Create.

Step 2: Register a 10DLC campaign

After you register a company, you can begin to register campaigns. In 10DLC terms, a campaign is a use case or set of closely related use cases. Amazon Pinpoint also sends this information to TCR. Carriers use this information to determine whether traffic that they see from a certain phone number is legitimate. Campaigns associated with common, low-risk use cases can typically be approved in about a week.

To register a 10DLC campaign:

  1. On the SMS and voice settings page, on the 10DLC campaigns tab, choose Create 10DLC Campaign, as shown in the following image.
    Shows the location of the Create 10DLC Campaign button on the SMS and voice settings page of the Amazon Pinpoint console.
  2. On the Create 10DLC Campaign page, do the following:
    1. For Company name, choose the company that you registered in the preceding section.
    2. For 10DLC campaign name, enter a name that describes your messaging use case, such as “Example Corp One-Time Passwords.”
    3. For Vertical, choose the category that most accurately describes your company and use case. For example, if you develop software for the healthcare industry, choose Healthcare.
    4. For Help message, enter the response that will be returned to recipients who reply to your messages with the keyword HELP. A good help message describes the purpose of the campaign. It also provides your customers with a method of contacting you for more help (typically an email address or phone number).
    5. For Stop message, enter the response that will be returned to recipients who reply to your messages with the keyword STOP. A typical stop message tells your customer what type of messages they’re unsubscribing from, and lets them know that you won’t send them any more messages.
    6. Under Campaign use case, choose the use case that most accurately describes how you plan to use the 10DLC phone number. Many common use cases—including two-factor authentication (2FA), marketing, security and fraud alerts, and public service announcements—are considered Standard use cases. Use cases that involve a greater degree of risk for carriers—such as political, sweepstakes, and emergency notifications—are considered Special use cases.
  3. When you finish, choose Create.

Step 3: Associate phone numbers with your 10DLC campaign

After your 10DLC company and campaign are approved, you can purchase new long codes. When you purchase a long code, you choose which 10DLC campaign to associate it with.

To purchase a long code:

  1. On the SMS and voice settings page, on the Phone numbers tab, choose Request long code/toll-free.
  2. On the Define your phone numbers page, in the Phone number 1 section, do the following:
    1. For Country, choose United States.
    2. For Number type, choose 10DLC.
    3. For Assign to existing 10DLC campaign, choose the 10DLC campaign that you created in the preceding section.
    4. For Default message type, choose the option that most accurately describes your use case.
    5. In the Summary section, for Quantity, specify how many phone numbers you want to purchase.
  3. Choose Next. Then, on the Review and request page, choose Request.

Cleanup

If you no longer need the long codes that are associated with your 10DLC campaign registration, you can delete them. If you delete a long code, you’re no longer charged the $1 monthly lease charge. However, you’re still charged the recurring 10DLC campaign registration fee, unless you delete your 10DLC campaign as well.

If you want to delete the 10DLC company or campaign registration information in Amazon Pinpoint, you can do so by opening a case in the AWS Support Center. The SMS and voice settings page in the Amazon Pinpoint console contains links that you can use to quickly open these cases.

Conclusion

If you need to start sending SMS messages to your customers quickly, and without the expense of a short code, 10DLC is a great option. With common use cases such as two-factor authentication, your 10DLC campaigns and phone numbers can be ready to use relatively quickly. Messages that you send using 10DLC will have the high deliverability rates that were previously reserved only for short codes.

Opt-in to the new Amazon SES console experience

Post Syndicated from Simon Poile original https://aws.amazon.com/blogs/messaging-and-targeting/amazon-ses-console-opt-in/

Amazon Web Services (AWS) is pleased to announce the launch of the newly redesigned Amazon Simple Email Service (SES) console. With its streamlined look and feel, the new console makes it even easier for customers to leverage the speed, reliability, and flexibility that Amazon SES has to offer. Customers can access the new console experience via an opt-in link on the classic console.

Amazon SES now offers a new, optimized console to provide customers with a simpler, more intuitive way to create and manage their resources, collect sending activity data, and monitor reputation health. It also has a more robust set of configuration options and new features and functionality not previously available in the classic console.

Here are a few of the improvements customers can find in the new Amazon SES console:

Verified identities

The new console streamlines how customers manage their sender identities in Amazon SES by replacing the classic console’s identity management section with verified identities. Verified identities are a centralized place in which customers can view, create, and configure both domain and email address identities on one page. Other notable improvements include:

  • DKIM-based verification
    DKIM-based domain verification replaces the previous verification method which was based on TXT records. DomainKeys Identified Mail (DKIM) is an email authentication mechanism that receiving mail servers use to validate email. This new verification method offers customers the added benefit of enhancing their deliverability with DKIM-compliant email providers, and helping them achieve compliance with DMARC (Domain-based Message Authentication, Reporting and Conformance).
  • Amazon SES mailbox simulator
    The new mailbox simulator makes it significantly easier for customers to test how their applications handle different email sending scenarios. From a dropdown, customers select which scenario they’d like to simulate. Scenario options include bounces, complaints, and automatic out-of-office responses. The mailbox simulator provides customers with a safe environment in which to test their email sending capabilities.
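The simulator can also be exercised programmatically by sending to its special addresses. The following is a minimal sketch, assuming the SES v2 API with Boto3, default credentials, and a sender address placeholder that you would replace with one of your verified identities:

import boto3

ses = boto3.client("sesv2", region_name="us-east-1")

# The mailbox simulator accepts success@, bounce@, complaint@, ooto@, and
# suppressionlist@simulator.amazonses.com; this call exercises the bounce scenario.
response = ses.send_email(
    FromEmailAddress="sender@example.com",  # placeholder: use a verified identity
    Destination={"ToAddresses": ["bounce@simulator.amazonses.com"]},
    Content={
        "Simple": {
            "Subject": {"Data": "Mailbox simulator test"},
            "Body": {"Text": {"Data": "Testing bounce handling."}},
        }
    },
)
print(response["MessageId"])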

Configuration sets

The new console makes it easier for customers to experience the benefits of using configuration sets. Configuration sets enable customers to capture and publish event data for specific segments of their email sending program. They also isolate IP reputation by segment through dedicated IP pools. With a wider range of configuration options, such as reputation tracking and custom suppression options, customers get even more out of this powerful feature.

  • Default configuration set
    One important feature to highlight is the introduction of the default configuration set. By assigning a default configuration set to an identity, customers ensure that the assigned configuration set is always applied to messages sent from that identity at the time of sending. This enables customers to associate a dedicated IP pool or set up event publishing for an identity without having to modify their email headers.
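Besides the console, a default configuration set can be assigned with the SES v2 API. A minimal Boto3 sketch, assuming an already-verified identity named example.com and an existing configuration set named my-default-config-set (both are placeholders):

import boto3

sesv2 = boto3.client("sesv2")

# Make "my-default-config-set" the default configuration set for the
# "example.com" identity; messages sent from that identity pick it up automatically.
sesv2.put_email_identity_configuration_set_attributes(
    EmailIdentity="example.com",
    ConfigurationSetName="my-default-config-set",
)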

Account dashboard

There is also an account dashboard for the new SES console. This feature provides customers with fast access to key information about their account, including sending limits and restrictions, and overall account health. A visual representation of the customer’s daily email usage helps them ensure that they aren’t approaching their sending limits. Additionally, customers who use the Amazon SES SMTP interface to send emails can visit the account dashboard to obtain or update their SMTP credentials.

Reputation metrics

The new reputation metrics page provides customers with high-level insight into historic bounce and complaint rates. This is viewed at both the account level and the configuration set level. Bounce and complaint rates are two important metrics that Amazon SES considers when assessing a customer’s sender reputation, as well as the overall health of their account.

The redesigned Amazon SES console, with its easy-to-use workflows, will not only enhance customers’ onboarding experience, but also improve their ongoing usage. The Amazon SES team remains committed to investing on behalf of our customers and empowering them to be productive anywhere, anytime. We invite you to opt in to the new Amazon SES console experience and let us know what you think.

Updating opt-in status for Amazon Pinpoint channels

Post Syndicated from Varinder Dhanota original https://aws.amazon.com/blogs/messaging-and-targeting/updating-opt-in-status-for-amazon-pinpoint-channels/

In many real-world scenarios, customers use home-grown or third-party systems to manage their campaign-related information, including user preferences, segmentation, targeting, interactions, and more. To create customer-centric engagement experiences with such existing systems, you need to migrate into or integrate with Amazon Pinpoint. Fortunately, many AWS services and mechanisms can help streamline this integration in a resilient and cost-effective way.

In this blog post, we demonstrate a sample solution that captures changes from an on-premises application’s database by utilizing AWS Integration and Transfer Services and updates Amazon Pinpoint in real-time.

If you are looking for a serverless, mobile-optimized preference center allowing end users to manage their Pinpoint communication preferences and attributes, you can also check the Amazon Pinpoint Preference Center.

Architecture


In this scenario, users’ SMS opt-in/opt-out preferences are managed by a home-grown customer application. Users interact with the application over its web interface, and the application saves the customer preferences in a MySQL database.

This solution’s flow of events is triggered by a change (insert/update/delete) in the database. The change event is captured by AWS Database Migration Service (DMS), which is configured with an ongoing replication task. This task continuously monitors the specified database and forwards each change event to an Amazon Kinesis Data Streams stream. Raw events buffered in this stream are polled by an AWS Lambda function. This function transforms the event and passes it to the Amazon Pinpoint API, which in turn changes the opt-in/opt-out subscription status of the channel for that user.

Ongoing replication tasks can be created against multiple types of database engines, including Oracle, MS-SQL, PostgreSQL, and more. In this blog post, we use a MySQL-based RDS instance to demonstrate this architecture. The instance has a database named pinpoint_demo and one table named optin_status. In this sample, we assume the table holds details about each user and their opt-in preference for SMS messages.

userid   phone           optin   lastupdate
user1    +12341111111    1       1593867404
user2    +12341111112    1       1593867404
user3    +12341111113    1       1593867404

Prerequisites

  1. AWS CLI is configured with an active AWS account and appropriate access.
  2. You have an understanding of Amazon Pinpoint concepts. You will be using Amazon Pinpoint to create a segment, populate endpoints, and validate phone numbers. For more details, see the Amazon Pinpoint product page and documentation.

Setup

First, you clone the repository that contains a stack of templates to your local environment. Make sure you have configured your AWS CLI with AWS credentials. Follow the steps below to deploy the CloudFormation stack:

  1. Clone the git repository containing the CloudFormation templates:
    git clone https://github.com/aws-samples/amazon-pinpoint-rds-integration.git
    cd amazon-pinpoint-rds-integration
  2. You need an S3 Bucket to hold the template:
    aws s3api create-bucket --bucket <YOUR-BUCKET-NAME>
  3. Run the following command to package the CloudFormation templates:
    aws cloudformation package --template-file template_stack.yaml --output-template-file template_out.yaml --s3-bucket <YOUR-BUCKET-NAME>
  4. Deploy the stack with the following command:
    aws cloudformation deploy --template-file template_out.yaml --stack-name pinpointblogstack --capabilities CAPABILITY_AUTO_EXPAND CAPABILITY_NAMED_IAM

The AWS CloudFormation stack will create and configure resources for you. Some of the resources it will create are:

  • Amazon RDS instance with MySQL
  • AWS Database Migration Service replication instance
  • AWS Database Migration Service source endpoint for MySQL
  • AWS Database Migration Service target endpoint for Amazon Kinesis Data Streams
  • Amazon Kinesis Data Streams stream
  • AWS Lambda Function
  • Amazon Pinpoint Application
  • A Cloud9 environment as a bastion host

The deployment can take up to 15 minutes. You can track its progress in the CloudFormation console’s Events tab.

Populate RDS data

A CloudFormation stack will output the DNS address of an RDS endpoint and Cloud9 environment upon completion. The Cloud9 environment acts as a bastion host and allows you to reach the RDS instance endpoint deployed into the private subnet by CloudFormation.

  1. Open the AWS Console and navigate to the Cloud9 service.
  2. Click on the Open IDE button to reach your IDE environment.
  3. At the console pane of your IDE, type the following to log in to your RDS instance. You can find the RDS endpoint address in the Outputs section of the CloudFormation stack, under the key name RDSInstanceEndpoint.
    mysql -h <YOUR_RDS_ENDPOINT> -uadmin -pmypassword
    use blog_db;
  4. Issue the following command to create a table that holds the user’s opt-in status:
    create table optin_status (
      userid varchar(50) not null,
      phone varchar(50) not null,
      optin tinyint default 1,
      lastupdate TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP
    );
  5. Next, load sample data into the table. The following inserts nine users for this demo:
    
    INSERT INTO optin_status (userid, phone, optin) VALUES ('user1', '+12341111111', 1);
    INSERT INTO optin_status (userid, phone, optin) VALUES ('user2', '+12341111112', 1);
    INSERT INTO optin_status (userid, phone, optin) VALUES ('user3', '+12341111113', 1);
    INSERT INTO optin_status (userid, phone, optin) VALUES ('user4', '+12341111114', 1);
    INSERT INTO optin_status (userid, phone, optin) VALUES ('user5', '+12341111115', 1);
    INSERT INTO optin_status (userid, phone, optin) VALUES ('user6', '+12341111116', 1);
    INSERT INTO optin_status (userid, phone, optin) VALUES ('user7', '+12341111117', 1);
    INSERT INTO optin_status (userid, phone, optin) VALUES ('user8', '+12341111118', 1);
    INSERT INTO optin_status (userid, phone, optin) VALUES ('user9', '+12341111119', 1);
  6. The table’s optin column holds the SMS opt-in status, and the phone column holds the phone number for each user.

Start the DMS Replication Task

Now that the environment is ready, you can start the DMS replication task and start watching the changes in this table.

  1. From the AWS DMS Console, go to the Database Migration Tasks section.
  2. Select the Migration task named blogreplicationtask.
  3. From the Actions menu, click Restart/Resume to start the migration task. Wait until the task’s Status changes from Ready to Starting, and then to Replication ongoing.
  4. At this point, all the changes on the source database are replicated into a Kinesis stream. Before introducing the AWS Lambda function that will be polling this stream, configure the Amazon Pinpoint application.

Inspect the AWS Lambda Function

An AWS Lambda function has been created to receive the events. The Lambda function uses Python and Boto3 to read the records delivered by Kinesis Data Streams. It then performs the update_endpoint API calls in order to add, update, or delete endpoints in the Amazon Pinpoint application.
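The deployed function’s exact code lives in the repository. The following is a simplified sketch of this kind of handler; it assumes the DMS records arrive as JSON documents with data and metadata keys and that the optin_status columns are used as shown earlier, and it omits the delete, validation, and error-handling paths that a production function needs.

import base64
import json
import os

import boto3

pinpoint = boto3.client("pinpoint")
APP_ID = os.environ["PINPOINT_APPID"]

def handler(event, context):
    # Each Kinesis record carries a base64-encoded JSON document produced by DMS.
    for record in event["Records"]:
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
        row = payload.get("data") or {}
        if not row.get("userid"):
            continue

        # optin=1 -> OptOut NONE (reachable); optin=0 -> OptOut ALL (suppressed).
        opt_out = "NONE" if row.get("optin", 1) == 1 else "ALL"

        pinpoint.update_endpoint(
            ApplicationId=APP_ID,
            EndpointId=row["userid"],
            EndpointRequest={
                "Address": row["phone"],
                "ChannelType": "SMS",
                "OptOut": opt_out,
                "User": {"UserId": row["userid"]},
            },
        )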

Lambda code and configuration are accessible through the Lambda console. To inspect the Python code, click Functions in the navigation pane, then select the function whose name starts with pinpointblogstack-MainStack by clicking on the function name.

Note the PINPOINT_APPID variable under the Environment variables section. This variable provides the Lambda function with the Amazon Pinpoint application ID to use when making the API call.


Inspect Amazon Pinpoint Application in Amazon Pinpoint Console

A Pinpoint application is needed by the Lambda Function to update the endpoints. This application has been created with an SMS Channel by the CloudFormation template. Once the data from the RDS database has been imported into Pinpoint as SMS endpoints, you can validate this import by creating a segment in Pinpoint.


Testing

With the Lambda function ready, you now test the whole solution.

  1. To initiate the end-to-end test, go to the Cloud9 terminal. Perform the following SQL statements on the optin_status table:
    UPDATE optin_status SET optin=0 WHERE userid='user1';
    UPDATE optin_status SET optin=0 WHERE userid='user2';
    UPDATE optin_status SET optin=0 WHERE userid='user3';
    UPDATE optin_status SET optin=0 WHERE userid='user4';
  2. These statements cause four changes in the database, which are captured by DMS and passed to the Kinesis Data Streams stream.
  3. This triggers the Lambda function, which constructs an update_endpoint API call to the Amazon Pinpoint application.
  4. The update_endpoint operation is an upsert operation. Therefore, if the endpoint does not exist on the Amazon Pinpoint application, it creates one. Otherwise, it updates the current endpoint.
  5. In the initial dataset, all the opt-in values are 1. Therefore, these endpoints will be created with an OptOut value of NONE in Amazon Pinpoint.
  6. All endpoints with OptOut=NONE are considered active endpoints. Therefore, they are available for use within segments.

Create Amazon Pinpoint Segment

  1. In order to see these changes, go to the Pinpoint console. Click on PinpointBlogApp.
  2. Click on Segments on the left side. Then click Create a segment.
  3. For the segment name, enter US-Segment.
  4. Select Endpoint from the Filter dropdown.
  5. Under the Choose an endpoint attribute dropdown, select Country.
  6. For Choose values enter US.
    Note: As you do this, the right panel Segment estimate will refresh to show the number of endpoints eligible for this segment filter.
  7. Click Create segment at the bottom of the page.
  8. Once the new segment is created, you are directed to the newly created segment with its configuration details. You should see five eligible endpoints, corresponding to the database rows that are still opted in.
  9. Now, change one row by issuing the following SQL statement. This simulates a user opting out from SMS communication for one of their numbers.
    UPDATE optin_status SET optin=0 WHERE userid='user5';
  10. After the update, go to the Amazon Pinpoint console. Check the eligible endpoints again. You should only see four eligible endpoints.


Cleanup

If you no longer want to incur further charges, delete the CloudFormation stack named pinpointblogstack. Select it and click Delete.


Conclusion

This solution walks you through how opt-in change events are delivered from Amazon RDS to Amazon Pinpoint. You can use this solution in other use cases as well. Some examples are importing segments from a 3rd party application like Salesforce and importing other types of channels like e-mail, push, and voice. To learn more about Amazon Pinpoint, visit our website.

Amazon SES celebrates 10 years of email sending and deliverability

Post Syndicated from Simon Poile original https://aws.amazon.com/blogs/messaging-and-targeting/amazon-ses-celebrates-10-years-of-email-sending-and-deliverability/

Amazon Simple Email Service (Amazon SES) turns 10 years old today. Back on January 25th, 2011, Amazon Web Services (AWS) had only 15 services. Today, AWS has grown to over 180 services. Jeff Barr announced Amazon SES on his AWS web evangelist blog. Much of what he wrote about then is still true today. Even 10 years later, email is an important channel for customer communications. Developers still want to rely on a trusted global partner to deliver email at scale. However, mailbox providers are even more protective of their end users’ security. They actively work to ensure that any perceived, unwanted email doesn’t make it to the inbox.

Inbox providers use several factors to determine the legitimacy of email traffic. Over the last decade, we have worked diligently to measure many of those factors in Amazon SES to help our customers achieve great deliverability. The focus for much of that work has been a combination of investments into reputation, engagement, and trust. I want to outline what we’ve accomplished to improve your email sending over the last 10 years.

Reputation

Reputation is the measurement mailbox providers use to determine how closely you follow their sending standards. Amazon SES measures perceived reputation through metrics such as bounce rate or complaint rate in the reputation dashboard. The reputation dashboard also shares overall Amazon SES account sending status like “Healthy” or “Under Review.” Some Inbox providers, or ISPs, also provide feedback to help us measure the effectiveness of a specific IP or domain in sending trustworthy traffic.

You can influence reputation in Amazon SES through:

  • Setting up dedicated IPs: Set up IPs in Amazon SES for your own specific sending with appropriate warm-up plans. Split IPs out by use case such as separating password resets from marketing messages.
  • Customer owned IPs (New in 2020): You can now transition IPs you’ve invested in through your own data center or with another ESP to Amazon SES without interruption.
  • Following sending volume best practices: Nothing can flag your IP addresses faster than non-predictable sending patterns. We help you manage this through sending quotas.
  • Using the SES mailbox simulator: Test your application sending without messages leaving the sandbox.
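As one example from the list above, you can check your account’s current sending quota and rate programmatically. A minimal Boto3 sketch using the SES v2 API, assuming default credentials and region:

import boto3

sesv2 = boto3.client("sesv2")

# GetAccount returns account-level details, including the sending quota.
account = sesv2.get_account()
quota = account["SendQuota"]
print("Max send rate (msgs/sec):", quota["MaxSendRate"])
print("Max sends per 24 hours:  ", quota["Max24HourSend"])
print("Sent in last 24 hours:   ", quota["SentLast24Hours"])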


Engagement

Engagement is the rate by which customers are interacting with your content. Amazon SES helps you measure engagement through conversion rates (such as open or click-through) and unsubscribe rates. These are measured in the event publishing click stream. This area is more of an art in our deliverability calculus because success varies by industry and use case.

You can influence engagement in Amazon SES through:

  • Customizing content as much as possible, while following content best practices to avoid setting off content filters. Mailbox providers often utilize behavioral content filtering using AI to determine if your content is relevant based on engagement behavior.
  • Using consent and list management (New in 2020) with customized topics and opt-out pages. It’s important to offer recipients a way to select what emails they want to receive from you and give them an option to opt out. This is a great new feature that we’ve added based on customer feedback.
  • Removing addresses that are not engaging from your lists. Some customers set a time limit, for example 60 days, after which unengaged recipients are automatically removed from an active email list.


Trust

Earning trust in email sending comes from adhering to proper sending behavior, as measured by both individual ISPs and industry watch groups. Trust is closely related to reputation. We measure trust through messages in the reputation dashboard based on feedback loops, real-time blocklists (RBLs), and spam traps. You can also see the complaint rate associated with your sending in the complaint area of the reputation dashboard. It has statuses like healthy or under review.

You can influence trust in Amazon SES through:


Deliverability is a multi-dimensional part of email sending that goes well beyond setting up an SMTP (Simple Mail Transfer Protocol) endpoint, and its complexities are constantly changing. But we’re here to help. In addition to these investments in deliverability, we’ve also expanded Amazon SES to 18 regions, including the government cloud. It’s been an exciting time at AWS, and we look forward to supporting all of our customers in the years to come with Amazon SES.


Send localized messages using Amazon Pinpoint templates and standard demographic attributes

Post Syndicated from Mohit Palriwal original https://aws.amazon.com/blogs/messaging-and-targeting/send-localized-messages-using-pinpoint-templates-and-standard-demographic-attributes/

As your application user base expands into more countries and languages, it’s important to make sure messages are localized for each recipient to improve engagement. Localizing your messages helps you reach your audience with content specific to their language settings. Creating separate messages for each language and managing each template separately can require a lot of duplicated effort. It is also challenging to manage and group templates based on all possible locales or specific campaigns.

Amazon Pinpoint‘s messaging templates provide a way to build a single message with multiple localizations. You prepare the localizations based on the locale of the audience endpoints registered with your Amazon Pinpoint project.

This blog post walks you through a solution that uses the locale of your user endpoints to build a localized messaging template. We provide a template that can be used with an Amazon Pinpoint campaign or journey to target your audience across multiple locales with localized message content. This solution applies to all channels that Amazon Pinpoint supports: SMS, email, push, and voice. This blog explains the solution for an SMS channel-specific scenario.

Solution overview

The solution below describes the workflow for sending localized messages to a group of users across various locales. The first prerequisite is to create an Amazon Pinpoint project in your AWS account and enable the corresponding channels for message sending. Next, you create an Amazon Pinpoint template using locale-specific message variables and register user endpoints with a demographic locale property. Once the segment and template resources are created, you can create a localized message in your campaign or journey.

Prerequisites

Setting up the solution

1. Set up Amazon Pinpoint

First, create a new Amazon Pinpoint project and configure the desired channels from which you want to send localized messages.

2. Create a localized template

  1. Create an Amazon Pinpoint messaging template with supported message variables of your choice. This builds more dynamic and personalized content.
  2. Use Demographic.Locale from the supported endpoint attributes to customize your message content per locale using the eq comparison helper.

Below is an example of using an endpoint standard locale attribute in a template.

{{#eq Demographic.Locale "fr-FR"}} Bienvenue dans l'expérience utilisateur Pinpoint! 
{{else eq Demographic.Locale "de-DE"}} Willkommen bei Pinpoint User Experience! 
{{else}} Welcome to Pinpoint User Experience ! {{/eq}}  

3. Register your users with locale property

Register your user endpoints with Amazon Pinpoint, setting the demographic locale and timezone standard attributes.

The following is an example of registering an SMS endpoint with the de-DE locale:

aws pinpoint update-endpoint --application-id $APP_ID --endpoint-id $ENDPOINT_ID \
    --endpoint-request '{"Address":"+19999999999","ChannelType":"SMS","Demographic":{"Locale":"de-DE", "Timezone": "Europe/Berlin"}}'

Note: You can also register your user endpoints using the import segment feature. This accepts a .csv file with all endpoints.
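As an illustration, an import file for SMS endpoints with locale and timezone attributes might look like the following; the column headers use Amazon Pinpoint’s dotted attribute notation, and the phone numbers are placeholders:

ChannelType,Address,Demographic.Locale,Demographic.Timezone
SMS,+19999999991,fr-FR,Europe/Paris
SMS,+19999999992,de-DE,Europe/Berlin
SMS,+19999999993,en-US,America/New_York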

4. Create a segment with all locale users

Create an Amazon Pinpoint segment to define the audience you want to target with the localized message.

5. Create a journey or campaign

  1. Create an Amazon Pinpoint campaign or journey.
  2. Use the template from earlier in Step 2.
  3. Use the segment with all locale users from Step 4.
    Note: You can also use Amazon Pinpoint local time and quiet time features to target your audience in their local time zone or at a specific global time (for example, 10 AM GMT). Quiet time also respects the quiet hours (for example, 23:00 to 8:00) specific to each recipient’s local time zone, based on the EndpointDemographic.Timezone property.


6. Execution:

A marketing campaign manager wants to send a localized message to each audience member based on their preferred language. To do so, the manager:

  1. Creates the localized template described in Step 2.
  2. Creates a segment with all locale users (in this example, two endpoints, each with a unique locale), as described in Step 4.
  3. Creates a single journey that targets that segment and uses the localized template.

Conclusion

Amazon Pinpoint messaging templates make it easy to manage a single template for multiple locales.

With a localized messaging template you can simply target your audience across locales and receive targeted analytics. Get started today by visiting Amazon Pinpoint’s webpage.

Other useful links


Strategies for list management with Amazon Pinpoint and Amazon Simple Email Service

Post Syndicated from Heidi Gloudemans original https://aws.amazon.com/blogs/messaging-and-targeting/strategies-for-list-management-with-amazon-pinpoint-and-amazon-simple-email-service/

Managing customer lists is a large part of any outbound customer communication program. From customer acquisition to ongoing engagement, locating the best sources for subscribers and respecting their contact preferences is key to maintaining healthy customer lists. This article discusses recommendations for list building using Amazon Pinpoint and Amazon Simple Email Service (SES). We provide recommendations for the subscription process, what information to ask for, how to manage opt-outs, and how to optimize lists over time.

Customer acquisition

Customer acquisition is the first part of any list activity. There are a few guidelines that all outbound marketers should follow during the list building process. First, do not use 3rd party, leased, or purchased lists. The use of 3rd party lists for email risks complaints, impact to sender reputation, and inclusion in monitoring functions like spam traps. Email service providers like Amazon Pinpoint and Amazon SES will discontinue service for accounts with poor sending behavior that results from use of 3rd party lists.

Second, if you plan on contacting customers across channels, make sure you acquire permission to contact users on each channel. There are many places you can acquire customer contacts, ranging from your website, social media presence, or a QR code on a physical sign. Amazon Pinpoint also has a solution called the Amazon Pinpoint Preference Center that you can deploy to gather and manage customer contact preferences across channels.

There are a few items that you want to include in any customer acquisition form. First, tell the customer how often they can expect communication from you. Is it a weekly newsletter? Monthly? Even better if you give them the option to select how often you communicate with them. Next, tell them what value they can expect from registration. For example, special deals, early access to sales, or even just product and industry news. While you can provide some incentives, avoid providing high-value incentives for registration. Over-the-top enticements like free products will always cause low-quality registrations and resulting low-quality lists.

In addition, make your sign-up forms as concise as possible. Only put high-value content behind registration, and minimize the amount of information customers must provide to register. Having a full profile makes your life as a marketer trying to segment your customers easier. However, it potentially adds friction to the sign-up process which can result in lost customers.

If you can, allow the customer to indicate their content preferences later or during onboarding communications. If you use Amazon SES, you can support up to 20 list topics per account in the Amazon SES list management. For example, if you are a sportswear company, interests could include topics like hiking, biking, or running. You should then send customers emails only about the specific topics that the recipient is interested in receiving. Make sure you retain preference data. All countries are different, but some require you to prove that you received permission to contact a customer.
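A contact list with interest topics can be created with the SES v2 API. A minimal Boto3 sketch; the list name, topic names, and default subscription status are illustrative choices rather than values from this article:

import boto3

sesv2 = boto3.client("sesv2")

# Create a contact list with interest topics (SES supports up to 20 topics).
sesv2.create_contact_list(
    ContactListName="SportswearNewsletter",
    Description="Product news and activity-specific updates",
    Topics=[
        {"TopicName": "hiking", "DisplayName": "Hiking", "DefaultSubscriptionStatus": "OPT_OUT"},
        {"TopicName": "biking", "DisplayName": "Biking", "DefaultSubscriptionStatus": "OPT_OUT"},
        {"TopicName": "running", "DisplayName": "Running", "DefaultSubscriptionStatus": "OPT_OUT"},
    ],
)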

Managing your customer contacts

Onboarding communication is an essential first step once a customer has submitted their initial registration. Some organizations use the first communication as a registration confirmation step, called a “double opt-in.” In addition to driving engagement and initial calls-to-action (“Confirm your email”), double opt-in emails have the added benefit of verifying that a bot did not submit your customer email.

From the first message you send to a new customer to the last, you should always include an unsubscribe option. Every country has different requirements, which you should research and educate your organization about. Amazon SES now supports subscription management, including unsubscribe URLs in the footers of your emails. Amazon SES also supports contact preferences on a custom landing page, where customers can adjust their contact list preferences.

Removing unengaged users

Both Amazon Pinpoint and Amazon SES provide visibility into the success of outbound communications through open rates and click-through rates at the account or campaign level. If time passes with limited engagement from an individual customer (that is, they do not open your emails or engage with the content), there is a risk the recipient mailbox provider will start marking your messages as spam. Work with your business to determine the period of time after which you should automatically remove unengaged users from your contact list. Many organizations remove customers from their contact lists after 60 to 90 days of non-engagement. If you need the ability to quickly query customers that are not engaging with your communications from your data store of choice, enable event stream data in Amazon Pinpoint or Amazon SES using Amazon Kinesis. Amazon Pinpoint also has a solution, the Digital User Engagement Events Stream Database, that creates a data store for that purpose.

Amazon SES and Amazon Pinpoint both also have the concept of global and account suppression lists. Global suppression lists are managed across AWS accounts, while account-level suppression lists are associated with your AWS account. If a customer explicitly unsubscribes from your list using Amazon SES list management, or complains through their inbox provider, they are automatically added to the respective account suppression list. Customers that are on suppression lists are no longer sent messages from your account. Respecting contact preferences like unsubscribe is an opportunity to earn trust with that specific customer, the market, and the recipient mailbox provider.

Conclusion

There are a number of additional best practices to drive customer engagement across communications channels. They can include message headlines, copy, graphics/images, and adaptive design across various endpoints and clients. However, nothing is as important as maintaining the trust of end customers with their contact information. Sourcing contact information with the customer permission, respecting contact preferences, and maintaining list hygiene is the cornerstone to a successful customer communications program. Learn more today about Amazon Pinpoint and Amazon SES and customer communications.

How Amazon Simple Email Service supported the growth of email in 2020

Post Syndicated from Simon Poile original https://aws.amazon.com/blogs/messaging-and-targeting/how-amazon-simple-email-service-supported-the-growth-of-email-in-2020/

Over the last 12 months, organizations of all types have increasingly needed to stay connected to their customers. With the move to virtual interactions accelerating across industries, email has remained a trusted channel for customer communications. Amazon Simple Email Service (SES) has seen record outbound email traffic in 2020, supporting critical customer communications during COVID and commercial moments like Black Friday and Cyber Monday.

The importance of email during COVID

Unlike real-time communications like voice or live chat, email is asynchronous. It can be read and consumed at the customer’s leisure. In some geographies like North America, email also represents an individual’s unique identity, persisting longer than mobile phone numbers or social networking accounts. Even with the importance of email established before 2020, it was important to most organizations to send only the right messages during the COVID crisis.

Many organizations chose to decrease promotional or marketing emails during the pandemic voluntarily. This decrease in sending was to recognize the increased stress most individuals were facing in their personal lives. However, even with the drop in marketing emails across organizational types, there was an increased need to communicate and maintain customer engagement. Most organizations went through three distinct customer communication phases with email in 2020: React, Respond, and Reimagine.

  • React – These were the initial emails sent to acknowledge the COVID crisis, occurring early in 2020. These emails included messages reinforcing commitment to customer health, employee safety, or communicating new cleaning protocols.
  • Respond – These messages often included communication on the status of the business or event. Most businesses needed to communicate their transition to remote work, temporary closures, and many in-person events canceled.
  • Reimagine – Throughout the crisis, organizations were reimagining how to do business. Healthcare started operating video consultations, and restaurants shifted to pick up/take out only. Email communication was vital to take customers on the journey into this “new normal,” even as some businesses started to reopen.

To send these customer communications at scale, many organizations worked with Amazon SES.

How Amazon SES scaled and supported customers in 2020

Amazon SES saw several sending spikes that aligned with organizations working to communicate with their customers during COVID. Nine times in 2020, transactions per second (TPS) in Amazon SES exceeded 150% of the previous record, which was set on Black Friday 2019. TPS spikes of over 150% of that record also occurred on Black Friday and Cyber Monday 2020.

In addition to supporting those upsurges in throughput, the Amazon SES team also responded to customer feedback on increasing the global footprint of Amazon SES. Since January, Amazon SES increased the total number of regions supported from 7 to 14, including the US government cloud. These additional regions were deployed during 2020 as the team worked remotely. This regional expansion enabled customers to adhere to local data sovereignty requirements for email sending while also improving performance.

Customers also told us they needed tools to help them manage compliance with important governance laws like CAN-SPAM and GDPR. Amazon SES released list management to help organizations manage their customer’s contact information and preferences.

Looking forward

As we move into 2021, email will remain at the forefront of customer communication channels. Enterprise customers like Netflix and Duolingo rely on Amazon SES to deliver their email at scale. For more information on how you can use Amazon SES, visit our website.

Automate phone number validation with Amazon Pinpoint

Post Syndicated from Ilya Pupko original https://aws.amazon.com/blogs/messaging-and-targeting/automate-phone-number-validation-with-amazon-pinpoint/

Amazon Pinpoint allows you to engage with your customers across multiple messaging channels like SMS text, email, and voice messages. While planning and executing standard text (SMS) and voice-based campaigns, one of the challenges developers often run into is the need to verify if the phone numbers in their internal database are valid and conform to the standard E.164 format. You can attempt to verify the phone numbers manually one at a time, but it’s tedious. To overcome this issue, Amazon Pinpoint provides a phone number validation service that you can use to determine if a phone number is valid, have it automatically formatted, and obtain additional information about the phone number itself. For example, when you use the phone number validation service, it returns the following information:

  • The phone number in E.164 format.
  • The phone number type (such as mobile, landline, or VoIP).
  • The city and country where the phone number is based.
  • The service provider that is associated with the phone number.
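Programmatically, the service is exposed as the phone number validate operation. A minimal Boto3 sketch, using a made-up number and assuming default credentials and region:

import boto3

pinpoint = boto3.client("pinpoint")

response = pinpoint.phone_number_validate(
    NumberValidateRequest={
        "IsoCountryCode": "US",         # hint used when no country code is supplied
        "PhoneNumber": "206-555-0199",  # any common formatting is accepted
    }
)

result = response["NumberValidateResponse"]
print(result.get("CleansedPhoneNumberE164"))   # e.g. +12065550199
print(result.get("PhoneType"))                 # e.g. MOBILE, LANDLINE, VOIP
print(result.get("City"), result.get("Country"))
print(result.get("Carrier"))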

This blog post aims to provide a step-by-step implementation guide and the necessary code to enable an integrated solution for number verification.

Process flows and architecture


This solution uses Amazon Simple Storage Service (Amazon S3), Amazon Pinpoint, AWS Step Functions, Amazon Simple Notification Service (SNS) and AWS Lambda. To initiate the process, you upload your source contact file in the CSV format to the dedicated Amazon S3 bucket. When the CSV file is uploaded, S3 triggers the associated tasks. Based on the optional configuration rules, the application code either runs the Phone Validate logic first or imports the contact information as-is into Amazon Pinpoint as a new imported segment and updates overall Amazon Pinpoint audience information. If Phone Validation is enabled, the system will first generate and save the new output file to Amazon S3 with the valid phone number, metadata, etc. and use this updated contact information during import. Additionally, the system will kick-off a scheduled campaign to all imported contacts.

This CloudFormation template will automatically create the following new resources on your first deploy:

  • AWS Lambda functions: These functions contain the application code that validates the phone numbers and creates the segment for the uploaded contacts.
  • S3 event notification: When the CSV file is uploaded to the S3 bucket, the S3 event notification triggers the AWS Lambda function that initiates the AWS Step Functions state machine. To learn more about S3 event notifications, check the documentation.
  • AWS Step Functions: This solution will set up an infrastructure to automatically trigger when a new file is placed in an S3 bucket. The process, managed by an AWS Step Functions state machine, will start a Pinpoint import process, wait for it to complete, and send notifications that the job started, successfully finished, or failed.
  • IAM role: The IAM role is used to make Amazon Pinpoint calls, to access S3, and interact with AWS Step Functions and Amazon SNS. You can check the IAM documentation to learn more about IAM roles.

Prerequisites and deployment steps

Step 1: Set up the Amazon Pinpoint project and the S3 bucket

In Amazon Pinpoint, a project (also sometimes referred to as “application”) is a collection of settings, customer information, segments, and campaigns. Setting up a Pinpoint project is the first step to deploy our solution. It holds the segment we will use in the later steps.

  1. Navigate to Amazon Pinpoint from the Services tab in the AWS Management Console and create a new Amazon Pinpoint project.
  2. Copy the Project ID from the Amazon Pinpoint console and save it in notepad. You will need it later.

In Amazon S3, create a new bucket to upload the files to. Make sure it is setup according to your company’s security practices. If you have an existing bucket you want to use instead, note that this solution will require a source bucket in the same region as the solution itself and it will override any triggers already in place on the bucket.

Step 2: Deploy code and services

AWS CloudFormation is a service that gives developers and businesses an easy way to create a collection of related AWS and third-party resources. You can provision them in an orderly and predictable fashion.

  1. Download the latest version of the solution from https://github.com/aws-samples/digital-user-engagement-reference-architectures/blob/master/cloudformation/S3_triggered_import.yaml
  2. Log in to your AWS account and navigate to the Amazon CloudFormation from the services tab in the AWS Management Console: https://console.aws.amazon.com/cloudformation/home
  3. Click on the Create Stack button and choose to provision New Resources. Then select Upload a template file and choose the file you just downloaded in the first step.
  4. On the Specify stack details screen, most of the information is pre-populated. Review and set the following parameters:
    · Replace the PinpointProjectID field with the value you saved in Step 1
    · ValidatePhone: Choose true if you wish to validate the numbers via the Pinpoint API before importing the segment.
    · AssumeUS: Choose true if you want to assume US (+1) phone number for any phone 10 digits long or false if you want to import as-is.
    · AutoCreateCampaign: Choose true if you want to automatically create a campaign based on the imported file or false if you want to just import into the system without automatically scheduling any campaigns. This setting will be saved as an ImportSegment Lambda environment variable so you can adjust it later.
    · CampaignDelay: Number of minutes from the time of import to start of the campaign (if AutoCreateCampaign is set to true). Allows for the last-minute double check and/or pause as needed. Will be saved as CreateCampaign Lambda environment variable.
    · FileDropS3Bucket: Name of the existing Amazon S3 Bucket where new import files will be placed. Note that it has to be in the same region as you are running this template and the bucket should not have any existing notification configurations or they will be overwritten.
    · FileDropS3Prefix: Prefix (sub-folder name) of the Amazon S3 Bucket where you will be uploading new files to be imported.
  5. Settings on the Configure stack options page are optional; click Next.

Select all acknowledgment boxes and click Create Stack. It takes a couple of minutes for AWS CloudFormation to deploy all the resources.

The solution is now deployed and you can test it by uploading the sample CSV file to the Amazon S3 bucket. You will notice that the output CSV file is created in the “results” folder of the same S3 bucket, if you have validation enabled. You can also navigate to the Amazon Pinpoint console to check the Amazon Pinpoint segment. Once the deployment is complete and the segment is created, you can leverage Amazon Pinpoint campaigns to reach out to your customers.

Conclusion and Next Steps

Enabling solutions such as this provides an efficient and integrated mechanism to validate phone numbers and import customer contacts into Amazon Pinpoint. It saves time so that you can focus on creating effective campaigns to engage with your customers.

As the potential next steps, you can look into further expanding the solution by:

  1. Adjusting the default security of the Amazon S3 bucket by limiting who has access to new files. You can also adjust its encryption and the expiration of the files.
  2. Build out the lookup AWS Lambda function to additionally fetch other information about the contact from your other systems of record and/or third-party tools. You can also add business logic such as blocking numbers from certain countries (or, conversely, only allowing certain countries).
  3. Add more dynamic segments and new endpoint (or user) attributes to more easily track the contacts based on their upload dates, type of phone number, etc.

You can also create an interface that your users interact with when they need to upload files, instead of using the S3 console directly. This “interface” may even be just a backend flow that integrates with your system of record, so that users don’t have to deal with any manual uploads in the first place.

For this, and some other reference architectures you could consider, see https://github.com/aws-samples/digital-user-engagement-reference-architectures.

References

Amazon Pinpoint

https://aws.amazon.com/pinpoint/

Validating phone numbers in Amazon Pinpoint

https://docs.aws.amazon.com/pinpoint/latest/developerguide/validate-phone-numbers.html

Amazon Pinpoint Campaigns

https://docs.aws.amazon.com/pinpoint/latest/userguide/campaigns.html

Pinpoint Segment

https://docs.aws.amazon.com/pinpoint/latest/userguide/tutorials-create-a-segment.html


Send voice appointment reminders using Amazon Pinpoint custom channels and Amazon Connect

Post Syndicated from Ryan Lowe original https://aws.amazon.com/blogs/messaging-and-targeting/send-voice-appointment-reminders-using-amazon-pinpoint-custom-channels-and-amazon-connect/

Introduction

In this post, we will walk through setting up an always-on appointment reminder campaign in Amazon Pinpoint. No-show rates are a constant challenge for service providers. Industries such as hospitality estimate that 20% of diners miss reservations in big cities (1), while salons average five missed appointments per week (2). Professional services such as financial institutions and sales teams face similar challenges in ensuring that clients do not miss meetings. To these businesses, a missed appointment represents lost revenue. As a result, the no-show rate is a key metric to improve. An outbound voice message provides another way to reach customers beyond email or SMS, and voice reminders give customers a choice of channels based on personal preference.

Overview

Amazon Pinpoint is a multichannel communications service enabling customers to send both promotional and transactional messages across email, SMS, push notifications, voice, and custom channels. Amazon Connect is an easy to use omnichannel cloud contact center that helps companies provide superior customer service at a lower cost.

There are benefits to using these services together. Amazon Pinpoint allows you to build a segment of users that can be used within a campaign. Amazon Connect enables you to send outbound voice messages at scale when your audience is large and requires a high number of transactions per second (TPS).

To use these services together, you set up a custom channel in Amazon Pinpoint, which is implemented as an AWS Lambda function. These functions enable you to call APIs to trigger message sends as part of Amazon Pinpoint campaigns. The Amazon Pinpoint team has published an AWS Lambda function that sends outbound voice messages via Amazon Connect. This configuration allows you to define the voice message to send, define the segment of users to target, and send voice messages at scale through Amazon Connect via the Amazon Pinpoint custom channel.

The audience for this solution is technical customers who are used to working with multiple AWS services and are familiar with AWS Lambda functions. The solution relies on the Amazon Pinpoint custom channel feature and targeting, along with the Amazon Connect outbound voice API called from a prepared AWS Lambda function. Once completed, you will be able to create an evergreen campaign that sends outbound voice messages to your patients who have an appointment the following day.

The costs associated with this solution will be:

  1. Amazon Connect outbound voice calls per minute
  2. Amazon Connect claimed phone number(s)
  3. Amazon Pinpoint Monthly Targeted Audience (MTA) costs.

The costs for an outbound voice reminder system that sends 10k messages per day, with an average length of 20 seconds per call, to a total monthly audience of 300k in the US are as follows. Note that prices vary for other countries. Complete Amazon Connect outbound call pricing can be found here.

Solution

Prerequisites:

For this walkthrough, the article assumes that you have:

  • An AWS account
  • A basic understanding of IAM and the privileges required to create the following: IAM identity providers, roles, policies, and users
  • Basic understanding of Amazon Pinpoint and how to create a project
  • Basic understanding of Amazon Connect and experience in creating contact flows. More information on setup of Amazon Connect can be found here.

Step 1: Create an Appointment Reminder custom event

The first step in setting up this solution is to create and report a custom event to Amazon Pinpoint. There are multiple ways to report events from your application. For demonstration purposes, below is an example event call using the AWS SDK for Python (Boto3) from inside an AWS Lambda function.

It is important to note that the Amazon Pinpoint events API can also be used to update endpoints when the event is registered. In the example below, the API call updates the endpoint attributes AppointmentDate and AppointmentTime with the details of the upcoming appointment. These attributes are used in the outgoing message to the end user.

Sample Event: Appointment Coming Up

import datetime
import time

import boto3

client = boto3.client('pinpoint')
app_id = '[PINPOINT_PROJECT_ID]'
endpoint_id = '[ENDPOINT_ID]'
address = '[PHONE_NUMBER]'

def lambda_handler(event, context):
    # Report an AppointmentReminder event and, in the same call, update the
    # endpoint's AppointmentDate and AppointmentTime attributes.
    client.put_events(
        ApplicationId=app_id,
        EventsRequest={
            'BatchItem': {
                endpoint_id: {
                    'Endpoint': {
                        'ChannelType': 'CUSTOM',
                        'Address': address,
                        'Attributes': {
                            'AppointmentDate': ['December 15th, 2020'],
                            'AppointmentTime': ['2:15pm']
                        }
                    },
                    'Events': {
                        'appointment-event': {
                            'Attributes': {},
                            'EventType': 'AppointmentReminder',
                            'Timestamp': datetime.datetime.fromtimestamp(time.time()).isoformat()
                        }
                    }
                }
            }
        }
    )

NOTE: The following steps assume that the AppointmentReminder event is being reported to Amazon Pinpoint. If you are unable to integrate the above API call into your application, you can manually create an AWS Lambda function using a Python runtime with the above code to trigger sample events.

Step 2: Create an Amazon Connect contact flow for outbound calls

This article assumes that you have an Amazon Connect contact center already set up and working. In this step, we set up an Amazon Connect contact flow that dials our recipients and reads the message before hanging up.

  1. Log in to your Amazon Connect instance using your access URL (https://<alias>.awsapps.com/connect/login).
    Note: Replace alias with your instance’s alias.
  2. In the left navigation bar, pause on Routing, and then choose Contact flows.
  3. Under Contact flows, choose a template, or choose Create contact flow to design a contact flow from scratch. For more information, see Create a New Contact Flow.
  4. Download the sample JSON contact flow configuration file Outbound_calling.json.
  5. Choose the dropdown menu under Save and choose Import flow (beta).
  6. Select the Outbound_calling.json file in the Import flow (beta) dialog and choose Save.
  7. Choose Save to open the Save flow dialog. Then choose Save to close the dialog.
  8. Choose Publish to open the Publish dialog. Then choose Publish to close the dialog.
  9. In the contact flow designer, expand Show additional flow information.
  10. Under ARN, copy the Amazon Resource Name (ARN) of the contact flow. It looks like the following:
    arn:aws:connect:region:123456789012:instance/[ConnectInstanceId]/contact-flow/[ConnectContactFlowId]
    Note the ConnectInstanceId and ConnectContactFlowId from the ARN; they will be used in the next step.
  11. In the left navigation bar, pause on Routing and then choose Queues.
  12. Choose the queue you wish to use for the outbound calls.
  13. In the Edit queue screen, expand Show additional queue information.
  14. Under ARN, copy the Amazon Resource Name (ARN) for the queue. It looks like the following:
    arn:aws:connect:region:123456789012:instance/[ConnectInstanceId]/queue/[ConnectQueueId]
    Note the ConnectQueueId from the ARN. It will be used in the next step.

Step 3: Deploy and modify the Amazon Pinpoint to Amazon Connect custom channel AWS Lambda function

Next, we will need to deploy an Amazon Pinpoint custom channel. Custom channels in Amazon Pinpoint allow you to send messages through any service with an API, including Amazon Connect. The AWS Serverless Application Repository contains an open-sourced AWS Lambda function that we will use for our custom channel. After deploying the AWS Lambda function, we will customize it to match our requirements.

  1. Navigate to the AWS Lambda Console, then choose Create function.
  2. Under Create function, choose Browse serverless app repository.
  3. Under Public applications, choose the checkbox next to Show apps that create custom IAM roles or resource policies and enter amazon-pinpoint-connect-channel in the search box.
  4. Choose the amazon-pinpoint-connect-channel card from the list and review the Application details.
  5. Under Application settings enter the details for ConnectContactFlowId, ConnectInstanceId, and ConnectQueueId from the previous step.
  6. After reviewing all the details, choose the checkbox next to I acknowledge that this app creates custom IAM roles and resource policies and choose Deploy.
  7. Wait a couple of minutes for the application to deploy two AWS Lambda functions and an Amazon Simple Queue Service (Amazon SQS) queue.
  8. Under Resources, choose the PinpointConnectQueuerFunction resource to open the AWS Lambda function configuration. This is the AWS Lambda function that Amazon Pinpoint will call when the message is crafted.
  9. Under Function code, scroll down to line 31 and replace
    message = "Hello World! -Pinpoint Connect Channel"
    with
    message = "This is a reminder of your upcoming appointment on {0} at {1}".format(endpoint_profile["Attributes"]["AppointmentDate"][0], endpoint_profile["Attributes"]["AppointmentTime"][0])
  10. Choose Deploy.

Step 4: (Optional) Modify the custom channel AWS Lambda function to change the rate of outgoing calls

By default, the custom channel we deployed in the previous step places outbound calls through Amazon Connect at a rate of one call every three seconds. This throttling lets you limit the number of concurrent outbound calls so that you avoid running into service limits. Review your current Amazon Connect service limits for more details. A simplified sketch of this pacing loop follows the steps below.

  1. Navigate to the AWS Lambda Console, then choose AmazonPinpointConnectChannel-backgroundprocessor function.
  2. Under Function code, scroll down to line 73 and replace the sleep timer, currently set with 3 seconds, with your requirements.
  3. Choose Deploy.
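For reference, here is a rough sketch of how such a pacing loop might look. This is not the exact code of the deployed AmazonPinpointConnectChannel-backgroundprocessor function; the variable names, environment variable names, and helper function are illustrative, and the Amazon Connect identifiers are the ones you collected in Step 2.

import os
import time

import boto3

connect = boto3.client('connect')

# Illustrative values; the deployed function reads its own configuration.
INSTANCE_ID = os.environ.get('CONNECT_INSTANCE_ID', '[ConnectInstanceId]')
CONTACT_FLOW_ID = os.environ.get('CONNECT_CONTACT_FLOW_ID', '[ConnectContactFlowId]')
QUEUE_ID = os.environ.get('CONNECT_QUEUE_ID', '[ConnectQueueId]')
CALL_DELAY_SECONDS = 3  # Roughly the value the walkthrough adjusts on line 73.

def place_calls(phone_numbers):
    for number in phone_numbers:
        connect.start_outbound_voice_contact(
            DestinationPhoneNumber=number,
            ContactFlowId=CONTACT_FLOW_ID,
            InstanceId=INSTANCE_ID,
            QueueId=QUEUE_ID,
        )
        # Throttle outbound calls to stay within Connect concurrency limits.
        time.sleep(CALL_DELAY_SECONDS)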

Step 5: Create an Amazon Pinpoint custom campaign with your Lambda function and segment

  1. Create a CSV file to import endpoints with the attributes of AppointmentDate and AppointmentTime.
    Example:
    Id,Address,ChannelType,Attributes.AppointmentDate,Attributes.AppointmentTime
    1,+1[PHONE_NUMBER],SMS,November 30 2020,9:00am
    2,+1[PHONE_NUMBER2],SMS,November 30 2020,10:00am
  2. Navigate to the Amazon Pinpoint console.
  3. In the All Projects list, select your project.
  4. In the navigation pane, choose Segments.
  5. Choose Create a Segment.
  6. Choose Import a segment and upload your CSV file and choose Create segment.
  7. In the navigation pane, choose Campaigns.
  8. Choose Create campaign.
  9. In the Create a campaign wizard, enter a name for the campaign.
  10. Under Channel choose Custom.
  11. Choose Next.
  12. On the Choose a segment screen, choose the segment created above, and choose Next.
  13. On the Create your message screen, do the following:
    a) For Lambda function choose AmazonPinpointConnectChannel that we deployed in Step 3 above.
    b) For Endpoint Options, choose SMS.
    c) Choose Next.
  14. On the Choose when to send the campaign screen, do the following:
    a) Choose When an event occurs.
    b) Under Events, choose the AppointmentReminder event.
    c) Under campaign dates, choose a Start date and time and an End date and time to be used as the campaign’s duration.
  15. Choose Next.
  16. Review the campaign details and choose Launch campaign.

Cleanup:

To avoid incurring further charges, remove the two AWS Lambda functions and the Amazon Simple Queue Service queue provisioned in the steps above by following these steps:

  1. Navigate to the Amazon CloudFormation Console.
  2. Choose the serverlessrepo-amazon-pinpoint-connect-channel stack and choose Delete.
  3. Choose Delete stack in the delete confirmation window.

 

Next Steps:

You can continue to iterate on this experience using Amazon Pinpoint and Amazon Connect to create a custom user experience.

To learn more about these services, please visit the Amazon Pinpoint or Amazon Connect web pages.

(1) https://www.scisolutions.com/uploads/news/Missed-Appts-Cost-HMT-Article-042617.pdf

(2) https://blog.carbonfreedining.org/the-ultimate-guide-to-restaurant-no-shows

Auto-reply to incoming emails using Amazon Simple Email Service (SES)

Post Syndicated from Ilya Pupko original https://aws.amazon.com/blogs/messaging-and-targeting/auto-reply-to-incoming-emails-using-amazon-simple-email-service-ses/

Both Amazon Pinpoint and Amazon Simple Email Service (SES) are known for their ability to send transactional and promotional emails at scale and with ease. However, neither is typically set up to receive email replies. Owners often assume that the “no-reply” addresses they are using do not require much consideration. This means that if a customer does reply, they get an unhelpful server rejection indicating that the address is invalid. They also cannot unsubscribe via a simple reply, which is an otherwise common, established practice. Automated guidance that the address is not monitored, and who to reach for assistance and how, is never provided. In summary, it is a very unprofessional experience.

If you have full control over the DNS and are not already receiving emails at the subdomain used for these emails, you can follow this short guide. It walks you through all the setup needed to send automated, templated responses to any address at the domain, including the address you use to send emails. Follow this post to ensure that your Amazon SES and Amazon Pinpoint configuration follows common configuration and business best practices, so that emails sent to your configured sending addresses receive a professional auto-reply.

Solution overview

The proposed solution does not rely on any additional services. It does not add any charges beyond the cost directly associated with receiving and sending the emails and the minimal AWS Lambda function used for the automated logic. It relies on the built-in capability of SES to receive emails, native Amazon Pinpoint templates, and Lambda for basic orchestration.

lambda diagram for response

Note: in this walkthrough and the related code, we use Amazon Pinpoint templates because they can be managed and maintained directly in the console, but you can choose to use SES templates (via the CreateTemplate API) or, if it makes better sense in your scenario, even hardcode the template into the AWS Lambda function itself.
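If you do go the SES template route, you can create the template programmatically with the CreateTemplate API. The following is a minimal sketch using the AWS SDK for Python (Boto3); the template name, subject, and body are placeholders, not part of the solution’s code.

import boto3

ses = boto3.client('ses')

# Placeholder template; adjust the name and body to match your brand.
ses.create_template(
    Template={
        'TemplateName': 'AutoReplyTemplate',
        'SubjectPart': 'This mailbox is not monitored',
        'TextPart': 'This mailbox is not monitored. Please contact support instead.',
        'HtmlPart': '<p>This mailbox is not monitored. Please contact support instead.</p>',
    }
)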

To complete the setup, all you must do is follow these steps:

      1. Confirm (Sub-) Domain setup in SES (even if you use Amazon Pinpoint to send your emails out, the SES portion of the console should show the validated domain as well). See SES Developer Guide.
      2. Ensure that your SES domain is verified and you are out of the sandbox. If still in the sandbox, you can only send emails to the Amazon SES mailbox simulator addresses and email addresses/domains that you have pre-verified. See Moving out of the Amazon SES sandbox.
      3. Configure SES to receive incoming emails. Please note that this must be done on the whole subdomain you use, not just a single email address. See Setting up Amazon SES email receiving.
      4. Create the template you want to use in Amazon Pinpoint. Switch the console over to Amazon Pinpoint, select Message templates, choose Create, select Email, and fill out the remaining fields.
        1. The plaintext portion is optional – you can either skip it or fill it out and enable it in the Lambda function we deploy in the next step.
        2. Similarly, if you prefer to use an SES template instead, you can; just use the associated line in that same code.
        3. The same applies to a hardcoded template, if you prefer that for some reason.
      5. Have this pre-defined CloudFormation create the required SES receive rule, and Lambda function. This processes the incoming email and sends back the response, all using the code shared in the dedicated portion of our GitHub, AWS Digital User Engagement Reference Architectures repository. Specifically:
          1. Download the YAML from SES_Auto_Reply.yaml.
          2. Go to CloudFormation in the AWS Management Console. (Remember to choose the Region where you want it deployed.)
          3. Click Create Stack and then choose With new resources.
          4. Leave the default “Template is ready”, but switch to “Upload a template file” and choose the file you just downloaded.
          5. Follow the wizard to give the stack a name and enter the name of the template you created in step 4.
          6. Optionally, you can also set the default response address, the addresses and/or domains you want to limit the auto-response to, and adjust the incoming email rule set it should be stored under (the default should be fine, unless you have manually adjusted it in the past).
      6. Once deployed, the behavior is immediately active and you can further adjust any of these elements.

 

Conclusion and what’s next?

This architecture, once deployed, sends out the templated auto-response using the SES/Pinpoint domain/email address it received the original email on.

The new rule is added to the SES email receiving rule set to allow further customization:

  1. The rule can be limited to a specific email address or a specific domain, or be set to apply across all domains.
  2. It can also have a default response address set, or reuse the address that the original rejected email was sent to.
  3. It can be moved down in priority, with other rules taking precedence and possibly even overriding it.
  4. It can have other actions added to it, like notifying SNS for additional tracking.

The Lambda function looks up the chosen Amazon Pinpoint template and uses it to reply. Here are some of the customizations you may want to consider within this function and the template:

  1. When sending the automated reply, by default, the template’s configured subject is appended with the original incoming email’s subject. You can adjust this to better fit your company’s brand.
  2. By default, the function supports two optional template tags, %%NAME%% and %%ID%%. If %%NAME%% appears in the template, it is automatically replaced with the original email’s FROM address. If %%ID%% appears in the template, it is replaced with the SES message ID of the original email, to help with any required audits.
  3. It is assumed that no additional tracking or actions are needed on such rejected and auto-replied emails, but you can further modify the flow by moving the rule around and adding more actions (as mentioned above), and even specify a particular or different SES configuration set for the outgoing emails.
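To illustrate how these tags behave, here is a minimal sketch of the substitution logic. The actual deployed function may differ; the function and variable names here are hypothetical.

def render_auto_reply(template_body, sender_address, ses_message_id):
    # %%NAME%% becomes the original sender's FROM address,
    # %%ID%% becomes the SES message ID of the original email.
    body = template_body.replace('%%NAME%%', sender_address)
    body = body.replace('%%ID%%', ses_message_id)
    return body

# Example:
# render_auto_reply('Hi %%NAME%%, we received your message (ref %%ID%%).',
#                   'customer@example.com', '0100017...')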

Are you using this flow as a baseline for a more complex business flow, or do you have other questions about it? We want to hear back – please comment here or file an issue in the GitHub repository. If you want to file a pull request to make it even more useful for others, please do so; we appreciate community participation.

If you liked this article, note that we are continually expanding our Amazon Pinpoint and SES architecture references and publishing new solutions for these and other services. For the most recent SES documentation, see the official SES documentation site, and for Amazon Pinpoint, see the Amazon Pinpoint documentation site.

 

 

 

Application integration patterns for microservices: Running distributed RFQs

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/application-integration-patterns-running-distributed-rfqs/

This post is courtesy of Dirk Fröhner, Principal Solutions Architect.

The first blog in this series introduces asynchronous messaging for building loosely coupled systems that can scale, operate, and evolve individually. It considers messaging as a communications model for microservices architectures. Part 2 dives into fan-out strategies and applies the respective patterns to a concrete use case.

In this post, I look at how to apply messaging patterns to help coordinate distributed requests and responses. Specifically, I focus on a composite pattern called scatter-gather, as presented in the book “Enterprise Integration Patterns: Designing, Building, and Deploying Messaging Solutions” (Hohpe and Woolf, 2004).

I also show how a client can communicate with a backend via synchronous REST API operations while asynchronous messaging is applied internally for processing.

Overview

The use case is for Wild Rydes, a fictional application that replaces traditional taxis with unicorns. It’s used in several hands-on AWS workshops that illustrate serverless development concepts.

Wild Rydes wants to allow customers to initiate requests for quotation (RFQs) for their rides. This allows unicorns to make special offers to potential customers within a defined schedule. A customer can send their ride details and ask for quotations from all unicorns that are within a certain vicinity. The customer can then choose the best offer.

Wild Rydes

The scatter-gather pattern

The scatter-gather pattern can be used to implement this use-case on the server side. This pattern is ideal for requesting responses from multiple parties, then aggregating and processing that data.

As presented by Hohpe and Woolf, the scatter-gather pattern is a composite pattern that illustrates how to “broadcast a message to multiple recipients and re-aggregate the responses back into a single message”. The pattern is illustrated in the following diagram.

Scatter-gather architecture

The flow starts with the Requester to initiate the broadcast to all potential Responders. This can be architected in a loosely coupled manner using pub-sub messaging with Amazon SNS or Amazon MQ, as shown in this blog post.

All responders must send their answers somewhere for aggregation and processing. This can also be architected in a loosely coupled manner using a message queue with Amazon SQS or Amazon MQ, as described in this blog post.

The Aggregator component consumes the individual responses from the response queue. It forwards the aggregate to the Processor component for final processing. Both Aggregator and Processor can be part of the same application or process. If separated, they can be decoupled through messaging. The Requester can also be part of the same application or process as Aggregator and Processor.

Explaining the architecture and API

In this section, I walk through the use-case and explain how it can be architected and implemented. I show how the scatter-gather pattern works in the backend, and the client-to-backend communication.

Submit instant ride RFQ

To initiate such an RFQ, the customer app communicates with the ride booking service on the backend. The ride booking service exposes a REST API. By default, an RFQ runs for five minutes, but Wild Rydes is working on a feature to let a customer individually set that value.

A request to submit an instant-ride RFQ contains start and destination locations for the ride and the customer ID:

POST /<submit-instant-ride-rfq-resource-path> HTTP/1.1
...

{
    "from": "...",
    "to": "...",
    "customer": "..."
}

The RFQ is a lengthy process so the client app should not expect an immediate response. Instead, the API accepts the RFQ, creates an RFQ task resource, and returns to the client. The response contains a URL to request an update for the status. It also provides an estimated time for the end of the RFQ:

HTTP/1.1 202 Accepted
...

{
    "links": {
        "self": "http://.../<rfq-task-resource-path>",
        "...": "..."
    },
    "status": "running",
    "eta": "..."
}

The following architecture shows this interaction, excluding the process after a new RFQ is submitted.

Client app interaction

Processing the RFQ

The backend uses the scatter-gather pattern to publish the RFQ to unicorns and collect responses for aggregation and processing.

Backend architecture

1. The ride booking service acts as the requester in the scatter-gather pattern. Following a new RFQ from the client app, it publishes the details into an SNS topic. This topic is related to the location of the ride’s starting point since customers need quotes from unicorns within the vicinity. These messages are the green request messages.

2. The unicorn management service maintains instances of unicorn management resources and subscribes them to RFQ topics related to their current location. These resources receive the RFQ request messages and handle the interaction with the Wild Rydes unicorn app.

3. The unicorns in the vicinity are notified through the Wild Rydes unicorn app about the new RFQ and can react if they are available. Notification options between the unicorn management service and the Wild Rydes unicorn app include push notifications and web sockets.

4. Every addressed unicorn can now submit their quote. All quotes go back through the unicorn management resources and the unicorn management service into the RFQ response queue. They act as the responders in the sense of the scatter-gather pattern.

5. The ride booking service also acts as aggregator and processor in the sense of the scatter-gather pattern. It uses SQS to consume messages from an RFQ response queue that eventually contains the RFQ responses from the involved unicorns. It starts doing so immediately after it publishes the details of a new RFQ into the RFQ topic. The messages from the RFQ response queue relate to the blue response messages.

The ride booking service consumes all incoming responses from that queue. This continues until the deadline or all participating unicorns have answered, whichever occurs first. The aggregator responsibility can be as simple as persisting the details of each incoming RFQ response into an Amazon DynamoDB table.

To match incoming responses to the right RFQ, it uses a fundamental integration pattern, correlation ID. In this pattern, a requester adds a unique ID to an outgoing message and each responder is asked to forward this ID in their response.

Also, responders must know where to send their responses. To keep this dynamic, there is another fundamental integration pattern: return address. It suggests that a requester adds meta information into outgoing messages that indicates the address for their responses. In this architecture, this is the ARN of the SQS queue that acts as the RFQ response queue. This supports an option to simplify the response management: the RFQ response queue is a dedicated queue per customer.
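A minimal sketch of how the requester might publish an RFQ with both patterns applied is shown below, using the AWS SDK for Python (Boto3). The topic ARN, queue ARN, and attribute names (CorrelationId, ReturnAddress) are illustrative, not details of the Wild Rydes implementation.

import json
import uuid

import boto3

sns = boto3.client('sns')

RFQ_TOPIC_ARN = 'arn:aws:sns:us-east-1:123456789012:rfq-downtown'        # scatter channel
RESPONSE_QUEUE_ARN = 'arn:aws:sqs:us-east-1:123456789012:rfq-responses'  # gather channel

def publish_rfq(from_location, to_location, customer_id):
    correlation_id = str(uuid.uuid4())
    sns.publish(
        TopicArn=RFQ_TOPIC_ARN,
        Message=json.dumps({
            'from': from_location,
            'to': to_location,
            'customer': customer_id,
        }),
        MessageAttributes={
            # Correlation ID: responders echo this back so responses can be matched to the RFQ.
            'CorrelationId': {'DataType': 'String', 'StringValue': correlation_id},
            # Return address: where responders should send their quotes.
            'ReturnAddress': {'DataType': 'String', 'StringValue': RESPONSE_QUEUE_ARN},
        },
    )
    return correlation_id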

Lastly, the processor responsibility in the ride booking service reads the RFQ responses from the DynamoDB table. It converts the data to JSON for the Wild Rydes customer app.
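As a sketch of the aggregator and processor side, the following assumes a hypothetical DynamoDB table keyed on the correlation ID, with one item per unicorn response; the table and attribute names are illustrative.

import boto3
from boto3.dynamodb.conditions import Key

dynamodb = boto3.resource('dynamodb')
responses_table = dynamodb.Table('RfqResponses')  # hypothetical table name

def get_rfq_quotes(correlation_id):
    # The aggregator persisted each response with the RFQ's correlation ID as the partition key.
    result = responses_table.query(
        KeyConditionExpression=Key('CorrelationId').eq(correlation_id)
    )
    # The processor shapes the items into the JSON the customer app expects.
    return {'quotes': result['Items']}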

Check RFQ status

During the RFQ processing, a customer may want to know how many responses have already arrived, or if the results are already available. After submitting an instant ride RFQ, the client receives a representation of the running task. It can use the self-link to request an update:

GET /<rfq-task-resource-path> HTTP/1.1

While the task is running, a response from the ride booking service comes back with the respective status value and the count of responses that have already arrived:

HTTP/1.1 200 OK
...

{
    "links": {
        "self": "http://.../<rfq-task-resource-path>",
        "...": "..."
    },
    "status": "running",
    "responses-received": 2,
    "eta": "..."
}

After the RFQ is completed

An RFQ is completed if either the time is up or all unicorns have answered. The result of the RFQ is then available to the customer. If the client requests an update to the task representation, the response indicates this by redirecting to the RFQ result:

HTTP/1.1 303 See Other
Location: <url-of-rfq-result-resource>

When requesting a representation of the results resource, the client receives the quotes from all participating unicorns. The frontend customer app can visualize these accordingly:

HTTP/1.1 200 OK
...

{
    "links": { ... },
    "from": "...",
    "to": "...",
    "customer": "...",
    "quotes": [ ... ]
}

The ride booking service can also use active notifications to make the customer app aware when the RFQ result is ready, including the link to the RFQ result. Examples of this include push notifications and WebSockets.

Conclusion

In this blog, I present the scatter-gather pattern, which is a composite pattern based on pub-sub and point-to-point messaging channels. It also employs correlation ID and return address. I show how this is implemented in the Wild Rydes example application. You can use this integration pattern for communication in your microservices.

I cover how synchronous API communication between end user client and backend can work along with asynchronous messaging for request processing internally.

To learn more, and for more serverless learning resources, visit https://serverlessland.com.

Introducing Amazon SNS FIFO – First-In-First-Out Pub/Sub Messaging

Post Syndicated from Danilo Poccia original https://aws.amazon.com/blogs/aws/introducing-amazon-sns-fifo-first-in-first-out-pub-sub-messaging/

When designing a distributed software architecture, it is important to define how services exchange information. For example, the use of asynchronous communication decouples components and simplifies scaling, reducing the impact of changes and making it easier to release new features.

The two most common forms of asynchronous service-to-service communication are message queues and publish/subscribe messaging:

  • With message queues, messages are stored on the queue until they are processed and deleted by a consumer. On AWS, Amazon Simple Queue Service (SQS) provides a fully managed message queuing service with no administrative overhead.
  • With pub/sub messaging, a message published to a topic is delivered to all subscribers to the topic. On AWS, Amazon Simple Notification Service (SNS) is a fully managed pub/sub messaging service that enables message delivery to a large number of subscribers. Each subscriber can also set a filter policy to receive only the messages that it cares about.

You can use topics when you want to fan out messages to multiple applications, and queues when you want to send messages to one application. Using topics and queues together, you can decouple microservices, distributed systems, and serverless applications.

With SQS, you can use FIFO (First-In-First-Out) queues to preserve the order in which messages are sent and received, and to avoid a message being processed more than once.

Introducing SNS FIFO Topics
Today, we are adding similar capabilities for pub/sub messaging with the introduction of SNS FIFO topics, providing strict message ordering and deduplicated message delivery to one or more subscribers.

FIFO topics manage ordering and deduplication similar to FIFO queues:

Ordering – You configure a message group by including a message group ID when publishing a message to a FIFO topic. For each message group ID, all messages are sent and delivered in order of their arrival. For example, to ensure that messages related to the same customer are delivered in order, you can publish these messages to the topic using the customer’s account number as the message group ID. There is no limit on the number of message groups with FIFO topics and queues. You don’t need to declare the message group ID in advance; any value will work. If you don’t have a logical distinction between messages, you can simply use the same message group ID for all of them and have a single group of ordered messages. The message group ID is passed to any subscribed FIFO queue.

Deduplication – Distributed systems (like SNS) and client applications sometimes generate duplicate messages. You can avoid duplicated message deliveries from the topic in two ways: either by enabling content-based deduplication on the topic, or by adding a deduplication ID to the messages that you publish. With message content-based deduplication, SNS uses a SHA-256 hash to generate the message deduplication ID using the body of the message. After a message with a specific deduplication ID is published successfully, there is a 5-minute interval during which any message with the same deduplication ID is accepted but not delivered. If you subscribe a FIFO queue to a FIFO topic, the deduplication ID is passed to the queue and it is used by SQS to avoid duplicate messages being received.
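For example, publishing to a FIFO topic with the AWS SDK for Python (Boto3) might look like the following sketch. The topic ARN and message group ID are placeholders; with content-based deduplication enabled on the topic, you can omit the deduplication ID, or supply your own as shown.

import boto3

sns = boto3.client('sns')

TOPIC_ARN = 'arn:aws:sns:us-east-2:123412341234:updates.fifo'  # placeholder

sns.publish(
    TopicArn=TOPIC_ARN,
    Message='Update One',
    # All messages for the same customer share a group ID, so they are delivered in order.
    MessageGroupId='customer-0001',
    # Optional when content-based deduplication is enabled on the topic.
    MessageDeduplicationId='update-one-0001',
)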

You can use FIFO topics and queues together to simplify the implementation of applications where the order of operations and events is critical, or when you cannot tolerate duplicates. For example, to process financial operations and inventory updates, or to asynchronously apply commands that you receive from a client device. FIFO queues can use message filtering in FIFO topics to selectively receive only a subset of messages rather than every message published to the topic.

How to Use SNS FIFO Topics
A common scenario where FIFO topics can help is when you receive updates that need to be processed in order. For example, I can use a FIFO topic to receive updates from an application where my customers edit their account profiles. Then, I subscribe an SQS FIFO queue to the FIFO topic, and use the queue as a trigger for a Lambda function that applies the account updates to an Amazon DynamoDB table used by my customer management system, which needs to be kept in sync.

The decoupling introduced by the FIFO topic makes it easier to add new functionality with minimal impact to existing applications. For example, to reward my loyal customers with additional promotions, I add a new Loyalty application that is storing information in a relational database managed by Amazon Aurora. To keep the customer’s information stored in the Loyalty database in sync with my other applications, I can subscribe a new FIFO queue to the same FIFO topic, and add a new Lambda function that receives customer updates in the same order as they are generated, and applies them to the Loyalty database. In this way, I don’t need to change code and configuration of other applications to integrate the new Loyalty app.

First, I create two FIFO queues in the SQS console, leaving all options to their defaults:

  • The customer.fifo queue to process updates in my Customer management system.
  • The loyalty.fifo queue to help me collect and store customer updates for the Loyalty application.

In the SNS console, I create the updates.fifo topic. I select FIFO as type, and enable Content-based message deduplication.

Then, I subscribe the customer.fifo and loyalty.fifo queues to the topic.

To be able to receive messages, I add a statement to the access policy of both queues granting the updates.fifo topic permissions to send messages to the queues. For example, for the customer.fifo queue the statement is:

{
  "Effect": "Allow",
  "Principal": {
    "Service": "sns.amazonaws.com"
  },
  "Action": "SQS:SendMessage",
  "Resource": "arn:aws:sqs:us-east-2:123412341234:customer.fifo",
  "Condition": {
    "ArnLike": {
      "aws:SourceArn": "arn:aws:sns:us-east-2:123412341234:updates.fifo"
    }
  }
}

Now, I use the SNS console to publish 4 messages in sequence. For all messages, I use the same message group ID. In this way, they are all in the same message group. The only part that is different is the message body, where I use in order:

  • Update One
  • Update Two
  • Update Three
  • Update One

In the SQS console, I see that only 3 messages have been delivered to the FIFO queues:

Why is that? When I created the FIFO topic, I enabled content-based deduplication. The 4 messages were sent within the 5-minute deduplication window. The last message was recognized as a duplicate of the first one and was not delivered to the subscribed queues.

Let’s see the actual messages in the queues. I use the AWS Command Line Interface (CLI) to receive the messages from SQS, and the jq command-line JSON processor to format the output and get only the Message in the Body.

Here are the messages in the customer.fifo queue:

$ aws sqs receive-message --queue-url https://sqs.us-east-2.amazonaws.com/123412341234/customer.fifo --max-number-of-messages 10 | jq '.Messages[].Body | fromjson | .Message'

"Update One"
"Update Two"
"Update Three"

And these are the messages in the loyalty.fifo queue:

$ aws sqs receive-message --queue-url https://sqs.us-east-2.amazonaws.com/123412341234/loyalty.fifo --max-number-of-messages 10 | jq '.Messages[].Body | fromjson | .Message'

"Update One"
"Update Two"
"Update Three"

As expected, the 3 messages with unique content have been delivered to both queues in the same order as they were sent.

Available Now
You can use SNS FIFO topics in all commercial Regions. You can process up to 300 transactions per second (TPS) per FIFO topic or FIFO queue. With SNS, you pay only for what you use; you can find more information on the pricing page.

To learn more, please see the documentation.

Danilo

Analyze and improve email campaigns with Amazon Simple Email Service and Amazon QuickSight

Post Syndicated from Apoorv Gakhar original https://aws.amazon.com/blogs/messaging-and-targeting/analyze-and-improve-email-campaigns-with-amazon-simple-email-service-and-amazon-quicksight/

Email is a popular channel for applications, used in both marketing campaigns and other outbound customer communications. The challenge with email is that it can become increasingly complex to manage for companies that must send large quantities of messages per month. This complexity is especially true when companies need to measure detailed email engagement metrics to track campaign success.

As a marketer, you want to monitor several metrics, including open rates, click-through rates, bounce rates, and delivery rates. If you do not track your email results, you could potentially be wasting your campaign resources. Monitoring and interpreting your sending results can help you deliver the best content possible to your subscribers’ inboxes, and it can also ensure that your IP reputation stays high. Mailbox providers prioritize inbox placement for senders that deliver relevant content. As a business professional, tracking your emails can also help you stay on top of hot leads and important clients. For example, if someone has opened your email multiple times in one day, it might be a good idea to send out another follow-up email to touch base.

Building a large-scale email solution is a complex and expensive challenge for any business. You would need to build infrastructure, assemble your network, and warm up your IP addresses. Alternatively, working with some third-party email solutions requires contract negotiations and upfront costs.

Fortunately, Amazon Simple Email Service (SES) has a highly scalable and reliable backend infrastructure to reduce the preceding challenges. It has improved content filtering techniques, reputation management features, and a vast array of analytics and reporting functions. These features help email senders reach their audiences and make it easier to manage email channels across applications. Amazon SES also provides API operations to monitor your sending activities through simple API calls. You can publish these events to Amazon CloudWatch or Amazon Kinesis Data Firehose, or receive notifications by using Amazon Simple Notification Service (SNS).

In this post, you learn how to build and automate a serverless architecture that analyzes email events. We explore how to track important metrics such as open and click rate of the emails.

Solution overview

 

The metrics that you can measure using Amazon SES are referred to as email sending events. You can use Amazon CloudWatch to retrieve Amazon SES event data. You can also use Amazon SNS to interpret Amazon SES event data. However, in this post, we are going to use Amazon Kinesis Data Firehose to monitor our user sending activity.

Enable Amazon SES configuration sets with open and click metrics and publish email sending events to Amazon Kinesis Data Firehose as JSON records. A Lambda function parses the JSON records and publishes the content to an Amazon S3 bucket.

Ingested data lands in an Amazon S3 bucket that we refer to as the raw zone. To make that data available, you have to catalog its schema in the AWS Glue Data Catalog. You create and run an AWS Glue crawler that crawls your data sources and constructs your Data Catalog. The Data Catalog uses prebuilt classifiers for many popular source formats and data types, including JSON, CSV, and Parquet.

When the crawler has finished creating the table definition and schema, you can analyze the data using Amazon Athena, an interactive query service that makes it easy to analyze data in Amazon S3 using SQL. Point to your data in Amazon S3, define the schema, and start querying using standard SQL, with most results delivered in seconds.

Now you can build visualizations, perform ad hoc analysis, and quickly get business insights from the Amazon SES event data using Amazon QuickSight. You can easily run SQL queries using Amazon Athena on data stored in Amazon S3, and build business dashboards within Amazon QuickSight.

 

Deploying the architecture:

Configuring Amazon Kinesis Data Firehose to write to Amazon S3:

  1. Navigate to the Amazon Kinesis in the AWS Management Console. Choose Kinesis Data Firehose and create a delivery stream.
  2. Enter delivery stream name as “SES_Firehose_Demo”.
  3. Under the source category, select “Direct Put or other sources”.
  4. On the next page, make sure to enable Data transformation of source records with AWS Lambda. We use AWS Lambda to parse the notification contents so that we only process the information required for this use case.
  5. Click “Create new” to create a new Lambda function.
  6. Choose the “General Kinesis Data Firehose Processing” Lambda blueprint; this opens the Lambda console. Enter the following values in Lambda:
    • Name: SES-Firehose-Json-Parser
    • Execution role: Create a new role with basic Lambda permissions.
  7. Click “Create Function”. Now replace the Lambda code with the following code and save the function.
    • 'use strict';
      console.log('Loading function');
      exports.handler = (event, context, callback) => {
         /* Process the list of records and transform them */
          const output = event.records.map((record) => {
              console.log(record.recordId);
              const payload =JSON.parse((Buffer.from(record.data, 'base64').toString()))
              console.log("payload : " + payload);
              
              if (payload.eventType == "Click") {
              const resultPayLoadClick = {
                      eventType : payload.eventType,
                      destinationEmailId : payload.mail.destination[0],
                      sourceIp : payload.click.ipAddress,
                  };
              console.log("resultPayLoad : " + resultPayLoadClick.eventType + resultPayLoadClick.destinationEmailId + resultPayLoadClick.sourceIp);
              
              //const parsed = resultPayLoad[0];
              //console.log("parsed : " + (Buffer.from(JSON.stringify(resultPayLoad))).toString('base64'));
              
              
              return{
                  recordId: record.recordId,
                  result: 'Ok',
                  data: (Buffer.from(JSON.stringify(resultPayLoadClick))).toString('base64'),
              };
              }
              else {
                  const resultPayLoadOpen = {
                      eventType : payload.eventType,
                      destinationEmailId : payload.mail.destination[0],
                      sourceIp : payload.open.ipAddress,
                  };
              console.log("resultPayLoad : " + resultPayLoadOpen.eventType + resultPayLoadOpen.destinationEmailId + resultPayLoadOpen.sourceIp);
              
              //const parsed = resultPayLoad[0];
              //console.log("parsed : " + (Buffer.from(JSON.stringify(resultPayLoad))).toString('base64'));
              
              
              return{
                  recordId: record.recordId,
                  result: 'Ok',
                  data: (Buffer.from(JSON.stringify(resultPayLoadOpen))).toString('base64'),
              };
              }
          });
          console.log("Output : " + output.data);
          console.log(`Processing completed.  Successful records ${output.length}.`);
          callback(null, { records: output });
      };

      Please note:

      For this blog, we only extract three fields from each event: the event type, the destination email address, and the source IP (eventType, destinationEmailId, and sourceIp in the code above). If you want to store other parameters, you can modify the code accordingly. For the full list of information included in these notifications, see the following document.

      https://docs.aws.amazon.com/ses/latest/DeveloperGuide/event-publishing-retrieving-firehose-examples.html

  8. Now, navigate back to your Amazon Kinesis Data Firehose console and choose the newly created Lambda function.
  9. Keep record format conversion disabled and click “Next”.
  10. In the destination, choose Amazon S3 and select a target Amazon S3 bucket. Create a new bucket if you do not want to use the existing bucket.
  11. Enter the following values for the Amazon S3 prefix and the error prefix that are used when event data is published.
    • Prefix:
      fhbase/year=!{timestamp:yyyy}/month=!{timestamp:MM}/day=!{timestamp:dd}/hour=!{timestamp:HH}/
    • Error Prefix:
      fherroroutputbase/!{firehose:random-string}/!{firehose:error-output-type}/!{timestamp:yyyy/MM/dd}/
  12. You may use the above values for the Amazon S3 prefix and error prefix. If you use your own prefixes, make sure to update the corresponding target values in AWS Glue in the later steps.
  13. Keep the Amazon S3 backup option disabled and click “Next”.
  14. On the next page, under the Permissions section, select create a new role. This opens up a new tab and then click “Allow” to create the role.
  15. Navigate back to the Amazon Kinesis Data Firehose console and click “Next”.
  16. Review the changes and click on “Create delivery stream”.

Configure Amazon SES to publish event data to Kinesis Data Firehose:

  1. Navigate to Amazon SES console and select “Email Addresses” from the left side.
  2. Click “Verify a New Email Address” at the top. Enter the email address to which you will send a test email.
  3. Go to your email inbox and click the verification link. Navigate back to the Amazon SES console, and you will see a verified status on the email address you provided.
  4. Open the Amazon SES console and select Configuration set from the left side.
  5. Create a new configuration set. Enter “SES_Firehose_Demo” as the configuration set name and click “Create”.
  6. Choose Kinesis Data Firehose as the destination and provide the following details.
    • Name: OpenClick
    • Event Types: Open and Click
  7. In the IAM Role field, select ‘Let SES make a new role’. This allows SES to create a new role and add sufficient permissions for this use case in that role.
  8. Click “Save”.
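If you prefer to script this configuration instead of using the console, a rough sketch with the AWS SDK for Python (Boto3) is shown below. The role and delivery stream ARNs are placeholders, and this assumes the IAM role already allows SES to write to the Firehose delivery stream.

import boto3

ses = boto3.client('ses')

ses.create_configuration_set(ConfigurationSet={'Name': 'SES_Firehose_Demo'})

ses.create_configuration_set_event_destination(
    ConfigurationSetName='SES_Firehose_Demo',
    EventDestination={
        'Name': 'OpenClick',
        'Enabled': True,
        'MatchingEventTypes': ['open', 'click'],
        'KinesisFirehoseDestination': {
            # Placeholder ARNs: the role must allow SES to put records on the stream.
            'IAMRoleARN': 'arn:aws:iam::123456789012:role/ses-firehose-role',
            'DeliveryStreamARN': 'arn:aws:firehose:us-east-1:123456789012:deliverystream/SES_Firehose_Demo',
        },
    },
)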

Sending a Test email:

  1. Navigate to Amazon SES console, click on “Email Addresses” on the left side.
  2. Select your verified email address and click on “Send a Test email”.
  3. Make sure you select the raw email format. You may use the following format to send a test email from the console. Make sure you send this email to a recipient inbox to which you have access.
    • X-SES-CONFIGURATION-SET: SES_Firehose_Demo
      X-SES-MESSAGE-TAGS: Email=NULL
      From: [email protected]
      To: [email protected]
      Subject: Test email
      Content-Type: multipart/alternative;
          		boundary="----=_boundary"
      
      ------=_boundary
      Content-Type: text/html; charset=UTF-8
      Content-Transfer-Encoding: 7bit
      This is a test email.
      
      <a href="https://aws.amazon.com/">Amazon Web Services</a>
      ------=_boundary
  4. Once the email arrives in the recipient’s inbox, open it and click the link in it. This generates open and click events and sends them back to SES.

Creating Glue Crawler:

  1. Navigate to the AWS Glue console, select “crawler” from the left side, and then click on “Add crawler” on the top.
  2. Enter the crawler name as “SES_Firehose_Crawler” and click “Next”.
  3. Under Crawler source type, select “Data stores” and click “Next”.
  4. Select Amazon S3 as the data source and provide the required path. Include the path up to the “fhbase” folder.
  5. Select “no” under Add another data source section.
  6. In the IAM role, select the option to ‘Create an IAM role’. Enter the name as “SES_Firehose-Crawler”. This provides the necessary permissions automatically to the newly created role.
  7. In the frequency section, select run on demand and click “Next”. You may choose this value as per your use case.
  8. Click on add Database and provide the name as “ses_firehose_glue_db”. Click on create and then click “Next”.
  9. Review your Glue crawler setting and click on “Finish”.
  10. Run the crawler you just created. It crawls the data from the specified Amazon S3 bucket and creates a catalog and table definition.
  11. Now navigate to “Tables” on the left, and verify that a “fhbase” table was created after the crawler run.

If you want to analyze the data stored so far, you can use Amazon Athena and test the queries. If not, you can move on to Amazon QuickSight directly.

Analyzing the data using Amazon Athena:

  1. Open the Athena console and select the database created by AWS Glue.
  2. Click “Set up a query result location in Amazon S3”.
  3. Navigate to the Amazon S3 bucket created in the earlier steps and create a folder called “AthenaQueryResult”. We store our Athena query results in this folder.
  4. Now navigate back to Amazon Athena, enter the Amazon S3 bucket and folder location, and click “Save”.
  5. Run the following query to test the sample output, and modify your SQL query accordingly to get the desired results.
    • SELECT * FROM "ses_firehose_glue_db"."fhbase"

Note: If you want to track opened emails by unique IP addresses, you can modify your SQL query accordingly. This is because every time an email is opened you receive a notification, even if the same email was previously opened.
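If you want to run such queries programmatically rather than in the console, the following sketch starts an Athena query with Boto3. The aggregate query and the column names (eventtype, destinationemailid) are assumptions based on the fields the transformation Lambda writes; confirm them against the table that the crawler actually created, and replace the output location with your own bucket.

import boto3

athena = boto3.client('athena')

# Count opens and clicks per recipient; column names are assumptions based on the
# JSON fields produced by the Firehose transformation Lambda.
query = """
SELECT destinationemailid, eventtype, COUNT(*) AS events
FROM "ses_firehose_glue_db"."fhbase"
GROUP BY destinationemailid, eventtype
"""

athena.start_query_execution(
    QueryString=query,
    QueryExecutionContext={'Database': 'ses_firehose_glue_db'},
    ResultConfiguration={'OutputLocation': 's3://YOUR-BUCKET/AthenaQueryResult/'},
)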

 

Visualizing the data in Amazon QuickSight dashboards:

  1. Now, let’s analyze this data using Amazon Athena via Amazon Quicksight.
  2. Log into Amazon Quicksight and choose Manage data, New dataset. Choose Amazon Athena as a new data source.
  3. Enter the data source name as “SES-Demo” and click on “Create the data source”.
  4. Select your database from the drop-down as “ses_firehose_glue_db” and table “fhbase” that you have created in AWS Glue.
  5. And add a custom SQL based on your use case and click on “Confirm query”. Refer to the example below.
  6. You can perform ad hoc analysis and modify your query according to your business needs as shown in the following image. Click “Save & Visualize”.
  7. You can now visualize your event data on Amazon Quicksight dashboard. You can use various graphs to represent your data. For this demo, the default graph is used and two fields are selected to populate on the graph, as shown below.

 

Conclusion:

This architecture shows how to track your email sending activity at a granular level. You set up Amazon SES to publish event data to Amazon Kinesis Data Firehose based on fine-grained email characteristics that you define. You can also track several types of email sending events, including sends, deliveries, bounces, complaints, rejections, rendering failures, and delivery delays. This information can be useful for operational and analytical purposes.

To get started with Amazon SES, follow this quick start guide and you can learn more about monitoring sending activity here.

About the Authors

Chirag Oswal is a solutions architect and AR/VR specialist working with the public sector in India. He works with AWS customers to help them adopt the cloud operating model at large scale.

Apoorv Gakhar is a Cloud Support Engineer and an Amazon SES expert. He works with AWS to help customers integrate their applications with various AWS services.

 

Additional Resources:

Amazon SES Dedicated IP Pools

Amazon Personalize optimizer using Amazon Pinpoint events

Template Personalization using Amazon Pinpoint

 

 

Building resilient serverless patterns by combining messaging services

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/building-resilient-no-code-serverless-patterns-by-combining-messaging-services/

In “Choosing between messaging services for serverless applications”, I explain the features and differences between the core AWS messaging services. Amazon SQS, Amazon SNS, and Amazon EventBridge provide queues, publish/subscribe, and event bus functionality for your applications. Individually, these are robust, scalable services that are fundamental building blocks of serverless architectures.

However, you can also combine these services to solve specific challenges in distributed architectures. By doing this, you can use specific features of each service to build sophisticated patterns with little code. These combinations can make your applications more resilient and scalable, and reduce the amount of custom logic and architecture in your workload.

In this blog post, I highlight several important patterns for serverless developers. I also show how you use and deploy these integrations with the AWS Serverless Application Model (AWS SAM).

Examples in this post refer to code that can be downloaded from this GitHub repo. The README.md file explains how to deploy and run each example.

SNS to SQS: Adding resilience and throttling to message throughput

SNS has a robust retry policy that results in up to 100,010 delivery attempts over 23 days. If a downstream service is unavailable, it may be overwhelmed by retries when it comes back online. You can solve this issue by adding an SQS queue.

Adding an SQS queue between the SNS topic and its subscriber has two benefits. First, it adds resilience to message delivery, since the messages are durably stored in a queue. Second, it throttles the rate of messages to the consumer, helping smooth out traffic bursts caused by the service catching up with missed messages.

To build this in an AWS SAM template, you first define the two resources, and the SNS subscription:

  MySqsQueue:
    Type: AWS::SQS::Queue

  MySnsTopic:
    Type: AWS::SNS::Topic
    Properties:
      Subscription:
        - Protocol: sqs
          Endpoint: !GetAtt MySqsQueue.Arn

Finally, you provide permission to the SNS topic to publish to the queue, using the AWS::SQS::QueuePolicy resource:

  SnsToSqsPolicy:
    Type: AWS::SQS::QueuePolicy
    Properties:
      PolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Sid: "Allow SNS publish to SQS"
            Effect: Allow
            Principal: "*"
            Resource: !GetAtt MySqsQueue.Arn
            Action: SQS:SendMessage
            Condition:
              ArnEquals:
                aws:SourceArn: !Ref MySnsTopic
      Queues:
        - Ref: MySqsQueue

To test this, you can publish a message to the SNS topic and then inspect the SQS queue length using the AWS CLI:

aws sns publish --topic-arn "arn:aws:sns:us-east-1:123456789012:sns-sqs-MySnsTopic-ABC123ABC" --message "Test message"
aws sqs get-queue-attributes --queue-url "https://sqs.us-east-1.amazonaws.com/123456789012/sns-sqs-MySqsQueue-ABC123ABC" --attribute-names ApproximateNumberOfMessages

This results in the following output:

CLI output

Another use of this pattern is when you want to filter messages in architectures using an SQS queue. By placing the SNS topic in front of the queue, you can use the message filtering capabilities of SNS. This ensures that only the messages you need are published to the queue. To use message filtering in AWS SAM, use the AWS::SNS::Subscription resource:

  QueueSubscription:
    Type: 'AWS::SNS::Subscription'
    Properties:
      TopicArn: !Ref MySnsTopic
      Endpoint: !GetAtt MySqsQueue.Arn
      Protocol: sqs
      FilterPolicy:
        type:
        - orders
        - payments 
      RawMessageDelivery: 'true'

EventBridge to SNS: combining features of both services

Both SNS and EventBridge have different characteristics in terms of targets, and integration with broader features. This table compares the major differences between the two services:

| | Amazon SNS | Amazon EventBridge |
|---|---|---|
| Number of targets | 10 million (soft limit) | 5 per rule |
| Limits | 100,000 topics; 12,500,000 subscriptions per topic | 100 event buses; 300 rules per event bus |
| Input transformation | No | Yes |
| Message filtering | Yes | Yes, including IP address matching |
| Format | Raw or JSON | JSON |
| Receive events from AWS CloudTrail | No | Yes |
| Targets | HTTP(S), SMS, SNS mobile push, email/email-JSON, SQS, Lambda functions | 15 targets, including AWS Lambda, Amazon SQS, Amazon SNS, AWS Step Functions, Amazon Kinesis Data Streams, Amazon Kinesis Data Firehose |
| SaaS integration | No | Yes (see integration partners) |
| Schema Registry integration | No | Yes |
| Dead-letter queues supported | Yes | No |
| Public visibility | Can create public topics | Cannot create public buses |
| Cross-Region | You can subscribe AWS Lambda functions to an SNS topic in any Region; other targets must be in the same Region | You can publish across Regions to another event bus |

In this pattern, you configure an SNS topic as a target of an EventBridge rule:

SNS topic as a target for an EventBridge rule

In the AWS SAM template, you declare the resources in the preceding diagram as follows:

Resources:
  MySnsTopic:
    Type: AWS::SNS::Topic

  EventRule: 
    Type: AWS::Events::Rule
    Properties: 
      Description: "EventRule"
      EventPattern: 
        account: 
          - !Sub '${AWS::AccountId}'
        source:
          - "demo.cli"
      Targets: 
        - Arn: !Ref MySnsTopic
          Id: "SNStopic"

The default bus already exists in every AWS account, so there is no need to declare it. For the event bus to publish matching events to the SNS topic, you define permissions using the AWS::SNS::TopicPolicy resource:

  EventBridgeToSnsPolicy:
    Type: AWS::SNS::TopicPolicy
    Properties: 
      PolicyDocument:
        Statement:
        - Effect: Allow
          Principal:
            Service: events.amazonaws.com
          Action: sns:Publish
          Resource: !Ref MySnsTopic
      Topics: 
        - !Ref MySnsTopic       

EventBridge has a limit of five targets per rule. In cases where you must send events to hundreds or thousands of targets, publishing to SNS first and then subscribing those targets to the topic works around this limit. The two services also support different target types, so this pattern allows you to deliver EventBridge events to SMS, HTTP(S), email, and SNS mobile push.

You can transform and filter the message using these services, often without needing an AWS Lambda function. SNS does not support input transformation but you can do this in an EventBridge rule. Message filtering is possible in both services but EventBridge provides richer content filtering capabilities.

AWS CloudTrail can log and monitor activity across services in your AWS account. It can be a useful source for events, allowing you to respond dynamically to objects in Amazon S3 or react to changes in your environment, for example. This natively integrates with EventBridge, allowing you to ingest events at scale from dozens of services.

Using EventBridge enables you to source events from outside your AWS account, offering integrations with a list of software as a service (SaaS) providers. This capability allows you to receive events from your accounts with SaaS providers like Zendesk, PagerDuty, and Auth0. These events are delivered to a partner event bus in your account, and can then be filtered and routed to an SNS topic.

Additionally, this pattern allows you to deliver events to Lambda functions in other AWS accounts and in other AWS Regions. You can invoke Lambda from SNS topics in other Regions and accounts. It’s also possible to make SNS topics publicly read-only, making them extensible endpoints that other third parties can consume from. SNS has comprehensive access control, which you can incorporate into this pattern.

Cross-account publishing

EventBridge to SQS: Building fault-tolerant microservices

EventBridge can route events to targets such as microservices. In the case of downstream failures, the service retries events for up to 24 hours. For workloads where you need a longer period of time to store and retry messages, you can deliver the events to an SQS queue in each microservice. This durably stores those events until the downstream service recovers. Additionally, this pattern protects the microservice from large bursts of traffic by throttling the delivery of messages.

Fault-tolerant microservices architecture

The resources declared in the AWS SAM template are similar to the previous examples, but it uses the AWS::SQS::QueuePolicy resource to grant the appropriate permission to EventBridge:

  EventBridgeToSqsPolicy:
    Type: AWS::SQS::QueuePolicy
    Properties:
      PolicyDocument:
        Statement:
        - Effect: Allow
          Principal:
            Service: events.amazonaws.com
          Action: SQS:SendMessage
          Resource:  !GetAtt MySqsQueue.Arn
      Queues:
        - Ref: MySqsQueue

Conclusion

You can combine these services in your architectures to implement patterns that solve complex challenges, often with little code required. This blog post shows three examples that implement message throttling and queueing, integrate SNS and EventBridge, and build fault-tolerant microservices.

To learn more about building decoupled architectures, see this Learning Path series on EventBridge. For more serverless learning resources, visit https://serverlessland.com.

Choosing between messaging services for serverless applications

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/choosing-between-messaging-services-for-serverless-applications/

Most serverless application architectures use a combination of different AWS services, microservices, and AWS Lambda functions. Messaging services are important in allowing distributed applications to communicate with each other, and are fundamental to most production serverless workloads.

Messaging services can improve the resilience, availability, and scalability of applications, when used appropriately. They can also enable your applications to communicate beyond your workload or even the AWS Cloud, and provide extensibility for future service features and versions.

In this blog post, I compare the primary messaging services offered by AWS and how you can use these in your serverless application architectures. I also show how to use and deploy these integrations with the AWS Serverless Application Model (AWS SAM).

Examples in this post refer to code that can be downloaded from this GitHub repository. The README.md file explains how to deploy and run each example.

Overview

Three of the most useful messaging patterns for serverless developers are queues, publish/subscribe, and event buses. In AWS, these are provided by Amazon SQS, Amazon SNS, and Amazon EventBridge respectively. All of these services are fully managed and highly available, so there is no infrastructure to manage. All three integrate with Lambda, allowing you to publish messages via the AWS SDK and invoke functions as targets. Each of these services has an important role to play in serverless architectures.

SNS enables you to send messages reliably between parts of your infrastructure. It uses a robust retry mechanism for when downstream targets are unavailable. When the delivery policy is exhausted, it can optionally send those messages to a dead-letter queue for further processing. SNS uses topics to logically separate messages into channels, and your Lambda functions interact with these topics.

SQS provides queues for your serverless applications. You can use a queue to send, store, and receive messages between different services in your workload. Queues are an important mechanism for providing fault tolerance in distributed systems, and help decouple different parts of your application. SQS scales elastically, and there is no limit to the number of messages per queue. The service durably persists messages until they are processed by a downstream consumer.

EventBridge is a serverless event bus service, simplifying routing events between AWS services, software as a service (SaaS) providers, and your own applications. It logically separates routing using event buses, and you implement the routing logic using rules. You can filter and transform incoming messages at the service level, and route events to multiple targets, including Lambda functions.

Integrating an SQS queue with AWS SAM

The first example shows an AWS SAM template defining a serverless application with two Lambda functions and an SQS queue:

Producer-consumer example

You can declare an SQS queue in an AWS SAM template with the AWS::SQS::Queue resource:

  MySqsQueue:
    Type: AWS::SQS::Queue

To publish to the queue, the publisher function must have permission to send messages. Using an AWS SAM policy template, you can apply a policy that allows the function to send messages to one specific queue:

      Policies:
        - SQSSendMessagePolicy:
            QueueName: !GetAtt MySqsQueue.QueueName

The AWS SAM template passes the queue URL into the Lambda function as an environment variable. The function uses the sendMessage method of the AWS.SQS class to publish the message:

const AWS = require('aws-sdk')
AWS.config.region = process.env.AWS_REGION 
const sqs = new AWS.SQS({apiVersion: '2012-11-05'})

// The Lambda handler
exports.handler = async (event) => {
  // Params object for SQS
  const params = {
    MessageBody: `Message at ${Date()}`,
    QueueUrl: process.env.SQSqueueName
  }
  
  // Send to SQS
  const result = await sqs.sendMessage(params).promise()
  console.log(result)
}

When a message arrives in the SQS queue, the Lambda service polls the queue and invokes the consuming Lambda function. To configure this integration in AWS SAM, the consumer function is granted the SQSPollerPolicy policy. The function’s event source is set to receive messages from the queue in batches of 10:

  QueueConsumerFunction:
    Type: AWS::Serverless::Function 
    Properties:
      CodeUri: code/
      Handler: consumer.handler
      Runtime: nodejs12.x
      Timeout: 3
      MemorySize: 128
      Policies:  
        - SQSPollerPolicy:
            QueueName: !GetAtt MySqsQueue.QueueName
      Events:
        MySQSEvent:
          Type: SQS
          Properties:
            Queue: !GetAtt MySqsQueue.Arn
            BatchSize: 10

The payload for the consumer function is the message from SQS. This is an array of messages up to the batch size, containing a body attribute with the publishing function’s MessageBody. You can see this in the CloudWatch log for the function:

CloudWatch log result
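
The following is a minimal sketch of what the consumer handler could look like; the handler name and logging are illustrative and are not taken from the example repo:

// consumer.js: a minimal sketch of the consuming function.
// The SQS event contains up to BatchSize records; each record's
// body attribute holds the MessageBody sent by the publisher.
exports.handler = async (event) => {
  for (const record of event.Records) {
    console.log('MessageId:', record.messageId)
    console.log('Body:', record.body)
  }
}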

Integrating an SNS topic with AWS SAM

The second example shows an AWS SAM template defining a serverless application with three Lambda functions and an SNS topic:

SNS fanout to Lambda functions

You declare an SNS topic and the subscribing Lambda functions with the AWS::SNS::Topic resource:

  MySnsTopic:
    Type: AWS::SNS::Topic
    Properties:
      Subscription:
        - Protocol: lambda
          Endpoint: !GetAtt TopicConsumerFunction1.Arn    
        - Protocol: lambda
          Endpoint: !GetAtt TopicConsumerFunction2.Arn

You provide the SNS service with permission to invoke the Lambda functions by defining an AWS::Lambda::Permission resource for each:

  TopicConsumerFunction1Permission:
    Type: 'AWS::Lambda::Permission'
    Properties:
      Action: 'lambda:InvokeFunction'
      FunctionName: !Ref TopicConsumerFunction1
      Principal: sns.amazonaws.com

The SNSPublishMessagePolicy policy template grants permission to the publishing function to send messages to the topic. In the function, the publish method of the AWS.SNS class handles publishing:

const AWS = require('aws-sdk')
AWS.config.region = process.env.AWS_REGION 
const sns = new AWS.SNS({apiVersion: '2010-03-31'})

// The Lambda handler
exports.handler = async (event) => {
  // Params object for SNS
  const params = {
    Message: `Message at ${Date()}`,
    Subject: 'New message from publisher',
    TopicArn: process.env.SNStopic
  }
  
  // Send to SNS
  const result = await sns.publish(params).promise()
  console.log(result)
}

The payload for the consumer functions is the message from SNS. This is an array of records, each containing an Sns object with the Subject and Message attributes set by the publishing function. You can see this in the CloudWatch log for the function:

CloudWatch log result
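
A minimal sketch of one of the topic consumer functions could look like the following; the field access reflects the standard SNS event shape, and the logging is illustrative:

// consumer.js: a minimal sketch of a topic consumer.
// SNS invokes the function with a Records array; each record's
// Sns object carries the Subject and Message set by the publisher.
exports.handler = async (event) => {
  for (const record of event.Records) {
    console.log('Subject:', record.Sns.Subject)
    console.log('Message:', record.Sns.Message)
  }
}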

Differences between SQS and SNS configurations

SQS queues and SNS topics offer different functionality, though both can publish to downstream Lambda functions.

An SQS message is stored on the queue for up to 14 days until it is successfully processed by a subscriber. SNS does not retain messages so if there are no subscribers for a topic, the message is discarded.

SNS topics may broadcast to multiple targets. This behavior is called fan-out. It can be used to parallelize work across Lambda functions or send messages to multiple environments (such as test or development). An SNS topic can have up to 12,500,000 subscribers, providing highly scalable fan-out capabilities. The targets may include HTTP/S endpoints, SMS text messaging, SNS mobile push, email, SQS, and Lambda functions.

In AWS SAM templates, you can retrieve properties such as ARNs and names of queues and topics, using the following intrinsic functions:

  • Channel type: SQS queue; SNS topic.
  • Get ARN: SQS !GetAtt MySqsQueue.Arn; SNS !Ref MySnsTopic.
  • Get name: SQS !GetAtt MySqsQueue.QueueName; SNS !GetAtt MySnsTopic.TopicName.

Integrating with EventBridge in AWS SAM

The third example shows the AWS SAM template defining a serverless application with two Lambda functions and an EventBridge rule:

EventBridge integration with AWS SAM

The default event bus already exists in every AWS account. You declare a rule that filters events in the event bus using the AWS::Events::Rule resource:

  EventRule: 
    Type: AWS::Events::Rule
    Properties: 
      Description: "EventRule"
      EventPattern: 
        source: 
          - "demo.event"
        detail: 
          state: 
            - "new"
      State: "ENABLED"
      Targets: 
        - Arn: !GetAtt EventConsumerFunction.Arn
          Id: "ConsumerTarget"

The rule describes an event pattern specifying matching JSON attributes. Events that match this pattern are routed to the list of targets. You provide the EventBridge service with permission to invoke the Lambda functions in the target list:

  PermissionForEventsToInvokeLambda: 
    Type: AWS::Lambda::Permission
    Properties: 
      FunctionName: 
        Ref: "EventConsumerFunction"
      Action: "lambda:InvokeFunction"
      Principal: "events.amazonaws.com"
      SourceArn: !GetAtt EventRule.Arn

The AWS SAM template uses an IAM policy statement to grant permission to the publishing function to put events on the event bus:

  EventPublisherFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: code/
      Handler: publisher.handler
      Timeout: 3
      Runtime: nodejs12.x
      Policies:
        - Statement:
          - Effect: Allow
            Resource: '*'
            Action:
              - events:PutEvents      

The publishing function then uses the putEvents method of the AWS.EventBridge class, which returns after the events have been durably stored in EventBridge:

const AWS = require('aws-sdk')
AWS.config.update({region: 'us-east-1'})
const eventbridge = new AWS.EventBridge()

exports.handler = async (event) => {
  const params = {
    Entries: [ 
      {
        Detail: JSON.stringify({
          "message": "Hello from publisher",
          "state": "new"
        }),
        DetailType: 'Message',
        EventBusName: 'default',
        Source: 'demo.event',
        Time: new Date 
      }
    ]
  }
  const result = await eventbridge.putEvents(params).promise()
  console.log(result)
}

The payload for the consumer function is the event from EventBridge. This is a JSON object containing the source, detail-type, and detail attributes set by the publishing function. You can see this in the CloudWatch log for the function:

CloudWatch log result
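
A minimal sketch of the consumer handler, assuming the standard EventBridge event shape delivered to Lambda (the logging is illustrative):

// consumer.js: a minimal sketch of the rule target.
// EventBridge invokes the function with a single event object;
// the publisher's payload is available in the detail attribute.
exports.handler = async (event) => {
  console.log('Source:', event.source)
  console.log('DetailType:', event['detail-type'])
  console.log('Detail:', JSON.stringify(event.detail))
}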

Comparing SNS with EventBridge

SNS and EventBridge have many similarities. Both can be used to decouple publishers and subscribers, filter messages or events, and provide fan-in or fan-out capabilities. However, there are differences in the list of targets and features for each service, and your choice of service depends on the needs of your use-case.

EventBridge offers two newer capabilities that are not available in SNS. The first is software as a service (SaaS) integration. This enables you to authorize supported SaaS providers to send events directly from their EventBridge event bus to partner event buses in your account. This replaces the need for polling or webhook configuration, and creates a highly scalable way to ingest SaaS events directly into your AWS account.

The second feature is the Schema Registry, which makes it easier to discover and manage OpenAPI schemas for events. EventBridge can infer schemas based on events routed through an event bus by using schema discovery. This can be used to generate code bindings directly to your IDE for type-safe languages like Python, Java, and TypeScript. This can help accelerate development by automating the generation of classes and code directly from events.

This table compares the major features of both services:

  • Number of targets: Amazon SNS 10 million (soft limit); Amazon EventBridge 5 per rule.
  • Availability SLA: SNS 99.9%; EventBridge 99.99%.
  • Limits: SNS 100,000 topics and 12,500,000 subscriptions per topic; EventBridge 100 event buses and 300 rules per event bus.
  • Publish throughput: varies by Region for both services; soft limits apply.
  • Input transformation: SNS no; EventBridge yes – see details.
  • Message filtering: SNS yes – see details; EventBridge yes, including IP address matching – see details.
  • Message size maximum: 256 KB for both services.
  • Billing: per 64 KB.
  • Format: SNS raw or JSON; EventBridge JSON.
  • Receive events from AWS CloudTrail: SNS no; EventBridge yes.
  • Targets: SNS supports HTTP(S), SMS, SNS mobile push, email/email-JSON, SQS, and Lambda functions; EventBridge supports 15 targets including AWS Lambda, Amazon SQS, Amazon SNS, AWS Step Functions, Amazon Kinesis Data Streams, and Amazon Kinesis Data Firehose.
  • SaaS integration: SNS no; EventBridge yes – see integrations.
  • Schema Registry integration: SNS no; EventBridge yes – see details.
  • Dead-letter queues supported: SNS yes; EventBridge no.
  • FIFO ordering available: no for either service.
  • Public visibility: SNS can create public topics; EventBridge cannot create public buses.
  • Pricing: SNS $0.50/million requests plus variable delivery cost and data transfer out cost (SMS varies); EventBridge $1.00/million events, free for AWS events, no charge for delivery.
  • Billable request size: SNS 1 request = 64 KB; EventBridge 1 event = 64 KB.
  • AWS Free Tier eligible: SNS yes; EventBridge no.
  • Cross-Region: SNS lets you subscribe AWS Lambda functions to a topic in any Region; EventBridge targets must be in the same Region, but you can publish across Regions to another event bus.
  • Retry policy: SNS uses exponential backoff over 23 days for SQS/Lambda targets, and over 6 hours for SMTP, SMS, and mobile push; EventBridge provides at-least-once event delivery to targets, including retry with exponential backoff for up to 24 hours.

Conclusion

Messaging is an important part of serverless applications and AWS services provide queues, publish/subscribe, and event routing capabilities. This post reviews the main features of SNS, SQS, and EventBridge and how they provide different capabilities for your workloads.

I show three example applications that publish and consume events from the three services. I walk through AWS SAM syntax for deploying these resources in your applications. Finally, I compare differences between the services.

To learn more about building decoupled architectures, see this Learning Path series on EventBridge. For more serverless learning resources, visit https://serverlessland.com.

Send real-time alerts using Amazon Pinpoint

Post Syndicated from Dhiraj Thakur original https://aws.amazon.com/blogs/messaging-and-targeting/send-real-time-alerts-using-amazon-pinpoint/

Businesses need to send real-time notifications in order to take action when alerted of a critical situation. Examples could include anomaly detection, healthcare emergencies, operations failures, and fraud transactions. Email, SMS, and push notifications are often used to notify stakeholders in real-time. However, building a large-scale, real-time notification solution can be a complex and costly challenge for a business.

Amazon Pinpoint enables you to engage with your stakeholders in real-time by sending email, SMS and push notifications. Your app can use the Amazon Pinpoint API and the AWS SDKs to send direct messages. With transactional messages, you send alerts to specific recipients, as opposed to messages that you send to segments. There is no minimum fee, no setup cost, and no fixed monthly cost with Amazon Pinpoint.
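
As an illustration of a transactional send through the API, the following sketch uses the AWS SDK for JavaScript to deliver an alert over SMS and email; the project ID, addresses, and environment variable names are placeholders, and this is not the solution code used later in this post:

const AWS = require('aws-sdk')
AWS.config.region = process.env.AWS_REGION
const pinpoint = new AWS.Pinpoint()

// Sends a transactional alert to one recipient over SMS and email.
// ApplicationId, addresses, and sender are illustrative placeholders.
exports.handler = async (event) => {
  const params = {
    ApplicationId: process.env.PINPOINT_PROJECT_ID,
    MessageRequest: {
      Addresses: {
        '+14255550100': { ChannelType: 'SMS' },
        'recipient@example.com': { ChannelType: 'EMAIL' }
      },
      MessageConfiguration: {
        SMSMessage: {
          Body: 'Large order received. Please review inventory.',
          MessageType: 'TRANSACTIONAL'
        },
        EmailMessage: {
          FromAddress: process.env.VERIFIED_SENDER,
          SimpleEmail: {
            Subject: { Data: 'Large order alert' },
            TextPart: { Data: 'A large order transaction was detected.' }
          }
        }
      }
    }
  }
  const result = await pinpoint.sendMessages(params).promise()
  console.log(result.MessageResponse)
}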

In this blog, we explore a solution to notify stakeholders of a large customer transaction. This requires immediate attention as our stakeholders want to ensure that there is enough inventory available to deliver goods with no delay.

Solution Overview

The solution that we build to handle this use case can be deployed in one hour. The following diagram illustrates the AWS services integrated in this solution:

At a high level, the solution uses the following workflow:

1. Define the large-value transaction threshold in a rule table in Amazon DynamoDB.
2. Set up Amazon Pinpoint and configure it to send email and SMS.
3. Set up AWS Lambda and implement the logic to send SMS and email if a customer makes a large-value order transaction.
4. Create a test transaction in an order table in Amazon DynamoDB.
5. Check the details of the SMS and email received.

Setting up the solution

Step 1: Set up Amazon Pinpoint

The first step in setting up this solution is to create a new Amazon Pinpoint project and configure the SMS and Email channel.
1. Navigate to Services -> Pinpoint.
2. Click on Create a project.
3. Provide a name and click on the Create button.
4. Select Email in the left panel and click on Edit.
5. Select “Enable the email channel for this project”. Select “Verify a new email address” and provide a default sender address. Click on Verify email address, then click on Save. You should receive a verification mail. Verify the email by clicking on the verification link.

Once your email is verified, it should look like this:

6. Repeat the same steps to verify the receiver email address as well.

Please note: By enabling the email channel, we can send up to 200 emails per day in sandbox mode. We must verify an email address or domain identity in order to send any email. While we remain in the sandbox, we must also verify all recipient email addresses before sending; this does not apply in production.

7. Select SMS and voice in the left panel and select “Enable the SMS channel for this project”. Please make sure “Transactional” is selected for critical or time-sensitive messages. Amazon Pinpoint optimizes the delivery of these messages for highest reliability.

Please note: We do not need to verify any phone numbers for this channel. However, we can request dedicated codes (either short or long) for our own exclusive use. Otherwise AWS will automatically allocate codes when we send SMS.

8. Note down the project ID.

We have now completed the configuration in Amazon Pinpoint. It’s time to set up the Amazon DynamoDB tables.

Step 2: Set up Amazon DynamoDB table

1. Navigate to Services -> DynamoDB.
2. Click Create table and create the transaction_alert_rule table.
3. Create a record similar to this:

This table stores your definition of a large-value order transaction.

{
  "transaction_type": "premium",
  "min_transaction_value": 10000,
  "max_transaction_value": 50000,
  "send_notification": "",
  "unit": "$",
  "email": "[email protected]",
  "phone": "+91123456789"
}

4. Create the order_detail table. Provide order_id as the partition key. You can select “String” as the data type.
5. Now we must enable the stream on this table. Once we enable a stream on a table, Amazon DynamoDB captures information about every modification to data items in the table. We will integrate this table with AWS Lambda to validate the order value. Click on Manage Stream.
6. Select the appropriate view type and click on Enable.

Step 3: Set up AWS Lambda

In this step, we create an AWS Lambda function and integrate it with the order_detail table. The function checks each order and sends a notification if the order value is large enough.

1. Navigate to Services -> Lambda.
2. Click on Create function.
3. Select Python 3.6 as the runtime and select an execution role. Please make sure that your role grants read access to the Amazon DynamoDB tables.
4. Click on Add Trigger.
5. Select DynamoDB from the trigger drop-down, then select the order_detail table. Keep the remaining settings at their default values. This integrates the order_detail stream with our AWS Lambda function.

Our Lambda function is invoked every time a transaction happens in the order_detail table.

6. Now copy the source code.

The AWS Lambda code is available on GitHub here: https://github.com/aws-samples/send-real-time-notification-using-amazon-pinpoint-samples 

Step 4: Testing

1. That’s it! We have built the solution. Now it’s time to do an end-to-end test by creating an order transaction in the order_detail table.

2. Run the Amazon DynamoDB put-item command to create a new item over our threshold, or replace an old item with a new one. In our case it will be a new order transaction in the order_detail table. Please make sure you have configured the AWS CLI. You can refer to the quickstart guide to learn more.

aws dynamodb put-item --table-name order_detail --item '{"order_id":{"S":"O0001"},"customer_id":{"S":"C0001"},"customer_name":{"S":"JOHN MILLER"},"order_date":{"S":"2020-06-13T17:42:34Z"},"item_id":{"S":"P0001"},"item_quantity":{"N":"12"},"order_value":{"N":"20000"},"unit":{"S":"$"},"delivery_date":{"S":"2020-06-20"}}' --return-consumed-capacity TOTAL

3. It returns a success message similar to this:

4. The order value of $20,000 falls within the range of our large-value order transaction definition (between $10,000 and $50,000). You should receive an SMS on the mobile number you provided in the transaction_alert_rule table.

5. You will also receive an email, as configured in the transaction_alert_rule table.

Step 5: Clean up

You’ve now successfully built the real-time notification solution using Amazon Pinpoint. Delete the resources you created in this blog post, such as the Amazon DynamoDB tables, to avoid ongoing charges. You can use the AWS CLI, the AWS Management Console, or the AWS APIs to perform the cleanup.

Conclusion

Customers can use Amazon Pinpoint to help scale communications across use cases, including real-time notifications. Amazon Pinpoint is a flexible and scalable outbound and inbound marketing communications service. You can connect with customers and stakeholders over channels like email, SMS, push, or voice.

 

Serverless Stream-Based Processing for Real-Time Insights

Post Syndicated from Justin Pirtle original https://aws.amazon.com/blogs/architecture/serverless-stream-based-processing-for-real-time-insights/

Building on our previous posts regarding messaging patterns and queue-based processing, we now explore stream-based processing and how it helps you achieve low-latency, near real-time data processing in your applications. AWS offers two managed services for streaming, Amazon Kinesis and Amazon Managed Streaming for Apache Kafka (Amazon MSK).

What is streaming data?

At AWS, we define streaming data as data that is emitted at high volume in a continuous, incremental manner with the goal of low-latency processing. Whereas traditional batch-oriented business intelligence would offer insights in retrospect after months, days, or hours have passed, stream-based processing can offer actionable insights in real time. Stream-based processing is commonly used to respond to clickstream events, rapidly ingest various types of logs, and extract, transform, and load (ETL) data in real-time into data lakes and data warehouses.

Amazon Kinesis is the AWS service that makes it easy to collect, process, and analyze such real-time, streaming data with four different capabilities: Kinesis Data Streams, Kinesis Data Firehose, Kinesis Data Analytics, and Kinesis Video Streams.

For this blog post, we focus on Kinesis Data Streams and Kinesis Data Firehose, since both of these services are foundational for streaming, ingestion, buffering, and processing in your streaming data pipeline.

Kinesis Data Streams

Amazon Kinesis Data Streams is a massively scalable service that can continuously capture gigabytes of data per second from hundreds of thousands of sources. Like many distributed systems, Kinesis Data Streams achieves this level of scalability by partitioning or sharding your data where records are simultaneously written to and read from different shards in parallel. All Kinesis Data Streams require allocation of at least one shard and you choose how many shards you want to allocate to a given stream.

When writing to a shard in a Kinesis Data Stream, each shard supports ingestion of up to 1 MB of data per second or 1,000 records written per second. When reading from a shard, each shard supports output of 2 MB of data per second. You choose an initial number of shards to allocate for your Kinesis Data Stream, then can update your shard allocation over time. Increasing your shard allocation enables your application to easily scale from thousands of records to millions of records written per second.
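
As a rough illustration of how these per-shard limits translate into a shard count, the following sketch estimates the number of shards needed for a given ingestion rate; the sample numbers are hypothetical:

// Rough shard estimate based on the per-shard ingestion limits above:
// 1 MB of data per second or 1,000 records per second.
function estimateShards(recordsPerSecond, avgRecordSizeKB) {
  const byBandwidth = (recordsPerSecond * avgRecordSizeKB) / 1024  // MB/s divided by 1 MB/s per shard
  const byRecordRate = recordsPerSecond / 1000
  return Math.ceil(Math.max(byBandwidth, byRecordRate, 1))
}

// Example: 5,000 records per second at 3 KB each needs about 15 shards
console.log(estimateShards(5000, 3))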

Producing streaming data

Streaming data producers are processes that put records onto a Kinesis stream by calling the putRecord API to write a single record or the putRecords API to write multiple records in a single invocation. Common approaches for producing messages include direct use of AWS tools, including:

  • AWS SDK, which simplifies authentication and other semantics of invoking AWS service APIs
  • Amazon Kinesis Agent, which enables local file/log monitoring and rotation sending in real time
  • Amazon Kinesis Producer Library, which simplifies aggregating records into larger payloads to improve throughput.

Additionally, several AWS services natively integrate with Amazon Kinesis as a data producer:

There are also several third-party services that offer native integration as data producers, including:

Regardless of the producer service or tool of choice, all data producers put records onto a stream by providing a partition key, stream name, and the data itself, which altogether must not exceed 1 MB in size. The partition key is used to determine which shard the data is written to on the stream. Amazon Kinesis Data Streams offers ordering guarantees and maintains message ordering within a given shard in a stream, using sequence numbers to track the unique position of each message sent.
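
A minimal producer sketch using the AWS SDK for JavaScript could look like the following; the stream name, the choice of customer ID as the partition key, and the record fields are illustrative assumptions:

const AWS = require('aws-sdk')
AWS.config.region = process.env.AWS_REGION
const kinesis = new AWS.Kinesis()

// Writes a single record to the stream. Records with the same
// partition key are routed to the same shard, preserving their order.
exports.handler = async (event) => {
  const params = {
    StreamName: process.env.STREAM_NAME,    // placeholder stream name
    PartitionKey: event.customerId,         // determines the target shard
    Data: JSON.stringify({
      customerId: event.customerId,
      action: 'page_view',
      timestamp: new Date().toISOString()
    })
  }
  const result = await kinesis.putRecord(params).promise()
  console.log('ShardId:', result.ShardId, 'SequenceNumber:', result.SequenceNumber)
}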

Consuming streaming data

Once records are written to a Kinesis Data Stream, they are buffered in their respective shards for consumption. Unlike queue-based processing, the records are buffered until the data retention period set on the stream elapses, enabling one or more consumers to replay all of the messages in the shards of the stream. If your application must deliver your records to a data lake, data warehouse, Elasticsearch Service cluster, or Splunk, Kinesis Data Firehose can natively deliver your records to the following destinations without needing to write any custom code:

  • Amazon S3
  • Amazon Redshift
  • Amazon Elasticsearch Service
  • Splunk

You simply indicate the desired delivery destination and configure how to batch and deliver the messages. Kinesis Data Firehose can also apply your desired Amazon S3 object naming, Amazon Redshift table name, Amazon Elasticsearch Service index name, and more.

For custom processing or destinations outside of the Amazon Kinesis Data Firehose supported services above, you need to write and run custom code to consume data from the stream. Though you can use the Kinesis Client Library (KCL) to run your own custom processing application on persistent virtual machines or container instances, AWS Lambda offers serverless computing with native event source integration with Amazon Kinesis Data Streams. AWS Lambda as a stream consumer takes care of the operational overhead of reading shards, maintaining record order, checkpointing as records are processed, and parallelizing processing.

Serverless stream processing with AWS Lambda

When configured with a Kinesis Stream as its event source, AWS Lambda continuously polls every shard in your stream at no extra charge and only invokes your Lambda code if and when there are messages in the stream. It additionally scales up the number of concurrent executions to parallelize reading all shards of a stream at the same time (and can have multiple executions reading the same shard simultaneously for a higher parallelization factor, if desired). AWS Lambda automatically checkpoints which records were successfully processed and handles retries and any failures automatically according to your desired configuration.

Best of all, there is no additional cost of the Lambda service handling all of these operational needs for you. You only pay for compute time when your function is invoked and messages are available on the stream for processing. You’re able to focus on processing your data with your business logic directly in your code since your records are sent as an array to your Lambda code. There is no additional code to author/manage regarding checkpointing, shard splits/merges, or other complexities.
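
A minimal sketch of such a consumer function follows; the base64 decoding reflects the standard Kinesis event shape that Lambda delivers, and the logging is illustrative:

// A minimal sketch of a Kinesis consumer function. Lambda delivers
// a batch of records; each record's data payload is base64-encoded.
exports.handler = async (event) => {
  for (const record of event.Records) {
    const payload = Buffer.from(record.kinesis.data, 'base64').toString('utf8')
    console.log('Partition key:', record.kinesis.partitionKey)
    console.log('Payload:', payload)
  }
}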

Conclusion

In this blog, we defined streaming data and explored the Amazon Kinesis service and its various capabilities. We then reviewed the various options available for producing and consuming real-time streaming data with Amazon Kinesis, including using AWS Lambda for serverless streaming data processing. Please refer to the following resources for further learning on AWS streaming data processing:

Application Integration Using Queues and Messages

Post Syndicated from Mithun Mallick original https://aws.amazon.com/blogs/architecture/application-integration-using-queues-and-messages/

In previous blog posts in this messaging series, we provided an overview of messaging and we also explained the common characteristics to consider when evaluating messaging channel technologies. In this post, we will explain some of the semantics of queue-based processing, its use in designing flexible systems, and how to apply it to your use cases. AWS offers two queue-based services: Amazon Simple Queue Service (SQS) and Amazon MQ. We will focus on SQS in this blog.

Building Blocks

In the digital world, even the most basic design of web-based systems requires the use of queues to integrate applications. SQS is a secure, serverless, durable, and highly available message queue service. It provides a simple REST API to create a queue as well as send, receive, and delete messages.

Producing Messages

Message producers are processes that call the SendMessage APIs of SQS. SQS supports two types of queues: standard and first-in-first-out (FIFO). Standard queues provide best-effort ordering, while FIFO queues provide first-in-first-out ordering. Messages can be sent either as single messages or in batches. Standard queues can support unlimited throughput by adding as many concurrent producers as needed, whereas FIFO queues can support up to 300 TPS without batching and 3,000 TPS with batching.

app integration- producing messages

In terms of message delivery, SQS standard queues support “at-least-once” delivery, meaning that messages are delivered at least once but, occasionally, more than one copy of a message is delivered. SQS FIFO queues provide “exactly-once” processing, meaning the service can detect duplicate messages.
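
As a sketch of how these semantics surface in the API, the following example sends an order event to a FIFO queue; the queue URL environment variable and the choice of group and deduplication IDs are illustrative assumptions:

const AWS = require('aws-sdk')
AWS.config.region = process.env.AWS_REGION
const sqs = new AWS.SQS({apiVersion: '2012-11-05'})

// Sending to a FIFO queue: the group ID preserves ordering within a
// group, and the deduplication ID lets SQS detect duplicates sent
// within the 5-minute deduplication interval.
async function sendOrderEvent(order) {
  return sqs.sendMessage({
    QueueUrl: process.env.ORDER_QUEUE_URL,   // must be a .fifo queue
    MessageBody: JSON.stringify(order),
    MessageGroupId: order.customerId,
    MessageDeduplicationId: order.orderId
  }).promise()
}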

Consuming Messages

Message consumers are processes that make the ReceiveMessage API call on SQS. Messages from queues can be processed either in batch or as a single message at a time. Each approach has its advantages and disadvantages.

  • Batch processing: This works best when each message can be processed independently, so that an error on a single message does not disrupt the entire batch. Batch processing provides the most throughput for processing, and also provides the most optimization for the resources involved in reading messages.
  • Single message processing: Single message processing is commonly used in scenarios where each message may trigger multiple processes within the consumer. In case of errors, the retry is confined to the single message.

To maintain the order of processing, FIFO queues are typically consumed by a single process. Messages across message groups can still be processed in parallel. However, the overall throughput limit will still apply for FIFO queues.

SQS supports long and short polling on queues. Short polling samples a subset of servers and returns the messages found, or an empty response if there are no messages on those servers. Long polling queries all servers and returns as soon as there is at least one message, waiting up to the configured timeout if the queue is empty. Long polling can reduce cost by reducing the number of calls that return empty responses. Get more details and sample code for sending and receiving messages.
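
A minimal long-polling sketch using the AWS SDK for JavaScript follows; the queue URL is a placeholder and the processing logic is illustrative:

const AWS = require('aws-sdk')
AWS.config.region = process.env.AWS_REGION
const sqs = new AWS.SQS({apiVersion: '2012-11-05'})

// Long polling: wait up to 20 seconds for messages before returning.
async function pollQueue(queueUrl) {
  const result = await sqs.receiveMessage({
    QueueUrl: queueUrl,
    MaxNumberOfMessages: 10,   // receive a batch of up to 10 messages
    WaitTimeSeconds: 20        // enables long polling
  }).promise()

  for (const message of result.Messages || []) {
    console.log('Body:', message.Body)
    // Delete each message after it has been processed successfully
    await sqs.deleteMessage({
      QueueUrl: queueUrl,
      ReceiptHandle: message.ReceiptHandle
    }).promise()
  }
}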

It’s simple to poll for messages, but doing so has the overhead of running a process continuously. In some situations it may be hard to monitor such processes and troubleshoot them in case of errors. The SQS integration with AWS Lambda moves the undifferentiated heavy lifting of polling logic into the Lambda service.

app integration- consuming messages

Get more details for SQS as an event source for Lambda in this blog post: AWS Lambda Adds Amazon Simple Queue Service to Supported Event Sources. This is supported for both standard as well as FIFO queues.

Benefits

Queue-based processing can protect backend systems, like relational databases, from unexpected surges in front-end traffic. You can decouple the processing logic by using an intermediate queue.

Consider a scenario that involves a web application for ordering products related to a TV show. It’s possible that the ordering and payment systems rely on a traditional relational database. During peak show seasons, queues can be used to control the traffic to the ordering and payment system. In the following diagram, we show how the web application requests are staged in the queue, while the backend consumes them at a rate based on the capacity of the database.

app integration - benefits

Error Handling

Asynchronous message processing presents unique challenges with error handling, since errors often manifest only as log entries on the producer or consumer and create backlogs in processing. In order to handle errors elegantly, the first requirement is to determine the nature of the error. If the error is transient, it may help to retry after a brief delay and eventually move the message to a dead-letter queue if repeated attempts fail. The other error scenario is when the data itself is bad and the error won’t be fixed, even after repeated attempts. In such cases, the consumer process needs to make this determination and move the message to an error queue.

In some cases, due to its highly distributed architecture, standard SQS can deliver a duplicate message, which can result in duplicate key errors for the consumer. The best way to mitigate this is to make the consumers idempotent, so that processing the same message multiple times produces the same result.
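
One possible way to sketch an idempotent consumer is a conditional write keyed on the SQS message ID, skipping records that have already been recorded; the processed_messages table name is an assumption for illustration:

const AWS = require('aws-sdk')
AWS.config.region = process.env.AWS_REGION
const dynamodb = new AWS.DynamoDB.DocumentClient()

// Record each processed messageId with a conditional write and skip
// any duplicate deliveries. The table name is illustrative.
exports.handler = async (event) => {
  for (const record of event.Records) {
    try {
      await dynamodb.put({
        TableName: 'processed_messages',
        Item: { messageId: record.messageId },
        ConditionExpression: 'attribute_not_exists(messageId)'
      }).promise()
    } catch (err) {
      if (err.code === 'ConditionalCheckFailedException') {
        console.log('Duplicate message, skipping:', record.messageId)
        continue
      }
      throw err
    }
    // Process the message here; it has not been seen before
    console.log('Processing:', record.body)
  }
}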

Conclusion

In this blog, we explained the importance of messaging in building distributed applications, and covered various aspects of queue-based processing using SQS, such as sending and receiving messages, FIFO versus standard queues, and common error handling scenarios. We also covered using SQS as an event source for AWS Lambda. Please refer to the following blog posts for using messaging in different integration patterns:

Introduction to Messaging for Modern Cloud Architecture

Post Syndicated from Sam Dengler original https://aws.amazon.com/blogs/architecture/introduction-to-messaging-for-modern-cloud-architecture/

We hope you’ve enjoyed reading our posts on best practices for your serverless applications. The posts in the following series will focus on best practices when introducing messaging patterns into your applications. Let’s review some core messaging concepts and see how they can be used to address challenges when designing modern cloud architectures.

Introduction

Applications can communicate information with each other using messages, a mechanism for packaging a data payload and associated metadata. The application that sends a message is called the producer and the application that receives the message is called the consumer. Producers and consumers exchange messages using a variety of transportation channels, for example point-to-point requests, message queues, subscription topics, or event buses. These transportation channels have different characteristics that make them useful when implementing message communication patterns. Dependencies emerge when producers and consumers exchange messages, which is called coupling.

Synchronous Communication

synchronous communication

Message communication is called synchronous when the producer sends a message to the consumer and waits for a response before the producer continues its processing logic. An example of synchronous communication over a point-to-point channel is when an HTTP client makes a request to an HTTP service, waits for the service to process the request, and then applies logic to the HTTP response to determine how to proceed.

Synchronous communication patterns are more straightforward to implement, however they create tight coupling between producers and consumers. Tight coupling can cause problems due to traffic spikes and failures propagating directly throughout the application. For example, in a three-tier architecture, when the application experiences a spike in client traffic, the web tier directly translates the traffic spike as pressure on downstream resources (the logic and data tiers), which may not scale to meet the sudden demand. Likewise, downstream resource failure in the logic or data tier directly impacts the web tier from responding to client requests. Applications can mimic a synchronous experience, for example a status spinner, using asynchronous communication with a polling or push notification strategy.

Asynchronous Communication

Asynchronous communication

Message communication is called asynchronous when the producer sends a message to the consumer and proceeds without waiting for the response. An example of asynchronous communication over a message queue channel is when a client publishes a message to a queue, and after the queue acknowledges receipt of the message, the publisher proceeds without waiting for the consumer to process the message.

Asynchronous communication patterns are implemented using transportation channels such as queues, topics, and event buses to create loose coupling between producers and consumers. Loose coupling increases an architecture’s resiliency to failure and ability to handle traffic spikes because it creates an indirection between producer and consumer communication, enabling them to operate independently of each other. Using the three-tier architecture example, a message queue can be introduced between the web, logic, and data tiers to enable each to scale independently of each other. When the application experiences a spike in client traffic, the web tier translates the traffic spike as more messages to the queue for processing, however the logic tier may continue to process messages off the queue without being directly impacted.

Considerations and Next Steps

Although asynchronous communication patterns can benefit modern cloud architectures, there are tradeoffs to consider. Asynchronous messaging adds latency to end-to-end processing time due to the addition of middleware. Producers and consumers take a dependency on the middleware stack, which must also scale to meet demand and be resilient to failure. Care must be taken to appropriately configure producers, consumers, and middleware to handle errors so that messages are not lost, more monitoring is required to ensure proper operations, and multiple logs must be correlated to troubleshoot and diagnose problems.

Amazon MQ, Amazon Kinesis, Amazon Simple Queue Service (SQS), Amazon Simple Notification Service (SNS), and Amazon EventBridge are highly available, large-scale, failure-resistant managed services that can be used to implement asynchronous messaging patterns. You can explore these services at the AWS Messaging page and their integration into serverless architectures in the free new digital course, Architecting Serverless Solutions. You can also visit the AWS Event-Driven Architecture page to learn how to apply messaging patterns to build event-driven solutions. The upcoming posts in this series will explore these AWS services to help ensure message patterns are implemented using best practices when applied to modern cloud architecture.

The value of using an Email Service Provider (ESP) for end customer communications

Post Syndicated from Heidi Gloudemans original https://aws.amazon.com/blogs/messaging-and-targeting/the-value-of-using-an-email-service-provider/

In the world of pervasive consumer chat and messaging applications, email has retained its place as a ubiquitous channel for end customer communications. The Radicati Group reports that email usage will continue to grow from an estimate of 3.9 billion users in 2019 to 4.4 billion in 2023.1 Email is a welcome (and preferred) conduit for many consumers to stay connected to their favorite brands.2 Marketers and developers have found that email has the highest ROI of any end customer communications channel because of its low barrier to entry, affordability and ability to target specific recipients.

However, the very strength of email may also be its biggest weakness. The ubiquity of email as a communication channel means that many consumer mailboxes are inundated with potentially malicious or unwanted messages. Brands must then take action to ensure that their marketing and transactional messages are received by the end customer in a timely manner. While personalized content is one of the fastest growing parts of facilitating deeper customer engagement, organizations should first understand the value of partnering with a mature and trusted email service provider (ESP).

Many organizations have built internal competencies around business email. Some businesses use an on-premises email server instance or subscribe to a managed email cloud offering (like Amazon WorkMail), while others use email servers on cloud-hosted infrastructure like Amazon EC2. While the fundamentals of email transport are shared between business and marketing email platforms, marketing and transactional email has unique parameters that require additional consideration. ESPs have built specialized expertise in delivering email at scale.

Security and Scale

ESPs have the ability to send email on behalf of an organization’s domain or subdomains. ESPs facilitate secure mail delivery through both DomainKeys Identified Mail (DKIM) records and Sender Policy Framework (SPF) records. SPF allows the receiving mail server to validate that the sending IP address is on the domain’s list of approved senders. DKIM is the second authentication protocol: it adds a cryptographic signature to each message, using a published key pair, so that the receiving server can verify that the purported sending domain actually sent the message. The combination of both security methods is the first step in authenticating a message from an ESP to the mailbox provider or recipient domain. An additional optional authentication reporting mechanism is Domain-based Message Authentication, Reporting and Conformance (DMARC), which can report back any attempts by non-authenticated parties to spoof sending domains.

In addition to security, the best ESPs can also deliver mail at a very large scale. Scale is not just the compute power to deliver hundreds of millions of email a day. It also includes the ability to sustain significant sending volumes of email over time while receiving end-point delivery messages. However, in addition to ESP infrastructure, there are other insights that ESPs can provide around deliverability of a message.

Deliverability

ESPs typically give mail senders the ability to manage their reputation as assigned by major internet service providers (ISPs), including Google Mail, Outlook.com, and others. Reputation is assigned to sending IP addresses and domains based on a combination of influencing factors. Evaluation criteria include, but are not limited to, the following:

  • Type of message and content – Marketing vs. Transactional
  • Valid email addresses and hard bounce rates
  • Feedback from end users (i.e., spam complaints and unsubscribes)
  • Open rates and engagement from end users
  • Time of delivery and cadence of delivery
  • Major blacklist listings, like Spamhaus

Reputation impacts deliverability, which determines whether a message ends up in an end customer’s mailbox. Transactional messages and marketing messages are typically treated differently by ISPs. Transactional messages (like order confirmations, password resets, etc.) need to be sent and received within a short period of time after a transaction, and have less strict rules when it comes to content and scheduling. Marketing messages must adhere to specific regulatory rules per country.

A poor deliverability designation may throttle or even block your message’s ability to reach its destination, with messages instead ending up in the junk folder or blocked altogether. Top ESPs monitor and provide feedback, ensuring mail senders have the ability to take action. However, ESPs also have a responsibility to keep bad actors off their platforms, and will block bad senders that utilize them as an ESP resource.

Most ESPs give mail senders the flexibility to manage their sender reputation through both deployment options and delivery statistics. Common deployment options include shared and dedicated IP addresses and pools. Shared IP pools comprise many individual senders all using the same IP space. The IP reputation is shared across the pool and can both positively and negatively influence senders in the pool, based on the majority of traffic. Dedicated IPs are managed directly by a single customer, and can even be dedicated to specific functions. For example, a set of IPs can be dedicated to transactional emails vs. pure marketing communications. Many customers prefer dedicated environments because of the ability to directly influence deliverability and prevent external influence on their sending reputation.

Statistics that may prove to be leading indicators of deliverability problems include:

  • Mailbox placement (Inbox vs. Junk folder)
  • Detailed email deliverability statistics (bounce rates, etc.)
  • Blacklist monitoring

However, the most important factor impacting mailbox placement is having explicit permission from the end user, with agreed upon content and frequency. End users should know when they are opting into email communications with senders. It is not recommended to use purchased or rented lists, and doing so is typically in violation of the use policy of most ESPs. To stay compliant, it is important to keep your contact preferences up to date and quickly react to unsubscribes and complaints.

Email Analytics

In addition to measuring your sender reputation, metrics are also an important part of measuring the success of your email campaign. Part of calculating the ROI of your outbound email effort is understanding how many emails were opened and whether the end user completed any calls to action (CTAs) in the mail, like clicking a link.

Email analytics from your ESP should include the end-to-end lifecycle of the mail, including:

  • Deliverability rates
  • Delivery rates per ISP
  • Unique Open rates
  • Unique Click through rates
  • Unsubscribes
  • Complaints

Ultimately, in combination with web commerce integration, these metrics can help you determine conversion rates of your campaign to a paying customer.

Conclusion

Connecting to end customers and driving engagement is the goal of every brand. But to make each connection count, brands must ensure that their messages are being delivered through purpose-built infrastructure. Email Service Providers (ESPs) should be part of every organization’s successful email marketing strategy.

Amazon SES has been the trustworthy, flexible and affordable email service provider for developers and marketers since 2012. Learn more here: https://aws.amazon.com/ses/

1https://www.statista.com/statistics/255080/number-of-e-mail-users-worldwide/

2https://www.statista.com/statistics/984615/consumer-brand-communications-channels/

 

About the Author
Heidi Gloudemans is a Senior PMM on Amazon SES and Amazon Pinpoint