Tag Archives: Deliverability

Updating opt-in status for Amazon Pinpoint channels

Post Syndicated from Varinder Dhanota original https://aws.amazon.com/blogs/messaging-and-targeting/updating-opt-in-status-for-amazon-pinpoint-channels/

In many real-world scenarios, customers use home-grown or third-party systems to manage their campaign-related information, including user preferences, segmentation, targeting, interactions, and more. To create customer-centric engagement experiences with such existing systems, you need to migrate or integrate them with Amazon Pinpoint. Luckily, many AWS services and mechanisms can help streamline this integration in a resilient and cost-effective way.

In this blog post, we demonstrate a sample solution that captures changes from an on-premises application’s database by utilizing AWS Integration and Transfer Services and updates Amazon Pinpoint in real-time.

If you are looking for a serverless, mobile-optimized preference center allowing end users to manage their Pinpoint communication preferences and attributes, you can also check the Amazon Pinpoint Preference Center.

Architecture


In this scenario, users’ SMS opt-in/opt-out preferences are managed by a home-grown customer application. Users interact with the application through its web interface, and the application saves the customer preferences in a MySQL database.

This solution’s flow of events is triggered by a change (insert / update / delete) in the database. The change event is captured by AWS Database Migration Service (DMS), which is configured with an ongoing replication task. This task continuously monitors the specified database and forwards each change event to an Amazon Kinesis Data Streams stream. Raw events buffered in this stream are polled by an AWS Lambda function. This function transforms the event and passes it to the Amazon Pinpoint API, which in turn changes the opt-in/opt-out subscription status of the channel for that user.

Ongoing replication tasks can be created against many database engines, including Oracle, Microsoft SQL Server, PostgreSQL, and more. In this blog post, we use a MySQL-based RDS instance to demonstrate this architecture. The instance will have a database we name blog_db and one table we name optin_status. In this sample, we assume the table holds details about a user and their opt-in preference for SMS messages.

userid  phone         optin  lastupdate
user1   +12341111111  1      1593867404
user2   +12341111112  1      1593867404
user3   +12341111113  1      1593867404

Prerequisites

  1. AWS CLI is configured with an active AWS account and appropriate access.
  2. You have an understanding of Amazon Pinpoint concepts. You will be using Amazon Pinpoint to create a segment, populate endpoints, and validate phone numbers. For more details, see the Amazon Pinpoint product page and documentation.

Setup

First, clone the repository that contains the CloudFormation templates to your local environment, and make sure you have configured your AWS CLI with valid credentials. Then follow the steps below to deploy the CloudFormation stack:

  1. Clone the git repository containing the CloudFormation templates:
    git clone https://github.com/aws-samples/amazon-pinpoint-rds-integration.git
    cd amazon-pinpoint-rds-integration
  2. You need an S3 bucket to hold the packaged template:
    aws s3api create-bucket --bucket <YOUR-BUCKET-NAME>
  3. Run the following command to package the CloudFormation templates:
    aws cloudformation package --template-file template_stack.yaml --output-template-file template_out.yaml --s3-bucket <YOUR-BUCKET-NAME>
  4. Deploy the stack with the following command:
    aws cloudformation deploy --template-file template_out.yaml --stack-name pinpointblogstack --capabilities CAPABILITY_AUTO_EXPAND CAPABILITY_NAMED_IAM

The AWS CloudFormation stack will create and configure resources for you. Some of the resources it will create are:

  • Amazon RDS instance with MySQL
  • AWS Database Migration Service replication instance
  • AWS Database Migration Service source endpoint for MySQL
  • AWS Database Migration Service target endpoint for Amazon Kinesis Data Streams
  • Amazon Kinesis Data Streams stream
  • AWS Lambda Function
  • Amazon Pinpoint Application
  • A Cloud9 environment as a bastion host

The deployment can take up to 15 minutes. You can track its progress in the CloudFormation console’s Events tab.
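
If you prefer to track the deployment from the command line instead of the console, the following minimal sketch polls the stack status with Boto3. It assumes the stack name used in the deploy command above; the polling interval is arbitrary.

import time
import boto3

cloudformation = boto3.client("cloudformation")

# Poll the stack status until the deployment finishes (or fails).
while True:
    status = cloudformation.describe_stacks(StackName="pinpointblogstack")["Stacks"][0]["StackStatus"]
    print(status)
    if status.endswith("_COMPLETE") or status.endswith("_FAILED"):
        break
    time.sleep(30)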

Populate RDS data

Upon completion, the CloudFormation stack outputs the DNS address of the RDS endpoint and the Cloud9 environment. The Cloud9 environment acts as a bastion host and allows you to reach the RDS instance endpoint that CloudFormation deployed into the private subnet.

  1. Open the AWS Console and navigate to the Cloud9 service.
  2. Click on the Open IDE button to reach your IDE environment.
  3. In the terminal pane of your IDE, type the following to log in to your RDS instance. You can find the RDS endpoint address in the Outputs section of the CloudFormation stack, under the key RDSInstanceEndpoint.
    mysql -h <YOUR_RDS_ENDPOINT> -uadmin -pmypassword
    use blog_db;
  4. Issue the following command to create a table that holds the user’s opt-in status:
    create table optin_status (
      userid varchar(50) not null,
      phone varchar(50) not null,
      optin tinyint default 1,
      lastupdate TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP
    );
  5. Next, load sample data into the table. The following inserts nine users for this demo:
    
    INSERT INTO optin_status (userid, phone, optin) VALUES ('user1', '+12341111111', 1);
    INSERT INTO optin_status (userid, phone, optin) VALUES ('user2', '+12341111112', 1);
    INSERT INTO optin_status (userid, phone, optin) VALUES ('user3', '+12341111113', 1);
    INSERT INTO optin_status (userid, phone, optin) VALUES ('user4', '+12341111114', 1);
    INSERT INTO optin_status (userid, phone, optin) VALUES ('user5', '+12341111115', 1);
    INSERT INTO optin_status (userid, phone, optin) VALUES ('user6', '+12341111116', 1);
    INSERT INTO optin_status (userid, phone, optin) VALUES ('user7', '+12341111117', 1);
    INSERT INTO optin_status (userid, phone, optin) VALUES ('user8', '+12341111118', 1);
    INSERT INTO optin_status (userid, phone, optin) VALUES ('user9', '+12341111119', 1);
  6. The table’s optin column holds the SMS opt-in status (1 for opted in, 0 for opted out) for the phone number of each user.

Start the DMS Replication Task

Now that the environment is ready, you can start the DMS replication task and begin watching for changes in this table.

  1. From the AWS DMS Console, go to the Database Migration Tasks section.
  2. Select the Migration task named blogreplicationtask.
  3. From the Actions menu, click Restart/Resume to start the migration task. Wait until the task’s Status transitions from Ready to Starting, and then to Replication ongoing.
  4. At this point, all changes on the source database are replicated into a Kinesis stream. Before testing end to end, take a look at the AWS Lambda function that polls this stream and at the Amazon Pinpoint application it updates.

Inspect the AWS Lambda Function

An AWS Lambda function has been created to receive the events. The Lambda function uses Python and Boto3 to read the records delivered by Kinesis Data Streams. It then calls the update_endpoint API to add, update, or delete endpoints in the Amazon Pinpoint application.
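
The exact implementation lives in the repository, but the following minimal sketch illustrates the general shape of such a handler. It assumes the DMS task writes JSON change records containing data and metadata sections and that the endpoint ID is derived from the userid column; the deployed function may differ in both respects (for example, it also populates endpoint location data used by the segment filter later in this post).

import base64
import json
import os

import boto3

pinpoint = boto3.client("pinpoint")
APP_ID = os.environ["PINPOINT_APPID"]

def handler(event, context):
    for record in event["Records"]:
        # Kinesis delivers each DMS change record as base64-encoded JSON.
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
        data = payload.get("data", {})
        operation = payload.get("metadata", {}).get("operation")

        # This sketch only handles initial loads, inserts, and updates.
        if operation not in ("load", "insert", "update"):
            continue

        # Map the table's optin flag to Pinpoint's OptOut attribute.
        opt_out = "NONE" if int(data["optin"]) == 1 else "ALL"

        pinpoint.update_endpoint(
            ApplicationId=APP_ID,
            EndpointId=data["userid"],  # assumption: userid used as the endpoint ID
            EndpointRequest={
                "Address": data["phone"],
                "ChannelType": "SMS",
                "OptOut": opt_out,
                "User": {"UserId": data["userid"]},
            },
        )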

The Lambda code and configuration are accessible through the Lambda console. To inspect the Python code, click Functions on the left side, then select the function whose name starts with pinpointblogstack-MainStack by clicking on the function name.

Note the PINPOINT_APPID variable under the Environment variables section. This variable provides the Lambda function with the Amazon Pinpoint application ID it uses for the API call.


Inspect Amazon Pinpoint Application in Amazon Pinpoint Console

The Lambda function needs an Amazon Pinpoint application to update endpoints against. This application has been created with an SMS channel by the CloudFormation template. Once the data from the RDS database has been imported into Pinpoint as SMS endpoints, you can validate the import by creating a segment in Pinpoint.


Testing

With the Lambda function ready, you can now test the whole solution.

  1. To initiate the end-to-end test, go to the Cloud9 terminal and run the following SQL statements against the optin_status table:
    UPDATE optin_status SET optin=0 WHERE userid='user1';
    UPDATE optin_status SET optin=0 WHERE userid='user2';
    UPDATE optin_status SET optin=0 WHERE userid='user3';
    UPDATE optin_status SET optin=0 WHERE userid='user4';
  2. These statements cause four changes in the database, which are captured by DMS and passed to the Kinesis Data Streams stream.
  3. This triggers the Lambda function, which constructs an update_endpoint API call to the Amazon Pinpoint application.
  4. The update_endpoint operation is an upsert. If the endpoint does not exist in the Amazon Pinpoint application, it creates one; otherwise, it updates the existing endpoint.
  5. In the initial dataset, all opt-in values are 1, so those endpoints were created with an OptOut value of NONE in Amazon Pinpoint.
  6. All endpoints with OptOut=NONE are considered active and are therefore available for use within segments. You can verify an individual endpoint with the sketch that follows this list.
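
If you want to confirm an individual endpoint’s opt-out state outside of the console, a small sketch like the following can fetch it with Boto3. The application ID placeholder and the assumption that userid is used as the endpoint ID must be adjusted to match the deployed Lambda function.

import boto3

pinpoint = boto3.client("pinpoint")

response = pinpoint.get_endpoint(
    ApplicationId="YOUR_PINPOINT_APP_ID",  # value of the PINPOINT_APPID environment variable
    EndpointId="user1",                    # assumption: userid used as the endpoint ID
)
print(response["EndpointResponse"]["OptOut"])  # NONE (opted in) or ALL (opted out)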

Create Amazon Pinpoint Segment

  1. In order to see these changes, go to the Pinpoint console. Click on PinpointBlogApp.
  2. Click on Segments on the left side. Then click Create a segment.
  3. For the segment name, enter US-Segment.
  4. Select Endpoint from the Filter dropdown.
  5. Under the Choose an endpoint attribute dropdown, select Country.
  6. For Choose values enter US.
    Note: As you do this, the right panel Segment estimate will refresh to show the number of endpoints eligible for this segment filter.
  7. Click Create segment at the bottom of the page.
  8. Once the new segment is created, you are directed to its configuration details. You should see five eligible endpoints, corresponding to the five users who are still opted in.
  9. Now, change one row by issuing the following SQL statement. This simulates a user opting out from SMS communication for one of their numbers.
    UPDATE optin_status SET optin=0 WHERE userid='user5';
  10. After the update, go to the Amazon Pinpoint console. Check the eligible endpoints again. You should only see four eligible endpoints.


Cleanup

If you no longer want to incur charges, delete the CloudFormation stack named pinpointblogstack. Select it and click Delete.


Conclusion

This post walked you through how opt-in change events are delivered from Amazon RDS to Amazon Pinpoint. You can adapt this solution to other use cases as well, such as importing segments from a third-party application like Salesforce or updating other channel types such as email, push, and voice. To learn more about Amazon Pinpoint, visit our website.

Amazon SES celebrates 10 years of email sending and deliverability

Post Syndicated from Simon Poile original https://aws.amazon.com/blogs/messaging-and-targeting/amazon-ses-celebrates-10-years-of-email-sending-and-deliverability/

Amazon Simple Email Service (Amazon SES) turns 10 years old today. Back on January 25th, 2011, Amazon Web Services (AWS) had only 15 services; today, AWS has grown to over 180 services. Jeff Barr announced the launch of Amazon SES on his blog, and much of what he wrote about then is still true today. Even 10 years later, email is an important channel for customer communications, and developers still want to rely on a trusted global partner to deliver email at scale. However, mailbox providers are even more protective of their end users’ security, and they actively work to ensure that email they perceive as unwanted doesn’t make it to the inbox.

Inbox providers use several factors to determine the legitimacy of email traffic. Over the last decade, we have worked diligently to measure many of those factors in Amazon SES to help our customers achieve great deliverability. The focus for much of that work has been a combination of investments into reputation, engagement, and trust. I want to outline what we’ve accomplished to improve your email sending over the last 10 years.

Reputation

Reputation is the measurement mailbox providers use to determine how closely you follow their sending standards. Amazon SES measures perceived reputation through metrics such as bounce rate and complaint rate in the reputation dashboard. The reputation dashboard also shares the overall Amazon SES account sending status, such as “Healthy” or “Under review.” Some inbox providers, or ISPs, also provide feedback to help us measure how effectively a specific IP or domain is sending trustworthy traffic.

You can influence reputation in Amazon SES through:

  • Setting up dedicated IPs: Set up IPs in Amazon SES for your own specific sending, with appropriate warm-up plans. Split IPs out by use case, such as separating password resets from marketing messages.
  • Customer-owned IPs (new in 2020): You can now transition IPs you’ve invested in through your own data center or with another ESP to Amazon SES without interruption.
  • Following sending volume best practices: Nothing can flag your IP addresses faster than unpredictable sending patterns. We help you manage this through sending quotas.
  • Using the SES email simulator: Test your application’s sending without messages leaving the sandbox; see the sketch after this list.
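
As an illustration of the simulator, the sketch below sends a test message to one of the simulator mailboxes with Boto3. The sender address and region are assumptions: the sender must be an identity you have verified, and the simulator also offers addresses such as bounce@simulator.amazonses.com and complaint@simulator.amazonses.com for exercising those paths.

import boto3

ses = boto3.client("ses", region_name="us-east-1")  # assumption: use your sending region

ses.send_email(
    Source="sender@example.com",  # assumption: a verified identity in your account
    Destination={"ToAddresses": ["success@simulator.amazonses.com"]},
    Message={
        "Subject": {"Data": "Simulator test"},
        "Body": {"Text": {"Data": "This message never reaches a real recipient."}},
    },
)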


Engagement

Engagement is the rate at which customers interact with your content. Amazon SES helps you measure engagement through conversion rates (such as open or click-through rates) and unsubscribe rates, which are available through event publishing. This area is more of an art in our deliverability calculus because success varies by industry and use case.

You can influence engagement in Amazon SES through:

  • Customizing content as much as possible, while following content best practices to avoid setting off content filters. Mailbox providers often apply behavioral content filtering, using AI to determine whether your content is relevant based on engagement behavior.
  • Using consent and list management (new in 2020) with customized topics and opt-out pages. It’s important to offer recipients a way to choose which emails they want to receive from you and to give them an option to opt out. This is a great new feature that we’ve added based on customer feedback; a sketch follows this list.
  • Removing emails that are not engaging from your lists. Some customers set a time limit, for example 60 days of inactivity, before a recipient is automatically removed from an active email list.
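
As a rough sketch of what consent and list management setup can look like with the SES v2 API, the following creates a contact list with a single topic and adds one contact to it. The list name, topic, and email address are made up for illustration; your topics and preference pages will differ.

import boto3

sesv2 = boto3.client("sesv2")

# Create a contact list with one topic that recipients can opt in to or out of.
sesv2.create_contact_list(
    ContactListName="ExampleList",  # assumption: illustrative name
    Topics=[
        {
            "TopicName": "newsletter",
            "DisplayName": "Monthly newsletter",
            "DefaultSubscriptionStatus": "OPT_IN",
        }
    ],
)

# Add a contact with an explicit topic preference.
sesv2.create_contact(
    ContactListName="ExampleList",
    EmailAddress="recipient@example.com",  # assumption: illustrative address
    TopicPreferences=[{"TopicName": "newsletter", "SubscriptionStatus": "OPT_IN"}],
)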


Trust

Trust in email sending is earned by adhering to proper sending behavior, as measured by both individual ISPs and industry watch-groups, and it is closely related to reputation. We measure trust through messages in the reputation dashboard based on feedback loops, real-time blocklists (RBLs), and spam traps. You can also see the complaint rate associated with your sending in the complaint area of the reputation dashboard, which has statuses like Healthy or Under review.

You can influence trust in Amazon SES through:


Deliverability is a multi-dimensional part of email sending that goes well beyond setting up an SMTP (Simple Mail Transfer Protocol) endpoint, and its complexities keep evolving. But we’re here to help. In addition to these investments in deliverability, we’ve also expanded Amazon SES to 18 regions, including the government cloud. It’s been an exciting time at AWS, and we look forward to supporting all of our customers in the years to come with Amazon SES.


How Amazon Simple Email Service supported the growth of email in 2020

Post Syndicated from Simon Poile original https://aws.amazon.com/blogs/messaging-and-targeting/how-amazon-simple-email-service-supported-the-growth-of-email-in-2020/

Over the last 12 months, organizations of all types have increasingly needed to stay connected to their customers. With the move to virtual interactions accelerating across industries, email has remained a trusted channel for customer communications. Amazon Simple Email Service (SES) has seen record outbound email traffic in 2020, supporting critical customer communications during COVID and commercial moments like Black Friday and Cyber Monday.

The importance of email during COVID

Unlike real-time communications such as voice or live chat, email is asynchronous: it can be read and consumed at the customer’s leisure. In some geographies like North America, email also represents an individual’s unique identity, persisting longer than mobile phone numbers or social networking accounts. Even though the importance of email was established well before 2020, most organizations focused on sending only the right messages during the COVID crisis.

Many organizations voluntarily chose to decrease promotional or marketing emails during the pandemic, in recognition of the increased stress most individuals were facing in their personal lives. However, even with the drop in marketing emails across organizational types, there was an increased need to communicate and maintain customer engagement. Most organizations went through three distinct customer communication phases with email in 2020: React, Respond, and Reimagine.

  • React – These were the initial emails sent to acknowledge the COVID crisis, occurring early in 2020. These emails included messages reinforcing commitment to customer health, employee safety, or communicating new cleaning protocols.
  • Respond – These messages often included communication on the status of the business or event. Most businesses needed to communicate their transition to remote work, temporary closures, and the cancellation of many in-person events.
  • Reimagine – Throughout the crisis, organizations were reimagining how to do business. Healthcare providers started offering video consultations, and restaurants shifted to pick-up/take-out only. Email communication was vital for taking customers on the journey into this “new normal,” even as some businesses started to reopen.

To send these customer communications at scale, many organizations worked with Amazon SES.

How Amazon SES scaled and supported customers in 2020

Amazon SES saw several sending spikes that aligned with organizations working to communicate with their customers during COVID. Nine times in 2020, transactions per second (TPS) in Amazon SES exceeded 150% of the previous record, set on Black Friday 2019. TPS passed that 150% mark again on Black Friday and Cyber Monday 2020.

In addition to supporting those upsurges in throughput, the Amazon SES team also responded to customer feedback on increasing the global footprint of Amazon SES. Since January, Amazon SES increased the total number of regions supported from 7 to 14, including the US government cloud. These additional regions were deployed during 2020 as the team worked remotely. This regional expansion enabled customers to adhere to local data sovereignty requirements for email sending while also improving performance.

Customers also told us they needed tools to help them manage compliance with important governance laws like CAN-SPAM and GDPR. Amazon SES released list management to help organizations manage their customers’ contact information and preferences.

Looking forward

As we move into 2021, email will remain at the forefront of customer communication channels. Enterprise customers like Netflix and Duolingo rely on Amazon SES to deliver their email at scale. For more information on how you can use Amazon SES, visit our website.

Retrying Undelivered Voice Messages with Amazon Pinpoint

Post Syndicated from Heidi Gloudemans original https://aws.amazon.com/blogs/messaging-and-targeting/retrying-undelivered-voice-messages-with-amazon-pinpoint/

Note: This post was written by Murat Balkan, an AWS Senior Solutions Architect.


Many of our customers use voice notifications to deliver mission-critical and time-sensitive messages to their users. Customers often configure their systems to retry delivery when these voice messages aren’t delivered the first time around. Other customers set up their systems to fall back to another channel in this situation.

This blog post shows you how to retry the delivery of a voice message if the initial attempt wasn’t successful.

Architecture

By completing the steps in this post, you can create a system that uses the architecture illustrated in the following image:


First, a Lambda function calls the SendMessage operation in the Amazon Pinpoint API. The SendMessage operation then initiates a phone call to the recipient and generates a unique message ID, which is returned to the Lambda function. The Lambda function then adds this message ID to a DynamoDB table.
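
One way to implement this first step is with the SendVoiceMessage operation of the Amazon Pinpoint SMS and Voice API, recording the returned message ID in DynamoDB. The table and attribute names below are assumptions for illustration; the repository’s CallGenerator function is the authoritative version.

import boto3

voice = boto3.client("pinpoint-sms-voice")
table = boto3.resource("dynamodb").Table("call_attempts")  # assumption: table and key names

def place_call(message, destination_number, origination_number, config_set, retry_count=0):
    response = voice.send_voice_message(
        Content={"SSMLMessage": {"LanguageCode": "en-US", "Text": message, "VoiceId": "Joanna"}},
        DestinationPhoneNumber=destination_number,
        OriginationPhoneNumber=origination_number,
        ConfigurationSetName=config_set,
    )
    # Persist the message ID so later delivery events can be correlated with this attempt.
    table.put_item(
        Item={
            "message_id": response["MessageId"],
            "message": message,
            "destination": destination_number,
            "retry_count": retry_count,
        }
    )
    return response["MessageId"]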

While Amazon Pinpoint attempts to deliver a message to a recipient, it emits several event records. These records indicate when the call is initiated, when the phone is ringing, when the call is answered, and so forth. Amazon Pinpoint publishes these events to an Amazon SNS topic. In this example, we’re only interested in the BUSY, FAILED, and NO_ANSWER event types, so we add some filtering criteria.

An Amazon SQS queue then subscribes to the Amazon SNS topic and monitors the incoming events. The queue’s Delivery Delay attribute is set at the queue level, which provides a back-off retry mechanism for failed voice messages.
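
The AWS SAM template configures this attribute for you through the retryDelaySeconds parameter described later in this post. Purely as an illustration, setting it by hand with Boto3 looks like the following; the queue URL is a placeholder.

import boto3

sqs = boto3.client("sqs")

# Messages placed in the queue become visible to consumers only after 60 seconds.
sqs.set_queue_attributes(
    QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/retry-queue",  # placeholder
    Attributes={"DelaySeconds": "60"},
)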

When the Delivery Delay timer expires, another Lambda function polls the queue and extracts the MessageId attribute from the polled message. It uses this attribute to locate the DynamoDB record for the original call. This record also tells us how many times Amazon Pinpoint has attempted to deliver the message.

The Lambda function compares the number of retries to a MAX_RETRY environment variable to determine whether it should attempt to send the message again.

Prerequisites

To start sending transactional voice messages, create an Amazon Pinpoint project and then request a phone number. Next, clone this GitHub repository to your local machine.

After you add a long code to your account, use AWS SAM to deploy the remaining parts of this serverless architecture. You provide the long code as an input parameter to this template.

The AWS SAM template creates the following resources:

  • A Lambda function (CallGenerator) that initiates a voice call using the Amazon Pinpoint API.
  • An Amazon SNS topic that collects state change events from Amazon Pinpoint.
  • An Amazon SQS queue that queues the messages.
  • A Lambda function (RetryCallGenerator) that polls the Amazon SQS queue and re-initiates the previously failed call attempt by calling the CallGenerator function.
  • A DynamoDB table that contains information about previous attempts to deliver the call.

The template also defines a custom Lambda resource, CustomResource, which creates a configuration set in Amazon Pinpoint. This configuration set specifies the events to send to Amazon SNS. A Lambda environment variable, CONFIG_SET_NAME, contains the name of the configuration set.

This architecture consists of two Lambda functions, which are represented as two different apps in the AWS SAM template. These functions are named CallGenerator and RetryCallGenerator. The CallGenerator function initiates the voice message with Amazon Pinpoint. The SendMessage API in Amazon Pinpoint returns a MessageId. The architecture uses this ID as a key to connect messages to the various events that they generate. The CallGenerator function also retains this ID in a DynamoDB table called call_attempts. The RetryCallGenerator function looks up the MessageId in the call_attempts table. If necessary, the function tries to send the message again by invoking the CallGenerator function.
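
The repository contains the full implementation, but the core of the RetryCallGenerator function can be sketched roughly as follows. The SNS event schema, DynamoDB key names, environment variable names, and invocation payload are assumptions for illustration only.

import json
import os

import boto3

table = boto3.resource("dynamodb").Table("call_attempts")
lambda_client = boto3.client("lambda")

MAX_RETRY = int(os.environ["MAX_RETRY"])
CALL_GENERATOR_NAME = os.environ["CALL_GENERATOR_NAME"]  # assumption: env var with the function name

def handler(event, context):
    for record in event["Records"]:
        # The SQS body carries the SNS envelope; its Message field holds the Pinpoint voice event.
        envelope = json.loads(record["body"])
        voice_event = json.loads(envelope["Message"])
        message_id = voice_event["MessageId"]  # assumption: exact field name in the event

        item = table.get_item(Key={"message_id": message_id}).get("Item")
        if not item or int(item["retry_count"]) >= MAX_RETRY:
            continue  # give up once the retry budget is spent

        # Re-invoke the CallGenerator function to place the call again.
        lambda_client.invoke(
            FunctionName=CALL_GENERATOR_NAME,
            InvocationType="Event",
            Payload=json.dumps(
                {
                    "Message": item["message"],
                    "PhoneNumber": item["destination"],
                    "RetryCount": int(item["retry_count"]) + 1,
                }
            ),
        )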


Deploying and Testing

Start by downloading the template from the GitHub repository. AWS SAM requires you to specify an Amazon Simple Storage Service (Amazon S3) bucket to hold the deployment artifacts. If you haven’t already created a bucket for this purpose, create one now. The bucket must be accessible to your AWS Identity and Access Management (IAM) user.

At the command line, enter the following command to package the application:

sam package --template template.yaml --output-template-file output_template.yaml --s3-bucket BUCKET_NAME_HERE

In the preceding command, replace BUCKET_NAME_HERE with the name of the Amazon S3 bucket that should hold the deployment artifacts.

AWS SAM packages the application and copies it into the Amazon S3 bucket. This AWS SAM template requires you to specify three parameters: longCode, the phone number that’s used to make the outbound calls; maxRetry, which is used to set the MAX_RETRY environment variable for the RetryCallGenerator application; and retryDelaySeconds, which sets the delivery delay time for the Amazon SQS queue.

When the AWS SAM package command finishes running, enter the following command to deploy the package:

sam deploy --template-file output_template.yaml --stack-name blogstack --capabilities CAPABILITY_IAM --parameter-overrides maxRetry=2 longCode=LONG_CODE retryDelaySeconds=60

In the preceding command, replace LONG_CODE with the dedicated phone number that you acquired earlier.

When you run this command, AWS SAM shows the progress of the deployment. When the deployment finishes, you can test it by sending a sample event to the CallGenerator Lambda function. Use the following sample event to test the Lambda function:

{
  "Message": "<speak>Thank you for visiting the AWS <emphasis>Messaging and Targeting Blog</emphasis>.</speak>",
  "PhoneNumber": "DESTINATION_PHONE_NUMBER",
  "RetryCount": 0
}

In the preceding event, replace DESTINATION_PHONE_NUMBER with the phone number to which you want to send a test message.

Important: Telecommunication providers take several steps to limit unsolicited voice messages. For example, providers in the United States only deliver a certain number of automated voice messages to each recipient per day. For this reason, you can only use Amazon Pinpoint to send 10 calls per day to each recipient. Keep this limit in mind during the testing process.

Conclusion

This architecture shows how Amazon Pinpoint can deliver state change events to Amazon SNS, and how a serverless application can consume those events. You can adapt this architecture to other use cases, such as call auditing, advanced call analytics, and more.