Tag Archives: Amazon Pinpoint

Build AI and ML into Email & SMS for customer engagement

Post Syndicated from Vinay Ujjini original https://aws.amazon.com/blogs/messaging-and-targeting/build-ai-and-ml-into-email-sms-for-customer-engagement/

Customers engage with businesses through various channels such as email, SMS, push, and in-app messaging. Given the ubiquity and ease of use of mobile phones, businesses can use two-way Short Message Service (SMS) to engage with their customers. Text messaging requires no application install and provides immediate interaction with your customers. Amazon Pinpoint enables businesses and organizations to exchange two-way SMS messages with their customers. Because it is neither practical nor scalable for organizations to have people responding to millions of customer texts, you can use Amazon Lex to build conversational AI into the two-way SMS flow. Amazon Lex is a fully managed artificial intelligence (AI) AWS service with advanced natural language models to design, build, test, and deploy conversational interfaces in applications. Machine learning (ML) is used in digital marketing to help businesses detect patterns in customer behavior.

Today, if customers want to know the latest status of their order, they typically have to send an email, which is hard for businesses to monitor and respond to, or call, which is time consuming for the customer and expensive for businesses to field.

This blog post shows how you can elevate your customer’s experience using Amazon Pinpoint’s omni-channel capabilities, Amazon Lex’s AI powered chat, and ML-powered personalization using Amazon Personalize.

The solution presented in this blog helps resolve all of the above issues. In the example used throughout this post, a customer orders a bike whose delivery has been delayed, and he wants timely updates on its progress. The bike company has given him a phone number he can text with any questions. This solution elevates the customer’s experience by checking the latest status in the database and replying with a timely update, and by sending additional product recommendations that predict what the customer might need.

Architecture

This solution uses Amazon Pinpoint, Amazon Lex, AWS Lambda, Amazon DynamoDB, Amazon Simple Notification Service (SNS), and Amazon Personalize.

AWS architecture diagram AI/ML, Email, SMS.

  1. The customer sends a message to the number provided by the store asking about their order status.
  2. Amazon Pinpoint two-way SMS has an SNS topic tied to it.
  3. The SNS topic relays the message to the Lex integration Lambda.
  4. This Lex integration Lambda handles the integration between Amazon Pinpoint and Amazon Lex (a sketch of this function follows the list).
  5. When the customer checks on their order status, Lex taps into the fulfillment Lambda that is tied to it.
  6. That Lambda looks up the order status in DynamoDB and sends it back to Lex.
  7. Lex sends the order details to Amazon Pinpoint, and Amazon Pinpoint delivers the SMS with the order details to the customer’s phone number.
  8. Amazon Lex instructs the fulfillment Lambda to send an email to the customer with the order details.
  9. The fulfillment Lambda creates an event called ‘Order Status’ for an Amazon Pinpoint journey to consume.
  10. Amazon Pinpoint’s message template reaches out to Amazon Personalize to get the 3 recommendations.
  11. Amazon Pinpoint’s journey triggers an email message to the customer with the order information and recommendations.
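
The Lex integration Lambda referenced in steps 3–7 is provided in the LexIntegration.zip file and deployed later in this post. Purely as an illustration of the flow, a minimal sketch of that kind of function might look like the following; it assumes the SNS payload fields Amazon Pinpoint publishes for two-way SMS (originationNumber, messageBody) and uses hypothetical environment variable names (BOT_ID, BOT_ALIAS_ID, APPLICATION_ID, ORIGINATION_NUMBER), so the actual function in the GitHub repository may differ.

import json
import os
import boto3

lex = boto3.client('lexv2-runtime')
pinpoint = boto3.client('pinpoint')

def lambda_handler(event, context):
    # Two-way SMS messages arrive via SNS; the SNS message body is a JSON document.
    inbound = json.loads(event['Records'][0]['Sns']['Message'])
    customer_number = inbound['originationNumber']
    text = inbound['messageBody']

    # Forward the customer's text to the Lex bot and collect the bot's reply.
    lex_response = lex.recognize_text(
        botId=os.environ['BOT_ID'],              # hypothetical env var
        botAliasId=os.environ['BOT_ALIAS_ID'],   # hypothetical env var
        localeId='en_US',
        sessionId=customer_number.replace('+', ''),
        text=text)
    reply = ' '.join(m['content'] for m in lex_response.get('messages', []))

    # Send the bot's reply back to the customer over SMS via Amazon Pinpoint.
    pinpoint.send_messages(
        ApplicationId=os.environ['APPLICATION_ID'],
        MessageRequest={
            'Addresses': {customer_number: {'ChannelType': 'SMS'}},
            'MessageConfiguration': {
                'SMSMessage': {
                    'Body': reply,
                    'MessageType': 'TRANSACTIONAL',
                    'OriginationNumber': os.environ['ORIGINATION_NUMBER']}}})
    return {'statusCode': 200}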

Prerequisites

To deploy this solution, you must have the following:

  1. An AWS account.
  2. An Amazon Pinpoint project.
  3. An originating identity that supports 2 way SMS in the country you are planning to send SMS to – Supported countries and regions (SMS channel).
  4. A mobile number to send and receive text messages.
  5. An SMS customer segment – Download the example CSV, which contains sample SMS and email endpoints. Replace the phone number (column C) with yours and the email with your email address, then import it to Amazon Pinpoint – How to import an Amazon Pinpoint segment.
  6. Add your mobile number in the Amazon Pinpoint SMS sandbox.
  7. Verify your email address that needs to receive messages from this account.
  8. Download the LexIntegration.zip & RE_Order_Validation.zip Lambda files from this Github location.

Preparation:

  1. Download the CloudFormation template.
  2. Go to the Amazon S3 console and create a bucket. For this example, I created one named ‘pinpointreinventaiml-code’. Under that S3 bucket, create a sub-folder and name it Lambda.
  3. Upload the two zip files you downloaded earlier from GitHub.
  4. In Amazon Pinpoint > Phone numbers, check to make sure the phone number you are using is enabled for SMS and its status is active.
  5. Add the machine learning generated product recommendations using Amazon Personalize.
Check if phone number is enabled & active in Pinpoint console

Phone numbers in Pinpoint console

Solution implementation

Create a Lex Chat bot:

  1. Now it’s time to create your bot. To create your bot, sign in to the Lex console at https://console.aws.amazon.com/lex.
  2. For more information about creating bots in Lex, see https://docs.aws.amazon.com/lex/latest/dg/gs-console.html.
  3. Click on Create bot button. Next steps:
    1. Select Create a blank bot radio button.
    2. Give a Bot name ‘Order Status’ under Bot name Configuration. (Use the same Bot name as mentioned here. If you change the Bot name here, your CloudFormation will fail)
    3. Under IAM permissions, select the radio button Create a role with basic Amazon Lex permissions.
    4. For COPPA, choose No. Click Next
    5. Under Language dropdown, choose the language of usage. I chose Language as English in my example.
    6. Click Done, to complete the Bot creation.
  4. You have to create an Intent within the Bot you just created
    1. Click on the Bot you just created. Click on Intents and click the dropdown Add intent and select Add empty intent.
    2. Give an intent name and click Ok.
  5. Once the intent is created, open the Conversation flow section within it and create a flow that has the following info and looks like the image below:
    1. Click on Sample utterances and type in Order status.
    2. Click on initial response and type in Okay, I can help with that. What is your order number?
    3. Click on the slot value and click on Add a slot. Name: OrderNumber and Slot type is AMAZON.AlphaNumeric. In the prompt, enter Please enter your order number.
    4. Click on Save Intent button. The conversation flow should look like the below screenshot:

Amazon Lex intent

6. Go back to the Intent you just created and click on the Build button that is to the right side of the page.

Build intent

7. Once the build is successfully completed, go back to the Bot you created and click on Aliases on the left frame. Click on the Alias that was created earlier, TestBotAlias.

Bot Alias

8. In the Languages section, click on the English language that we created earlier.
9. Open the Lambda function – optional section and point the source to RE_Order_Validation Lambda that we downloaded earlier.
10. For Lambda function version or alias, select $LATEST. Click on Save.

Add Lambda to Alias

11. Go to Intents, choose the intent you just built and click on Build button again. Once build is complete, you can test the intent.

Import and execute CloudFormation:

  1. Navigate to the Amazon CloudFormation console in the AWS region you want to deploy this solution.
  2. Select Create stack and With new resources. Choose Template is ready as Prerequisite – Prepare template and Upload a template file as Specify template. Upload the template downloaded in step 1 under Preparation section of this document. Click Next.
  3. Fill the AWS CloudFormation parameters as shown below:
  4. Stack name: Give a name to this stack.
    1. Under Parameters, for BotAlias: The Bot Alias that you created as part of Amazon Lex above.
    2. BotId: The Bot ID for the bot that you created as part of Amazon Lex above.
    3. CodeS3Bucket: Give the name of the S3 bucket you created in step 3 of the Preparation section above.
    4. OriginationNumber: This is the origination identity phone number you checked in step 4 of the Preparation section above.
    5. PinpointProjectId: Use the project ID you have from step 2 of the Prerequisites section above.
  5. After entering all the parameter info, it would look something like this below:
  6. CloudFormation parameters
  7. Click Next. Leave the default options on the next page and click Next again.
  8. Check the box I acknowledge that AWS CloudFormation might create IAM resources with custom names. Click Submit.

Set up data in Amazon Dynamo DB

  1. We are using a DynamoDB table here as the transactional database that stores order information for the bike store.
  2. Once the solution has been successfully deployed, navigate to the Amazon DynamoDB console and access the OrderStatus DynamoDB table. Each row created in this table represents an order and its details. Each row should have a unique Order_Num that holds the order number and its related information. You can put additional information about the order, as in the example below (a programmatic alternative is sketched after this list):
  3. {
       "Order_Num":{
          "Value":"ABC123"
       },
       "Delivery_Dt":{
          "Value":"12/01/2022"
       },
       "Order_Dt":{
          "Value":"11/01/2022"
       },
       "Shipping_Dt":{
          "Value":"11/24/2022"
       },
       "UserId":{
          "Value":"example-user-id-3"
       }
    }
  4. Once you enter the data, it should look like the image below. Click on Create item.
  5. Dynamo DB values
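
If you prefer to create the order item programmatically rather than through the console, a minimal boto3 sketch along these lines could be used. It assumes the table is named OrderStatus with Order_Num as the partition key and flat string attributes; adjust the item shape to whatever the deployed fulfillment Lambda actually expects.

import boto3

# Assumed table name and item shape; verify against the deployed stack.
table = boto3.resource('dynamodb').Table('OrderStatus')

table.put_item(Item={
    'Order_Num': 'ABC123',         # order number the customer texts to the bot
    'Order_Dt': '11/01/2022',      # date the order was placed
    'Shipping_Dt': '11/24/2022',   # date the order was shipped
    'Delivery_Dt': '12/01/2022',   # expected delivery date
    'UserId': 'example-user-id-3'  # user ID used for Personalize recommendations
})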

Set up Amazon Simple Notification Service (SNS) topic

  1. We need Amazon Simple Notification Service (SNS) here to provide internal message delivery from publishers (the customer’s text message) to subscribers (Amazon Lex in this example). This is used for internal notifications in this use case.
  2. As part of the CloudFormation above, check if you have an SNS topic created by the name LexPinpointIntegrationDemo.
  3. Now, we have successfully created an Amazon SNS topic.

Set up Lambda Functions

  1. Go to AWS Lambda console and open the Lambda function LexIntegration. Under the Function overview, click on the Add trigger. Under Trigger configuration dropdown, select SNS and under SNS Topic select LexPinpointIntegrationDemo topic. Click on Add.
  2. Note: In this example, I used Node.js in a Lambda and Python in another, to show how AWS Lambda functions are flexible to use the scripting language of your choice.

Setting up 2-way SMS in Amazon Pinpoint

  1. Go to the Amazon Pinpoint console and click on Phone numbers under SMS & Voice in the left frame. If you don’t see any phone numbers, please refer to item 3 under the Prerequisites section above.
  2. This is how your screen should look:
  3. Phone numbers in Pinpoint
  4. Click on the number.
  5. On the right frame, expand Two-way SMS drop down arrow.
  6. Click on the check box ‘Enable two-way SMS’.
  7. In the ‘Incoming message destination’ select the radio button ‘Choose an existing SNS topic’ and in the drop down below, choose the SNS topic you built above.
  8. The result would look like the screenshot below:
  9. 2-way SMS
  10. Click on Save.

Import Machine Learning model into Pinpoint

  1. Go to Amazon Pinpoint.
  2. Click on Machine Learning Models. Click on Add recommender model.
  3. Give a recommender model name and description under model details.
  4. Under Model configuration, choose the radio button ‘Automatically create a role’ and give an IAM role name in the textbox below.
  5. Under recommender model, choose the recommender model campaign that you created in Amazon Personalize earlier in the project.
    1. If you did not create it, use this Pinpoint workshop to create a recommender model in Amazon Personalize.
    2. The data used in this example is for retail industry, please edit the data as needed for your use case and industry.
  6. Under the settings section:
    1. Select ‘User Id’ as identifier.
    2. Click on the drop down ‘Number of recommendations per message’ and select 3.
  7. For Processing method, choose ‘Use value returned by model’.
  8. Click on Next.
  9. You are presented with the attributes section. Give the attribute a display name of ‘product_name’ and click Next.
  10. On the next screen, you can review all the information provided and click on Publish.
  11. The completed model after publishing looks like the screen below:
  12. Personalize model in Pinpoint

Create a Message Template in Amazon Pinpoint

  1. Use chapter 6.4 in this workshop Amazon Pinpoint Workshop to create a message template.
  2. Once the template is created, you need to add recommendations to the message template using the details in this Amazon Pinpoint Workshop. Change the type of data as needed for your use case and industry; I used sample retail data.
  3. To create an Amazon Pinpoint journey, navigate to the Amazon Pinpoint console, select Journeys, and click on Create journey.
  4. Give a name, click on Set entry condition in the Journey entry block.
  5. Choose the radio button Add participants when they perform an activity.
  6. Click in the ‘Events’ text box and type in OrderStatus.
  7. Pinpoint Journey entry
  8. Click on Add activity and select Send an email.
  9. Click on choose an email template and select the email message template we created earlier in this blog. Click on choose template button.
  10. Select a Sender email address from the drop down list.
  11. Choose sender email here
  12. Click Save. The final journey should look like this:
  13. This is the final journey
  14. Click on Actions > Settings where you will review the journey settings. There you set the start and end date of the journey if applicable as well as other advanced settings. Configure your journey settings to look like the screenshot below and click Save.
  15. Journey settings
  16. To publish your journey, click on Review. On the Review your journey page, click Next > Mark as reviewed > Publish. A 5-minute countdown will begin, after which your journey will be live.
  17. Once the journey is live, we need to pass the OrderStatus event so that the endpoints go through the journey and receive an email. One way to send this event is sketched below.
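
One way the fulfillment Lambda can raise that OrderStatus event is the Amazon Pinpoint PutEvents API. The snippet below is a hedged sketch rather than the code shipped in the repository; the endpoint ID, project ID, and attribute names are placeholders.

import datetime
import uuid
import boto3

pinpoint = boto3.client('pinpoint')

def publish_order_status_event(application_id, endpoint_id, order_num):
    # Publish an 'OrderStatus' event for one endpoint so it enters the journey.
    pinpoint.put_events(
        ApplicationId=application_id,
        EventsRequest={
            'BatchItem': {
                endpoint_id: {
                    'Endpoint': {},  # no endpoint updates, only the event
                    'Events': {
                        str(uuid.uuid4()): {
                            'EventType': 'OrderStatus',
                            'Timestamp': datetime.datetime.utcnow().isoformat(),
                            'Attributes': {'Order_Num': order_num}}}}}})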

Testing the solution

  1. Use a phone with a valid number (in this example, I took a US phone number) and send a text ‘Order Status’ to the number generated in Amazon Pinpoint above.
  2. You should get a response “Okay, I can help with that. What is your order number?”
  3. Type in the order number you created earlier and stored in the Amazon DynamoDB table.
  4. You should get a response “Your order <order number> was shipped on <shipped_dt> and is expected to be delivered to your address on <delivery_dt>. Your order details have been emailed to you.”
  5. Text message flow
  6. Alternatively, you can test this solution from the Lex bot.
  7. In Amazon Lex, go to the intent you created above and click on the Test button. Next steps:
    1. In the text box, enter Order Status.
    2. Bot should respond with Okay, I can help with that. What is your order number?
    3. You can respond with the order number you entered in the DynamoDB table.
    4. Bot should respond with Your order <Order_Num> was shipped on <Shipping_Dt> and is expected to be delivered to your address on <Delivery_Dt>. Your order details have been emailed to you.
    5. Testing the 2 way messaging in Lex console

Conclusion

Using this blog post, you can elevate your customers’ experience by combining Amazon Lex’s AI chat capabilities and Amazon Personalize’s ML recommendation models with a triggered Amazon Pinpoint journey. This blog highlights how organizations can interact with their customers over 2-way SMS and convert that engagement into a triggered email, with product recommendations, if needed.

Next Steps

You can easily modify the above solution to use it across different verticals and applicable use cases. You can also extend this solution to connect customers to an Amazon Connect agent via SMS chat, using this blog.

Clean-up

  1. To delete the solution, go to the CloudFormation stack you created as part of this project. Click on the stack and click Delete.
  2. Navigate to Amazon Pinpoint and stop the Journey you ran in this solution. Delete the Journey, Machine learning models, Message templates you created for this solution. Delete the Project you created for this solution.
  3. In Amazon Lex, delete the intent and bot you created for this solution.
  4. Delete the folder and bucket you created in S3 as part of this project.
  5. Amazon Personalize resources like dataset groups, datasets, etc. are not created via AWS CloudFormation, so you have to delete them manually. Please follow the instructions in the AWS documentation on how to clean up the created resources.

Additional resources

Retry delivering failed SMS using Pinpoint

How to target customers using ML, based on their interest in a product

 About the Authors

Vinay Ujjini

Vinay Ujjini is an Amazon Pinpoint and Amazon Simple Email Service Principal Specialist Solutions Architect at AWS. He has been solving customer’s omni-channel challenges for over 15 years. In his spare time, he enjoys playing tennis & cricket.

Shift management using Amazon Pinpoint’s 2-way-SMS

Post Syndicated from Pavlos Ioannou Katidis original https://aws.amazon.com/blogs/messaging-and-targeting/shift-management-using-amazon-pinpoints-2-way-sms/

Businesses with physical presence such as retail, restaurants and airlines need reliable solutions for shift management when demand fluctuates, employees call in sick, or other unforeseen circumstances arise. Their linear dependence on staff forces them to create overtime shifts as a way to cope with demand.

The overtime shift communication between the business and employees needs to be immediate and scalable. Employees in such industries might not have access to the internet, which rules out communication channels like email and push notifications. Furthermore, once the employees receive the available shifts, they need an easy way to book them and, if required, request further support.

In these situations, businesses need communication methods that are accessible from any mobile device, available without an internet connection, and allow for replies. SMS fills all of these requirements and more. This blog showcases how, with the Amazon Pinpoint SMS channel and other AWS services, you can develop an application that communicates available shifts to interested employees while allowing them to book by replying.

The solution presented in this blog is for shift management but with small changes it can also communicate available appointments to customers / patients. An example of such use case can be found in health care, where there is a waiting list to see a doctor and patients might cancel the very last moment. The solution can communicate these available slots to all patients in the waiting list and allow them to book via SMS.

Architecture

The solution uses Amazon Pinpoint two way SMS, Amazon DynamoDB, AWS Lambda, Amazon Simple Notification Service (SNS) and Amazon Connect (optional). The next section dives deeper into the architecture diagram and logic flow.

shift_management_architecture

  1. The operator adds an item to the Shift’s campaigns Amazon DynamoDB table. The item consists of a unique key that is used as the Amazon Pinpoint Campaign Id and an attribute Campaign Message, which is used as the SMS message text that the campaign recipients will receive. The Campaign Message needs to include all available shifts that the operator wants to notify the employees about.
    1. Note: This can be done either from the AWS console or programmatically via API.
  2. Amazon DynamoDB streams invokes an AWS Lambda function upon creation of a new Amazon DynamoDB item. The AWS Lambda function uses the Amazon DynamoDB data to create and execute an Amazon Pinpoint SMS Campaign based on an existing customer segment of employees who are interested in overtime shifts. The customer segment is a prerequisite and it includes SMS endpoints of the employees who are interested in receiving shift updates.
  3. Employees who belong in that segment receive an SMS with the available shifts and they can reply to book the ones they are interested in.
  4. Any inbound SMS is published on an Amazon SNS topic.
  5. The 2 way SMS AWS Lambda function subscribes to the Amazon Simple Notification Service and processes all inbound SMS based on their message body.
  6. The Shift’s status Amazon DynamoDB table stores the status of the shifts, which gets updated depending on the inbound SMS.
  7. If the employee requires further assistance, they can trigger an Amazon Connect outbound call via SMS.

The diagram below illustrates the four possible messages an employee can send to the application. To safeguard the application from outsiders and bad actors, the 2 way SMS AWS Lambda function checks whether the sender’s mobile number is in an allow list. In this solution, the allow list is hardcoded as an AWS Lambda environment variable, but it can be stored in a database like Amazon DynamoDB.

shift management inbound-sms-business-logic
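
To make the diagram concrete, here is a simplified sketch of what the 2 way SMS AWS Lambda function could look like. It assumes the SNS payload fields Amazon Pinpoint publishes for inbound SMS (originationNumber, messageBody) and hypothetical environment variable names for the allow list, the shift table, and the Amazon Connect parameters; the function deployed by the CloudFormation template may differ in its details.

import json
import os
import boto3

shifts_table = boto3.resource('dynamodb').Table(os.environ['SHIFTS_STATUS_TABLE'])
connect = boto3.client('connect')

def lambda_handler(event, context):
    inbound = json.loads(event['Records'][0]['Sns']['Message'])
    sender = inbound['originationNumber']
    body = inbound['messageBody'].strip().upper()

    # Ignore senders that are not in the comma-separated allow list.
    if sender not in os.environ['APPROVED_NUMBERS'].split(','):
        return

    if body == 'REQUEST':
        # Query shifts with shift_status = 'available' and reply via
        # pinpoint.send_messages (omitted for brevity).
        pass
    elif body == 'AGENT':
        # Trigger an Amazon Connect outbound call back to the employee.
        connect.start_outbound_voice_contact(
            DestinationPhoneNumber=sender,
            InstanceId=os.environ['CONNECT_INSTANCE_ID'],
            ContactFlowId=os.environ['CONNECT_CONTACT_FLOW_ID'],
            QueueId=os.environ['CONNECT_QUEUE_ID'])
    else:
        # Treat the body as a shift_id: mark the shift as taken only if it
        # is still available, and record who booked it.
        shifts_table.update_item(
            Key={'shift_id': body},
            UpdateExpression='SET shift_status = :taken, employee = :emp',
            ConditionExpression='shift_status = :avail',
            ExpressionAttributeValues={
                ':taken': 'taken', ':emp': sender, ':avail': 'available'})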

Solution implementation

Prerequisites

To deploy this solution, you must have the following:

  1. An originating identity that supports 2 way SMS in the country you are planning to send SMS to – Supported countries and regions (SMS channel).
  2. A mobile phone to send and receive SMS.
  3. An AWS account.
  4. An Amazon Pinpoint project – How to create an Amazon Pinpoint project.
  5. An SMS customer segment – Download the example CSV, that contains one SMS endpoint. Replace the phone number (column C) with yours and import it to Amazon Pinpoint – How to import an Amazon Pinpoint segment.
  6. Add your mobile number in the Amazon Pinpoint SMS sandbox – Amazon Pinpoint SMS sandbox.
  7. An Amazon Connect instance, number & contact flow if you want your employees to be able to request an agent call back. Download the example Connect contact flow that you can import to your Amazon Connect instance.

Note: UK numbers with a +447 prefix are not allowed by default. Before you can dial these UK mobile numbers, you must submit a service quota increase request. For more information, see Amazon Connect Service Quotas in the Amazon Connect Administrator Guide.

Deploy the solution

  1. Download the CloudFormation template and navigate to the AWS CloudFormation console in the AWS region you want to deploy the solution.
  2. Select Create stack and With new resources. Choose Template is ready as Prerequisite – Prepare template and Upload a template file as Specify template. Upload the template downloaded in step 1.
  3. Fill the AWS CloudFormation parameters as shown below:
    1. ApprovedNumbers: The mobile numbers that are allowed to use this service. The format should be E164 and if there is more than one number separate them by comma e.g. +4457434243,+432434324.
    2. OriginationNumber: The mobile number that you have in your Amazon Pinpoint account in E164 format e.g. +44384238975.
    3. PinpointProjectId: The existing Amazon Pinpoint project Id.
    4. SegmentId: The Amazon Pinpoint existing segment Id that you want to send the SMS notifications to.
    5. ConnectEnable: Select YES if you already have an Amazon Connect instance with a Contact Flow and Queue. If you select NO ignore all the fields below, the solution will still be deployed but employees won’t be able to request a call back.
    6. InstanceId: The Amazon Connect InstanceId. Follow this link to learn how to find your Amazon Connect InstanceId.
    7. ContactFlowID: The Amazon Connect Contact Flow Id that you want this solution to use. Follow this link to learn how to find your Amazon Connect ContactFlow id.
    8. QueueID: The Amazon Connect Queue Id. To obtain the Amazon Connect Queue Id navigate to your Amazon Connect instance > Routing > Queues and it should appear on the browser URL, see example: https://your-instance.awsapps.com/connect/queues/edit?id=0c7fed63-815b-4040-8dbc-255800fca6d7.
    9. SourcePhoneNumber: The Amazon Connect number in E164 format that is connected to the Contact Flow provided in step 7.
  4. Once the solution has been successfully deployed, navigate to the Amazon DynamoDB console and access the ShiftsStatus DynamoDB table. Each item created represents a shift and should have a unique shift_id that employees use to book the shifts, a column shift_status with value = available and a column shift_info where you can put additional information about the shift – see example below:
    {
       "shift_id":{
          "S":"XYZ1234"
       },
       "shift_info":{
          "S":"15/08 5h nightshift"
       },
       "shift_status":{
          "S":"available"
       }
    }

    Pinpoint_shift_status_dynamoDB

  5. Navigate to Amazon Pinpoint console > SMS and voice > Phone numbers, select the phone number that you used as OriginationNumber for this solution and enable Two-way SMS. Under the Incoming messages destination section, select Choose an existing SNS topic and select the one containing the name TwoWaySMSSNSTopic.
  6. Navigate to the Amazon DynamoDB console and access the ShiftsCampaignDynamoDB table. Each item you create represents an Amazon Pinpoint SMS campaign. Create an Amazon DynamoDB item and provide a unique campaign_id, which will be used as the Amazon Pinpoint Campaign name. Create a new attribute (string) with the name campaign_message and type all available shifts that you want to communicate via this campaign. It is important to include the shift id(s) for each of the shifts you want your employees to be able to request – see the example below. A sketch of the stream-triggered Lambda that creates the campaign follows the screenshot.
    • Note: By completing this step, you will trigger an Amazon Pinpoint SMS Campaign. You can access the campaign information and analytics from the Amazon Pinpoint console.
{
  "campaign_id": {
    "S": "campaign_id1"
  },
  "campaign_message": {
    "S": "15/08 5h nightshift XYZ123, 18/08 3h dayshift XYZ124"
  }
}

shift_campaign_dynamoDB
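
Behind the scenes, creating this item fires the DynamoDB Streams trigger described in step 2 of the architecture. As a rough, hedged sketch only (the function shipped with the CloudFormation template may differ), that Lambda could create and launch the Amazon Pinpoint SMS campaign along these lines, using the project ID and segment ID passed as stack parameters and the campaign_id / campaign_message attributes shown above.

import os
import boto3

pinpoint = boto3.client('pinpoint')

def lambda_handler(event, context):
    for record in event['Records']:
        if record['eventName'] != 'INSERT':
            continue  # act only on newly created campaign items
        item = record['dynamodb']['NewImage']
        campaign_id = item['campaign_id']['S']
        campaign_message = item['campaign_message']['S']

        # Create and immediately run an SMS campaign targeting the employee segment.
        pinpoint.create_campaign(
            ApplicationId=os.environ['PINPOINT_PROJECT_ID'],  # hypothetical env var
            WriteCampaignRequest={
                'Name': campaign_id,
                'SegmentId': os.environ['SEGMENT_ID'],        # hypothetical env var
                'MessageConfiguration': {
                    'SMSMessage': {
                        'Body': campaign_message,
                        'MessageType': 'TRANSACTIONAL'}},
                'Schedule': {'StartTime': 'IMMEDIATE'}})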

Testing the solution

  • Make sure you have created the shifts in the ShiftsStatusDynamoDB Amazon DynamoDB table.
  • To test the SMS Campaign, replicate step 6 under Deploy the solution section.
  • Reply to the SMS received with the options below:
    1. Send a shift_id that doesn’t exist to receive an automatic response “This is not a valid shift code, please reply by typing REQUEST to view the available shifts”.
    2. Send a valid & available shift_id to book the shift and then check the ShiftsStatusDynamoDB Amazon DynamoDB table, where the shift_status should change to taken and there should be a new column employee with the mobile number of the employee who has requested it.
    3. Send REQUEST to receive all shifts with shift_status = available.
    4. If you have deployed the solution along with Amazon Connect, send AGENT and wait for the call.

Next steps

This solution can be extended to support SMS sending to multiple countries by acquiring the respective originating identities. Using the Amazon Pinpoint phone number validation service API, the application can identify the country for each recipient and choose the correct originating identity accordingly.

Depending on your business requirements, you might want to change the agent call-back option to chat via SMS. You can extend this solution to connect the agent via SMS chat by following the steps in this blog.

Clean-up

To delete the solution, navigate to the AWS CloudFormation console and delete the stack deployed.

About the Authors

Pavlos Ioannou Katidis

Pavlos Ioannou Katidis is an Amazon Pinpoint and Amazon Simple Email Service Senior Specialist Solutions Architect at AWS. He loves diving deep into his customers’ technical issues and helping them design communication solutions. In his spare time, he enjoys playing tennis, watching crime TV series, playing FPS PC games, and coding personal projects.

Retry delivering failed SMS using Amazon Pinpoint

Post Syndicated from satyaso original https://aws.amazon.com/blogs/messaging-and-targeting/how-to-utilise-amazon-pinpoint-to-retry-unsuccessful-sms-delivery/

Organizations in many sectors and verticals have user bases to whom they send transactional SMS messages such as OTPs (one-time passwords), notices, or transaction/purchase confirmations. Amazon Pinpoint enables customers to send transactional SMS messages to a global audience through a single API endpoint, and the messages are routed to recipients by the service. Amazon Pinpoint relies on downstream SMS providers and telecom operators to deliver the messages to end users’ devices. Most of the time the SMS messages get delivered to recipients, but sometimes they cannot be delivered due to carrier/telecom-related issues that are transient in nature, which impacts the customer’s brand. As a result, customers need a solution that allows them to retry the transmission of SMS messages that fail due to transitory problems caused by downstream SMS providers or telecom operators.

In this blog post, you will discover how to retry sending unsuccessfully delivered SMS messages caused by transitory problems at the downstream SMS provider or telecom operator side.

Prerequisites

For this post, you should be familiar with the following:

  • Managing an AWS account
  • Amazon Pinpoint
  • Amazon Pinpoint SMS events
  • AWS Lambda
  • AWS CloudFormation
  • Amazon Kinesis Data Firehose
  • Amazon Kinesis Data Streams
  • Amazon DynamoDB, including provisioning WCU and RCU accordingly

Architecture Overview

The architecture depicted below is one way to resend unsuccessful SMS messages in real time. The application sends the SMS message to Amazon Pinpoint for delivery using the SendMessages API. Amazon Pinpoint accepts the message and returns a response containing the message ID; the application records the message content and ID in a datastore such as DynamoDB. Amazon Pinpoint delivers the messages to users and then receives SMS engagement events. The same SMS engagement events are provided to Amazon Kinesis Data Streams, which acts as an event source for a Lambda function that validates the event type. If the event type indicates that the SMS message could not be sent and a retry makes sense, the Lambda function retrieves the respective message ID from the SMS event and then retrieves the message body from the database. It then sends the SMS message back to Amazon Pinpoint for redelivery; you can use the same or an alternative origination number as the origination identity when resending the SMS. We recommend configuring the number of retries and adding a retry message tag within Pinpoint to analyze retries and to avoid infinite loops. All events are also sent to Amazon Kinesis Data Firehose, which saves them to your S3 data lake for later audit and analytics purposes.
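
As a rough sketch of the producer side described above (placeholder table and attribute names, not the exact code in the template), the application could send the SMS and persist the message body keyed by the returned message ID like this:

import boto3

pinpoint = boto3.client('pinpoint')
messages_table = boto3.resource('dynamodb').Table('PinpointMessageState')  # placeholder name

def send_and_record(application_id, origination_number, destination_number, body):
    response = pinpoint.send_messages(
        ApplicationId=application_id,
        MessageRequest={
            'Addresses': {destination_number: {'ChannelType': 'SMS'}},
            'MessageConfiguration': {
                'SMSMessage': {
                    'Body': body,
                    'MessageType': 'TRANSACTIONAL',
                    'OriginationNumber': origination_number}}})

    # The per-destination result carries the Pinpoint message ID; store it together
    # with the message body so a later retry can look the content up again.
    message_id = response['MessageResponse']['Result'][destination_number]['MessageId']
    messages_table.put_item(Item={
        'message_id': message_id,
        'destination_number': destination_number,
        'message_body': body,
        'retry_count': 0})
    return message_id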

Note: The Lambda concurrency and DynamoDB WCUs/RCUs need to be provisioned accordingly. The AWS CloudFormation template provided in this post automatically sets up the different architecture components required to retry unsuccessful SMS messages.

Retry delivering failed SMS using Amazon Pinpoint

Alternatively, if you use an Amazon Kinesis Data Firehose delivery stream instead of a Kinesis data stream to stream data to a storage location, you can use a transformation Lambda as part of the Firehose delivery stream to retry unsuccessful messages. The architecture is as follows: the application sends the SMS payload to Amazon Pinpoint using the SendMessages API/SDK while also writing the message body to a persistent data store, in this case a DynamoDB table. The SMS-related events are then sent to Amazon Kinesis Data Firehose, where a transformation Lambda is set up. In essence, if the SMS event type indicates no error, the event is returned to Firehose as-is. However, if an event type indicates a failure and a retry makes sense, the Lambda logic sends another SendMessages call until the retry count (set to 5 within the code) is reached. When a retry attempt is made, the event is not loaded into S3 storage (hence the result 'Dropped'). Since Pinpoint events do not contain the actual SMS content, a call to DynamoDB is made to get the message body for the new SendMessages call.

Retry SMS diagram

Amazon Pinpoint provides an event response for each transactional SMS communication. For retrying unsuccessful SMS deliveries, there are primarily two factors to consider in this architecture: 1/ the type of event (event_type) and 2/ the record status (record_status). Whenever the event_type is “_SMS.FAILURE” and the record_status is any of “UNREACHABLE”, “UNKNOWN”, “CARRIER_UNREACHABLE”, or “TTL_EXPIRED”, the customer application needs to retry the SMS message delivery. The following snippet shows the conditional flow for the failed-SMS retry logic within the Lambda function.

Code Sample:
RETRYABLE_STATUSES = {'UNREACHABLE', 'UNKNOWN', 'CARRIER_UNREACHABLE', 'TTL_EXPIRED'}

if event_type == '_SMS.FAILURE' and record_status in RETRYABLE_STATUSES:
    send_message(message_content, destination)  # resend the SMS message
    # Drop the record so the retried event is not written to S3
    output_record = {'recordId': record['recordId'], 'result': 'Dropped',
                     'data': base64.b64encode(payload.encode('utf-8')).decode('utf-8')}
else:
    # Pass the record through to S3 unchanged
    output_record = {'recordId': record['recordId'], 'result': 'Ok',
                     'data': base64.b64encode(payload.encode('utf-8')).decode('utf-8')}
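
Putting that conditional into context, a hedged sketch of a complete Kinesis Data Firehose transformation handler for this flow could look like the following. The event field names assume the Amazon Pinpoint SMS event schema (event_type, attributes.record_status, attributes.message_id), and the table and environment variable names are placeholders matching the producer sketch above.

import base64
import json
import os
import boto3

pinpoint = boto3.client('pinpoint')
messages_table = boto3.resource('dynamodb').Table(os.environ['MESSAGE_STATE_TABLE'])
RETRYABLE_STATUSES = {'UNREACHABLE', 'UNKNOWN', 'CARRIER_UNREACHABLE', 'TTL_EXPIRED'}
MAX_RETRIES = 5

def lambda_handler(event, context):
    output = []
    for record in event['records']:
        payload = base64.b64decode(record['data']).decode('utf-8')
        sms_event = json.loads(payload)
        attributes = sms_event.get('attributes', {})
        result = 'Ok'

        if (sms_event.get('event_type') == '_SMS.FAILURE'
                and attributes.get('record_status') in RETRYABLE_STATUSES):
            item = messages_table.get_item(
                Key={'message_id': attributes['message_id']}).get('Item')
            if item and item['retry_count'] < MAX_RETRIES:
                # Resend the stored message body to the original destination.
                # (Incrementing retry_count in the table is omitted for brevity.)
                pinpoint.send_messages(
                    ApplicationId=os.environ['PINPOINT_PROJECT_ID'],
                    MessageRequest={
                        'Addresses': {item['destination_number']: {'ChannelType': 'SMS'}},
                        'MessageConfiguration': {
                            'SMSMessage': {'Body': item['message_body'],
                                           'MessageType': 'TRANSACTIONAL'}}})
                result = 'Dropped'  # keep the retried event out of the S3 data lake

        output.append({'recordId': record['recordId'],
                       'result': result,
                       'data': base64.b64encode(payload.encode('utf-8')).decode('utf-8')})
    return {'records': output}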

Getting started with solution deployment

Prerequisite tasks to be completed before deploying the solution

  1. Go to the CloudFormation console and click Create Stack.
  2. Select the Amazon S3 URL radio button and provide the CloudFormation template link.
AWS console creating a Pinpoint template
  3. Click Next on the Create Stack screen.
  4. Specify Stack Name, for example “SMS-retry-stack”
  5. Specify the event stream configuration option; this will trigger the respective child CloudFormation stack. There are three event stream configurations you can choose from.
    • No existing event stream setup – Select this option if you don’t have any event stream setup for Amazon Pinpoint.
    • Event stream setup with Amazon Kinesis Stream – Select this option if your Amazon Pinpoint project already has Amazon Kinesis as the event stream destination.
    • Event stream setup with Amazon Kinesis Firehose – Select this option if you have configured a Kinesis Firehose delivery stream as the event stream destination.
AWS console specifying Pinpoint stack details
  6. Specify the Amazon Pinpoint project app ID (Pinpoint project ID), and click Next.
  7. Click Next on Configure stack options screen.
  8. Select “I acknowledge that AWS CloudFormation might create IAM resources” and click Create Stack.
  9. Wait for the CloudFormation template to complete, and then verify that the resources in the CloudFormation stack have been created. Click on individual resources and verify.
    • Parent stack-SMS retry parent stack
    • Child Stack –SMS retry child stack
  10. As described in the architecture overview section, the maxRetries configuration inside “RetryLambdaFunction” controls how many times unsuccessful SMS messages are resent. This number is set to 3 by default. If you want to adjust the maxRetry count, go to the “RetryLambdaFunction” settings and change it to the desired number.
SMS retry lambda

Note: The CloudFormation link in this blog points to the parent CloudFormation template, which has links to the child CloudFormation stacks. These child stacks will be deployed automatically as you go through the parent stack.

Testing the solution

You can test the solution using “PinpointDDBProducerLambdaFunction” and SMS simulator numbers. PinpointDDBProducerLambdaFunction has sample code that triggers the SMS using Amazon Pinpoint.

testing SMS retry solution

Now follow the steps below to test the solution.

  1. Go to the environment variables for PinpointDDBProducerLambdaFunction.
  2. Update “destinationNumber” and “pinpointApplicationID”, where the destination number is the recipient number to which you wish to send the SMS as a failed attempt, and the Amazon Pinpoint application ID is the Pinpoint project ID for which the Pinpoint SMS channel has already been configured.
  3. Deploy and test the Lambda function.
  4. Check the “Pinpoint Message state” DynamoDB table and open the latest table item.
  5. If you observe the table items, you will see retry_count=2 (the SMS send retry has been attempted 2 times) and all_retries_failed=true (the SMS could not be delivered on either attempt).
Notes:
  • If the existing Kinesis stream already has a destination Lambda defined, the current stack will not replace it but will exit gracefully.
  • If the existing Kinesis Firehose delivery stream already has a transformation Lambda, the current stack will not replace it.

Remarks

This SMS retry solution is best effort. This means that the solution is dependent on event response data from SMS aggregators. If the SMS aggregator data is incorrect, this solution may not produce the desired effect.

Cost

Assuming the retry mechanism applies to 1,000,000 unsuccessful SMS messages per month, this solution will cost approximately $20 per month. Here is the AWS calculator link for reference.

Clean up

When you’re done with this exercise, complete the following steps to delete your resources and stop incurring costs:

  • On the CloudFormation console, select your stack and choose Delete.
  • This cleans up all the resources created by the stack.

Conclusion

In this blog post, we have demonstrated how customers can retry sending undelivered/failed SMS messages via Amazon Pinpoint. We explained how to leverage Amazon Kinesis Data Streams and AWS Lambda functions to assess the status of unsuccessful SMS messages and retry delivering them automatically.

Extending the solution

This blog provides a useful framework for implementing a solution to retry sending failed SMS messages. You can download the AWS CloudFormation templates, code, and scripts for this solution from our GitHub repository and modify them to fit your needs.


About the Authors
Satyasovan Tripathy works as a Senior Specialist Solution Architect at AWS. He is situated in Bengaluru, India, and focuses on the AWS Digital User Engagement product portfolio. He enjoys reading and travelling outside of work.

Nikhil Khokhar is a Solutions Architect at AWS. He specializes in building and supporting data streaming solutions that help customers analyze and get value out of their data. In his free time, he makes use of his 3D printing skills to solve everyday problems.

Amazon Personalize customer outreach on your ecommerce platform

Post Syndicated from Sridhar Chevendra original https://aws.amazon.com/blogs/architecture/amazon-personalize-customer-outreach-on-your-ecommerce-platform/

In the past, brick-and-mortar retailers leveraged native marketing and advertisement channels to engage with consumers. They have promoted their products and services through TV commercials, and magazine and newspaper ads. Many of them have started using social media and digital advertisements. Although marketing approaches are beginning to modernize and expand to digital channels, businesses still depend on expensive marketing agencies and inefficient manual processes to measure campaign effectiveness and understand buyer behavior. The recent pandemic has forced many retailers to take their businesses online. Those who are ready to embrace these changes have embarked on a technological and digital transformation to connect to their customers. As a result, they have begun to see greater business success compared to their peers.

Digitizing a business can be a daunting task, due to lack of expertise and high infrastructure costs. By using Amazon Web Services (AWS), retailers are able to quickly deploy their products and services online with minimal overhead. They don’t have to manage their own infrastructure. With AWS, retailers have no upfront costs, have minimal operational overhead, and have access to enterprise-level capabilities that scale elastically, based on their customers’ demands. Retailers can gain a greater understanding of customers’ shopping behaviors and personal preferences. Then, they are able to conduct effective marketing and advertisement campaigns, and develop and measure customer outreach. This results in increased satisfaction, higher retention, and greater customer loyalty. With AWS you can manage your supply chain and directly influence your bottom line.

Building a personalized shopping experience

Let’s dive into the components involved in building this experience. The first step in a retailer’s digital transformation journey is to create an ecommerce platform for their customers. This platform enables the organization to capture their customers’ actions, also referred to as ‘events’. Some examples of events are clicking on the shopping site to browse product categories, searching for a particular product, adding an item to the shopping cart, and purchasing a product. Each of these events gives the organization information about their customer’s intent, which is invaluable in creating a personalized experience for that customer. For instance, if a customer is browsing the “baby products” category, it indicates their interest in that category even if a purchase is not made. These insights are typically difficult to capture in an in-store experience. Online shopping makes gaining this knowledge much more straightforward and scalable.

The proposed solution outlines the use of AWS services to create a digital experience for a retailer and consumers. The three key areas are: 1) capturing customer interactions, 2) making real-time recommendations using AWS managed Artificial Intelligence/Machine Learning (AI/ML) services, and 3) creating an analytics platform to detect patterns and adjust customer outreach campaigns. Figure 1 illustrates the solution architecture.

Digital shopping experience architecture

Figure 1. Digital shopping experience architecture

For this use case, let’s assume that you are the owner of a local pizzeria, and you manage deliveries through an ecommerce platform like Shopify or WooCommerce. We will walk you through how to best serve your customer with a personalized experience based on their preferences.

The proposed solution consists of the following components:

  1. Data collection
  2. Promotion campaigns
  3. Recommendation engine
  4. Data analytics
  5. Customer reachability

Let’s explore each of these components separately.

Data collection with Amazon Kinesis Data Streams

When a customer uses your web/mobile application to order a pizza, the application captures their activity as click-stream ‘events’. These events provide valuable insights about your customers’ behavior. You can use these insights to understand the trends and browsing pattern of prospects who visited your web/mobile app, and use the data collected for creating promotion campaigns. As your business scales, you’ll need a durable system to preserve these events against system failures, and scale based on unpredictable traffic on your platform.

Amazon Kinesis is a Multi-AZ, managed streaming service that provides resiliency, scalability, and durability to capture an unlimited number of events without any additional operational overhead. Using Kinesis producers (Kinesis Agent, Kinesis Producer Library, and the Kinesis API), you can configure applications to capture your customer activity. You can ingest these events from the frontend, and then publish them to Amazon Kinesis Data Streams.
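
For illustration, a minimal sketch of publishing a click-stream event from your application backend to a Kinesis data stream could look like this; the stream name and event fields are placeholders for your own schema.

import json
import boto3

kinesis = boto3.client('kinesis')

def publish_clickstream_event(user_id, event_type, item_id):
    # Using the user ID as the partition key keeps one user's events ordered.
    event = {'user_id': user_id, 'event_type': event_type, 'item_id': item_id}
    kinesis.put_record(
        StreamName='ecommerce-clickstream',  # placeholder stream name
        Data=json.dumps(event).encode('utf-8'),
        PartitionKey=user_id)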

Let us start by setting up Amazon Kinesis Data Streams to capture the real-time sales transactions from online channels like a portal or mobile app. For this blog post, we have used Kaggle’s public dataset as a reference. Figure 2 illustrates a snapshot of sample data used to build personalized recommendations for a customer.

Sample sales transaction data

Figure 2. Sample sales transaction data

Promotion campaigns with AWS Lambda

One way to increase customer conversion is by offering discounts. When the customer adds a pizza to their cart, you want to make sure they are receiving the best deal. Let’s assume that by adding an additional item, your customer will receive the best possible discount. Just by knowing the total cost of added items to the cart, you can provide these relevant promotions to this customer.

For this scenario, the AWS Lambda service polls the Amazon Kinesis data stream to read all the events in the stream and matches the events based on your criteria, such as the items in the cart. These events are then processed by the Lambda function, which reads your up-to-date promotions stored in Amazon DynamoDB. Optionally, caching recent or popular promotions will improve your application response time, as well as the customer experience on your platform. Amazon DynamoDB Accelerator (DAX) is an integrated caching service for DynamoDB that can cache the most recent or popular promotions or items.

For example, when the customer adds items to their shopping cart, Lambda will send them promotion details based on the purchase amount, such as free shipping or a discount of a certain percentage. Figure 3 illustrates a snapshot of sample promotions, and a sketch of the lookup logic follows the figure.

Promotions table in DynamoDB

Figure 3. Promotions table in DynamoDB
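
As a sketch of the lookup logic described above (assumed table and attribute names such as min_cart_total, not the exact schema in Figure 3), the Lambda function processing the Kinesis records might select a promotion based on the cart total like this:

import base64
import json
import boto3

promotions_table = boto3.resource('dynamodb').Table('Promotions')  # assumed table name

def lambda_handler(event, context):
    for record in event['Records']:
        cart_event = json.loads(base64.b64decode(record['kinesis']['data']))
        if cart_event.get('event_type') != 'add_to_cart':
            continue

        cart_total = cart_event['cart_total']
        # Pick the richest promotion whose minimum purchase amount the cart already meets
        # (assumes a numeric min_cart_total attribute on each promotion item).
        promotions = promotions_table.scan()['Items']
        eligible = [p for p in promotions if cart_total >= p['min_cart_total']]
        if eligible:
            best = max(eligible, key=lambda p: p['min_cart_total'])
            # In practice you would send this via your messaging channel of choice.
            print(f"User {cart_event['user_id']}: {best['promotion_text']}")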

Recommendations engine with Amazon Personalize

In addition to sharing these promotions with your customer, you may also want to share the recommended add-ons. In order to understand your customer preferences, you must gather historical datasets to determine patterns and generate relevant recommendations. Since web activity consists of millions of events, this would be a daunting task for humans to review, determine the patterns, and make recommendations. And since user preferences change, you need a system that can use all this volume of data and provide accurate predictions.

Amazon Personalize is a managed AI/ML service that helps you train an ML model based on your datasets, and provides an inference endpoint for real-time recommendations without requiring prior ML experience. Based on the datasets, Amazon Personalize also provides recipes to generate recommendations. As your customers interact on the ecommerce platform, your frontend application calls Amazon Personalize inference endpoints. It then retrieves a set of personalized recommendations based on your customer preferences.

Here is sample Python code to list the available recommenders and retrieve the associated recommendations.

import boto3
import json

# Control-plane client, used here to list the available recommenders
client = boto3.client('personalize')
print(json.dumps(client.list_recommenders()['recommenders'], indent=2, default=str))

# Connect to the personalize runtime for the customer recommendations
recomm_endpoint = boto3.client('personalize-runtime')
response = recomm_endpoint.get_recommendations(itemId='79323P',
  recommenderArn='arn:aws:personalize:us-east-1::recommender/my-items',
  numResults=5)

print(json.dumps(response['itemList'], indent=2))

[
  {
    "itemId": "79323W"
  },
  {
    "itemId": "79323GR"
  },
  {
    "itemId": "79323LP"
  },
  {
  "itemId": "79323B"
  },
  {
    "itemId": "79323G"
  }
]

You can use Amazon Kinesis Data Firehose to read the data in near real time from the Amazon Kinesis data stream that collected the data from the front-end applications, and then store this data in Amazon Simple Storage Service (S3). Amazon S3 provides petabyte-scale storage and acts as a repository and single source of truth. We use the S3 data as seed data to build a personalized recommendation engine using Amazon Personalize. As your customers interact on the ecommerce platform, call the Amazon Personalize inference endpoint to make personalized recommendations based on user preferences.

Customer reachability with Amazon Pinpoint

If a customer adds products to their cart but never checks out, you may want to send them a reminder. You can set up an email to suggest they re-order after a period of time after their first order. Or you may want to send them promotions based on their preferences. And as your customers’ behavior changes, you probably want to adapt your messaging accordingly.

Your customer may have a communication preference, such as phone, email, SMS, or in-app notifications. If an order has an issue, you can inform the customer as soon as possible using their preferred method of communication, and perhaps follow it up with a discount.

Amazon Pinpoint is a flexible and scalable outbound and inbound marketing communications service. You can add users to Audience Segments, create reusable content templates integrated with Amazon Personalize, and run scheduled campaigns. With Amazon Pinpoint journeys, you can send action or time-based notifications to your users.

The workflow shown in Figure 4 illustrates a customer communication workflow for a promotion. A journey is created for a cohort of college students: a “Free Drink” promotion is offered with a new order. You can send this promotion over email. If the student opens the email, you can immediately send them a push notification reminding them to place an order. But if they didn’t open this email, you could wait three days, and follow up with a text message.

Promotion workflow in Amazon Pinpoint

Figure 4. Promotion workflow in Amazon Pinpoint

Data analytics with Amazon Athena and Amazon QuickSight

To understand the effectiveness of your campaigns, you can use S3 data as a source for Amazon Athena. Athena is an interactive query service that analyzes data in Amazon S3 using standard SQL. Athena is serverless, so there is no infrastructure to manage, and you pay only for the queries that you run.

There are different ways to create visualizations in Amazon QuickSight. For instance, you can use Amazon S3 as a data lake. One option is to import your data into SPICE (Super-fast, Parallel, In-memory Calculation Engine) to provide high performance and concurrency. You can also create a direct connection to the underlying data source. For this use case, we choose to import to SPICE, which provides faster visualization in a production setup. Schedule consistent refreshes to help ensure that dashboards are referring to the most current data.

Once your data is imported to your SPICE, review QuickSight’s visualization dashboard. Here, you’ll be able to choose from a wide variety of charts and tables, while adding interactive features like drill downs and filters.

The following process illustrates how to create a customer outreach strategy using ZIP codes, and allocate budgets to the marketing campaigns accordingly. First, here is the sample SQL command that we ran in Athena to query for the top 10 pizza providers. The results are shown in Figure 5.

SELECT name, count(*) as total_count FROM "analyticsdemodb"."fooddatauswest2"
group by name
order by total_count desc
limit 10

Athena query results for top 10 pizza providers

Figure 5. Athena query results for top 10 pizza providers

Second, here is the sample SQL command that we ran in Athena to find total pizza counts by postal code (ZIP code). Figure 6 shows a visualization to help create a customer outreach strategy per ZIP code and budget the marketing campaigns accordingly.

SELECT postalcode, count(*) as total_count FROM "analyticsdemodb"."fooddatauswest2"
where postalcode is not null
group by postalcode
order by total_count desc limit 50;

QuickSight visualization showing pizza orders by zip codes

Figure 6. QuickSight visualization showing pizza orders by zip codes

Conclusion

AWS enables you to build an ecommerce platform and scale your existing business with minimal operational overhead and no upfront costs. You can augment your ecommerce platform by building personalized recommendations and effective marketing campaigns based on your customer needs. The solution approach provided in this blog will help organizations build reusable architecture patterns and personalization using AWS managed services.

Target your customers with ML based on their interest in a product or product attribute.

Post Syndicated from Pavlos Ioannou Katidis original https://aws.amazon.com/blogs/messaging-and-targeting/use-machine-learning-to-target-your-customers-based-on-their-interest-in-a-product-or-product-attribute/

Customer segmentation allows marketers to better tailor their efforts to specific subgroups of their audience. Businesses who employ customer segmentation can create and communicate targeted marketing messages that resonate with specific customer groups. Segmentation increases the likelihood that customers will engage with the brand, and reduces the potential for communications fatigue—that is, the disengagement of customers who feel like they’re receiving too many messages that don’t apply to them. For example, if your business wants to launch an email campaign about business suits, the target audience should only include people who wear suits.

This blog presents a solution that uses Amazon Personalize to generate highly personalized Amazon Pinpoint customer segments. Using Amazon Pinpoint, you can send messages to those customer segments via campaigns and journeys.

Personalizing Pinpoint segments

Marketers first need to understand their customers by collecting customer data such as key characteristics, transactional data, and behavioral data. This data helps to form buyer personas, understand how they spend their money, and what type of information they’re interested in receiving.

You can create two types of customer segments in Amazon Pinpoint: imported and dynamic. With both types of segments, you need to perform customer data analysis and identify behavioral patterns. After you identify the segment characteristics, you can build a dynamic segment that includes the appropriate criteria. You can learn more about dynamic and imported segments in the Amazon Pinpoint User Guide.

Businesses selling their products and services online could benefit from segments based on known customer preferences, such as product category, color, or delivery options. Marketers who want to promote a new product or inform customers about a sale on a product category can use these segments to launch Amazon Pinpoint campaigns and journeys, increasing the probability that customers will complete a purchase.

Building targeted segments requires you to obtain historical customer transactional data, and then invest time and resources to analyze it. This is where the use of machine learning can save time and improve the accuracy.

Amazon Personalize is a fully managed machine learning service, which requires no prior ML knowledge to operate. It offers ready to use models for segment creation as well as product recommendations, called recipes. Using Amazon Personalize USER_SEGMENTATION recipes, you can generate segments based on a product ID or a product attribute.

About this solution

The solution is based on the following reference architectures:

Both of these architectures are deployed as nested stacks along the main application to showcase how contextual segmentation can be implemented by integrating Amazon Personalize with Amazon Pinpoint.

High level architecture

Architecture Diagram

Once training data and training configuration are uploaded to the Personalize data bucket (1), an AWS Step Functions state machine is executed (2). This state machine implements a training workflow to provision all required resources within Amazon Personalize. It trains a recommendation model (3a) based on the Item-Attribute-Affinity recipe. Once the solution is created, the workflow creates a batch segment job to get user segments (3b). The job configuration focuses on providing segments of users who are interested in action-genre movies:

{ "itemAttributes": "ITEMS.genres = \"Action\"" }

When the batch segment job finishes, the result is uploaded to Amazon S3 (3c). The training workflow state machine publishes Amazon Personalize state changes on a custom event bus (4). An Amazon EventBridge rule listens for events describing that a batch segment job has finished (5). Once this event is put on the event bus, a batch segment postprocessing workflow is executed as an AWS Step Functions state machine (6). This workflow reads and transforms the segment job output from Amazon Personalize (7) into a CSV file that can be imported as a static segment into Amazon Pinpoint (8). The CSV file contains only the Amazon Pinpoint endpoint IDs that refer to the corresponding users from the Amazon Personalize recommendation segment, in the following format:

Id
hcbmnng30rbzf7wiqn48zhzzcu4
tujqyruqu2pdfqgmcgkv4ux7qlu
keul5pov8xggc0nh9sxorldmlxc
lcuxhxpqh/ytkitku2zynrqb2ce

The mechanism for resolving an Amazon Pinpoint endpoint ID relies on the user ID that is set in Amazon Personalize also being referenced in each endpoint within Amazon Pinpoint using the user ID attribute.

State machine for getting Amazon Pinpoint endpoints
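As a rough sketch of that resolution step (the project ID and user IDs are placeholders; in the solution this lookup happens inside the postprocessing state machine):

import boto3

pinpoint = boto3.client("pinpoint")

APPLICATION_ID = "examplePinpointProjectId"  # placeholder project ID
user_ids = ["1", "2", "3"]  # user IDs returned by the Amazon Personalize segment

endpoint_ids = []
for user_id in user_ids:
    # Returns every Amazon Pinpoint endpoint that carries this user ID
    response = pinpoint.get_user_endpoints(ApplicationId=APPLICATION_ID, UserId=user_id)
    endpoint_ids.extend(item["Id"] for item in response["EndpointsResponse"]["Item"])

# endpoint_ids can now be written to the CSV file shown above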

The workflow ensures that the segment file has a unique filename so that the segments within Amazon Pinpoint can be identified independently. Once the segment CSV file is uploaded to S3 (7), the segment import workflow creates a new imported segment within Amazon Pinpoint (8).

Datasets

The solution uses an artificially generated movie dataset called Bingewatch for demonstration purposes. The data is pre-processed to make it usable in the context of Amazon Personalize and Amazon Pinpoint. The pre-processed data consists of the following:

  • Interactions’ metadata created out of the Bingewatch ratings.csv
  • Items’ metadata created out of the Bingewatch movies.csv
  • Users’ metadata created out of the Bingewatch ratings.csv, enriched with invented data about email address and age
  • Amazon Pinpoint endpoint data

Interactions’ dataset

The interaction dataset describes movie ratings from Bingewatch users. Each row describes a single rating by a user identified by a user id.

The EVENT_VALUE describes the actual rating from 1.0 to 5.0 and the EVENT_TYPE specifies that the rating resulted because a user watched this movie at the given TIMESTAMP, as shown in the following example:

USER_ID,ITEM_ID,EVENT_VALUE,EVENT_TYPE,TIMESTAMP
1,1,4.0,Watch,964982703 
2,3,4.0,Watch,964981247
3,6,4.0,Watch,964982224
...

Items’ dataset

The item dataset describes each available movie using a TITLE, RELEASE_YEAR, CREATION_TIMESTAMP and a pipe concatenated list of GENRES, as shown in the following example:

ITEM_ID,TITLE,RELEASE_YEAR,CREATION_TIMESTAMP,GENRES
1,Toy Story,1995,788918400,Adventure|Animation|Children|Comedy|Fantasy
2,Jumanji,1995,788918400,Adventure|Children|Fantasy
3,Grumpier Old Men,1995,788918400,Comedy|Romance
...

Users’ dataset

The users dataset contains all known users identified by a USER_ID. This dataset contains artificially generated metadata that describes the users’ GENDER, E_MAIL, and AGE, as shown in the following example:

USER_ID,GENDER,E_MAIL,AGE
1,Female,[email protected],21
2,Female,[email protected],35
3,Male,[email protected],37
4,Female,[email protected],47
5,Agender,[email protected],50
...

Amazon Pinpoint endpoints

To map Amazon Pinpoint endpoints to users in Amazon Personalize, it is important to have a consistent user identifier. The mechanism to resolve an Amazon Pinpoint endpoint ID relies on the user ID in Amazon Personalize also being referenced in each endpoint within Amazon Pinpoint through the userId attribute, as shown in the following example:

User.UserId,ChannelType,User.UserAttributes.Gender,Address,User.UserAttributes.Age
1,EMAIL,Female,[email protected],21
2,EMAIL,Female,[email protected],35
3,EMAIL,Male,[email protected],37
4,EMAIL,Female,[email protected],47
5,EMAIL,Agender,[email protected],50
...
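If you maintain endpoints programmatically rather than through a CSV import, setting the user ID on an endpoint could look like the following boto3 sketch; the project ID, endpoint ID, and address are placeholders.

import boto3

pinpoint = boto3.client("pinpoint")

# Placeholder identifiers for illustration only
pinpoint.update_endpoint(
    ApplicationId="examplePinpointProjectId",
    EndpointId="email-endpoint-user-1",
    EndpointRequest={
        "ChannelType": "EMAIL",
        "Address": "user1@example.com",
        "User": {
            "UserId": "1",  # must match the USER_ID used in the Amazon Personalize datasets
            "UserAttributes": {"Gender": ["Female"], "Age": ["21"]},
        },
    },
)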

Solution implementation

Prerequisites

To deploy this solution, you must have the following:

Note: This solution creates an Amazon Pinpoint project with the name personalize. If you want to deploy this solution on an existing Amazon Pinpoint project, you will need to perform changes in the YAML template.

Deploy the solution

Step 1: Deploy the SAM solution

Clone the GitHub repository to your local machine (how to clone a GitHub repository). Navigate to the repository location on your local machine and run the command below using the SAM CLI:

sam deploy --stack-name contextual-targeting --guided

Fill in the fields as displayed below. Change the AWS Region to the AWS Region of your preference, where Amazon Pinpoint and Amazon Personalize are available. The Email parameter is used by Amazon Simple Notification Service (SNS) to send you an email notification when the Amazon Personalize job is completed.

Configuring SAM deploy
======================
        Looking for config file [samconfig.toml] :  Not found
        Setting default arguments for 'sam deploy'
        =========================================
        Stack Name [sam-app]: contextual-targeting
        AWS Region [us-east-1]: eu-west-1
        Parameter Email []: [email protected]
        Parameter PEVersion [v1.2.0]:
        Parameter SegmentImportPrefix [pinpoint/]:
        #Shows you resources changes to be deployed and require a 'Y' to initiate deploy
        Confirm changes before deploy [y/N]:
        #SAM needs permission to be able to create roles to connect to the resources in your template
        Allow SAM CLI IAM role creation [Y/n]:
        #Preserves the state of previously provisioned resources when an operation fails
        Disable rollback [y/N]:
        Save arguments to configuration file [Y/n]:
        SAM configuration file [samconfig.toml]:
        SAM configuration environment [default]:
        Looking for resources needed for deployment:
        Creating the required resources...
        [...]
        Successfully created/updated stack - contextual-targeting in eu-west-1
======================

Step 2: Import the initial segment to Amazon Pinpoint

We will import some initial and artificially generated endpoints into Amazon Pinpoint.

Run the command below with the AWS CLI on your local machine.

The command below is compatible with Linux:

SEGMENT_IMPORT_BUCKET=$(aws cloudformation describe-stacks --stack-name contextual-targeting --query 'Stacks[0].Outputs[?OutputKey==`SegmentImportBucket`].OutputValue' --output text)
aws s3 sync ./data/pinpoint s3://$SEGMENT_IMPORT_BUCKET/pinpoint

For Windows PowerShell use the command below:

$SEGMENT_IMPORT_BUCKET = (aws cloudformation describe-stacks --stack-name contextual-targeting --query 'Stacks[0].Outputs[?OutputKey==`SegmentImportBucket`].OutputValue' --output text)
aws s3 sync ./data/pinpoint s3://$SEGMENT_IMPORT_BUCKET/pinpoint

Step 3: Upload training data and configuration for Amazon Personalize

Now we are ready to train our initial recommendation model. This solution provides you with dummy training data as well as a training and inference configuration, which needs to be uploaded into the Amazon Personalize S3 bucket. Training the model can take between 45 and 60 minutes.

Run the command below with the AWS CLI on your local machine.

The command below is compatible with Linux:

PERSONALIZE_BUCKET=$(aws cloudformation describe-stacks --stack-name contextual-targeting --query 'Stacks[0].Outputs[?OutputKey==`PersonalizeBucketName`].OutputValue' --output text)
aws s3 sync ./data/personalize s3://$PERSONALIZE_BUCKET

For Windows PowerShell use the command below:

$PERSONALIZE_BUCKET = (aws cloudformation describe-stacks --stack-name contextual-targeting --query 'Stacks[0].Outputs[?OutputKey==`PersonalizeBucketName`].OutputValue' --output text)
aws s3 sync ./data/personalize s3://$PERSONALIZE_BUCKET

Step 4: Review the inferred segments from Amazon Personalize

Once the training workflow is completed, you should receive an email at the address you provided when deploying the stack. The email should look like the one in the screenshot below:

SNS notification for Amazon Personalize job

Navigate to the Amazon Pinpoint console > Your Project > Segments, and you should see two imported segments: one named endpoints.csv, which contains all imported endpoints from Step 2, and one named ITEMSgenresAction_<date>-<time>.csv, which contains the IDs of endpoints that are interested in action movies, as inferred by Amazon Personalize.

Amazon Pinpoint segments created by the solution

You can engage with Amazon Pinpoint customer segments via Campaigns and Journeys. For more information on how to create and execute Amazon Pinpoint Campaigns and Journeys visit the workshop Building Customer Experiences with Amazon Pinpoint.
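If you prefer to verify the imported segments programmatically instead of through the console, a boto3 sketch such as the following could be used; the project ID is a placeholder.

import boto3

pinpoint = boto3.client("pinpoint")

# Placeholder project ID of the "personalize" Amazon Pinpoint project
response = pinpoint.get_segments(ApplicationId="examplePinpointProjectId")
for segment in response["SegmentsResponse"]["Item"]:
    # Imported segments created by the solution show up with SegmentType DIMENSIONAL or IMPORT
    print(segment["Name"], segment["SegmentType"])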

Next steps

Contextual targeting is not bound to a single channel, such as the email channel used in this solution. You can extend the batch-segmentation-postprocessing workflow to fit your engagement and targeting requirements.

For example, you could implement several branches based on the referenced endpoint channel types and create Amazon Pinpoint customer segments that can be engaged via Push Notifications, SMS, Voice Outbound and In-App.

Clean-up

To delete the solution, run the following command in the AWS CLI.

The command below is compatible with Linux:

SEGMENT_IMPORT_BUCKET=$(aws cloudformation describe-stacks --stack-name contextual-targeting --query 'Stacks[0].Outputs[?OutputKey==`SegmentImportBucket`].OutputValue' --output text)
PERSONALIZE_BUCKET=$(aws cloudformation describe-stacks --stack-name contextual-targeting --query 'Stacks[0].Outputs[?OutputKey==`PersonalizeBucketName`].OutputValue' --output text)
aws s3 rm s3://$SEGMENT_IMPORT_BUCKET/ --recursive
aws s3 rm s3://$PERSONALIZE_BUCKET/ --recursive
sam delete

For Windows PowerShell use the command below:

$SEGMENT_IMPORT_BUCKET=$(aws cloudformation describe-stacks --stack-name contextual-targeting --query 'Stacks[0].Outputs[?OutputKey==`SegmentImportBucket`].OutputValue' --output text)
$PERSONALIZE_BUCKET=$(aws cloudformation describe-stacks --stack-name contextual-targeting --query 'Stacks[0].Outputs[?OutputKey==`PersonalizeBucketName`].OutputValue' --output text)
aws s3 rm s3://$SEGMENT_IMPORT_BUCKET/ --recursive
aws s3 rm s3://$PERSONALIZE_BUCKET/ --recursive
sam delete

Amazon Personalize resources such as dataset groups and datasets are not created via AWS CloudFormation, so you have to delete them manually. Please follow the instructions in the official AWS documentation on how to clean up the created resources.

About the Authors

Pavlos Ioannou Katidis


Pavlos Ioannou Katidis is an Amazon Pinpoint and Amazon Simple Email Service Specialist Solutions Architect at AWS. He loves to dive deep into his customers’ technical issues and help them design communication solutions. In his spare time, he enjoys playing tennis, watching crime TV series, playing FPS PC games, and coding personal projects.

Christian Bonzelet


Christian Bonzelet is an AWS Solutions Architect at DFL Digital Sports. He loves challenges that involve providing highly scalable systems for millions of users, and collaborating with lots of people to design systems in front of a whiteboard. He has used AWS since 2013, when he built a voting system for a big live TV show in Germany. Since then, he has become a big fan of the cloud, AWS, and domain-driven design.

Updated requirements for US toll-free phone numbers

Post Syndicated from Brent Meyer original https://aws.amazon.com/blogs/messaging-and-targeting/updated-requirements-for-us-toll-free-phone-numbers/

Many Amazon Pinpoint customers use toll-free phone numbers to send messages to their customers in the United States. A toll-free number is a 10-digit number that begins with one of the following three-digit codes: 800, 888, 877, 866, 855, 844, or 833. You can use toll-free numbers to send both SMS and voice messages to recipients in the US.

What’s changing

Historically, US toll-free numbers have been available to purchase with no registration required. To prevent spam and other types of abuse, the United States mobile carriers now require new toll-free numbers to be registered as well. The carriers also require all existing toll-free numbers to be registered by September 30, 2022. The carriers will block SMS messages sent from unregistered toll-free numbers after this date.

If you currently use toll-free numbers to send SMS messages, you must complete this registration process for both new and existing toll-free numbers. We’re committed to helping you comply with these changing carrier requirements.

Information you provide as part of this registration process will be provided to the US carriers through our downstream partners. It can take up to 15 business days for your registration to be processed. To help prevent disruptions of service with your toll-free number, you should submit your registration no later than September 12th, 2022.

Requesting new toll-free numbers

Starting today, when you request a United States toll-free number in the Amazon Pinpoint console, you’ll see a new page that you can use to register your use case. Your toll-free number registration must be completed and verified before you can use it to send SMS messages. For more information about completing this registration process, see US toll-free number registration requirements and process in the Amazon Pinpoint User Guide.

Registering existing toll-free numbers

You can also use the Amazon Pinpoint console to register toll-free numbers that you already have in your account. For more information about completing the registration process for existing toll-free numbers, see US toll-free number registration requirements and process in the Amazon Pinpoint User Guide.

In closing

Change is a constant factor in the SMS and voice messaging industry. Carriers often introduce new processes in order to protect their customers. The new registration requirements for toll-free numbers are a good example of these kinds of changes. We’ll work with you to help make sure that these changes have minimal impact on your business. If you have any concerns about these changing requirements, open a ticket in the AWS Support Center.

How ERGO built an on-call support solution in a week

Post Syndicated from Sid Singh original https://aws.amazon.com/blogs/architecture/how-ergo-built-an-on-call-support-solution-in-a-week/

ERGO’s Technology & Services S.A. (ET&S) Cloud Solutions Department is a specialist team of cloud engineers who provide technical support for business owners, project managers, and engineering leads. The support team deals with complex issues, such as failed deployments, security vulnerabilities, environment availability, etc.

When an issue arises, it’s categorized as Priority 1 (P1) or Priority 2 (P2). For urgent P1 incidents, users contact the support team directly via phone. For P2 incidents, the workflow sends an issue description to the support team via SMS.

Originally, the SMS and voice forwarding systems were manually updated every Monday. For SMS, an operator manually updated the phone numbers in the system for the assigned support team members. For voice forwarding, support team members used physical phones, which were handed off from engineer to engineer per the support team roster.

These manual processes were time consuming and occasionally error prone. Additionally, with COVID-19 physical distancing measures in place, handing off physical devices was complicated. To keep up with the increasing number of support cases and the growth of their Cloud Solutions Department, ERGO worked with AWS to modernize and automate their manual workflow. We’ll show you how ERGO implemented a production-ready, on-call support solution with SMS and voice features in just one week using Amazon Connect and Amazon Pinpoint.

Automating the SMS on-call system

Let’s look at how we automated the SMS on-call support system, as shown in Figure 1 and summarized as follows:

  1. We use an open-source orchestration tool, Red Hat Ansible Automation Platform (Ansible), as a frontend to run the template “Assign to On-call SMS”.
  2. The template sets the parameter to a subset of support team members who are assigned to support P1/P2 cases. The assignment is based on the on-call shift schedule.
  3. Next, support team members are subscribed to the Amazon Simple Notification Service (Amazon SNS) topic subscriber’s list using an Ansible playbook.

Now the support team will receive SMS alerts.


Figure 1. Assign to on-call SMS workflow
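A simplified sketch of step 3, subscribing the on-call engineers’ phone numbers to the SNS topic, is shown below; the topic ARN and phone numbers are placeholders, and in our setup the Ansible playbook performs this step.

import boto3

sns = boto3.client("sns")

ONCALL_TOPIC_ARN = "arn:aws:sns:eu-central-1:111122223333:oncall-sms"  # placeholder
oncall_numbers = ["+49151XXXXXXX", "+49152XXXXXXX"]  # placeholder phone numbers

for number in oncall_numbers:
    # Subscribe each on-call engineer's phone number over the SMS protocol
    sns.subscribe(TopicArn=ONCALL_TOPIC_ARN, Protocol="sms", Endpoint=number)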

Next, we integrated the SMS workflow with our ZIS IT monitoring tool to capture critical events and forward them via SMS to the support team, as shown in Figure 2:

  1. The Amazon Pinpoint phone number is set as the SMS destination in our monitoring tool.
  2. The monitoring tool then sends the SMS to Amazon Pinpoint, where:
    • We extract the messageBody from the payload that Amazon Pinpoint prepares by sending the message to the “Before Processing Message” Amazon SNS topic, to which our AWS Lambda function “Extract messageBody” is subscribed.
    • The extracted message is then sent to the “After Processing Message” Amazon SNS topic, which uses the Amazon Pinpoint two-way SMS feature to send the SMS to the support team members who are assigned to the Amazon SNS topic.

Figure 2. On-call SMS workflow integration with Amazon Pinpoint

Also shown in Figure 2, we track our monthly SMS spending using Amazon CloudWatch. The SMSMonthToDateSpentUSD metric shows the amount spent sending SMS messages during the current month.

Why extract the messageBody before sending the SMS to the support team?

Amazon Pinpoint captures SMS from the monitoring tool in JSON format, which includes additional information, such as the origin and destination numbers, the message ID and related data, as shown in the following example:

{
  "originationNumber": "+14255550182",
  "destinationNumber": "+12125550101",
  "messageKeyword": "JOIN",
  "messageBody": "EXAMPLE",
  "inboundMessageId": "cae173d2-66b9-564c-8309-21f858e9fb84",
  "previousPublishedMessageId": "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
}

The support team only needs the messageBody, and the JSON format makes it difficult to read on a mobile phone. Therefore, we use a Lambda function for the “messageBody” extraction.
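A minimal sketch of that extraction function is shown below; the topic ARN is a placeholder and the actual implementation may differ.

import json
import boto3

sns = boto3.client("sns")

AFTER_PROCESSING_TOPIC_ARN = "arn:aws:sns:eu-central-1:111122223333:after-processing-message"  # placeholder

def lambda_handler(event, context):
    # Each record is an inbound SMS payload delivered via the "Before Processing Message" topic
    for record in event["Records"]:
        payload = json.loads(record["Sns"]["Message"])
        message_body = payload.get("messageBody", "")
        # Publish only the readable text to the "After Processing Message" topic
        sns.publish(TopicArn=AFTER_PROCESSING_TOPIC_ARN, Message=message_body)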

Automating the voice forwarding system

The other half of our on-call solution is voice forwarding. As mentioned in the introduction, we had a physical phone and updated the call forwarding every Monday. This allowed us to forward calls to a single number, but this system had two main problems: it wasn’t scalable and it was prone to human errors.

In our automated system, shown in Figure 3, all calls to the physical phone are forwarded to Amazon Connect, so we do not need to change the number of the phone.

This is how it’s set up:

  • The assigned phone numbers in Amazon Connect are attached to the Contact Flow “ERGO On-call Forwarding Voice”, which starts at the “Entry point” rectangle on the left side of the diagram.
  • In the next step, “Set logging behavior” captures the calling number. This allows us to see the number to return any missed calls.
  • Finally, the “Set working queue” block contains the routing profiles (in this case, we use a main line and a secondary line). The main line has support team members who are assigned to address P1 cases. The secondary line is for managers who will take the call if the support team members are not available.

When a customer is in a queue, the Amazon Connect contact flow tries to route the call to a support team member. If there’s no answer, the service re-routes the call to the next available support team member. After 30 seconds, if there is no answer on the first line (and no other support team members have become available), the service tries the secondary line.

To set this up:

  • Every support team member requires an Amazon Connect account. You can import their data via CSV to automate provisioning.
  • If a support team member is shown as online but does not answer a call, Amazon Connect changes their status to offline. This way, an Amazon Connect admin can see the time and number of the missed call in the Amazon Connect Real-time metrics reports and can return the call when another team member or supervisor is available.
  • Figure 3 shows how Amazon Connect and CloudWatch monitor contact center health metrics like “MissedCalls” and generate alerts via Amazon Simple Notification Service (SNS) to send notifications via email to ensure calls are returned promptly. For more details on this integration pattern, refer to the Monitor and trigger alerts using Amazon CloudWatch for Amazon Connect blog post.

Figure 3. On-call voice forwarding workflow with Amazon Connect

Lessons learned

After creating an Amazon Connect instance, we claimed a phone number to place or receive calls. Requesting phone numbers from Amazon Connect to serve customers in different countries was the most time-intensive part of the setup. Be aware that some countries have regulatory requirements, and this can increase the time and effort required. For example, requesting a German number and a Polish number will require different documents. To save time, we used international toll-free numbers. This allows us to provide support to people in all other countries without the caller incurring additional charges.

To help you with your implementation, you can find the list of ID requirements by country or AWS Region here and AWS support can provide more information.

Conclusion

Using managed services like Amazon Connect and Amazon Pinpoint allowed us to implement a scalable and pay-as-you-go on-call solution for technical support. The new automated setup is a huge improvement over the previous manual and error-prone workflow and enables us to easily onboard customers from new countries.

Looking ahead, we plan to explore using the Amazon Connect APIs to automate the management of an agent’s online/offline status, as well as building a skills-based routing workflow to accommodate a multi-lingual support team. You can read more about AWS Customer Engagement services here.

AWS Week In Review – July 11, 2022

Post Syndicated from Sébastien Stormacq original https://aws.amazon.com/blogs/aws/aws-week-in-review-july-11/

This post is part of our Week in Review series. Check back each week for a quick roundup of interesting news and announcements from AWS!

In France, we know summer has started when you see the Tour de France bike race on TV or in a city nearby. This year, the tour stopped in the city where I live, and I was blocked on my way back home from a customer conference to let the race pass through.

It’s Monday today, so let’s make another tour—a tour of the AWS news, announcements, or blog posts that captured my attention last week. I selected these as being of interest to IT professionals and developers: the doers, the builders that spend their time on the AWS Management Console or in code.

Last Week’s Launches
Here are some launches that got my attention during the previous week:

Amazon EC2 Mac M1 instances are generally available – this new EC2 instance type allows you to deploy Mac mini computers with M1 Apple Silicon running macOS using the same console, API, SDK, or CLI you are used to for interacting with EC2 instances. You can start and stop them, assign a security group or an IAM role, snapshot their EBS volume, and recreate an AMI from it, just like with Linux-based or Windows-based instances. It lets iOS developers create full CI/CD pipelines in the cloud without requiring someone on your team to reinstall various combinations of macOS and Xcode versions on on-prem machines. Some of you had the chance to enter the preview program for EC2 Mac M1 instances when we announced it last December. EC2 Mac M1 instances are now generally available.

AWS IAM Roles Anywhere – this is one of those incremental changes that has the potential to unlock new use cases on the edge or on-prem. AWS IAM Roles Anywhere enables you to use IAM roles for your applications outside of AWS to access AWS APIs securely, the same way that you use IAM roles for workloads on AWS. With IAM Roles Anywhere, you can deliver short-term credentials to your on-premises servers, containers, or other compute platforms. It requires an on-prem Certificate Authority registered as a trusted source in IAM. IAM Roles Anywhere exchanges certificates issued by this CA for a set of short-term AWS credentials limited in scope by the IAM role associated with the session. To make it easy to use, we provide a CLI-based signing helper tool that can be integrated into your CLI configuration.

A streamlined deployment experience for .NET applications – the new deployment experience focuses on the type of application you want to deploy instead of individual AWS services by providing intelligent compute recommendations. You can find it in the AWS Toolkit for Visual Studio using the new “Publish to AWS” wizard. It is also available via the .NET CLI by installing AWS Deploy Tool for .NET. Together, they help easily transition from a prototyping phase in Visual Studio to automated deployments. The new deployment experience supports ASP.NET Core, Blazor WebAssembly, console applications (such as long-lived message processing services), and tasks that need to run on a schedule.

For a full list of AWS announcements, be sure to keep an eye on the What’s New at AWS page.

Other AWS News
This week, I also learned from these blog posts:

TLS 1.2 to become the minimum TLS protocol level for all AWS API endpoints – this article was published at the end of June, and it deserves more exposure. Starting in June 2022, we will progressively transition all our API endpoints to TLS 1.2 only. The good news is that 95 percent of the API calls we observe are already using TLS 1.2, and only five percent of the applications are impacted. If you have applications developed before 2014 (using a Java JDK before version 8 or .NET before version 4.6.2), it is worth checking your app and updating them to use TLS 1.2. When we detect your application is still using TLS 1.0 or TLS 1.1, we inform you by email and in the AWS Health Dashboard. The blog article goes into detail about how to analyze AWS CloudTrail logs to detect any API call that would not use TLS 1.2.

How to implement automated appointment reminders using Amazon Connect and Amazon Pinpoint – this blog post guides you through the steps to implement a system to automatically call your customers to remind them of their appointments. This automated outbound campaign for appointment reminders checked the campaign list against a “do not call” list before making an outbound call. Your customers are able to confirm automatically or reschedule by speaking to an agent. You monitor the results of the calls on a dashboard in near real time using Amazon QuickSight. It provides you with AWS CloudFormation templates for the parts that can be automated and detailed instructions for the manual steps.

Using Amazon CloudWatch metrics math to monitor and scale resources – AWS Auto Scaling is one of those capabilities that may look like magic at first glance. It uses metrics to take scale-out or scale-in decisions. Most customers I talk with struggle a bit at first to define the correct combination of metrics that allow them to scale at the right moment. Scaling out too late impacts your customer experience while scaling out too early impacts your budget. This article explains how to use metric math, a way to query multiple Amazon CloudWatch metrics, and use math expressions to create new time series based on these metrics. These math metrics may, in turn, be used to trigger scaling decisions. The typical use case would be to mathematically combine CPU, memory, and network utilization metrics to decide when to scale in or to scale out.

How to use Amazon RDS and Amazon Aurora with a static IP address – in the cloud, it is better to access network resources by referencing their DNS name instead of IP addresses. IP addresses come and go as resources are stopped, restarted, scaled out, or scaled in. However, when integrating with older, more rigid environments, it might happen, for a limited period of time, to authorize access through a static IP address. You have probably heard that scary phrase: “I have to authorize your IP address in my firewall configuration.” This new blog post explains how to do so for Amazon Relational Database Service (Amazon RDS) database. It uses a Network Load Balancer and traffic forwarding at the Linux-kernel level to proxy your actual database server.

Amazon S3 Intelligent-Tiering significantly reduces storage costs – we estimate our customers saved up to $250 million in storage costs since we launched S3 Intelligent-Tiering in 2018. A recent blog post describes how Amazon Photos, a service that provides unlimited photo storage and 5 GB of video storage to Amazon Prime members in eight marketplaces worldwide, uses S3 Intelligent-Tiering to significantly save on storage costs while storing hundreds of petabytes of content and billions of images and videos on S3.

Upcoming AWS Events
Check your calendars and sign up for these AWS events:

AWS re:Inforce is the premier cloud security conference, July 26-27. This year it is hosted at the Boston Convention and Exhibition Center, Massachusetts, USA. The conference agenda is available and there is still time to register.

AWS Summit Chicago, August 25, at McCormick Place, Chicago, Illinois, USA. You may register now.

AWS Summit Canberra, August 31, at the National Convention Center, Canberra, Australia. Registrations are already open.

That’s all for this week. Check back next Monday for another tour of AWS news and launches!

— seb

New – High Volume Outbound Communication with Amazon Connect Outbound Campaigns

Post Syndicated from Sébastien Stormacq original https://aws.amazon.com/blogs/aws/new-high-volume-outbound-communication-with-amazon-connect-outbound-campaigns/

The new high-volume outbound communication capability in Amazon Connect, which was announced at Enterprise Connect last year, is now generally available to all. It is named Amazon Connect outbound campaigns.

If you haven’t heard about Amazon Connect, it is an easy-to-use cloud contact center service that helps companies of any size deliver superior customer service at lower cost. You can read the original blog post Jeff wrote at launch in 2017, with amazing Lego art 🙂

Contact centers not only receive calls and communications, but they also send outbound communications to customers. There are a variety of reasons to send outbound communication: appointment reminders, telemarketing, subscription renewals, and billing reminders. The vast majority of these communications are phone calls, and in many contact centers, agents make the calls manually using customer contact lists in external systems. Since customers only answer about ten percent of calls, these agents can spend nearly half of their time dialing and waiting. This can result in millions of dollars in lost productivity each year for a contact center with as few as 200 agents.

To help you address this challenge, today we are adding Amazon Connect outbound campaigns, a set of high-volume outbound communication capabilities that allows you to proactively reach more of your customers across voice, SMS, and email. With this capability, you have a scalable way to proactively reach hundreds to millions of your customers, increase your agents’ productivity, and lower your operational costs.

Amazon Connect outbound campaigns delivers a predictive phone dialer. The dialer includes an answering machine detection system powered by machine learning. It allows the automatic detection of answering machines for voice calls and passes calls to agents only when the call is answered by a human. The dialer also adjusts the call rate depending on factors such as the percentage of calls answered by humans, call duration, and agent availability. There is no integration required to get the benefit of existing Amazon Connect features, such as automated workflows, routing, and machine learning capabilities like Contact Lens. You now have a single system for inbound and outbound communications.

To further refine the customer experience or use multiple channels in your campaigns, for example, to send an SMS or email message to your customers when they do not answer calls, you have the option to use Amazon Pinpoint. Amazon Pinpoint is a flexible and scalable outbound and inbound marketing communications service. It allows you to define customer segments, define the customer journey, define the contact strategy, and more. Amazon Pinpoint is the system handling high-volume SMS and email campaigns.

To better understand how Amazon Connect, Amazon Pinpoint, and other AWS services work together, you can refer to this very detailed blog post.

Let’s show you how it works
Imagine I am a contact center manager, and I want to create an outbound call campaign to target a selected list of customers.

I first import my customer contact list from a spreadsheet on Amazon S3. I may also import it from popular customer relationship management (CRM) and marketing automation applications, such as Marketo, Salesforce, Twilio’s Segment, ServiceNow, Shopify, Zendesk, and Amazon Pinpoint itself.

Amazon Connect outbound campaigns - import contact 2

Then I create a campaign and define some journey parameters: the communication channel, the start time, and the corresponding content, such as a call script, email template, or SMS message. At the scheduled start time, the journey is executed using Amazon Connect for calls or Amazon Pinpoint for SMS or emails, as specified.

Amazon Connect outbound campaigns - create campaign

When I configure the campaign to run in Predictive dial mode, as I mentioned before, the dialer automatically adjusts the dial rate based on the duration of calls and the real-time availability of agents. Once a call is answered, Amazon Connect distinguishes whether it is a live voice or a recorded message and routes the live customer to an available agent in the Amazon Connect agent application, where the agent can see the call script that I specified during setup, along with relevant customer information.

As explained earlier, I may use Amazon Pinpoint to define the customer journey. By doing so, I can combine voice, email, and SMS channels in the same outbound communication campaign to improve the efficiency of my agents and my customer’s experience. For example, a financial institution can use Amazon Connect to send an SMS notification to remind a customer of a missed payment and include a link to request a call back from an agent. When a call is requested, Amazon Connect automatically queues the call, dials the customer’s number, detects their voice, and connects an available agent to the customer.

Amazon Connect outbound campaigns - journey workflow

Amazon Pinpoint allows you to define the details of the customer journey.

Amazon Connect outbound campaigns - setup quiet times

As usual with AWS services, I can analyze contact events sent via Amazon EventBridge. EventBridge is a serverless event bus that makes it easier to build event-driven applications at scale using events generated from your applications, integrated software-as-a-service (SaaS) applications, and AWS services. When filtering or analyzing events posted to EventBridge, I can create metrics such as time to connect to an agent, duration of the contact, and call abandonment rate.
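As an illustration of how such events could be captured, the following boto3 sketch creates an EventBridge rule that matches events emitted by Amazon Connect and forwards them to a placeholder Lambda function for metric computation; the rule name and target ARN are assumptions for this example.

import json
import boto3

events = boto3.client("events")

# Match events emitted by Amazon Connect on the default event bus
events.put_rule(
    Name="connect-contact-events",
    EventPattern=json.dumps({"source": ["aws.connect"]}),
    State="ENABLED",
)

# Placeholder Lambda function that computes campaign metrics from the events
events.put_targets(
    Rule="connect-contact-events",
    Targets=[{
        "Id": "contact-metrics-fn",
        "Arn": "arn:aws:lambda:us-east-1:111122223333:function:contact-metrics",
    }],
)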

These metrics help me understand the status of my campaign and ensure compliance with applicable regulations, such as maximum call abandonment rates. I also can use historical reports of these metrics to understand the effectiveness of all my communications campaigns over time.

Amazon Connect outbound campaigns - journey metrics

Speaking of compliance, we do not want anyone to abuse the system, intentionally or not, or to break any local compliance rules.

Access and Compliance
Using automated services to drive outbound communication campaigns is strictly regulated in several countries and territories. For example, the US adopted the Telephone Consumer Protection Act (TCPA) in 1991, and the United Kingdom’s Office of Communications has similar rules.

Amazon Connect outbound campaigns gives you the tools to stay compliant with these regulations and many others. However, just like with traditional IT security, it is a shared responsibility. It is your responsibility to use the service in a compliant manner. We are happy to assist you in addressing specific use cases.

Let’s share two examples to illustrate how Amazon Connect outbound campaigns can help you meet your compliance status: respect quiet time and monitor call abandonment rate.

The use of quiet times allows contact center managers to configure a schedule for channel communications based on the day of the week and the hours of the day. More precise delivery times means your customers are most likely to engage with the communication and increase metrics such as open rates for SMS and email, as well as pick-up rates for voice calls. It also allows contact center managers to follow country and state-level voice dialing legislation. The following screenshot shows how you can configure quiet times using Amazon Pinpoint.

Amazon Connect outbound campaigns - quiet times

According to TCPA, call abandonment rate is the percentage of calls picked up by a live customer but not connected to a live agent within two seconds after the customer greeting. I found it interesting that in the UK, the time is measured from the start of the customer greeting, while in the US, it is measured from the end of the greeting. Amazon Connect outbound campaigns provides you with metrics, such as customerGreetingStart, customerGreetingStop, and connectedToAgent, for each outbound communication. Contact center managers can use these to compute the abandonment rate and dial the outgoing communication channel up or down accordingly.
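As a rough illustration of how those timestamps could be turned into an abandonment decision, the sketch below applies both the US and UK definitions; the function and its inputs are assumptions for the example only and do not reflect a specific event schema.

from datetime import datetime, timedelta
from typing import Optional

def is_abandoned(greeting_start: datetime, greeting_stop: datetime,
                 connected_to_agent: Optional[datetime], market: str = "US") -> bool:
    # A call counts as abandoned if no agent was connected within two seconds of the
    # reference point: end of the customer greeting in the US, start of it in the UK
    if connected_to_agent is None:
        return True
    reference = greeting_stop if market == "US" else greeting_start
    return connected_to_agent - reference > timedelta(seconds=2)

# Abandonment rate = abandoned calls / calls answered by a live customer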

Other metrics, configuration parameters, and AWS Lambda API integrations allow contact center managers to consult a Do-Not-Call (DNC) registry or a list-scrubbing service, and to verify a customer’s local time zone or bank holiday calendar, to name a few.

Pricing and Availability
Amazon Connect outbound campaigns is available in US East (N. Virginia), US West (Oregon), Asia Pacific (Sydney), and Europe (London) AWS Regions. This allows you to start your outbound campaigns for customers in the USA, UK, Australia, and New Zealand.

As usual, pricing is based on your usage; you only pay for what you use with no upfront or minimum engagement. The key metrics we are using for pricing are the minutes of outbound calls. The pricing page has all the details.

And now, go build your contact centers.

— seb

Registering SMS Sender IDs in Singapore

Post Syndicated from Brent Meyer original https://aws.amazon.com/blogs/messaging-and-targeting/registering-sms-sender-ids-in-singapore/

A few weeks ago, we published a blog post about the process of registering alphanumeric Sender IDs. Today, we’re announcing support for registering Sender IDs in Singapore.

About Sender ID registration in Singapore

Singapore’s Infocomm Media Development Authority (IMDA) has created a Sender ID registry to protect consumers from fraudulent and malicious SMS messages. This registry is called the Singapore SMS Sender ID Registry (SSIR).

The government of Singapore encourages all government agencies and financial institutions to register with SSIR. Organizations and businesses outside of these industries can also register with SSIR.

Currently, there is no requirement to register your Sender ID. However, when you register with the SSIR, your Sender ID becomes a “Protected Sender ID.” Protected Sender IDs help to protect you and your customers by preventing other senders from using your Sender ID.

Note that in order to complete this registration process, your business or organization must have a Unique Entity Number (UEN). Businesses and other organizations receive a UEN when they register with Singapore’s Accounting and Corporate Regulatory Authority.

Registering your Sender ID

The first step in the registration process is to create a Protected Sender ID through the Singapore Network Information Centre (SGNIC). To initiate the registration process, send an email to [email protected]. In your message, include the name of your business, the Sender IDs that you want to register, and a description of your use case. SGNIC may contact you for additional information.

After you register with SGNIC, open a ticket in the AWS Support Center. You can find the procedure for opening a case in the Amazon Pinpoint User Guide. The AWS Support team will respond to your case within 24 hours. Their response includes a template for a letter that shows your intent to register a Sender ID.

The next step is to modify the contents of this letter. The regulatory groups in Singapore require a copy of this letter in order to allow AWS to send messages using your Sender ID. Begin by placing the contents of the letter on your company’s letterhead. Next, modify the fields that are highlighted in yellow. These fields include the following:

  • <Place>: The address of your company or organization.
  • <Brand Owner Company Name>: The name of your company or organization.
  • <Number>: Your Unique Entity Number.
  • <Signature>, <Name>, <Title>: The personal signature, name, and job title of the person who is submitting the request on behalf of your company or organization.
  • <ExampleSenderId1>, <ExampleSenderId2>: The Sender IDs that you intend to register with SGNIC. You can add or remove lines here depending on how many Sender IDs you plan to register.

Once you finish modifying the letter, submit it by attaching it to your existing case in the AWS Support Center.

What happens next?

IMDA regularly sends us lists of new Sender ID registrations. When we receive confirmation that your Sender ID has been registered, we update your account to allow it to send SMS messages through your Sender ID. We will also comment on your Support case to indicate that the process is complete.

Wrapping up

We continue to monitor changes to Sender ID registration requirements around the world. We’re working closely with carriers and organizations around the world to make the registration processes as straightforward as possible for our customers. Check in on this blog regularly to learn more about future regulatory changes.

For more information about registering Sender IDs in Singapore, see Special requirements for Singapore in the Amazon Pinpoint User Guide.

Registering Sender IDs for Sending SMS Messages

Post Syndicated from Brent Meyer original https://aws.amazon.com/blogs/messaging-and-targeting/registering-sender-ids-for-sending-sms-messages/

With Amazon Pinpoint, you can use Sender IDs to send text messages to recipients in various countries around the world. A Sender ID is a short, alphanumeric identifier (such as “AMAZON”) that appears on a recipient’s device when they receive a message from you. A Sender ID is one type of origination identity—that is, an identity that’s used to send text messages. Other types of origination identities include short codes and long codes. Sender IDs are great for branding purposes, because recipients can easily determine who the sender of the message is.

SMS senders who send messages to some countries (such as India or the Philippines) are required to register their SMS use cases and message templates before they can send messages to those countries using a Sender ID. On the Amazon Pinpoint team, we listen to our customers when they tell us which countries they need to send messages to. We regularly add support for registration processes to help our customers reach their end users. In this post, I’ll discuss the purpose of Sender ID registration and provide information about registering Sender IDs.

Why is Sender ID registration required?

The rise of fraudulent and malicious SMS activity around the world means that it’s more important than ever for recipients of SMS messages to trust the Sender ID that is contacting them. To reduce the volume of fraudulent SMS messages reaching their customers, mobile carriers have systems in place to identify and prevent abuse.

Registering Sender IDs helps mobile carriers trace abuse and other issues back to a specific SMS sender. When you register a Sender ID, your messages bypass filters that can throttle or block unregistered traffic. This not only improves deliverability rates, but also helps earn trust, because the sender’s name is consistent and identifiable. AWS has processes for registering your dedicated Sender ID with regulatory agencies and industry groups in several countries.

The future of Sender ID registration

In the months and years ahead, we expect more countries to add Sender ID registration requirements. AWS will continue to work with local network operators to expand the services that we offer to our customers. We carefully monitor the global SMS industry and create new processes when needs arise. Regardless of changes to the regulatory landscape, we strive to offer consistently high, reliable SMS message deliverability rates.

How can I register a Sender ID?

You can find a list of countries that support Sender IDs in Supported countries and regions in the Amazon Pinpoint User Guide. That document also lists the countries that require pre-registration of Sender IDs.

If you plan to send messages to a country that requires Sender ID registration, you must complete the registration process. The registration process can be complicated, with many specific requirements and with different processes in each country. The AWS Support team can work with you to complete your registration. The first step in registering your Sender ID is to create a case with AWS Support. You can find more information about creating a case in Requesting Sender IDs for SMS messaging in the Amazon Pinpoint User Guide.

When you request a Sender ID, we provide you with an estimate of how long the request will take to complete. This estimate is based on the completion times that we’ve seen from other customers. Because each country has its own process, completion times for registration vary by destination country. For example, Sender ID registration in India can be complete in one week or less, whereas it can take six weeks or more in Vietnam. These requests can’t be expedited, because they involve the carriers themselves making changes to the ways that their networks are configured. We suggest that you start your registration process early so that you can start sending messages as soon as you launch your product or service.

When you create a case, it’s important that you check on it regularly. The AWS Support team will provide you with registration materials, such as the forms and cover letters that you must submit to begin the registration process. We recommend that you provide all of the requested information with as much detail as you can. Too much information is better than too little information. We also recommend that you don’t skip any fields in the registration forms that we send you. The carriers require that you provide responses in all of the fields on these forms. This is true even if you believe that a field doesn’t apply to your use case. This might occur if you’re registering a One-Time Password (OTP) use case, and the carriers require you to provide a response to the keyword “STOP.” Although it doesn’t seem logical that customers would want to opt-out of receiving one-time passwords, the carriers in most countries require you to provide recipients with a way to completely opt-out of receiving messages from you.

After you submit your application, it’s also possible that the mobile carriers will have feedback about your application. In this situation, you have to address their concerns before the registration process can continue. Addressing these concerns quickly can help reduce delays in completing your request.

Sender IDs are a great tool for reaching your customers by SMS. You can learn more about sender IDs and the other types of origination identities that Amazon Pinpoint supports in Originating identities for SMS messaging in the Amazon Pinpoint User Guide. Happy sending!

Build a multi-language notification system with Amazon Translate and Amazon Pinpoint

Post Syndicated from Praveen Allam original https://aws.amazon.com/blogs/architecture/build-a-multi-language-notification-system-with-amazon-translate-and-amazon-pinpoint/

Organizations with global operations can struggle to notify their customers of business-related announcements or notifications in different languages. Their customers want to receive notifications in their local language, through their preferred communication channel. Organizations often rely on complicated third-party services or individuals to manually translate the notifications. This can lead to a loss of revenue due to delayed communication and additional operational expenses.

This blog post demonstrates how to build a straightforward, cost-effective, and scalable multi-language notification system using AWS Serverless technologies. You can post a business-related announcement or notification in English, and based on the customer profile data, it will convert this announcement or notification into different languages. Additionally, the system will also deliver these translated announcements or notifications as an email, voice, or SMS.

Example of a multi-language notification use case

A restaurant franchise company is adding a new item to their menu and plans to release it in North America, Germany, and France. The corporate office has decided to send the following notification.

The company is adding a new item to the menu, and this will go live by May 10. Please ensure you are prepared for this change and plan accordingly.

The franchise owners in Germany want to receive the notifications in the German language, whereas the franchise owners in France want to receive it in French. North American franchises want to receive it in English.

Solution design for multi-language notification system

The solution in Figure 1 demonstrates how to build a multi-language notification system using Amazon Translate and Amazon Pinpoint.

AWS Serverless technologies handle automatic scaling, provide built-in high availability, and offer a pay-for-use billing model, which increases agility and optimizes costs. The system built with this solution is invoked using REST API endpoints. Once this solution is deployed, it can be integrated with any frontend application where users can log in and send out notification events.

Figure 1 illustrates the architecture of this solution.

Solution architecture for multi-language notification system. It includes all the AWS services that are required in this solution. The flow is described as follows.

Figure 1. Solution architecture for multi-language notification system

1. The restaurant franchise will log in to their UI to type the notification message in English. Upon submission, the notification message is sent to the Amazon API Gateway REST endpoint.
Note: In this solution, there is no UI available. You will use a terminal to submit the message.

2. Amazon API Gateway will send this message to Amazon Simple Queue Service (SQS), which will keep the HTTP requests asynchronous.

3. The SQS queue will invoke the SQS AWS Lambda function.

4. The SQS Lambda function invokes the AWS Step Functions state machine. This SQS Lambda function is used as a proxy mechanism to start the state machine workflow. AWS Step Functions is used to orchestrate the notification workflow process. The workflow validates the message, converts it into different languages, and notifies the customers through their preferred way of communication (email, voice, or SMS). It also handles errors, if any of the steps fail, by using an SQS dead-letter queue.

5. The message entered must be validated to ensure that organizational standards are followed. To perform the message validation, we use the Amazon Comprehend service. Comprehend’s sentiment analysis determines whether to send or flag the message, and all flagged messages are sent for review (a minimal sketch of this check follows the examples below).

  • In the example use case message preceding, the message sentiment neutral score is 0.85 confidence. If you set the acceptable score to anything greater than 0.5 confidence, then it is a valid message. Once it passes the validation step, the workflow will proceed to the next step.
  • If the message is vague or unclear, the sentiment score might be less than 0.5 confidence. For example, with the message We are adding a dish; be ready for it, the sentiment score might be only 0.45 confidence. This is below the acceptable score, and the message will not be processed further.
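A minimal sketch of this validation step is shown below; the threshold mirrors the example above, and the function name and simplified error handling are assumptions rather than the solution's exact code.

import boto3

comprehend = boto3.client("comprehend")

ACCEPTABLE_NEUTRAL_SCORE = 0.5  # threshold used in the example above

def is_valid_message(message: str) -> bool:
    # Vague or unclear announcements tend to score below the threshold
    result = comprehend.detect_sentiment(Text=message, LanguageCode="en")
    return result["SentimentScore"]["Neutral"] >= ACCEPTABLE_NEUTRAL_SCORE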

6. After the message is successfully validated, the message is translated into various languages depending on the customers’ profiles. The Translate Lambda function determines the number of unique languages by referring to the customer profile data in the Amazon DynamoDB table. The function then uses Amazon Translate to translate the message into the different languages required for that notification event. In our example use case, the converted messages will look as follows (a sketch of the translation call follows these examples):

  • German (de):

Das Unternehmen fügt dem Menü einen neuen Punkt hinzu, der bis zum 10. Mai live geschaltet wird. Bitte stellen Sie sicher, dass Sie auf diese Änderung vorbereitet sind und planen Sie entsprechend.

  • French (fr):

La société ajoute un nouvel article au menu, qui sera mis en ligne d’ici le 10 mai. Assurez-vous d’être prêt pour ce changement et de planifier en conséquence.
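A minimal sketch of the translation call used in this step; the language codes (for example "de" and "fr") come from the customer profiles, and error handling is omitted.

import boto3

translate = boto3.client("translate")

def translate_message(message: str, target_language: str) -> str:
    # Translate the English announcement into the customer's preferred language
    result = translate.translate_text(
        Text=message,
        SourceLanguageCode="en",
        TargetLanguageCode=target_language,  # for example "de" or "fr"
    )
    return result["TranslatedText"]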

7. The last step in the workflow is to build the notification logic and deliver the notifications. The Amazon Pinpoint Lambda function retrieves the customer’s profile from the Amazon DynamoDB table. It then parses each record for a given notification event to find the delivery mode (email, voice, or SMS message). The function then builds the notification logic using Amazon Pinpoint, and Amazon Pinpoint notifies each customer by email, voice, or SMS, as shown in the sketch below.
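A simplified sketch of the SMS delivery path in this step; the project ID is a placeholder, and the email and voice paths use the corresponding message configurations in the same request structure.

import boto3

pinpoint = boto3.client("pinpoint")

def send_sms(application_id: str, phone_number: str, message: str) -> None:
    # Deliver the translated notification to a single SMS endpoint
    pinpoint.send_messages(
        ApplicationId=application_id,  # placeholder Amazon Pinpoint project ID
        MessageRequest={
            "Addresses": {phone_number: {"ChannelType": "SMS"}},
            "MessageConfiguration": {
                "SMSMessage": {"Body": message, "MessageType": "TRANSACTIONAL"}
            },
        },
    )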

Code repository

The code for this solution is available on GitHub. Review the README file for detailed instructions on how to download and run the solution in your AWS account.

Conclusion

Organizations that operate on an international basis often struggle to build a multi-language notification system to communicate business-related announcements or notifications to their customers in different languages. Communicating these announcements or notifications in a variety of formats such as email, voice, and SMS can be time-consuming. Our solution addresses these challenges using AWS services with fewer steps than traditional third-party options. This solution also features automatic scaling, built-in high availability, and a pay-for-use billing model to increase agility and optimize costs. These technologies not only decrease infrastructure management tasks like capacity provisioning and patching, but also provide a better customer experience.

Further reading:

Incident notification mechanism using Amazon Pinpoint two-way SMS

Post Syndicated from Pavlos Ioannou Katidis original https://aws.amazon.com/blogs/messaging-and-targeting/incident-notification-mechanism-using-amazon-pinpoint-two-way-sms/

Unexpected situations that require immediate attention can occur in any industry. Part of resolving these incidents is the notifications’ delivery. For example, utility companies that have installed gas sensors need to notify immediately the available engineer if a leak occurs.

The goal of an incident management process is to restore a normal service operation as quickly as possible and to minimize the impact on business operations, thus ensuring that the best possible levels of service quality and availability are maintained. A key element of incident management is sending timely notifications to the assigned or available resource(s) who can rectify the issue.

An incident can take place at any time and the resource(s) assigned to it might not have internet access and even if they receive the message they might not be equipped to work on it. This creates five key requirements for an incident notifications mechanism:

  1. Notify the resources via a communication channel that ensures message delivery even without internet access
  2. Enable assigned resources to respond to a request via a communication channel that doesn’t require internet access
  3. Send reminder(s) in case there is no response from the assigned resource(s)
  4. Escalate to another resource in case the first one doesn’t reply or declines the incident
  5. Store the incident details & status for reporting and data analysis

In this blog post, I share a solution on how you can automate the delivery of incident notifications. This solution utilizes Amazon Pinpoint SMS channel to contact the designated resources who might not have access to the internet. Furthermore, the recipient of the SMS is able to reply with an acknowledgement. AWS Step Functions orchestrates the user journey using AWS Lambda functions to evaluate the recipients’ response and trigger the next best action. You will use AWS CloudFormation to deploy this solution.

Use Cases

An incident notification mechanism can vary depending on the organization’s requirements and third-party system integrations. In this blog post, the solution covers all five points listed above, but it might require further modifications depending on your use case.

With minor modifications this solution can also be used in the following use cases:

  1. Medicine intake notification: Notify the patient via SMS that it is time to take their medicine. If the patient doesn’t acknowledge the SMS by replying, the notification can be escalated to their assigned doctor
  2. Assignment submission: Notify the student that their assignment is due. If the student doesn’t acknowledge the SMS by replying, the notification can be escalated to their teacher

High-level Architecture

The solution requires the country of your SMS recipients to support two-way SMS. To check which countries support two-way SMS, visit this page. If two-way SMS is supported, you will need to request a dedicated originating identity. You can also use a Toll-Free Number or 10DLC if your recipients are in the US.

Note: Sender ID doesn’t support two-way SMS.

A new incident is represented as an item in an Amazon DynamoDB table containing information such as the description, URL, and incident_id, as well as the contact numbers for two resources. A resource is someone who has been assigned to work on this incident. The second resource is for escalation purposes in case the first one doesn’t acknowledge or declines the incident notification.

The Amazon DynamoDB table serves three functions in this solution:

  1. A way to add new incidents, either using the AWS console or programmatically
  2. Storage for variables that indicate the incident’s status and are used by the solution to determine the next action(s)
  3. Historical storage of all incidents that have been created, for data analysis purposes

The solution utilizes Amazon DynamoDB Streams to invoke an AWS Lambda function every time a new incident is created. The AWS Lambda function triggers an AWS Step Function State machine, which orchestrates three AWS Lambda functions:

  1. Send_First_SMS: Sends the first SMS
  2. Reminder_SMS: Sends a reminder SMS if the resource does not acknowledge the first SMS
  3. Incident_State_Review: Assesses the status of the incident and either goes back to the first AWS Lambda function or finishes the AWS Step Function State machine execution
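As a reference, a minimal sketch of the stream-triggered AWS Lambda function that starts the State machine could look like the example below. The state machine ARN environment variable and the exact item attributes passed as input are assumptions.

import json
import os
import boto3

sfn = boto3.client("stepfunctions")
STATE_MACHINE_ARN = os.environ["STATE_MACHINE_ARN"]  # assumption: provided by the CloudFormation stack

def lambda_handler(event, context):
    """Start one state machine execution per newly inserted incident."""
    for record in event["Records"]:
        if record["eventName"] != "INSERT":
            continue
        new_image = record["dynamodb"]["NewImage"]
        incident = {
            "incident_id": new_image["incident_id"]["S"],
            "first_contact": new_image["first_contact"]["S"],
            "second_contact": new_image["second_contact"]["S"],
        }
        sfn.start_execution(
            stateMachineArn=STATE_MACHINE_ARN,
            input=json.dumps(incident),
        )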

The AWS Step Functions State machine uses the Choice state, which evaluates the response of the previous AWS Lambda function and decides on the next state. This is a very useful feature that can reduce custom code and potentially AWS Lambda invocations, resulting in cost savings.

Additionally, the waiting between steps is also managed by the AWS Step Functions State machine using the Wait state. This can be configured to wait for a number of seconds or days, or until a specific point in time.

To be able to receive SMS, this solution uses Amazon Pinpoint’s two-way SMS feature. When receiving an SMS, Amazon Pinpoint sends a payload to an Amazon SNS topic, which needs to be created separately. An AWS Lambda function that is subscribed to the Amazon SNS topic processes the SMS content and performs one or both of the following actions:

  1. Update the incident status in the DynamoDB table
  2. Create a new Step Function State machine execution

In this solution SMS recipients can reply by typing either yes or no. The SMS response is not case sensitive.

An inbound SMS payload contains the originationNumber, destinationNumber, messageKeyword, messageBody, inboundMessageId and previousPublishedMessageId. Notably, there isn’t a direct way to associate an inbound SMS with an incident. To overcome this challenge, this solution uses a second DynamoDB table, which stores the message_id and incident_id every time an SMS is sent to either of the two resources. This allows the solution to use the previousPublishedMessageId from the inbound SMS payload to fetch the respective incident_id from the second DynamoDB table.

The code in this solution uses AWS SDK for Python (Boto3).
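To illustrate the flow described above, a simplified sketch of the Lambda function that processes the inbound SMS could look like the following. The table names, key names, and the status values written back to the incident table are assumptions for this example.

import json
import boto3

dynamodb = boto3.resource("dynamodb")
lookup_table = dynamodb.Table("MessageIdIncidentLookup")   # assumption: message_id -> incident_id table
incident_table = dynamodb.Table("IncidentInfoDynamoDB")    # assumption: incident table name

def lambda_handler(event, context):
    """Process an inbound SMS delivered by Amazon Pinpoint through the SNS topic."""
    for record in event["Records"]:
        payload = json.loads(record["Sns"]["Message"])
        reply = payload["messageBody"].strip().lower()       # "yes" / "no", case insensitive
        previous_id = payload["previousPublishedMessageId"]

        # Map the reply back to the incident via the id of the outbound SMS
        item = lookup_table.get_item(Key={"message_id": previous_id}).get("Item")
        if not item:
            continue
        new_status = "acknowledged" if reply == "yes" else "declined"  # illustrative status values
        incident_table.update_item(
            Key={"incident_id": item["incident_id"]},
            UpdateExpression="SET incident_stat = :s",
            ExpressionAttributeValues={":s": new_status},
        )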

Prerequisites

  1. An Amazon Pinpoint project with the SMS channel enabled – Guide on how to enable Amazon Pinpoint SMS channel
  2. Check if the country you want to send SMS to, supports two-way SMS – List with countries that support two-way SMS
  3. An originating identity that supports two-way SMS – Guide on how to request a phone number
  4. Increase your monthly SMS spending quota for Amazon Pinpoint – Guide on how to increase the monthly SMS spending quota

Deploy the solution

Step 1: Create an S3 bucket

  1. Navigate to the Amazon S3 console
  2. Select Create bucket
  3. Enter a unique name for Bucket name
  4. Select the AWS Region to be the same as the one of your Amazon Pinpoint project
  5. Scroll to the bottom of the page and select Create bucket
  6. Follow this link to download the GitHub repository. Once the repository is downloaded, unzip it and navigate to  \amazon-pinpoint-incident-notifications-mechanism-main\src
  7. Access the S3 bucket created above and upload the five .zip files

Step 2: Create a stack

  1. The application is deployed using an AWS CloudFormation template.
  2. Navigate to the AWS CloudFormation console select Create stack > With new resources (standard)
  3. Select Template is ready as Prerequisite – Prepare template and choose Upload a template file as Template source
  4. Select Choose file and from the GitHub repository downloaded in step 1.6 navigate to amazon-pinpoint-incident-notifications-mechanism-main\cfn upload CloudFormation_template.yaml and select Next
  5. Type Pinpoint-Incident-Notifications-Mechanism as Stack name, paste the S3 bucket name created in step 1.5 as the LambdaCodeS3BucketName, type the Amazon Pinpoint Originating Number in E.164 format as OriginatingIdenity, paste the Amazon Pinpoint project ID as PinpointProjectId and type 40 for WaitingBetweenSteps
  6. Select Next until you reach Step 4 Review, where you will need to check the box I acknowledge that AWS CloudFormation might create IAM resources and then select Create Stack
  7. The stack creation process takes approximately 2 minutes. Click the refresh button to get the latest events regarding the deployment status. Once the stack has been deployed successfully, you should see the last Event with Logical ID Pinpoint-Incident-Notifications-Mechanism and Status CREATE_COMPLETE

Step 3: Configure two-way SMS SNS topic

  1. Navigate to the Amazon Pinpoint console > SMS and voice > Phone numbers. Select the originating identity that supports two-way SMS. Scroll to the bottom of the page, expand the Two-way SMS section, and check the box to enable it.

    For SNS topic select Choose an existing SNS topic then using the drop down choose the one that contains the name of the AWS CloudFormation stack from Step 2.4 as well as the name TwoWaySMSSNSTopic and click Save.

Step 4: Create a new incident

To create a new incident, navigate to Amazon DynamoDB console > Tables and select the table containing the name of the AWS CloudFormation stack from Step 2.4 as well as the name IncidentInfoDynamoDB. Select View items and then Create item.

On the Create item page choose JSON, copy and paste the JSON below into the text box and replace the values for the first_contact and second_contact with a valid mobile number that you have access to.

Note: If you don’t have two different mobile numbers, enter the same for both first_contact and second_contact fields. The mobile numbers must follow E.164 format +<country code><number>.

{
   "incident_id":{
      "S":"123"
   },
   "incident_stat":{
      "S":"not_acknowledged"
   },
   "double_escalation":{
      "S":"no"
   },
   "description":{
      "S":"Error 111, unit 1 malfunctioned. Urgent assistance is required."
   },
   "url":{
      "S":"https://example.com/incident/111/overview"
   },
   "first_contact":{
      "S":"+4479083---"
   },
   "second_contact":{
      "S":"+4479083---"
   }
}

Incident fields description:

  • incident_id: Needs to be unique
  • incident_stat: This is used by the application to store the incident status. When creating the incident, this value should always be not_acknowledged
  • double_escalation: This is used by the application as a flag for recipients who try to escalate an incident that has already been escalated. When creating the incident, this value should always be no
  • description: Type a description that best describes the incident. Be aware that, depending on the number of characters, the number of SMS parts will increase. For more information on SMS character limits visit this page
  • url: You can add a URL that resources can access to resolve the issue. If this field is not pertinent to your use case, type no url
  • first_contact: This should contain the mobile number in E.164 format for the first resource
  • second_contact: This should contain the mobile number in E.164 format for the second resource. The second resource will be contacted only if the first one does not acknowledge the SMS or declines the incident

Once the above is ready, select Create item. This will execute the AWS Step Functions State machine and you should receive an SMS. You can reply with yes to acknowledge the incident or with no to decline it. Depending on your response, the incident status in the DynamoDB table will be updated, and if you reply no, the incident will be escalated by sending an SMS to the second_contact.

Note: The SMS response is not case sensitive.

Clean-up

To remove the solution:

  1. Delete the AWS CloudFormation stack by following the steps listed in this guide
  2. Delete the dedicated originating identity that you used to send the SMS by following the steps listed in this guide
  3. Delete the Amazon Pinpoint project by navigating to the Amazon Pinpoint console, selecting your Amazon Pinpoint project, and choosing Settings > General settings > Delete Project

Next Steps

This solution currently works only if your SMS recipients are in one country. If your use case requires sending SMS to multiple countries, you will need to:

  • Check this page to ensure that these countries support two-way SMS
  • Follow the instructions in this page to obtain a number that supports two-way SMS for each country
  • Expand the solution to identify the country of the SMS recipient and to choose the correct number accordingly. To identify the country of the SMS recipient you can use Amazon Pinpoint’s phone number validate service via the Amazon Pinpoint API or SDKs. The phone number validate service returns a list of data points per mobile number, one of them being the Country (a minimal sketch follows this list)
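Below is a minimal sketch of how the phone number validate service can be called with Boto3 to derive the recipient’s country; the mapping of countries to originating identities is illustrative.

import boto3

pinpoint = boto3.client("pinpoint")

def get_destination_country(phone_number: str) -> str:
    """Return the ISO country code for a recipient using the phone number validate service."""
    response = pinpoint.phone_number_validate(
        NumberValidateRequest={"PhoneNumber": phone_number}
    )
    return response["NumberValidateResponse"]["CountryCodeIso2"]

# Example: pick the originating identity registered for the recipient's country
# (the mapping below is illustrative)
ORIGINATION_NUMBERS = {"GB": "+44XXXXXXXXXX", "US": "+1XXXXXXXXXX"}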

Incidents that are not acknowledged by any of the assigned resources have their status updated to unacknowledged, but they don’t escalate further. Depending on your requirements, you can expand the solution to send an email using Amazon Pinpoint APIs or perform an outbound call using Amazon Connect APIs.

Conclusion

In this blog post, I have demonstrated how your organization can use Amazon Pinpoint two-way SMS and Step Functions to automate incident notifications. Furthermore, the solution highlights the synergy of AWS services and how you can build a custom solution with little effort that meets your requirements.

About the Author

Pavlos Ioannou Katidis

Pavlos Ioannou Katidis is an Amazon Pinpoint and Amazon Simple Email Service Specialist Solutions Architect at AWS. He loves to dive deep into his customer’s technical issues and help them design communication solutions. In his spare time, he enjoys playing tennis, watching crime TV series, playing FPS PC games, and coding personal projects.

Queueing Amazon Pinpoint API calls to distribute SMS spikes

Post Syndicated from satyaso original https://aws.amazon.com/blogs/messaging-and-targeting/queueing-amazon-pinpoint-api-calls-to-distribute-sms-spikes/

Organizations across industries and verticals have user bases spread around the globe. Amazon Pinpoint enables them to send SMS messages to a global audience through a single API endpoint, and the messages are routed to destination countries by the service. Amazon Pinpoint utilizes downstream SMS providers to deliver the messages, and these providers offer a limited, country-specific threshold for sending SMS (referred to as Transactions Per Second or TPS). These thresholds are imposed by telecom regulators in each country to prohibit spamming. If customer applications send more messages than the threshold for a country, downstream SMS providers may reject the delivery.

Such scenarios can be avoided by ensuring that upstream systems do not send more than the permitted number of messages per second. This can be achieved using one of the following mechanisms:

  • Implement rate-limiting on upstream systems which call Amazon Pinpoint APIs.
  • Implement queueing and consume jobs at a pre-configured rate.

While rate-limiting and exponential backoffs are regarded as best practices for many use cases, they can cause significant delays in message delivery in instances where message throughput is very high. Furthermore, relying solely on a rate-limiting technique eliminates the potential to maximize throughput per country and prioritize communications accordingly. In this blog post, we evaluate a solution based on Amazon SQS queues and how they can be leveraged to ensure that messages are sent with predictable delays.

Solution Overview

The solution consists of an Amazon SNS topic that filters and fans out incoming messages to a set of Amazon SQS queues based on a country parameter in the incoming JSON payload. The messages from the queues are then processed by AWS Lambda functions that in turn invoke the Amazon Pinpoint APIs across one or more Amazon Pinpoint projects or accounts. The following diagram illustrates the architecture:

Step 1: Ingesting message requests

Upstream applications post messages to a pre-configured SNS topic instead of calling the Amazon Pinpoint APIs directly. This allows applications to post messages at a rate that is higher than Amazon Pinpoint’s TPS limits per country. For applications that are hosted externally, an Amazon API Gateway can also be configured to receive the requests and publish them to the SNS topic – allowing features such as routing and authentication.
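For illustration, publishing a message request to the SNS topic with a country message attribute might look like the following sketch; the topic ARN and the payload fields are assumptions.

import json
import boto3

sns = boto3.client("sns")
TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:sms-requests"  # illustrative ARN

def submit_sms_request(phone_number: str, body: str, country: str):
    """Publish an SMS request; the country attribute drives SNS subscription filtering."""
    return sns.publish(
        TopicArn=TOPIC_ARN,
        Message=json.dumps({"destination": phone_number, "body": body, "country": country}),
        MessageAttributes={
            "country": {"DataType": "String", "StringValue": country}
        },
    )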

Step 2: Queueing and prioritization

The SNS topic implements message filtering based on the country parameter and sends incoming JSON messages to separate SQS queues. This allows configuring downstream consumers based on the priority of individual queues and processing these messages at different rates.

The algorithm and attribute used for implementing message filtering can vary based on requirements. Similarly, filtering can be enabled based on business use-cases such as “REMINDERS”, “VERIFICATION”, “OFFERS”, “EVENT NOTIFICATIONS” and so on. In this example, the messages are filtered based on a country attribute, shown below:
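As an example of what such a filter could look like, the sketch below subscribes a per-country SQS queue to the topic with a filter policy on the country attribute; the ARNs and the country value are illustrative.

import json
import boto3

sns = boto3.client("sns")

# Subscribe the per-country SQS queue and attach a filter policy on the "country" attribute.
# Topic/queue ARNs and the country value are illustrative.
sns.subscribe(
    TopicArn="arn:aws:sns:us-east-1:123456789012:sms-requests",
    Protocol="sqs",
    Endpoint="arn:aws:sqs:us-east-1:123456789012:sms-requests-IN",
    Attributes={"FilterPolicy": json.dumps({"country": ["IN"]})},
    ReturnSubscriptionArn=True,
)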

Based on the filtering logic implemented, the messages are delivered to the corresponding SQS queues for further processing. Delivery failures are handled through a Dead Letter Queue (DLQ), enabling messages to be retried and pushed back into the queues.

Step 3: Consuming queue messages at fixed-rate

The messages from SQS queues are consumed by AWS Lambda functions that are configured per queue. These are lightweight functions that read messages in pre-configured batch sizes and call the Amazon Pinpoint Send Messages API. API call failures are handled through 1/ exponential backoff within the AWS SDK calls and 2/ DLQs set up as destination configs on the Lambda functions. The Amazon Pinpoint Send Messages API is a batch API that allows sending messages to 100 recipients at a time. As such, it is possible for requests to succeed partially – messages within a single API call that fail or throttle should also be sent to the DLQ and retried.

The Lambda functions are configured to run at a fixed reserve concurrency value. This ensures that a fixed rate of messages is fetched from the queue and processed at any point of time. For example, a Lambda function receives messages from an SQS queue and calls the Amazon Pinpoint APIs. It has a reserved concurrency of 10 with a batch size of 10 items. The SQS queue rapidly receives 1,000 messages. The Lambda function scales up to 10 concurrent instances, each processing 10 messages from the queue. While it takes longer to process the entire queue, this results in a consistent rate of API invocations for Amazon Pinpoint.
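A simplified sketch of such a queue consumer is shown below. The payload fields and the throttling check are assumptions; the actual solution may batch recipients differently.

import json
import os
import boto3

pinpoint = boto3.client("pinpoint")
APPLICATION_ID = os.environ["PINPOINT_APPLICATION_ID"]  # assumption

def lambda_handler(event, context):
    """Consume a batch of SQS records and send each as an SMS through Amazon Pinpoint."""
    for record in event["Records"]:
        message = json.loads(record["body"])
        response = pinpoint.send_messages(
            ApplicationId=APPLICATION_ID,
            MessageRequest={
                "Addresses": {message["destination"]: {"ChannelType": "SMS"}},
                "MessageConfiguration": {
                    "SMSMessage": {"Body": message["body"], "MessageType": "TRANSACTIONAL"}
                },
            },
        )
        # Raise on throttling so the record is retried from the queue or lands in the DLQ
        result = response["MessageResponse"]["Result"][message["destination"]]
        if result["DeliveryStatus"] == "THROTTLED":
            raise RuntimeError(f"Throttled sending to {message['destination']}")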

Step 4: Monitoring and observability

Monitoring tools record performance statistics over time so that usage patterns can be identified. Timely detection of a problem (ideally before it affects end users) is the first step in observability. Detection should be proactive and multi-faceted, including alarms when performance thresholds are breached. For the architecture proposed in this blog, observability is enabled by using Amazon Cloudwatch and AWS X-Ray.

Some of the key metrics that are monitored using Amazon Cloudwatch are as follows:

  • Amazon Pinpoint
    • DirectSendMessagePermanentFailure
    • DirectSendMessageTemporaryFailure
    • DirectSendMessageThrottled
  • AWS Lambda
    • Invocations
    • Errors
    • Throttles
    • Duration
    • ConcurrentExecutions
  • Amazon SQS
    • ApproximateAgeOfOldestMessage
    • NumberOfMessagesSent
    • NumberOfMessagesReceived
  • Amazon SNS
    • NumberOfMessagesPublished
    • NumberOfNotificationsDelivered
    • NumberOfNotificationsFailed
    • NumberOfNotificationsRedrivenToDlq

AWS X-Ray helps developers analyze and debug production, distributed applications, such as those built using a microservices architecture. With X-Ray, you can understand how the application and its underlying services are performing, to identify and troubleshoot the root cause of performance issues and errors. X-Ray provides an end-to-end view of requests as they travel through your application, and shows a map of your application’s underlying components.

Notes:

  1. If you are using Amazon Pinpoint’s Campaign or Journey feature to deliver SMS to recipients in various countries, you do not need to implement this solution. Amazon Pinpoint will drain messages depending on the MessagesPerSecond configuration pre-defined in the campaign/journey settings.
  2. If you need to send transactional SMS to a small number of countries (one or two), you should instead work with AWS Support to define your SMS sending throughput for those countries to accommodate spiky SMS message traffic.

Conclusion

This post shows how customers can leverage Amazon Pinpoint along with Amazon SQS and AWS Lambda to build, regulate and prioritize SMS deliveries across multiple countries or business use-cases. This leads to predictable delays in message deliveries and provides customers with the ability to control the rate and priority of messages sent using Amazon Pinpoint.


About the Authors

Satyasovan Tripathy works as a Senior Specialist Solution Architect at AWS. He is situated in Bengaluru, India, and focuses on the AWS Digital User Engagement product portfolio. He enjoys reading and travelling outside of work.

Rajdeep Tarat is a Senior Solutions Architect at AWS. He lives in Bengaluru, India and helps customers architect and optimize applications on AWS. In his spare time, he enjoys music, programming, and reading.

Enhance Your Contact Center Solution with Automated Voice Authentication and Visual IVR

Post Syndicated from Soonam Jose original https://aws.amazon.com/blogs/architecture/enhance-your-contact-center-solution-with-automated-voice-authentication-and-visual-ivr/

Recently, the Accenture AWS Business Group (AABG) assisted a customer in developing a secure and personalized Interactive Voice Response (IVR) contact center experience that receives and processes payments and responds to customer inquiries.

Our solution uses Amazon Connect at its core to help customers efficiently engage with customer service agents. To ensure transactions are completed securely and to prevent fraud, the architecture provides voice authentication using Amazon Connect Voice ID and a visual portal to submit payments. The visual IVR feature allows customers to easily provide the required information online while the IVR is on standby. The solution also provides agents the information they need to effectively and efficiently understand and resolve callers’ inquiries, which helps improve the quality of their service.

Overview of solution

Our IVR is designed using Contact Flows on Amazon Connect and uses the following services:

  • Amazon Lex provides the voice-based intent analysis. Intent analysis is the process of determining the underlying intention behind customer interactions.
  • Amazon Connect integrates with other AWS services using AWS Lambda.
  • Amazon DynamoDB stores customer data.
  • Amazon Pinpoint notifies customers via text and email.
  • AWS Amplify provides the customized agent dashboard and generates the visual IVR portal.

Figure 1 shows how this architecture routes customer calls:

  1. Callers dial the main line to interact with the IVR in Amazon Connect.
  2. Amazon Connect Voice ID sets up a voiceprint for first time callers or performs voice authentication for repeat callers for added security.
  3. Upon successful voice authentication, callers can proceed to IVR self-service functions, such as checking their account balance or making a payment. Amazon Lex handles the voice intent analysis.
  4. When callers make a payment request, they are given the option to be handed off securely to a visual IVR portal to process their payment.
  5. If a caller requests to be connected to an agent, the agent will be presented with the customer’s information and IVR interaction details on their agent dashboard.

Figure 1. Architecture diagram

Customer IVR experience

Figure 2 describes how callers navigate through the IVR:

  1. The IVR asks the caller the purpose of the call.
  2. The caller’s answer is sent for voice intent analysis. The IVR also attempts to authenticate the caller’s voice using Amazon Connect Voice ID. If authenticated, the caller is automatically routed to the correct flow based on the analyzed intent.
    • For the “Account Balance” flow, the caller is provided the account balance information.
    • For the “Make a Payment” flow, the caller can use the IVR or a visual IVR portal to process the payment. Upon payment completion, the caller is immediately notified their transaction has completed via SMS or email. Both flows allow the caller to be transferred to an agent. The caller also has the option to be called back when an agent becomes available or choose a specific date and time for the callback.

Figure 2. Customer IVR experience diagram

The intelligent self-service IVR solution includes the following features:

  • The IVR can redirect callers to a payment portal for scenarios like making a payment while the IVR remains on standby.
  • IVR transaction tracking helps agents understand the current status of the caller’s transaction and quickly determines the caller’s situation.
  • Callers have the option to receive a call as soon as the next agent becomes available or they can schedule a time that works for them to receive a callback.
  • IVR activity logging gives agents a detailed summary of the caller’s actions within the IVR.
  • Transaction confirmation, which notifies callers of successful transactions via SMS or email.

Solution walkthrough

Amazon Connect Voice ID authenticates a caller’s voice as an added level of security. It requires 30 seconds to create the initial enrollment voiceprint and 10 seconds of a caller’s voice to authenticate. If there is not enough net speech to perform the voice authentication, the IVR asks the caller more questions, such as their first name and last name, until it has collected enough net speech.

The IVR falls back to dual tone multi-frequency (DTMF) input for the caller’s credentials in case the system cannot successfully authenticate. This can include information like the last four digits of their national identification number or postal code.

In contact flows, you will enable voice authentication by adding the “Set security behavior” contact block and specifying the authentication threshold, as shown in Figure 3.


Figure 3. Set security behavior contact block

Figure 4 shows the “Check security status” contact block, which determines whether the user has been successfully authenticated. It also shows the results it may return if the caller is not successfully authenticated, including “Not authenticated,” “Inconclusive,” “Not enrolled,” “Opted out,” and “Error.”


Figure 4. Check security status contact block

Providing a personalized experience for callers

To provide a personalized experience for callers, sample customer data is stored in a DynamoDB table. A Lambda function queries this table when callers call the contact center. The query returns information about the caller, such as their name, so the IVR can offer a customized greeting.
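A simplified sketch of that lookup Lambda function is shown below; the table name, key, and returned attribute names are assumptions for this example. Amazon Connect passes the caller’s number in the contact data of the invocation event, and the returned key-value pairs become contact attributes in the flow.

import boto3

dynamodb = boto3.resource("dynamodb")
customer_table = dynamodb.Table("CustomerData")  # assumption: sample customer data table

def lambda_handler(event, context):
    """Look up the caller in DynamoDB and return attributes for a personalized greeting."""
    # Amazon Connect passes the caller's number in the contact data of the invocation event
    phone_number = event["Details"]["ContactData"]["CustomerEndpoint"]["Address"]

    item = customer_table.get_item(Key={"phone_number": phone_number}).get("Item", {})

    # Attributes returned here become contact attributes usable in the contact flow
    return {
        "first_name": item.get("first_name", ""),
        "account_balance": str(item.get("account_balance", "")),
        "payment_status": item.get("payment_status", ""),
    }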

Transaction tracking

The table can also be queried to check whether a customer previously called and attempted to make a payment but didn’t complete it successfully. This feature is called “transaction tracking.” Here’s how it works:

  • When the caller progresses through the “make a payment” flow, a field in the table is updated to reflect their transaction’s status.
  • If the payment is abandoned, the status in the table remains open, and the IVR prompts the caller to pick up where they left off the next time they call.
  • Once they have successfully completed their payment, we update the status in the table to “complete.”
  • When the IVR confirms that the caller’s payment has gone through, they will receive a confirmation via SMS and email. A Lambda function in the contact flow receives the caller’s phone number and email address. Then it distributes the confirmation messages via Amazon Pinpoint.

If a call is escalated to an agent, the “Check contact attributes” contact block in Figure 5 helps to check the caller’s intent and provide the agent with a customized whisper.


Figure 5. Agent whisper sample contact flow

Making payments via the payment portal

To make a payment, an Amazon Lex bot presents the caller with the option to provide payment details over the phone or through a visual IVR portal.

If they choose to use the visual IVR portal (Figure 6), they can enter their payment details while maintaining an open phone connection with the contact center, in case they need additional assistance. Here’s how it works:

  • When callers select the payment portal, this triggers a Lambda function that generates a universally unique identifier (UUID) and provides the caller with a unique PIN.
  • The UUID and PIN are stored in the DynamoDB table along with the caller’s information.
  • Another Lambda function generates a secure link using the UUID. It then uses Amazon Pinpoint to send the link to the caller over text message to their phone number on record. When they open the link, they are prompted to enter their unique PIN.
  • Then, the webpage makes an API call that validates the payment request by comparing the entered PIN to the PIN stored in the DynamoDB table.
  • Once validated, the caller can enter their payment information.

Figure 6. Visual IVR portal
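For illustration, the Lambda function that creates the payment session and texts the secure link might resemble the sketch below; the table name, portal URL, and PIN format are assumptions.

import os
import random
import uuid
import boto3

dynamodb = boto3.resource("dynamodb")
pinpoint = boto3.client("pinpoint")
payment_table = dynamodb.Table("PaymentSessions")          # assumption
APPLICATION_ID = os.environ["PINPOINT_APPLICATION_ID"]     # assumption
PORTAL_BASE_URL = "https://payments.example.com"           # assumption

def create_payment_session(phone_number: str) -> str:
    """Create a payment session, store it, and text the caller a unique portal link."""
    session_id = str(uuid.uuid4())
    pin = f"{random.randint(0, 9999):04d}"

    payment_table.put_item(Item={"session_id": session_id, "pin": pin,
                                 "phone_number": phone_number, "status": "open"})

    link = f"{PORTAL_BASE_URL}/pay/{session_id}"
    pinpoint.send_messages(
        ApplicationId=APPLICATION_ID,
        MessageRequest={
            "Addresses": {phone_number: {"ChannelType": "SMS"}},
            "MessageConfiguration": {
                "SMSMessage": {"Body": f"Complete your payment here: {link}",
                               "MessageType": "TRANSACTIONAL"}
            },
        },
    )
    return pin  # the IVR reads this PIN back to the caller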

Figure 7 illustrates visual IVR portal contact flow:

  • Every 10 seconds, a Lambda function checks the caller’s payment status. It provides the caller the option to escalate to an agent if they have questions.
  • If the caller does not fill out all the information when they hit “Submit Payment,” an IVR prompt will ask them to provide all payment details before proceeding.
  • The IVR phone call stays active until the user’s payment status is updated to “complete” in the DynamoDB table. This generates an IVR prompt stating that their payment was successful.

Figure 7. Visual IVR portal sample contact flow

Generating a chat transcript for agents

When the customer’s call is escalated to an agent, the agent receives a chat transcript. Here’s how it works:

  • After the caller’s intent is captured at the start of the call, the IVR logs activity using a “Set contact attribute” contact block, which prompts the $.Lex.SessionAttributes.transcript.
  • This transcript is used in a Lambda function to build a chat interface.
  • This transcript is shown on the agent’s dashboard, along with the Contact Control Panel (CCP) and a few key pieces of caller information.

Figure 8. IVR transcript

The agent’s customized dashboard and the visual IVR portal are deployed and hosted on Amplify. This allows us to seamlessly connect to our code repository and automate deployments after changes are committed. It removed the need to configure Amazon Simple Storage Service (Amazon S3) buckets, an Amazon CloudFront distribution, and Amazon Route 53 DNS to host our front-end components.

This solution also offers callers the ability to opt-in for a callback or to schedule a callback. A “Check queue status” contact block checks the current time in queue, and if it reaches a certain threshold, the IVR will offer a callback. The caller has the option to receive a call as soon as the next agent becomes available or to schedule a time to receive a callback. A Lex bot gathers the date and time slots, which are then passed to a Lambda function that will validate the proposed callback option.

Once confirmed, the scheduled callback is placed into a DynamoDB table along with the caller’s phone number. Another Lambda function scans the table every 5 minutes to see if there are any callbacks scheduled within that 5-minute period. You’ll add an Amazon EventBridge rule that triggers the Lambda function on a schedule expression like cron(0/5 8-17 ? * MON-FRI *), which means the Lambda function will execute every 5 minutes, Monday through Friday from 8:00 AM to 5:55 PM.
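A minimal sketch of that scheduled scan is shown below; the table name, the callback_time attribute, and the placeholder for placing the outbound call are assumptions.

from datetime import datetime, timedelta
import boto3
from boto3.dynamodb.conditions import Attr

dynamodb = boto3.resource("dynamodb")
callback_table = dynamodb.Table("ScheduledCallbacks")  # assumption

def lambda_handler(event, context):
    """Triggered every 5 minutes by EventBridge; find callbacks due in the next window."""
    now = datetime.utcnow()
    window_end = now + timedelta(minutes=5)

    response = callback_table.scan(
        FilterExpression=Attr("callback_time").between(now.isoformat(), window_end.isoformat())
    )
    for item in response.get("Items", []):
        # Place the outbound call here, e.g. via the Amazon Connect StartOutboundVoiceContact API
        print(f"Callback due for {item['phone_number']} at {item['callback_time']}")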

Conclusion

This solution helps you increase customer satisfaction by making it easier for callers to complete transactions over the phone. The visual IVR provides added web-based support experience to submit payments. It also improves the quality of service of your customer service agents by making relevant information available to agents during the call.

This solution also allows you to scale out the resources to handle increasing demand. Custom features can easily be added using serverless technology, such as Lambda functions or other cloud-native services on AWS.

Ready to get started? The AABG helps customers accelerate their pace of digital innovation and realize incremental business value from cloud adoption and transformation. Connect with our team at [email protected] to learn how to use machine learning in your products and services.

Looking for more architecture content? AWS Architecture Center provides reference architecture diagrams, vetted architecture solutions, Well-Architected best practices, patterns, icons, and more!

Dynamically personalize your in-product user experience using Amazon Pinpoint in-app messaging

Post Syndicated from Pavlos Ioannou Katidis original https://aws.amazon.com/blogs/messaging-and-targeting/dynamically-personalize-your-in-product-user-experience-using-amazon-pinpoint-in-app-messaging/

Many businesses today struggle to align out-of-product messaging through channels such as email and SMS with the in-product messaging shown when a user is within a mobile or web application. Businesses will present one message to a user through a targeted email, but once the user visits the application they are presented with different messaging. This creates confusion for the user and reduces the chances of them performing a high-value action such as purchasing a discounted product. Customers can get around this by hard-coding certain messages into their application; however, this is time-consuming for development teams and slower to implement, as it requires a new release of the mobile or web client.

Amazon Pinpoint in-app messaging allows customers to create, target and display in-product messages to users dynamically, without the need to update client-side code after the initial implementation. This allows a non-technical persona such as a marketer to modify the application experience and target user messaging independently of a development team. It also allows the in-product messaging to share the same user targeting as the out-of-app messaging. This creates consistent user messaging and increases the chance that a user performs a high-value action.

This blog outlines how to create in-app endpoints, segments, and campaigns, and then how to fetch in-app messages, implement simple logic to control message prioritization and message caps, and listen for events in order to show the message at the desired moment.

Solution Overview

Assume you are a retailer and want to display a banner with a promotion to all customers with a recent purchase over $500 when they launch the application. To deliver the above experience using the in-app messaging channel, you will need to create a dynamic customer segment where User.UserAttribute.LastPurchaseValue > $500, design an in-app message template with a call-to-action to claim the promotion, and create an in-app campaign. The in-app campaign will be triggered based on the customer event app_launch and only for customers who belong to the dynamic segment created above. To render the message and send in-app message engagement events back to Amazon Pinpoint, you will need to go through a one-time setup that is explained in a later section of this blog. Monitor your in-app campaign performance across different metrics using the Amazon Pinpoint campaign analytics dashboard.

The in-app channel implementation can differ depending on the use case and requirements. The creation of customer segments, message templates and campaigns can be done either via the Amazon Pinpoint console or programmatically using the Amazon Pinpoint APIs. The retrieval of in-app messages, rendering, and recording of engagement events can either be built and managed by you or handled by AWS Amplify.

In the following sections you will be introduced to the seven components of the in-app channel and how they operate together:

  • Step 1: Creating in-app endpoints & segment
  • Step 2: Creating an in-app message template
  • Step 3: Creating an in-app campaign
  • Step 4: Querying available in-app messages for an Amazon Pinpoint customer
  • Step 5: Rendering in-app messages
  • Step 6: Recording in-app events
  • Step 7: In-app message display logic using SessionCap, DailyCap, TotalCap

Prerequisites

For this blog post, you should have the following prerequisites:

Step 1: Creating an Amazon Pinpoint customer segment

In Amazon Pinpoint, users can have one or more endpoints. An endpoint describes a unique address, such as an email or mobile number. Similar to other Amazon Pinpoint channels, you need to create or import in-app endpoints with Channel = IN_APP. To retrieve in-app messages for a user, you have to use their IN_APP endpoint id. Note that the Address is not a required field for in-app and can be left blank.

  1. Copy the text below and save it as CSV in your computer
    Id,ChannelType,Address,User.UserId,OptOut
    111,IN_APP,123,userid1,NONE
  2. Navigate to the Amazon Pinpoint console
  3. Select the Amazon Pinpoint project for which you want to set up the in-app channel
  4. Navigate to the Segments section
  5. Choose Import a segment
  6. Select Upload files from your computer as Import method
  7. Select Choose files and find the CSV file you created in step 1
  8. Choose Create segment
  9. Navigate to the AWS CloudShell console and wait until the terminal loads
  10. Replace <Application id> with your Amazon Pinpoint application id in the following command: aws pinpoint get-endpoint --application-id <Application id> --endpoint-id 111
  11. Execute the command in step 10 by pasting it in the AWS CloudShell terminal and press Enter. You should be able to see a response similar to the one below


Step 2: Creating an in-app Message Template

In-app message templates contain a variety of fields with some of them offering the option to choose from pre-defined values such as Header alignment and others in a form of free text such as Message. The end result is a banner that includes a Header, Message, Image, Button(s) and Custom data with all of them being fully customizable. While building an in-app template, you can preview the banner across Phone, Tablet and Browser. This preview is for reference purposes only as the rendering can vary according to the end user’s device as well as your preference on how to render it.

Note: The message template for in-app currently doesn’t support message helpers for personalization but it is a feature the Amazon Pinpoint product team is exploring.

  1. Navigate to Message templates
  2. Select Create template and choose In-app messaging as Channel
  3. Type my_first_in-app_message_template as Template name
  4. Complete the  section, as per your message requirements
  5. Select Create

Step 3: Creating an in-app Campaign

A campaign is a messaging initiative that engages a specific audience segment. A campaign sends tailored messages according to a schedule or customer event that you define.

  1. Navigate to your Amazon Pinpoint project and select Campaigns and Create a campaign
  2. Type my_first_in-app_campaign as Campaign name
  3. Select Standard campaign as Campaign type and In-app messaging as Channel
  4. Select Very important for Set prioritization. This configuration is specific to the in-app channel and it helps you identify the most important in-app message for an endpoint
  5. Select Next and choose the segment in-app-segment from the dropdown. This should be the imported segment that you created in Step 1: Creating an Amazon Pinpoint customer segment. The Segment estimate should show 1 endpoint
  6. Select Next and choose the in-app message template with the name my_first_in-app_message_template, then select Next
  7. An in-app campaign needs to have a Trigger event, which will determine when the in-app message will be displayed. You can add event Attributes and/or Metrics to make it more specific. To learn how to record events with Amazon Pinpoint visit Reporting events in your application. If you currently do not record any events in your Amazon Pinpoint project, type test_event as the Trigger event
  8. Select Start and End date and time for Campaign dates. Note that in-app campaigns need to start at least 15 minutes later from the time of publishing
  9. In the Edit campaign settings section you will find the fields that specify the Maximum number of session messages viewed per endpoint (SessionCap), Maximum number of daily messages viewed per endpoint (DailyCap) and Maximum number of messages viewed per endpoint (TotalCap). These values indicate how many times the in-app message can be displayed to the customer for that in-app campaign within a session, within a day, and in total, respectively. In all three campaign setting fields enter the number 10 and select Override project-level setting where applicable. Set prioritization, Trigger events and Caps are part of the in-app message payload that you receive when calling Amazon Pinpoint’s In-app messages REST API operation. You will use this information to decide whether or not to render that in-app message.
  10. Select Next, scroll down, and select Launch campaign

Step 4: Querying available in-app messages for an Amazon Pinpoint customer

To retrieve in-app messages for an Amazon Pinpoint customer, you will need to have their IN_APP endpoint id and either use the In-app messages REST API operation, one of the AWS SDKs that support Amazon Pinpoint, AWS Command Line Interface or AWS Amplify.

Note: AWS Amplify manages the in-app message retrieval, rendering and tracking on your behalf; thus, if you are using AWS Amplify for the Amazon Pinpoint in-app channel, the steps below are not required.

In the request body you need to specify the IN_APP endpoint id. If there are any available in-app messages for that endpoint id, the response will contain a JSON object with the top ten active in-app messages based on their priority (ten in-app messages is a hard limit for the response). Loop through the in-app messages and identify the one that meets the criteria based on the Trigger event and Prioritization.

  1. Navigate to the AWS CloudShell console
  2. Replace <Application id> with your Amazon Pinpoint application id in the following command: aws pinpoint get-in-app-messages --application-id <Application id> --endpoint-id 111
  3. Execute the command in step 2 by pasting it in the AWS CloudShell terminal and press Enter. You should be able to see a response similar to the one below

The response should contain only one in-app campaign. You can see all the in-app message template data and campaign configuration are present in the response.

Note: Campaigns that have passed their end date, or have reached their daily or total cap limit, won’t show in the response. In case the response contains more than one in-app message with the same priority and neither has exceeded its caps, you can use the in-app campaign start date to evaluate which one to display.

It is recommended to retrieve the in-app messages once per session and store them locally. That way, for every event the customer triggers in your mobile / web app, you check against the in-app messages stored locally instead of performing additional calls to Amazon Pinpoint. This approach decreases the in-app channel cost, as you pay per request.
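For illustration, retrieving the messages once and selecting the one to display might look like the following sketch; the selection logic is simplified and omits the SessionCap, DailyCap and TotalCap checks described in Step 7, and it assumes that a lower Priority value means a more important message.

import boto3

pinpoint = boto3.client("pinpoint")

def fetch_in_app_messages(application_id: str, endpoint_id: str):
    """Retrieve up to ten active in-app messages for an IN_APP endpoint."""
    response = pinpoint.get_in_app_messages(
        ApplicationId=application_id, EndpointId=endpoint_id
    )
    return response["InAppMessagesResponse"]["InAppMessageCampaigns"]

def select_message(campaigns, trigger_event: str):
    """Pick the highest-priority message whose trigger event matches the event just fired."""
    candidates = []
    for c in campaigns:
        event_types = (c.get("Schedule", {}).get("EventFilter", {})
                        .get("Dimensions", {}).get("EventType", {}).get("Values", []))
        if trigger_event in event_types:
            candidates.append(c)
    # Assumption: lower Priority values correspond to more important campaigns
    return min(candidates, key=lambda c: c.get("Priority", 999), default=None)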

You can perform the operation of retrieving in-app messages for an Amazon Pinpoint customer either client side or server side. Server side can be implemented using the architecture illustrated below, which utilizes Amazon API Gateway and AWS Lambda, creating a development-framework-agnostic approach. Furthermore, Amazon API Gateway offers a wide variety of authentication and authorization mechanisms.

The server side architecture depicted below doesn’t cover the use case for offline customers. If this is a requirement, then it is recommended to store in-app messages and fetch them locally when the device doesn’t have internet connectivity. Once the device is connected back to the internet you can retrospectively send any in-app related events.

Note: If you are using AWS Amplify, it will retry to publish customer offline events that occurred once the device gets back online.

Step 5: Rendering in-app messages

Render the in-app messages yourself based on the in-app message API response or use AWS Amplify which will render it on your behalf. AWS Amplify allows you to provide your own In-App Messaging UI component to override the default Amplify provided UI.

Step 6: Recording in-app events

Measuring in-app campaigns’ performance is based on four metrics:

  • Message displayed: a message has been displayed to an end user
  • Message dismissed: a user has dismissed a message
  • Message clicked: a user has clicked through a message
  • Any event type: Any event that a user can trigger on the mobile or web app

Fire the above events either from client or server side as Amazon Pinpoint custom events. Amazon Pinpoint custom events can be recorded using put_events REST API operation or AWS SDKs that support Amazon Pinpoint.

Note: If you are using AWS Amplify, the in-app events will be recorded automatically

To have these events recorded under the Amazon Pinpoint Campaign delivery metrics dashboard, you have to use the following EventType names:

  • Message displayed: _inapp.message_displayed
  • Message dismissed: _inapp.message_dismissed
  • Message clicked: _inapp.message_clicked
  • Any event type: No specific name is required

In addition to the EventType, a few other fields are required in order to attribute these events to the correct in-app campaign. Within the event attributes object of the request payload, the fields campaign_id and delivery_type must be provided. The campaign_id should match the in-app campaign’s id, while the delivery_type should be IN_APP_MESSAGE. Additionally, the treatment_id is necessary if you are running an A/B test.

Note: If you do not use the above event names and attributes, you won’t see any events under Campaign delivery metrics and Campaign engagement rates on the Amazon Pinpoint console.
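A minimal Boto3 sketch of recording such an event is shown below; the event id key and the commented-out treatment_id are illustrative.

from datetime import datetime, timezone
import boto3

pinpoint = boto3.client("pinpoint")

def record_in_app_event(application_id, endpoint_id, campaign_id,
                        event_type="_inapp.message_displayed"):
    """Record an in-app engagement event so it appears under the campaign delivery metrics."""
    pinpoint.put_events(
        ApplicationId=application_id,
        EventsRequest={
            "BatchItem": {
                endpoint_id: {
                    "Endpoint": {},  # existing endpoint; no updates needed here
                    "Events": {
                        "in_app_event_1": {
                            "EventType": event_type,
                            "Timestamp": datetime.now(timezone.utc).isoformat(),
                            "Attributes": {
                                "campaign_id": campaign_id,
                                "delivery_type": "IN_APP_MESSAGE",
                                # "treatment_id": "0",  # required only for A/B tests
                            },
                        }
                    },
                }
            }
        },
    )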

Step 7: In-app message display logic using SessionCap, DailyCap and TotalCap

Message display logic refers to the logic that stores and assesses the number of times a user has seen or interacted with the in-app message. Amazon Pinpoint calculates the DailyCap & TotalCap as long as you record the _inapp.message_displayed event or use AWS Amplify. For the SessionCap you need to count the _inapp.message_displayed events locally on your mobile / web application, unless you are using AWS Amplify.

Note: When retrieving the in-app messages from Amazon Pinpoint, the payload contains the remaining number of times you can display the in-app message daily & total.

Conclusion

This post walks you through how to configure Amazon Pinpoint to send in-app messages to your customers when browsing your mobile / web application. Using this Amazon Pinpoint channel, you can now:

  • Create in-app segments, message templates and campaigns
  • Retrieve in-app messages per user
  • Render in-app messages
  • Record customer engagement data with the in-app message

Related links

To learn more about the technologies or features used to create this solution, explore the following pages:

How to set up Amazon Quicksight dashboard for Amazon Pinpoint and Amazon SES engagement events

Post Syndicated from satyaso original https://aws.amazon.com/blogs/messaging-and-targeting/how-to-set-up-amazon-quicksight-dashboard-for-amazon-pinpoint-and-amazon-ses-events/

In this post, we will walk through using Amazon Pinpoint and Amazon Quicksight to create customizable messaging campaign reports. Amazon Pinpoint is a flexible and scalable outbound and inbound marketing communications service that allows customers to connect with users over channels like email, SMS, push, or voice. Amazon QuickSight is a scalable, serverless, embeddable, machine learning-powered business intelligence (BI) service built for the cloud. This solution allows event and user data from Amazon Pinpoint to flow into Amazon Quicksight. Once in Quicksight, customers can build their own reports that shows campaign performance on a more granular level.

Engagement Event Dashboard

Customers want to view the results of their messaging campaigns in ever-increasing levels of granularity and ensure their users see value from the email, SMS or push notifications they receive. Customers also want to analyze how different user segments respond to different messages, and how to optimize subsequent user communication. Previously, customers could only view this data in Amazon Pinpoint analytics, which offers robust reporting on events, funnels, and campaigns. However, it does not allow analysis across these different parameters or the building of custom reports, for example, showing campaign revenue across different user segments, or showing what events were generated after a user viewed a campaign in a funnel analysis. Customers would need to extract this data themselves and do the analysis in Excel.

Prerequisites

  • The Digital user engagement event database solution must be set up first.
  • Customers should be prepared to purchase Amazon Quicksight, because it has its own set of costs which are not covered within Amazon Pinpoint costs.

Solution Overview

This solution uses the Athena tables created by the Digital user engagement events database solution. The AWS CloudFormation template given in this post automatically sets up the different architecture components to capture detailed notifications about Amazon Pinpoint engagement events and log them in Amazon Athena in the form of Athena views. You still need to manually configure Amazon Quicksight dashboards to link to these newly generated Athena views. Please follow the steps below for further information.

Use case(s)

The event dashboard solution has the following use cases:

  • Deep dive into engagement insights (e.g. SMS events, email events, campaign events, journey events)
  • The ability to view engagement events at the individual user level
  • Data/process mining to turn raw event data into useful marketing insights
  • User engagement benchmarking and end-user event funneling
  • Computing campaign conversions (post-campaign user analysis to show campaign effectiveness)
  • Building funnels that show user progression

Getting started with solution deployment

Prerequisite tasks to be completed before deploying the logging solution

Step 1 – Create AWS account, Pinpoint Project, Implement Event-Database-Solution.
As part of this step, customers need to implement the DUE event database solution, as the current solution (the DUE event dashboard) is an extension of the DUE event database solution. The basic assumption here is that the customer has already configured an Amazon Pinpoint project or Amazon SES within the required AWS region before implementing this step.

The steps required to implement an event dashboard solution are as follows.

a/ Follow the steps mentioned in the Event database solution to implement the complete stack. Prior to installing the complete stack, copy and save the Athena events database name as shown in the diagram. In my case it is due_eventdb. The database name is required as an input parameter for the current Event Dashboard solution.

b/ Once the solution is deployed, navigate to the output page of the CloudFormation stack, then copy and save the following information, which will be required as input parameters in step 2 of the current Event Dashboard solution.

Step 2 – Deploy Cloud formation template for Event dashboard solution
This step generates a number of new Amazon Athena views that will serve as a data source for Amazon Quicksight. Continue with the following actions.

  • Download the CloudFormation template (“Event-dashboard.yaml”) from AWS samples.
  • Navigate to the CloudFormation page in the AWS console, click “Create stack” at the top right, and select the option “With new resources (standard)”
  • Leave the “Prerequisite – Prepare template” as “Template is ready” and for the “Specify template” option, select “Upload a template file”. On the same page, click on “Choose file”, browse to find the “Event-dashboard.yaml” file and select it. Once the file is uploaded, click “Next” and deploy the stack.

  • Enter the following information under the section “Specify stack details”:
    • EventAthenaDatabaseName – As mentioned in Step 1-a.
    • S3DataLogBucket – As mentioned in Step 1-b.
    • This solution will create 5 additional Athena views, which are
      • All_email_events
      • All_SMS_events
      • All_custom_events (Custom events can be Mobile app/WebApp/Push Events)
      • All_campaign_events
      • All_journey_events

Step 3 – Create Amazon Quicksight engagement Dashboard
This step walks you through the process of creating an Amazon Quicksight dashboard for Amazon Pinpoint engagement events using the Athena views you created in step-2

  1. To set up Amazon Quicksight for the first time, please follow this link (this process is not needed if you have already set up Amazon Quicksight). Please make sure you are an Amazon Quicksight administrator.
  2. Go to / search for Amazon Quicksight in the AWS console.
  3. Create New Analysis and then select “New dataset”
  4. Select Athena as data source
  5. As a next step, you need to select which analyses you need for the respective events. This solution provides the option to create 5 different sets of analysis as mentioned in Step 2: a/ All email events, b/ All SMS events, c/ All custom events (mobile/web app, web push, etc.), d/ All campaign events, e/ All journey events. Dashboards can be created from a Quicksight analysis, and the same can be shared among the organization’s stakeholders. The following are the steps to create analyses and dashboards for the different types of events.
  6. Email Events –
    • For all email events, name the analysis “All-emails-events” (this can be any customer-preferred nomenclature), select the Athena workgroup as primary, and then create a data source.
    • Once you create the data source, Quicksight lists all the views and tables available under the specified database (in our case, due_eventdb). Select the email_all_events view as the data source.
    • Select the event data location for the analysis. There are two main options available: a/ Import to SPICE for quicker analysis, or b/ Directly query your data. Select the preferred option and then click on “visualize the data”.
    • Import to SPICE for quicker analysis – SPICE is the Amazon QuickSight Super-fast, Parallel, In-memory Calculation Engine. It’s engineered to rapidly perform advanced calculations and serve data. In Enterprise edition, data stored in SPICE is encrypted at rest. (1 GB of storage is available for free; for extra storage customers need to pay extra, please refer to the cost section in this document.)
    • Directly query your data – This option enables Quicksight to query Athena (or the source database) directly, and Quicksight will not store any data.
    • Now that you have selected a data source, you will be taken to a blank Quicksight canvas (blank analysis page) as shown in the following image. Drag and drop the visualization type you need onto the auto-graph pane. Note that Amazon QuickSight is a business intelligence platform, so customers are free to choose the desired visualization types to observe the individual engagement events.
    • As part of this blog, we show how to create some simple analysis graphs to visualize the engagement events.
    • As an initial step, select tabular visualization as shown in the image.
    • Select all the event dimensions that you want to include in the table on the X axis. An Amazon Quicksight table can be extended to show many columns; how much data marketers want to visualize depends entirely on the business requirement.
    • Further filtering on the table can be done using Quicksight filters; you can apply a filter on specific granular values to enable further filtering. For example, if you want to filter on the destination email ID: 1/ Select the filter from the left-hand menu, 2/ Add the destination field as the filtering criterion, 3/ Tick the destination field value you are filtering on, or search for the destination email ID, and 4/ All the results in the table are further filtered as per the filter criterion.
    • As a next step, add another visual from the top left corner (“Add -> Add Visual”), then select the Donut Chart from the visual types pane. Donut charts are typically used for displaying aggregations.
    • Then select “event_type” as the Group to visualize the aggregated events. This helps marketers and business users figure out how many email events occurred and the aggregated success ratio, click ratio, complaint ratio, bounce ratio, etc. for the emails/campaigns sent to end users.
    • To create a Quicksight dashboard from the Quicksight analysis, click the Share menu option at the top right corner, then select “Publish dashboard”. Provide the required dashboard name while publishing the dashboard. The same dashboard can be shared with multiple audiences in the organization.
    • The following is the final version of the dashboard. As mentioned above, Quicksight dashboards can be shared with other stakeholders, and the complete dashboard can also be exported as an Excel sheet.
  7. SMS Events-
    • As shown above, SMS events can be analyzed using Quicksight, and dashboards can be created from the analysis. Repeat all of the sub-steps listed in step 6. The following is a sample SMS dashboard.
  8. Custom Events-
    • After you integrate your application (app) with Amazon Pinpoint, Amazon Pinpoint can stream event data about user activity, different types of custom events, and message deliveries for the app, e.g. _session.start, Product_page_view, _session.stop, etc. Repeat all of the sub-steps listed in step 6 to create a custom events dashboard.
  9. Campaign events
    • As shown before, campaign events can also be included in the same dashboard, or you can create a new dashboard only for campaign events.

Cost for Event dashboard solution
You are responsible for the cost of the AWS services used while running this solution. As of the date of publication, the cost for running this solution with default settings in the US West (Oregon) Region is approximately $65 a month. The cost estimate includes the cost of AWS Lambda, Amazon Athena, and Amazon Quicksight. The estimate assumes querying 1 TB of data in a month, two authors managing Amazon Quicksight every month, four Amazon Quicksight readers viewing the events dashboard an unlimited number of times in a month, and a Quicksight SPICE capacity of 50 GB per month. Prices are subject to change. For full details, see the pricing webpage for each AWS service you will be using in this solution.

Clean up

When you’re done with this exercise, complete the following steps to delete your resources and stop incurring costs:

  1. On the CloudFormation console, select your stack and choose Delete. This cleans up all the resources created by the stack.
  2. Delete the Amazon Quicksight Dashboards and data sets that you have created.

Conclusion

In this blog post, I have demonstrated how marketers, business users, and business analysts can use Amazon QuickSight dashboards to evaluate and act on user engagement data from Amazon SES and Amazon Pinpoint event streams. Customers can also use this solution to understand how Amazon Pinpoint campaigns lead to business conversions, in addition to analyzing multi-channel communication metrics at the individual user level.

Next steps

The personas for this blog are both the tech team and the marketing analyst team, as it involves a code deployment to create very simple Athena views, as well as the steps to create an Amazon Quicksight dashboard to analyse Amazon SES and Amazon Pinpoint engagement events at the individual user level. Customers may then create their own Amazon Quicksight dashboards to illustrate the conversion ratio and propensity trends in real time by integrating campaign events with app-level events such as purchase conversions, order placement, and so on.

Extending the solution

You can download the AWS CloudFormation templates and code for this solution from our public GitHub repository and modify them to fit your needs.


About the Author


Satyasovan Tripathy works at Amazon Web Services as a Senior Specialist Solution Architect. He is based in Bengaluru, India, and specialises in the AWS Digital User Engagement product portfolio. He likes reading and travelling outside of work.

Creating a costs analytics view to email campaign generated by Amazon Pinpoint

Post Syndicated from rafaaws original https://aws.amazon.com/blogs/messaging-and-targeting/creating-a-costs-analytics-view-to-email-campaign-generated-by-amazon-pinpoint/

Introduction

Many companies have multiple departments running different campaigns in the same AWS account on Amazon Pinpoint and need to split costs at the end of each month between the owners of each campaign. To do this, companies need an easy way to find how much cost each campaign has generated, since the Amazon Pinpoint console doesn't provide this information. To solve this, it is possible to combine several AWS services to find out these costs.

In this post, I will demonstrate how AWS analytics and storage services such as Amazon Kinesis, AWS Glue, Amazon Athena, Amazon QuickSight, and Amazon Simple Storage Service (Amazon S3) can help you create an analytical view of the costs generated by emails sent through each campaign on Amazon Pinpoint. This example does not include transactional emails or emails sent through Amazon Pinpoint journeys.

Amazon Pinpoint is an AWS service that helps companies engage their customers across multiple channels. You can use Amazon Pinpoint to send email, SMS, push notifications, and voice messages, either as one-off messages or as part of campaigns.

In this blog, you will learn how to create a dashboard with total cost of emails sent and MTA (Monthly Targeted Audience) by each campaign. With this information, you will be able to distribute costs internally to each department responsible for each campaign on Amazon Pinpoint.

Solution Overview

To create this dashboard, we will take advantage of the Digital User Engagement Events Database solution. We can use an AWS CloudFormation template that sets up the Amazon Pinpoint event flow. This solution uses Amazon Kinesis to stream all campaign events to a bucket on Amazon S3. After that, a data processing task is performed by AWS Lambda and cataloged in AWS Glue. Some views are created in Amazon Athena to organize the data, and we will use them to calculate and analyze the Amazon Pinpoint costs. For more information about the solution, its architecture, and how the AWS CloudFormation template automates the deployment, please visit the Implementation Guide page.

During the deployment process, you will have the option to create a new Amazon Pinpoint project to manage your campaigns or use an existing one.

Prerequisites

1. Complete the Digital User Engagement Events Database implementation.

2. Have an e-mail campaign on Amazon Pinpoint created with some emails already sent.

Analytics View

All events created by Amazon Pinpoint campaigns after the Digital User Engagement Events Database implementation is complete should appear in Parquet file format in Amazon S3. If your campaigns did not generate any events after the AWS CloudFormation deployment completed, I recommend creating and running a new email campaign just to test this solution. Any test email sent during the implementation of this solution will incur charges; the email costs are explained later in this blog.

After the entire Amazon Pinpoint event flow is working correctly, some modifications to the views created in Amazon Athena must be made. These changes help you access information about the number of endpoints registered by each campaign in your project. The following steps are required:

To create a new view

1. Open the Amazon Athena console.

2. Under Database, choose the database name created by AWS CloudFormation template.

You will notice a table called all_events and some views, for example campaign_send, email_open, email_send, and others. These views help organize the data sent by Amazon Pinpoint; for example, in the campaign_send view it is possible to see all the information about the events that Amazon Pinpoint generates for a campaign across multiple channels.

3. Choose Create view.

A tab will be added in the center of the page so you can add the command that creates the new view.

4. Replace the existing text with the command below and choose Run query.

CREATE OR REPLACE VIEW endpoint_unit AS
SELECT DISTINCT
client.client_id endpoint_id
, "min"("from_unixtime"((event_timestamp / 1000))) event_timestamp
, "month"("from_unixtime"((event_timestamp / 1000))) month_data
, "year"("from_unixtime"((event_timestamp / 1000))) year_data
FROM
all_events
WHERE (event_type = '_campaign.send')
GROUP BY client.client_id, "month"("from_unixtime"((event_timestamp / 1000))), "year"("from_unixtime"((event_timestamp / 1000)))

In this command we are creating a new view, grouping all the endpoints already used by campaigns, along with the earliest date and time that the endpoint was registered. This command will help you identify the first campaign that used the endpoint in each month and year.

Once the new view is created, you will notice that it is listed in the views pane with the name endpoint_unit. You can run this view to check which values are returned.
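If you prefer to check the new view programmatically instead of from the console, the following is a minimal sketch using boto3; the database name and the query result location are assumptions that should match your own deployment:

import boto3

athena = boto3.client('athena')

# Assumed database name (created by the solution's CloudFormation template) and an
# S3 location of your choice for Athena query results
response = athena.start_query_execution(
    QueryString='SELECT * FROM endpoint_unit LIMIT 10',
    QueryExecutionContext={'Database': 'due_eventdb'},
    ResultConfiguration={'OutputLocation': 's3://<bucket-name>/athena-query-results/'}
)
print('Query execution ID:', response['QueryExecutionId'])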

5. Choose Preview in the endpoint_unit view to return the results.

Example:

The data displayed refers only to endpoints used in campaigns after the Digital User Engagement Events Database was implemented.

Now is the time to create the analytical view in Amazon Quicksight.

To check Amazon Quicksight Region

1. Open the Amazon Quicksight console.

If this is your first time using Amazon QuickSight, a page will appear asking you to subscribe to the service; feel free to choose the best option for your business.

Warning: Some costs might occur by using Amazon Quicksight. Check the Amazon Quicksight Pricing page for more information.

2. On top of the screen, choose <Role>/<Account-Name> and select the region where you have the Amazon Pinpoint project.

To check Amazon Quicksight Permissions

Check the permission of the Amazon Quicksight to enable the access to Amazon S3 bucket.

1. On top of the screen, choose <Role>/<Account-Name>, Manage Quicksight.

2. In navigation pane, choose Security & permissions.

3. Under QuickSight access to AWS services, choose Add or remove.

4. Inside the QuickSight access to AWS services table, under Amazon S3, choose Details.

5. Choose Select S3 buckets.

6. Check whether the checkbox for the Amazon S3 bucket that contains all the stream files is selected. If not, select the checkbox and choose Finish and Update.

Create a Dataset for Campaign cost

1. Back to the Amazon Quicksight main page.

2. In the navigation page, choose Datasets, New dataset.

3. Choose Athena.

Now, let’s add the table containing information regarding the number of emails sent per Amazon Pinpoint campaign.

4. In the New Athena data source dialog box, do the following:

a. For Data source name, type a name.

b. Choose Create data source.

5. In the Choose your table dialog box, do the following:

a. Choose Use custom SQL.

b. In the first field, enter a name for the custom SQL.

c. In the second field, paste the command below.

SELECT * FROM "due_eventdb"."campaign_send" where (message_tags['delivery_type'] = 'EMAIL')

This command filters the campaign events to include only email-based campaigns.

d. Choose Confirm query.

6. In the Finish dataset creation dialog box, you will be asked to choose between storing a copy of the table's data in SPICE or querying the data source directly. Feel free to choose the best option for your business.

7. Choose Visualize.

After creating the dataset, you will be redirected to the Amazon Quicksight analytics creation page.

Because the information sent by Amazon Pinpoint does not include the unit cost of each email sent, we will use three features (parameters, controls, and calculated fields) to calculate these amounts.

Parameters are named variables that can transfer a value for use by an action or an object. To make the parameters accessible to the dashboard viewer, you add a parameter control.

Calculated fields help you transform your data using one or more of the following: operators, functions, aggregate functions, and fields that contain data or other calculated fields.

Create an Amazon QuickSight parameter

1. In the navigation bar, choose + Add at the top of the screen.

2. Choose Add parameter.

3. In the Create new parameter dialog box, do the following:

a. For Name, type the name for the parameter, eg. Costemail.

b. For Data type, choose number.

c. For Values, select Single value.

d. For Static default value enter the unit cost of email. The cost of each Amazon Pinpoint email can be found on Amazon Pinpoint Pricing page.

e. Choose Create and Close.

Important: All costs in this blog will be calculated in USD.

After creating the parameter, we need to create a control so the cost value can be adjusted manually.

Create an Amazon QuickSight control

1. In the navigation pane, choose Parameters.

2. Under the name of parameters that you previously created, choose Add control.

3.  In the Add control dialog box, do the following:

a. For Display name, enter a name.

b. For Style, choose Text field.

c. Choose Add.

This control will help you in the future if you need to change the unit price without changing the Parameters configuration.

We now need to create the calculated field, which multiplies the total number of emails sent by the unit cost.

Create an Amazon QuickSight calculated field

1. In the navigation bar, choose + Add at the top of the screen again.

2. Choose add calculated field.

3. In add name field, type a name.

4. Paste the expression below.

count({pinpoint_campaign_id}) * ${name_parameters}

5. Replace the “name_parameters” in the expression with the name of the Parameter you created earlier.

6. Choose Save.

We now have all fields available to create the chart on Quicksight.

Create an Amazon QuickSight dashboard

1. In the navigation page, choose Visualize.

2. Under Visual type, choose Vertical bar chart.

Note: If you prefer, you can change it later to other visual type.

3. Choose the pinpoint_campaign_id field, the calculated field that you just created, and event_timestamp. Drag them to the X axis, Value, and Group/Color field wells, respectively, to create the chart.

Example:

In this example, you can see the cost in USD x Amazon Pinpoint Campaign ID between April and May 2021.

If you prefer, you can customize your chart in Format Visual. You can also build another view to show the amount of emails sent per campaign.

4. In the navigation bar, choose Share.

5. Choose Publish dashboard.

6. In the Publish a dashboard dialog box, under Publish new dashboard as, type a name.

7. Choose Publish dashboard.

If you want, you can share your dashboard with other usernames, groups, or email addresses.

Now that we have the total costs of emails per each campaign, let’s create the chart for the total endpoint cost for each campaign.

Create a Dataset for MTA cost

1. Back to the Amazon Quicksight main page.

2. In the navigation page, choose Datasets, New dataset.

3. Choose Athena.

4. In the New Athena data source dialog box, do the following:

a. For Data source name, type a name.

b. Choose Create data source.

5. In the Choose your table dialog box, do the following:

a. Choose Use custom SQL.

b. In the first field, enter a name for the custom SQL.

c. In the second field, paste the command below.

SELECT distinct c.endpoint_id, e.pinpoint_campaign_id, c.event_timestamp
FROM "due_eventdb"."campaign_send" e
INNER JOIN "due_eventdb"."endpoint_unit" c ON c.event_timestamp = e.event_timestamp

d. Choose Confirm query.

6. In the Finish dataset creation dialog box, you will be asked to choose between storing a copy of the table's data in SPICE or querying the data source directly. Feel free to choose the best option for your business.

7. Choose Visualize.

This command joins the campaign_send view with the endpoint_unit view and returns the campaign ID responsible for contacting each endpoint for the first time in each month.

8. After the dataset is created, repeat the steps in the Create an Amazon QuickSight parameter, Create an Amazon QuickSight control, and Create an Amazon QuickSight calculated field sections to create a new parameter, control, and calculated field. When creating the new parameter, use the unit cost of each endpoint, which is currently described on the Amazon Pinpoint Pricing page as the Monthly Targeted Audience (MTA) price.

If you send messages from an Amazon Pinpoint campaign or journey, the unique endpoints you contact are known as a monthly targeted audience (MTA). You are charged on the number of MTAs targeted in a calendar month.

Important: In this calculation we are not considering the subtraction of the free tier values.
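To make these charges concrete, the following is a small illustrative calculation combining email and MTA costs for one campaign in a month; the volumes and unit prices are placeholders only, so check the Amazon Pinpoint pricing page for current values:

# Illustrative only: placeholder volumes and unit prices, free tier not considered
emails_sent = 12000
mtas_targeted = 4500
price_per_email = 0.0001   # assumed unit price in USD
price_per_mta = 0.012      # assumed unit price in USD

email_cost = emails_sent * price_per_email
mta_cost = mtas_targeted * price_per_mta
print(f"Email cost: ${email_cost:.2f}, MTA cost: ${mta_cost:.2f}, total: ${email_cost + mta_cost:.2f}")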

In the process of creating Calculated Field, use the following expression:

count({endpoint_id}) * ${name_parameters}

Also remember to replace name_parameters in the expression with the name of the parameter that you created in step 8 of the Create a Dataset for MTA cost section to calculate the endpoint costs.
In this expression, you are calculating the MTA cost for the distinct endpoints contacted per month.

After this, you will also have all the required fields to create your total cost chart of endpoint per campaign. In this case, use the fields pinpoint_campaign_id, calculated field that you created earlier and event_timestamp.

Example:

In this example, you can see the MTA cost in USD x Amazon Pinpoint Campaign ID between April and May 2021.

Some customers have tens or hundreds of campaigns; in this case you can use the Filter option in the navigation pane to specify a date range.

Optional: If you prefer you can combine the email and MTA cost in the same Analyses and Dashboard.

Add more datasets in the same Analyses and Dashboard

1. Back to the Amazon Quicksight main page.

2. In the navigation page, choose Analyses.

3. Choose the email cost analysis that you created in the Create an Amazon QuickSight dashboard section.

4. Choose the pencil icon next to the Dataset option near the navigation pane.

5. Choose add dataset.

6. In the Choose dataset to add dialog box, choose the dataset you created in the Create a Dataset for MTA cost section.

7. Choose Select.

Now you can create each parameter, control, and calculated field per dataset in the same analysis and publish all the charts on the same dashboard.

Cleanup

To avoid incurring charges, navigate to the AWS CloudFormation console and delete the stack that you used in the Digital User Engagement Events Database solution procedure.

After the stack is deleted, you also need to delete your dashboards, analyses and datasets on Amazon Quicksight. You can also delete the stream events data in your Amazon S3 bucket.

Conclusion

In this blog, we used the total cost of emails sent and endpoints to create the charts, but it is possible to build several other analyses in QuickSight using the views made available by the Digital User Engagement Events Database solution, such as costs for push notifications and other types of channels.

Also try creating dashboards with other Amazon Pinpoint channels

To do this, use this same procedure, explore the view campaign_send to find all data about other channels and modify the SQL queries on Amazon Quicksight to create your dashboards.


About the Author

Rafael Rodrigues is an Enterprise Solution Architect for AWS based in Sao Paulo, Brazil. He helps customers innovate with modern IT architecture on cloud computing.

Integrate AWS Glue DataBrew and Amazon PinPoint to launch marketing campaigns

Post Syndicated from Suraj Shivananda original https://aws.amazon.com/blogs/big-data/integrate-aws-glue-databrew-and-amazon-pinpoint-to-launch-marketing-campaigns/

Marketing teams often rely on data engineers to provide a consumer dataset that they can use to launch marketing campaigns. This can sometimes cause delays in launching campaigns and consume data engineers’ bandwidth. The campaigns are often launched using complex solutions that are either code heavy or using licensed tools. The processes of both extract, transform, and load (ETL) and launching campaigns need engineers who know coding, take time to build, and require maintenance overtime.

You can now simplify this process by integrating AWS services like Amazon PinPoint, AWS Glue DataBrew, and AWS Lambda. We can use DataBrew (a visual data preparation service) to perform ETL, Amazon PinPoint (an outbound and inbound marketing communications service) to launch campaigns, and Lambda functions to achieve end-to-end automation. This solution helps reduce time to production, can be implemented by less-technical folks because it doesn’t require coding, and has no licensing costs involved.

In this post, we walk through the end-to-end workflow and how to implement this solution.

Solution overview

In this solution, the source datasets are pushed to Amazon Simple Storage Service (Amazon S3) using SFTP (batch data) or Amazon API Gateway (streaming data) services. DataBrew jobs perform data transformations and prepare the data for Amazon PinPoint to launch campaigns. We use Lambda to achieve end-to-end automation, which includes an alert system via Amazon Simple Notification Service (Amazon SNS) to alert relevant teams of anomalies or errors.

The workflow includes the following steps:

  1. We can load or push data to Amazon S3 either through AWS Command Line Interface (AWS CLI) commands, AWS access keys, an AWS transfer service (SFTP), or an API Gateway service.
  2. We use DataBrew to perform ETL jobs.
  3. These ETL jobs are either triggered (using Lambda functions) or scheduled.
  4. The processed dataset from DataBrew is imported to Amazon PinPoint as segments using trigger-based Lambda functions.
  5. Marketing teams use Amazon PinPoint to launch campaigns.
  6. We can also perform data profiling using DataBrew.
  7. Finally, we export Amazon PinPoint metrics data to Amazon S3 using Amazon Kinesis Data Firehose. We can use the data for further analysis using Amazon Athena and Amazon QuickSight.

The following diagram illustrates our solution architecture.

As a bonus step, we can create a simple web portal using AWS Amplify that makes API calls to Amazon PinPoint to launch campaigns in case we want to restrict users from launching campaigns using Amazon PinPoint from the AWS Management Console. This web portal can also provide basic metrics generated by Amazon PinPoint. You can also use it as a product or platform that anyone can use to launch campaigns.

To implement the solution, we complete the following steps:

  1. Create the source datasets using both batch ingestion and Amazon Kinesis streaming services.
  2. Build an automated data ingestion pipeline that transforms the source data and makes it campaign ready.
  3. Build a DataBrew data profile job, after which you can view profile metrics and alert teams in case of any anomalies in the source data.
  4. Launch a campaign using Amazon PinPoint.
  5. Export Amazon PinPoint project events to Amazon S3 using Kinesis Data Firehose.

Create the source datasets

In this step, we create the source datasets from both an SFTP server and Kinesis in Amazon S3.

First, we create an S3 bucket to store the source, processed, and campaign-ready data.

We can ingest data into AWS through many different methods. For this post, we consider methods for batch and streaming ingestion.

For batch ingestion, we create an SFTP-enabled server for source data ingestion. We can configure SFTP servers to store data on either Amazon S3 or Amazon Elastic File System (Amazon EFS). For this post, we configure an SFTP server to store data on Amazon S3.

We use API Gateway and Kinesis for stream ingestion. We can also push data to Kinesis directly, but using API Gateway is preferred because it’s easy to handle cross-account and authentication issues. For instructions on integrating API Gateway and a Kinesis stream, see Tutorial: Create a REST API as an Amazon Kinesis proxy in API Gateway.

Each dataset should be in its own S3 folder: s3://<bucket-name>/src-files/datasource1/, s3://<bucket-name>/src-files/datasource2/, and so on.
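For example, a batch source file can be placed under its data source prefix with a simple boto3 call; the bucket and file names below are placeholders:

import boto3

s3 = boto3.client('s3')

# Placeholder bucket and file names: each data source gets its own prefix
s3.upload_file(
    Filename='customers_2021_05.csv',
    Bucket='<bucket-name>',
    Key='src-files/datasource1/customers_2021_05.csv'
)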

Build the automated data ingestion pipeline

In this step, we build the end-to-end automated data ingestion pipeline that transforms the source data and makes it campaign ready.

We use DataBrew for data preparation and data quality, and Lambda and Amazon SNS for automation and alerting. Amazon PinPoint can then use this campaign data to launch campaigns.

Our pipeline performs the following functions:

  1. Run transformations on source datasets.
  2. Merge the datasets to make the data campaign ready.
  3. Perform data quality checks.
  4. Alert relevant teams in case of anomalies in the data quality.
  5. Alert relevant teams in case of DataBrew job failures.
  6. Import the campaign-ready dataset as a segment in Amazon PinPoint.

We can either have one DataBrew project to perform necessary transformations on each dataset and merge all the datasets into one final dataset, or have one DataBrew project for each dataset and another project that merges all the transformed datasets. The advantage of having individual projects for each dataset is that we’re decoupling all data sources, so an issue in one data source doesn’t impact another. We use this latter approach in this post.

For instructions on building the DataBrew job, see Creating and working with AWS Glue DataBrew recipe jobs.

DataBrew provides more than 250 transformations. For this post, we add the following transformations to clean the source datasets:

  • Validate column values and add default values if missing
  • Convert column values to a standard format, such as standardize date values or convert lowercase to uppercase (Amazon PinPoint is case-sensitive)
  • Remove special characters if needed
  • Split values if needed
  • Check for duplicates
  • Add audit columns

You could also add a few data quality recipe steps to the recipe.

DataBrew jobs can be scheduled or trigger based (through a Lambda function). For this post, we configure jobs processing individual data sources to be trigger based, and the final job merging all datasets is scheduled. The advantage of having trigger-based DataBrew jobs is that it only triggers if you have a source file, which helps reduce costs.

We first configure an S3 event that triggers a Lambda function. The function triggers the DataBrew job based on the S3 key value. See the following code (Boto3) for our function:

import boto3

client = boto3.client('databrew')

def lambda_handler(event, context):
    # Read the S3 object key from the event that triggered this function
    record = event['Records'][0]
    bucketname = record['s3']['bucket']['name']
    key = record['s3']['object']['key']
    print('key: ' + key)

    # Start the DataBrew job that matches the source folder of the new file
    if key.find('SourceFolder1') != -1:
        print('processing source1 file')
        response = client.start_job_run(Name='databrew-process-data-job1')
    if key.find('SourceFolder2') != -1:
        print('processing source2 file')
        response = client.start_job_run(Name='databrew-process-data-job2')

The processed data appears in s3://<bucket-name>/src-files/process-data/data-source1/.

Next we create the job that merges the processed data files into a final, campaign-ready dataset. We can configure the job to merge only those files that have been dropped in the last 24 hours.

The campaign-ready data is located in s3://<bucket-name>/src-files/campaign-ready/. This dataset is now ready to serve as input to Amazon PinPoint.

Build a DataBrew data profile job

We can use DataBrew to run a data profile job on any of the datasets defined in the previous steps. When you profile your data, DataBrew creates a report called a data profile. This report displays statistics such as the number of rows in the sample and the distribution of unique values in each column. In this post, we use Lambda functions to read the report and detect anomalies and send alerts using Amazon SNS to respective teams for further action.

  1. On the DataBrew console, on the Datasets page, select your dataset.
  2. Choose Run data profile.
  3. For Job name, enter a name for your job.
  4. For Data sample, select either Full dataset or Custom sample (for this post, we sample 20,000 rows).
  5. In the Job output settings section, for S3 location, enter your output bucket.
  6. Optionally, select Enable encryption for job output file to encrypt your data.
  7. Configure optional settings, such as profile configurations; number of nodes, job timeouts, and retries; schedules; and tags.
  8. Choose an AWS Identity and Access Management (IAM) role for the profile job.
  9. Choose Create and run job.

After you run the job, DataBrew provides you with job metrics. For example, the following screenshot shows the Dataset preview tab.

The following screenshot shows an example of the Data profile overview tab.

This tab also includes a summary of the column details.

The following screenshot shows an example of the Column statistics tab.

Next, we set up alerts using the profile output file.

  1. Configure an S3 event to trigger a Lambda function.

The function reads the output and checks for anomalies in the data. You define the anomalies; for example, when the total missing for a column is greater than 10. The function can then raise an SNS alert if it detects the anomaly.
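The following is a minimal sketch of such a function; the SNS topic ARN, the anomaly threshold, and the field names in the profile JSON are assumptions that you would adapt to your own profile output:

import json
import boto3

s3 = boto3.client('s3')
sns = boto3.client('sns')

# Placeholder values: replace with your SNS topic ARN and your own anomaly threshold
SNS_TOPIC_ARN = 'arn:aws:sns:us-east-1:123456789012:databrew-profile-alerts'
MISSING_THRESHOLD = 10

def lambda_handler(event, context):
    # Read the profile report that the DataBrew profile job wrote to Amazon S3
    record = event['Records'][0]
    bucket = record['s3']['bucket']['name']
    key = record['s3']['object']['key']
    profile = json.loads(s3.get_object(Bucket=bucket, Key=key)['Body'].read())

    # The profile JSON structure is assumed here for illustration; inspect your own
    # output file to find the per-column missing-value statistics
    anomalies = []
    for column in profile.get('columns', []):
        if column.get('missingValuesCount', 0) > MISSING_THRESHOLD:
            anomalies.append(column.get('name', 'unknown'))

    if anomalies:
        sns.publish(
            TopicArn=SNS_TOPIC_ARN,
            Subject='DataBrew profile anomaly detected',
            Message='Columns with too many missing values: ' + ', '.join(anomalies)
        )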

Launch a campaign using Amazon PinPoint

Before you create segments, create an Amazon PinPoint project. To create segments, we use S3 events to trigger a Lambda function that creates a new base segment whenever the final DataBrew job loads the campaign-ready data. We can either create or update base segments; for this post, we create a new segment. See the following code (Boto3) for the Lambda function:

import boto3
import time
from datetime import datetime

now = datetime.now()  # current date and time
date_time = now.strftime("%Y_%m_%d")

# Name the new segment with the current date so each import creates a distinct segment
segname = 'segment_name_' + date_time

# Brief pause before starting the import job
time.sleep(60)

client = boto3.client('pinpoint')

def lambda_handler(event, context):
    # Import the campaign-ready file from Amazon S3 as a new Amazon Pinpoint segment
    response = client.create_import_job(
        ApplicationId='xxx-project-id-from-pinpoint',
        ImportJobRequest={
            'DefineSegment': True,
            'Format': 'CSV',
            'RoleArn': 'arn:aws:iam::xxx-accountid:role/service-role/xxx_pinpoint_role',
            'S3Url': 's3://<xxx-bucketname>/campaign-ready',
            'SegmentName': segname
        }
    )
    print(response)

Use this base segment to create dynamic segments and launch a campaign. For more information, see Amazon PinPoint campaigns.

Export Amazon PinPoint project events to Amazon S3 using Kinesis Data Firehose

You can track and push events related to your project, such as sent, delivered, opened messages, and a few others, to either Amazon Kinesis Data Streams or Kinesis Data Firehose, which stream this data to AWS data stores such as Amazon S3. For this post, we use Kinesis Data Firehose. We create our delivery stream prior to enabling event streams on the Amazon PinPoint project.

  1. Create your Firehose delivery stream.

The event stream is disabled by default.

  1. Choose Edit.
  2. Select Stream to Amazon Kinesis and select Send events to an Amazon Kinesis Data Firehose delivery stream.
  3. Choose the delivery stream you created.
  4. For IAM role, you can allow Amazon Pinpoint to automatically create a new role or use an existing role.
  5. Choose Save.
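If you prefer to enable the event stream programmatically rather than through the console, the following is a minimal sketch; the application ID, delivery stream ARN, and role ARN are placeholders:

import boto3

pinpoint = boto3.client('pinpoint')

# Placeholder values: replace with your project (application) ID, Firehose delivery
# stream ARN, and an IAM role that allows Amazon Pinpoint to write to the stream
response = pinpoint.put_event_stream(
    ApplicationId='xxx-project-id-from-pinpoint',
    WriteEventStream={
        'DestinationStreamArn': 'arn:aws:firehose:us-east-1:123456789012:deliverystream/pinpoint-events',
        'RoleArn': 'arn:aws:iam::123456789012:role/pinpoint-event-stream-role'
    }
)
print(response['EventStream'])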

The events are now sent to Amazon S3. You could either create an Athena table or use Amazon Kinesis Data Analytics to analyze events and build a dashboard using QuickSight.

Security best practices

Consider the following best practices in order to mitigate security threats:

  • Be mindful while creating IAM roles to provide access to only necessary services
  • If sending emails through Amazon SNS, make sure you send email alerts only to verified or subscribed recipients to minimize the possibility of automated emails being used to target victims’ external email addresses
  • Use IAM roles rather than user keys
  • Have logs written to Amazon CloudWatch, and set CloudWatch alarms in case of failures
  • Take regular backups of DataBrew jobs and Amazon Pinpoint campaigns (if needed)
  • Restrict network access for inbound and outbound traffic to least privilege
  • Enable the lifecycle policy to retain only necessary data, and delete unnecessary data
  • Enable server-side encryption using AWS KMS (SSE-KMS) or Amazon S3 (SSE-S3)
  • Enable cross-Region replication of data in case you feel backing up the source data is necessary

Clean up

To avoid ongoing charges, clean up the resources you created as part of this post:

  • S3 bucket
  • SFTP server
  • DataBrew resources
  • PinPoint resources
  • Firehose delivery stream, if applicable
  • Athena tables and QuickSight dashboards, if applicable

Conclusion

In this post, we walked through how to implement an automated workflow using DataBrew to perform ETL, Amazon PinPoint to launch campaigns, and Lambda to automate the process. This solution helps reduce time to production, is easy to implement because it doesn’t require coding, and has no licensing costs involved. Try this solution today for your own datasets, and leave any comments or questions in the comments section.


About the Authors

Suraj Shivananda is a Solutions Architect at AWS. He has over a decade of experience in Software Engineering, Data and Analytics, DevOps specifically for data solutions, automating and optimizing cloud based solutions. He’s a trusted technical advisor and helps customers build Well Architected solutions on the AWS platform.

Surbhi Dangi is a Sr Manager, Product Management at AWS. Her work includes building user experiences for Database, Analytics & AI AWS consoles, launching new database and analytics products, working on new feature launches for existing products, and building broadly adopted internal tools for AWS teams. She enjoys traveling to new destinations to discover new cultures, trying new cuisines, and teaches product management 101 to aspiring PMs.

Field Notes: Understanding Carrier Codes, Message Structure, and Interaction Analytics with Amazon Pinpoint

Post Syndicated from Edward Schaefer original https://aws.amazon.com/blogs/architecture/field-notes-understanding-carrier-codes-message-structure-and-interaction-analytics-with-amazon-pinpoint/

IT developers are frequently looking for an analytics system that tracks app user behavior and engagement with various marketing campaigns. It can be challenging to differentiate between use cases and advantages of utilizing Long Codes, Short Codes and Toll-Free numbers to feed into interaction analytics. With Amazon Pinpoint, developers can learn how each user prefers to engage and can personalize their end-user’s experience to increase engagement.

In this blog post, we’ll evaluate the differences between Long Codes, Short Codes, and Toll-Free numbers, and we’ll also discuss messaging templates and creating journeys to customize event handling in Amazon Pinpoint.

Typical use cases of Amazon Pinpoint include:

  • Sending timely and targeted messages to your customers promoting your products and services, with basic templates or highly personalized messages.
  • Event-based campaigns can be used to send a message when a customer creates a new account or when they add an item to their cart but don’t purchase it. These communications are transactional in nature and can be sent on customer activities within your application.
  • You can create customer outreach to user communities of millions and use built-in analytics to observe your campaign performance.

SMS messaging forms one of the most critical communication channels with customers. Both one way and two-way messaging are supported by Amazon Pinpoint when you enable the SMS channel in your project.

Architecture overview

The following diagram illustrates a typical architecture for how Pinpoint integrates with various AWS services.

Architecture outlining how Pinpoint integrates with various AWS services.

Long codes, short codes, and toll-free numbers

Dedicated long codes and 10DLC

A dedicated long code, also referred to as a long virtual number or LVN, is a standard phone number that contains up to 12 digits, depending on the country that it’s based in. You cannot request a long code for A2P messaging within the United States. Instead, you’ll need to request a 10DLC.

Typically used for customer service-related communications, long codes also allow businesses to establish consistent experiences. This is done by using the same number to send both SMS text messages as well as voice messages to customers.

In the United States, 10-digit long code (10DLC) numbers are designed specifically for high-volume Application-to-Person (A2P) messaging. Before purchasing a 10DLC, you must first register your company and create a campaign using the Amazon Pinpoint console. Since June 1, 2021, United States mobile carriers require 10DLC for A2P messages.

Common use cases: Customer service, appointment reminders, two-way communications, fraud, or emergency notifications (10DLC), promotional messaging (10DLC).

Advantages:

  • Customer Trust and Brand recognition: 10DLC and dedicated long codes are registered and dedicated to individual companies and campaigns. This means that if you send multiple messages to a recipient, each message will appear to come from the same number.
  • SMS and Voice capable: Businesses can use the same long code numbers for both SMS and voice communications to provide consistent contact experiences to their customers.
  • High reliability and delivery rate (10DLC): 10DLC has been adopted by United States wireless carriers specifically for business messaging. By requiring a registration and pre-vetting process, wireless carriers can support higher message volumes and better deliverability while protecting consumers from potential abuse.
  • Low costs: 10DLC and dedicated long code numbers cost less than dedicated short codes.
  • Fast provisioning: Dedicated long codes are typically provisioned within 24 hours. 10DLC numbers are provisioned in about 1 week. These timelines are much shorter than the 10+ weeks needed for short code provisioning.

Disadvantages:

  • Carrier specific limits (10DLC): United States mobile carriers have announced varying throughput and volume limits depending on the tier assigned to your company. This can make planning more difficult. See 10DLC capabilities for an outline of carrier-specific limits.
  • Limited Throughput (Long Codes): Mobile carriers limit the throughput rate of dedicated long codes to 1 message part per second (MPS) in Canada and 10 MPS in all other countries and regions. This creates a bottleneck when messaging thousands of recipients.
  • Transactional messaging only (Long Codes): Mobile carriers prohibit the use of long codes for promotional or marketing messages. Violations could cause carriers to block messages, impose fines, or shut down service.
  • Difficult to remember: Compared to 5-digit to 6-digit short codes, 10-digit numbers are more difficult for customers to remember and to enter into their devices.

Toll-free numbers

Similar to dedicated long codes, toll-free numbers are 10-digit phone numbers beginning with one of the following area codes: 800, 888, 877, 866, 855, 844, or 833. Toll-free numbers are primarily used for transactional messages, but they can be used for promotional messaging if recipients opt in to receiving messages and the opt-out rate is low.

Common use cases: Customer service, appointment reminders, two-way communications, fraud or emergency notifications.

Advantages:

  • SMS and Voice capable: Businesses can use the same toll-free numbers for both SMS and voice communications to provide consistent contact experiences to their customers.
  • Low costs: Like 10DLC and dedicated long code numbers, toll-free numbers cost significantly less than dedicated short codes.
  • Fast Provisioning: Typically, toll-free numbers are available immediately after request submission.

Disadvantages:

  • Supported in United States Only: Currently Amazon Pinpoint only supports SMS-enabled toll-free numbers in the United States. A toll-free number cannot be used to send messages outside of the US.
  • Limited Throughput: Mobile carriers limit the throughput rate of toll-free numbers to 3 MPS.

Dedicated short codes

Dedicated short codes are 3-digit to 8-digit numbers that are commonly used for high-throughput application-to-person (A2P) messaging workloads. Mobile carriers review and approve all new short code requests before making them active. This vetting process allows messages sent using short codes to bypass carrier filters.

Common use-cases: Promotional messaging, emergency alert systems, mass communications, contest and voting submissions, and two-factor authentication.

Advantages:

  • High-throughput and high-volume messaging: Short codes are designed to be used for high volume messaging campaigns. The vetting and approval process required before activating short codes allows carriers to provide increased MPS throughput and daily message volume quotas while still protecting consumers from abuse.
  • High reliability and delivery rate: Due to the lengthy approval and audit process required to acquire a dedicated short code, short codes are not subject to carrier filtering resulting in reliable message delivery.
  • Easy to remember: Short codes are commonly 5–6 digits in length. This short length allows users to easily recognize and remember codes, increasing campaign effectiveness.

Disadvantages:

  • Lengthy Provisioning Time: It can take 8–12 weeks for short codes to become active on all carrier networks.
  • High Cost: Short code numbers have high one-time setup and monthly fees compared to other originating identities.  For example, in the United States, there’s a $650 one-time setup fee plus an additional recurring charge of $995.00 per month for each short code.
  • Strict rules and regulations: Short codes are governed by different regulatory bodies depending on the country and region. For example, in the United States short codes are regulated by the Federal Communications Commission (FCC), Federal Trade Commission (FTC), and Cellular Telecommunications Industry Association (CTIA). Review Best Practices to learn about the key SMS messaging laws around the world.

 

| # | Number type | Number format | Channel support | Two-way capable | Requires registration | Estimated provisioning time | SMS throughput (message segments per second)² | Pricing³ |
|---|-------------|---------------|-----------------|-----------------|-----------------------|-----------------------------|-----------------------------------------------|----------|
| 1 | Long code/10DLC | 10DLC: 10 digits; long codes: up to 12 digits | SMS, Voice | Yes* | Yes | 1 week | US: Varies¹; CA: 1 MPS; all other countries and regions: 10 MPS | $1/mo + registration |
| 2 | Short code | 3–8 digits | SMS | Yes* | Yes | US: 12 weeks; CA: 16 weeks; all other countries and regions: varies | US: 100 MPS; CA: 100 MPS; all other countries and regions: varies by country | US: $650 setup + $995/mo; CA: $3,000 setup + $995/mo; all other countries and regions: varies |
| 3 | Toll-free | 10 digits | SMS, Voice | Yes* | No | Available immediately | US: 3 MPS; CA: N/A; all other countries and regions: N/A | $2/mo |

¹ https://docs.aws.amazon.com/pinpoint/latest/userguide/settings-10dlc.html
² https://docs.aws.amazon.com/pinpoint/latest/userguide/channels-sms-limitations-characters.html
³ https://aws.amazon.com/pinpoint/pricing/#Numbers

*visit Supported countries and regions (SMS channel) to check the SMS capabilities in your recipients’ countries.

Message templates

When you start using Amazon Pinpoint, you’ll want to think about your message templates. These templates are the content of your messages and can be used for any of the four supported messaging channels: email, SMS, push notifications, and voice.

When creating your templates, you can use custom attributes that you imported when building your segment.  We won’t be going into building your segments as that’s covered in Building segments.

We’ll outline how to create a template for SMS messages as that’s the focus of this blog post. The first section we have to fill out is the template details. You can access the template creation page by navigating to the Message templates section in the left ‘hamburger’ menu on the Amazon Pinpoint service page.

Here you’ll start by naming your template and giving this initial version a description. A sample format you can follow for naming your templates is channel_message-type_segment_target-campaign. For our example we’ll be using the name SMS_Transactional_NEUSA-Customers_WelcomeNewCustomer.

Next, we’ll build our template. We open the attribute finder by choosing ‘Attribute finder’ at the top right of that section and then loading our “Custom Attributes.”

When writing your message templates, you need to think through how they will be perceived by the reader and whether any words or phrases could make the message look like spam. We’ll examine what makes a message appear to be spam in the next section as we cover message structure.

Visit the documentation to learn the latest guidance on templates. 

Message structure

The key part of building an Amazon Pinpoint template is the message.

First, there are a few components that make up your SMS message: the greeting, body, and closing. We’ll start with the greeting section, as this is the first thing our recipients will read. When you build out your campaign in Amazon Pinpoint (for more information, visit Amazon Pinpoint campaigns), you choose the type of message you’ll be sending. This is one of the reasons why our naming convention contains a “message-type” portion. You choose between Transactional and Promotional messages.

When sending out promotional messages you’ll want to avoid directly addressing the person by name in the greeting.

The reason for this is that promotional messages are traditionally sent in bulk through various carriers around the world. Therefore, using the recipient’s name in the greeting could be viewed as a targeted spam message, and it’s something you should avoid when creating templates for your promotional messages. Instead, consider using your company name for the greeting, as this quickly tells readers up front who the message is from.

Transactional messages have a bit more leeway when it comes to the greeting section, and the recipient’s name can be used, with some caveats. You’ll still want to avoid language that indicates spam, for example punctuation like “Hi Bob-” or “Greetings Jane!”, and instead greet your customers with a purpose: “Bob’s order record:” or “Jane: Account Update”. Otherwise, you can leave out the name altogether or substitute it with your company name. The goal with the greeting in a transactional message is to give the user an idea what the message is about.

Now that we have an idea of our greeting, we move on to the body, which will contain the content of your message and any links or data you may want to pass on to the customer. Let’s start off by introducing randomness and taking advantage of the “Attribute finder” panel we enabled earlier. We can use attributes for common things like the customer’s name, account identifier, payment due, city, or any other information about the recipient that is part of our segment.

If a URL is needed for the user to perform some action once they receive the message, this is an easy way to introduce randomness. We’ll want each URL to be unique, which we can do by using a URL shortener like the one described by Eric Johnson @ https://aws.amazon.com/blogs/compute/building-a-serverless-url-shortener-app-without-lambda-part-1/. Other options to add uniqueness to our message might be a timestamp with the seconds, an anti-phishing phrase, a city or zip code, or any other unique information that’s ideally not personally identifiable.
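As a purely illustrative sketch of adding uniqueness, the snippet below appends a random, non-identifying token and a timestamp to a shortened link; the base URL is a placeholder for whatever your URL shortener produces:

import secrets
from datetime import datetime, timezone

# Placeholder base URL produced by your URL shortener
short_base_url = 'https://example.com/r'

token = secrets.token_urlsafe(6)  # random, non-identifying token per recipient
timestamp = datetime.now(timezone.utc).strftime('%H:%M:%S')

message_suffix = f"{short_base_url}/{token} (sent {timestamp} UTC)"
print(message_suffix)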

In the example below, we’ve used the following template:

[Attributes.CompanyName] Purchase the items today and receive
[Attributes.DiscountAmount] of your next order.  Visit
[Attributes.ShortURL] to checkout and complete your order.

When messages are sent that use this template, Amazon Pinpoint will use the custom attributes provided in the segment data to populate the custom fields.

Proactive interactions

When sending messages to customers, it’s important to think about how to handle certain events that may come back from a message sent to a customer.  These include common events such as a customer opting out of messages or a message failure.  We’ll start by handling opt-outs as this is a common one and sometimes unintentional.

In the below screenshot, we’ve set up an Amazon Pinpoint journey (for more information about journeys, visit Amazon Pinpoint journeys) to take actions based on a customer’s response to a message. If the customer responds to the message with “stop”, we’ve set up the yes/no split to invoke a Lambda function and send the customer an email. In this case our email may include a message like “You have opted out of SMS messaging; if this was unintentional, please visit your account to reactivate.”

We invoke a Lambda function to update the customer record with their SMS status. On the customer portal, you could then have a button next to the SMS status that, when clicked, calls the appropriate APIs to opt the customer back in. Another way to achieve this is to validate the phone number using Pinpoint’s validation API before the page loads and then show the same button to the customer.
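As a minimal sketch of what that opt-in call could look like (the project and endpoint IDs are placeholders), setting the endpoint’s OptOut attribute back to NONE re-enables messaging for that endpoint:

import boto3

pinpoint = boto3.client('pinpoint')

# Placeholder IDs: replace with your Pinpoint project (application) ID and the customer's endpoint ID
response = pinpoint.update_endpoint(
    ApplicationId='xxx-project-id-from-pinpoint',
    EndpointId='endpoint-id-123',
    EndpointRequest={
        'OptOut': 'NONE'  # 'NONE' re-enables messages; 'ALL' marks the endpoint as opted out
    }
)
print(response['MessageBody'])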

Another common scenario is message failures, which we can address in the same manner. Our journey is configured with the same entry, SMS send, and yes/no split. However, this time we evaluate whether the message failed, and if it did, we send an email to the customer. Similar to opt-outs, we can use a Lambda trigger to update the customer database and then prompt the user to update their phone number. This scenario is commonly caused by phone numbers that are incorrect or entered in the wrong format, so a great way to reduce errors is to validate the phone number after the customer enters it; a sketch of the validation call follows, and more information about how to validate numbers in Amazon Pinpoint can be found in the Developer Guide.
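A minimal sketch of calling the validation API (the phone number below is a placeholder) looks like the following:

import boto3

pinpoint = boto3.client('pinpoint')

# Placeholder phone number: validate before saving it to the customer record
response = pinpoint.phone_number_validate(
    NumberValidateRequest={
        'IsoCountryCode': 'US',
        'PhoneNumber': '+12065550100'
    }
)
result = response['NumberValidateResponse']
if result.get('PhoneType') == 'INVALID':
    print('The phone number appears to be invalid; ask the customer to re-enter it.')
else:
    print('Cleansed number:', result.get('CleansedPhoneNumberE164'))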

The following flow shows a typical Pinpoint journey and integration with your application backend.

Diagram showing Journeys workflow

Conclusion

In this blog post, we showed you how to use features of Amazon Pinpoint like templates, and then journeys, to automatically resolve failed messages, opt-outs, and other events. We also reviewed the various types of phone number options available in Amazon Pinpoint, as well as how to monitor your usage.

From here, navigate to the console, setup your project in Amazon Pinpoint, and begin testing the various channels. We recommend watching a video on building immersive experiences with Amazon Pinpoint and Journeys. We have only just begun showing you what is possible with Amazon Pinpoint today.

Field Notes provides hands-on technical guidance from AWS Solutions Architects, consultants, and technical account managers, based on their experiences in the field solving real-world business problems for customers.