Tag Archives: Amazon Pinpoint

Automate phone number validation with Amazon Pinpoint

Post Syndicated from Ilya Pupko original https://aws.amazon.com/blogs/messaging-and-targeting/automate-phone-number-validation-with-amazon-pinpoint/

Amazon Pinpoint allows you to engage with your customers across multiple messaging channels like SMS text, email, and voice messages. While planning and executing standard text (SMS) and voice-based campaigns, one of the challenges developers often run into is the need to verify if the phone numbers in their internal database are valid and conform to the standard E.164 format. You can attempt to verify the phone numbers manually one at a time, but it’s tedious. To overcome this issue, Amazon Pinpoint provides a phone number validation service that you can use to determine if a phone number is valid, have it automatically formatted, and obtain additional information about the phone number itself. For example, when you use the phone number validation service, it returns the following information:

  • The phone number in E.164 format.
  • The phone number type (such as mobile, landline, or VoIP).
  • The city and country where the phone number is based.
  • The service provider that is associated with the phone number.
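For reference, here is a minimal sketch of calling the phone number validation service with the AWS SDK for Python (Boto3). The number shown is a placeholder, and the response fields map to the list above:

import boto3

pinpoint = boto3.client('pinpoint')

# The number below is a placeholder; IsoCountryCode helps interpret 10-digit numbers.
response = pinpoint.phone_number_validate(
    NumberValidateRequest={
        'IsoCountryCode': 'US',
        'PhoneNumber': '2065550142'
    }
)

result = response['NumberValidateResponse']
print(result.get('CleansedPhoneNumberE164'))   # e.g. +12065550142
print(result.get('PhoneType'))                 # MOBILE, LANDLINE, VOIP, ...
print(result.get('City'), result.get('Country'))
print(result.get('Carrier'))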

This blog post aims to provide a step-by-step implementation guide and the necessary code to enable an integrated solution for number verification.

Process flows and architecture


This solution uses Amazon Simple Storage Service (Amazon S3), Amazon Pinpoint, AWS Step Functions, Amazon Simple Notification Service (Amazon SNS), and AWS Lambda. To initiate the process, you upload your source contact file in CSV format to the dedicated Amazon S3 bucket. When the CSV file is uploaded, S3 triggers the associated tasks. Based on the optional configuration rules, the application code either runs the phone validation logic first or imports the contact information as-is into Amazon Pinpoint as a new imported segment and updates the overall Amazon Pinpoint audience information. If phone validation is enabled, the system first generates and saves a new output file to Amazon S3 with the valid phone numbers, metadata, and so on, and uses this updated contact information during the import. Additionally, the system kicks off a scheduled campaign to all imported contacts.

This CloudFormation template will automatically create the following new resources on your first deploy:

  • AWS Lambda functions: These functions contain the application code that validates the phone numbers and creates the segment for the uploaded contacts.
  • S3 event notification: When the CSV file is uploaded to the S3 bucket, the S3 event notification triggers the AWS Lambda function, which initiates the AWS Step Functions state machine (see the sketch after this list). To learn more about S3 event notifications, check the documentation.
  • AWS Step Functions: This solution sets up infrastructure that is triggered automatically when a new file is placed in the S3 bucket. The process, managed by an AWS Step Functions state machine, starts an Amazon Pinpoint import job, waits for it to complete, and sends notifications that the job started, finished successfully, or failed.
  • IAM role: The IAM role is used to make Amazon Pinpoint calls, to access S3, and interact with AWS Step Functions and Amazon SNS. You can check the IAM documentation to learn more about IAM roles.
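The template configures all of this for you; as a rough illustration, the S3 event notification it creates is equivalent to a call like the following (the bucket name, prefix, and function ARN are placeholders):

import boto3

s3 = boto3.client('s3')

# Invoke the import Lambda function whenever a .csv object lands under the prefix.
s3.put_bucket_notification_configuration(
    Bucket='my-import-bucket',
    NotificationConfiguration={
        'LambdaFunctionConfigurations': [
            {
                'LambdaFunctionArn': 'arn:aws:lambda:us-east-1:123456789012:function:StartImport',
                'Events': ['s3:ObjectCreated:*'],
                'Filter': {
                    'Key': {
                        'FilterRules': [
                            {'Name': 'prefix', 'Value': 'import/'},
                            {'Name': 'suffix', 'Value': '.csv'}
                        ]
                    }
                }
            }
        ]
    }
)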

Prerequisites and deployment steps

Step 1: Set up the Amazon Pinpoint project and the S3 bucket

In Amazon Pinpoint, a project (also sometimes referred to as “application”) is a collection of settings, customer information, segments, and campaigns. Setting up a Pinpoint project is the first step to deploy our solution. It holds the segment we will use in the later steps.

  1. Navigate to Amazon Pinpoint from the services tab in the AWS Management Console and create a new Amazon Pinpoint project.
  2. Copy the Project ID from the Amazon Pinpoint console and save it in notepad. You will need it later.

In Amazon S3, create a new bucket to upload the files to. Make sure it is set up according to your company’s security practices. If you have an existing bucket you want to use instead, note that this solution requires a source bucket in the same Region as the solution itself, and it will override any triggers already in place on the bucket.

Step 2: Deploy code and services

AWS CloudFormation is a service that gives developers and businesses an easy way to create a collection of related AWS and third-party resources. You can provision them in an orderly and predictable fashion.

  1. Download the latest version of the solution from https://github.com/aws-samples/digital-user-engagement-reference-architectures/blob/master/cloudformation/S3_triggered_import.yaml
  2. Log in to your AWS account and navigate to AWS CloudFormation from the services tab in the AWS Management Console: https://console.aws.amazon.com/cloudformation/home
  3. Click on the Create Stack button and choose to provision New Resources. Then select Upload a template file and choose the file you just downloaded in the first step.
  4. On the Specify stack details screen, most of the information is pre-populated. Parameters:
    · Replace the PinpointProjectID field with the value you saved in Step 1
    · ValidatePhone: Choose true if you wish to validate the numbers via the Pinpoint API before importing the segment.
    · AssumeUS: Choose true if you want to assume a US (+1) phone number for any phone number that is 10 digits long, or false if you want to import numbers as-is.
    · AutoCreateCampaign: Choose true if you want to automatically create a campaign based on the imported file or false if you want to just import into the system without automatically scheduling any campaigns. This setting will be saved as an ImportSegment Lambda environment variable so you can adjust it later.
    · CampaignDelay: Number of minutes from the time of import to start of the campaign (if AutoCreateCampaign is set to true). Allows for the last-minute double check and/or pause as needed. Will be saved as CreateCampaign Lambda environment variable.
    · FileDropS3Bucket: Name of the existing Amazon S3 Bucket where new import files will be placed. Note that it has to be in the same region as you are running this template and the bucket should not have any existing notification configurations or they will be overwritten.
    · FileDropS3Prefix: Prefix (sub-folder name) of the Amazon S3 Bucket where you will be uploading new files to be imported.
  5. Settings on the Configure stack options page are optional; click Next.

Select all acknowledgment boxes and click Create Stack. It takes a couple of minutes for AWS CloudFormation to deploy all the resources.

The solution is now deployed, and you can test it by uploading the sample CSV file to the Amazon S3 bucket. If you have validation enabled, you will notice that the output CSV file is created in the “results” folder of the same S3 bucket. You can also navigate to the Amazon Pinpoint console to check the Amazon Pinpoint segment. Once the deployment is complete and the segment is created, you can leverage Amazon Pinpoint campaigns to reach out to your customers.

Conclusion and Next Steps

Enabling solutions such as this provides an efficient and integrated mechanism to validate phone numbers and import customer contacts into Amazon Pinpoint. It saves time so that you can focus on creating effective campaigns to engage with your customers.

As potential next steps, you can look into further expanding the solution by:

  1. Adjusting the default security of the Amazon S3 bucket by limiting who has access to new files. You can also adjust its encryption and the expiration of the files.
  2. Building out the lookup AWS Lambda function to additionally fetch other information about the contact from your other systems of record and/or third-party tools. You can also add business logic, such as blocking numbers from certain countries (or, conversely, only allowing certain countries).
  3. Adding more dynamic segments and new endpoint (or user) attributes to more easily track contacts based on their upload dates, type of phone number, and so on.

You could also create a friendly interface that your users interact with when they need to upload files, instead of using the S3 console directly. This “interface” may even be just a backend flow that integrates with your system of record, so users don’t have to deal with any interface or uploads in the first place.

For this, and some other reference architectures you could consider, see https://github.com/aws-samples/digital-user-engagement-reference-architectures.

References

Amazon Pinpoint

https://aws.amazon.com/pinpoint/

Validating phone numbers in Amazon Pinpoint

https://docs.aws.amazon.com/pinpoint/latest/developerguide/validate-phone-numbers.html

Amazon Pinpoint Campaigns

https://docs.aws.amazon.com/pinpoint/latest/userguide/campaigns.html

Pinpoint Segment

https://docs.aws.amazon.com/pinpoint/latest/userguide/tutorials-create-a-segment.html

 

Send voice appointment reminders using Amazon Pinpoint custom channels and Amazon Connect

Post Syndicated from Ryan Lowe original https://aws.amazon.com/blogs/messaging-and-targeting/send-voice-appointment-reminders-using-amazon-pinpoint-custom-channels-and-amazon-connect/

Introduction

In this post, we will walk through setting up an always-on appointment reminder campaign in Amazon Pinpoint. No-show rates are a constant challenge for service providers. Industries such as hospitality estimate that 20% of diners miss reservations in big cities (1), while salons average five missed appointments per week (2). Professional services such as financial institutions and sales teams face similar challenges in ensuring clients do not miss meetings. To these businesses, a missed appointment represents lost revenue. As a result, the no-show rate is a key metric to improve. An outbound voice message provides another way to reach customers beyond email or SMS, and voice reminders give customers a choice of channels based on personal preference.

Overview

Amazon Pinpoint is a multichannel communications service enabling customers to send both promotional and transactional messages across email, SMS, push notifications, voice, and custom channels. Amazon Connect is an easy to use omnichannel cloud contact center that helps companies provide superior customer service at a lower cost.

There are benefits to using these services together. Amazon Pinpoint allows you to build a segment of users that can be used within a campaign. Amazon Connect enables you to send outbound voice messages at scale if your audience is large and requires a high number of transactions per second (TPS).

To use these services together, you set up custom channels in Amazon Pinpoint, which can be created via an AWS Lambda function. These functions enable you to call APIs to trigger message sends as part of Amazon Pinpoint campaigns. Amazon Pinpoint has developed a new AWS Lambda function which can be used to send outbound voice messages via Amazon Connect. This configuration allows you to define the voice message to be sent, define the segment of users you would like to target, and send voice messages at scale through Amazon Connect via the Amazon Pinpoint custom channel.
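To make this concrete, the following is a minimal sketch of what such a custom channel function can do. The environment variable names are our own, and the packaged function deployed later in this post is the full implementation:

import os
import boto3

connect = boto3.client('connect')

# Assumed environment variables holding the IDs gathered from the Amazon Connect console.
INSTANCE_ID = os.environ['CONNECT_INSTANCE_ID']
CONTACT_FLOW_ID = os.environ['CONNECT_CONTACT_FLOW_ID']
QUEUE_ID = os.environ['CONNECT_QUEUE_ID']

def lambda_handler(event, context):
    # Amazon Pinpoint invokes the custom channel function with the campaign's endpoints.
    for endpoint_id, endpoint in event['Endpoints'].items():
        connect.start_outbound_voice_contact(
            DestinationPhoneNumber=endpoint['Address'],
            InstanceId=INSTANCE_ID,
            ContactFlowId=CONTACT_FLOW_ID,
            QueueId=QUEUE_ID,
            Attributes={
                # Contact attributes the flow can read to speak the reminder.
                'AppointmentDate': endpoint['Attributes']['AppointmentDate'][0],
                'AppointmentTime': endpoint['Attributes']['AppointmentTime'][0]
            }
        )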

The audience for this solution is technical customers who are used to working with multiple AWS services and are familiar with AWS Lambda functions. The solution relies on the Amazon Pinpoint custom channel feature and targeting, along with the Amazon Connect outbound voice API called via a prepared AWS Lambda function. Once completed, you will be able to create an evergreen campaign that sends outbound voice messages to your patients who have an appointment the following day.

The costs associated with this solution will be:

  1. Amazon Connect outbound voice calls per minute
  2. Amazon Connect claimed phone number(s)
  3. Amazon Pinpoint Monthly Targeted Audience (MTA) costs.

The costs for an outbound voice reminder system that sends 10,000 messages per day, with an average length of 20 seconds per call, to a total monthly audience of 300,000, in the US are as follows. Note that prices will vary for other countries. Complete Amazon Connect outbound call pricing can be found here.

Solution

Prerequisites:

For this walkthrough, the article assumes you have:

  • An AWS account
  • Basic understanding of IAM and the privileges required to create the following: an IAM identity provider, roles, policies, and users
  • Basic understanding of Amazon Pinpoint and how to create a project
  • Basic understanding of Amazon Connect and experience in creating contact flows. More information on setup of Amazon Connect can be found here.

Step 1: Create an Appointment Reminder custom event

The first step in setting up this solution is to create and report a custom event to Amazon Pinpoint. There are multiple ways to report events in your application. For demonstration purposes, below is an example event call using the AWS SDK for Python (Boto3) from inside an AWS Lambda function.

It is important to note that the Amazon Pinpoint events API can also be used to update endpoints when the event is registered. In the example below, the API call updates the endpoint attributes AppointmentDate and AppointmentTime with the details of the upcoming appointment. These attributes will be used in the outgoing message to the end user.

Sample Event: Appointment Coming Up

import datetime
import time

import boto3

client = boto3.client('pinpoint')
app_id = '[PINPOINT_PROJECT_ID]'
endpoint_id = '[ENDPOINT_ID]'
address = '[PHONE_NUMBER]'

def lambda_handler(event, context):
    # Report the AppointmentReminder event and update the endpoint's
    # AppointmentDate and AppointmentTime attributes in the same call.
    client.put_events(
        ApplicationId=app_id,
        EventsRequest={
            'BatchItem': {
                endpoint_id: {
                    'Endpoint': {
                        'ChannelType': 'CUSTOM',
                        'Address': address,
                        'Attributes': {
                            'AppointmentDate': ['December 15th, 2020'],
                            'AppointmentTime': ['2:15pm']
                        }
                    },
                    'Events': {
                        'appointment-event': {
                            'Attributes': {},
                            'EventType': 'AppointmentReminder',
                            'Timestamp': datetime.datetime.fromtimestamp(time.time()).isoformat()
                        }
                    }
                }
            }
        }
    )

NOTE: The following steps assume that the AppointmentReminder event is being reported to Amazon Pinpoint. If you are unable to integrate the above API call into your application, you can manually create an AWS Lambda function using a Python runtime with the above code to trigger sample events.

Step 2: Create an Amazon Connect contact flow for outbound calls

This article assumes that you have an Amazon Connect contact center already set up and working. In this step, we will set up our Amazon Connect contact flow to dial our recipients and read the message before hanging up.

  1. Log in to your Amazon Connect instance using your access URL (https://<alias>.awsapps.com/connect/login).
    Note: Replace alias with your instance’s alias.
  2. In the left navigation bar, pause on Routing, and then choose Contact flows.
  3. Under Contact flows, choose a template, or choose Create contact flow to design a contact flow from scratch. For more information, see Create a New Contact Flow.
  4. Download the sample JSON contact flow configuration file Outbound_calling.json.
  5. Choose the dropdown menu under Save and choose Import flow (beta).
  6. Select the Outbound_calling.json file in the Import flow (beta) dialog and choose Save.
  7. Choose Save to open the Save flow dialog. Then choose Save to close the dialog.
  8. Choose Publish to open the Publish dialog. Then choose Publish to close the dialog.
  9. In the contact flow designer, expand Show additional flow information.
  10. Under ARN, copy the Amazon Resource Name (ARN) contact flow. It looks like the following:
    arn:aws:connect:region:123456789012:instance/[ConnectInstanceId]/contact-flow/[ConnectContactFlowId]Note the ConnectInstanceId and ConnectContactFlowId from the ARN, they will be used in the next step.
  11. In the left navigation bar, pause on Routing and then choose Queues.
  12. Choose the queue you wish to use for the outbound calls.
  13. In the Edit queue screen, expand Show additional queue information.
  14. Under ARN, copy the Amazon Resource Name (ARN) for the queue. It looks like the following:
    arn:aws:connect:region:123456789012:instance/[ConnectInstanceId]/queue/[ConnectQueueId]
    Note the ConnectQueueId from the ARN. It will be used in the next step.

Step 3: Deploy and modify the AWS Lambda function for the Amazon Pinpoint to Amazon Connect custom channel

Next, we will need to deploy an Amazon Pinpoint custom channel. Custom channels in Amazon Pinpoint allow you to send messages through any service with an API, including Amazon Connect. The AWS Serverless Application Repository contains an open-sourced AWS Lambda function that we will use for our custom channel. After deploying the AWS Lambda function, we will customize it to match our requirements.

  1. Navigate to the AWS Lambda Console, then choose Create function.
  2. Under Create function, choose Browse serverless app repository.
  3. Under Public applications, choose the checkbox next to Show apps that create custom IAM roles or resource policies and enter amazon-pinpoint-connect-channel in the search box.
  4. Choose the amazon-pinpoint-connect-channel card from the list and review the Application details.
  5. Under Application settings enter the details for ConnectContactFlowId, ConnectInstanceId, and ConnectQueueId from the previous step.
  6. After reviewing all the details, choose the checkbox next to I acknowledge that this app creates custom IAM roles and resource policies and choose Deploy.
  7. Wait a couple of minutes for the application to deploy two AWS Lambda functions and an Amazon Simple Queue Service (Amazon SQS) queue.
  8. Under Resources, choose the PinpointConnectQueuerFunction resource to open the AWS Lambda function configuration. This is the AWS Lambda function that Amazon Pinpoint will call when the message is crafted.
  9. Under Function code, scroll down to line 31 and replace
    message = "Hello World! -Pinpoint Connect Channel"
    with
    message = "This is a reminder of your upcoming appointment on {0} at {1}".format(endpoint_profile["Attributes"]["AppointmentDate"][0], endpoint_profile["Attributes"]["AppointmentTime"][0])
  10. Choose Deploy.

Step 4: (Optional) Modify the custom channel AWS Lambda function to change the rate of outgoing calls

By default, the custom channel we deployed in the previous step places outbound calls through Amazon Connect at a rate of one call every 3 seconds. This lets you control how many outbound calls are active at once so that you avoid running into service limits. Review your current Amazon Connect service limits for more details.

  1. Navigate to the AWS Lambda Console, then choose AmazonPinpointConnectChannel-backgroundprocessor function.
  2. Under Function code, scroll down to line 73 and replace the sleep timer, currently set to 3 seconds, with a value that meets your requirements.
  3. Choose Deploy.

Step 5: Create a Pinpoint custom campaign with your lambda function and segment

  1. Create a CSV file to import endpoints with the attributes of AppointmentDate and AppointmentTime.
    Example:
    Id,Address,ChannelType,Attributes.AppointmentDate,Attributes.AppointmentTime
    1,+1[PHONE_NUMBER],SMS,November 30 2020,9:00am
    2,+1[PHONE_NUMBER2],SMS,November 30 2020,10:00am
  2. Navigate to the Amazon Pinpoint console.
  3. In the All Projects list, select your project.
  4. In the navigation pane, choose Segments.
  5. Choose Create a Segment.
  6. Choose Import a segment and upload your CSV file and choose Create segment.
  7. In the navigation pane, choose Campaigns.
  8. Choose Create campaign.
  9. In the Create a campaign wizard, enter a name for the campaign.
  10. Under Channel choose Custom.
  11. Choose Next.
  12. On the Choose a segment screen, choose the segment created above, and choose Next.
  13. On the Create your message screen, do the following:
    a) For Lambda function choose AmazonPinpointConnectChannel that we deployed in Step 3 above.
    b) For endpoint Options choose SMS.
    c) Choose Next.
  14. On the Choose when to send the campaign screen, do the following:
    a) Choose When an event occurs.
    b) Under Events, choose the AppointmentReminder event.
    c) Under campaign dates, choose a Start date and time and an End date and time to be used as the campaign’s duration.
  15. Choose Next.
  16. Review the campaign details and choose Launch campaign.

Cleanup:

To avoid incurring further charges, remove the two AWS Lambda functions and the Amazon Simple Queue Service queue provisioned in the steps above by following these steps:

  1. Navigate to the AWS CloudFormation console.
  2. Choose the serverlessrepo-amazon-pinpoint-connect-channel stack and choose Delete.
  3. Choose Delete stack in the delete confirmation window.

 

Next Steps:

You can continue to iterate on this experience using Amazon Pinpoint and Amazon Connect to create a custom user experience.

To learn more about these services, please visit the Amazon Pinpoint or Amazon Connect web pages.

(1) https://www.scisolutions.com/uploads/news/Missed-Appts-Cost-HMT-Article-042617.pdf

(2) https://blog.carbonfreedining.org/the-ultimate-guide-to-restaurant-no-shows

Auto-reply to incoming emails using Amazon Simple Email Service (SES)

Post Syndicated from Ilya Pupko original https://aws.amazon.com/blogs/messaging-and-targeting/auto-reply-to-incoming-emails-using-amazon-simple-email-service-ses/

Both Amazon Pinpoint and Amazon Simple Email Service (SES) are known for their ability to send transactional and promotional emails at scale and with ease. However, both are often not set up to receive email replies. Owners often assume that the “no-reply” addresses they use do not require much consideration. This means that if a customer does reply, they get an unhelpful server rejection indicating that the address is invalid. They also cannot unsubscribe via a simple reply, which is an otherwise established common practice. Automated guidance that the address is not monitored, and whom to contact for assistance, is never provided. In summary, a very unprofessional experience.

If you do have full control over the DNS and are not already receiving emails at the subdomain used for these emails, you can follow this short guide. It walks you through all the setup needed to have automated and templated responses to any address at the domain. This includes the address you use to send emails. Follow this post to ensure that your Amazon SES and Amazon Pinpoint are set up in accordance with common configuration and best business practices to have professional auto-reply to emails sent to the configured sending email addresses.

Solution overview

The proposed solution does not rely on any additional services. It does not add any additional charges beyond the cost directly associated with receiving and sending the emails and the minimal AWS Lambda function for the automated logic. It relies on SES built-in capability to receive emails, Amazon Pinpoint native templates, and uses Lambda for basic orchestration.


Note, in this walkthrough and related code, we are using Amazon Pinpoint templates as they can be managed and maintained directly via the console, but you can choose to use SES templates (via the CreateTemplate API) or, if it makes better sense in your scenario, even just hardcode the template into the AWS Lambda function itself.
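For orientation, here is a minimal sketch of the reply logic, assuming a Pinpoint email template named auto-reply and sending the response through Amazon SES; the names and fields are illustrative, and the function deployed in the next step handles more cases:

import boto3

pinpoint = boto3.client('pinpoint')
ses = boto3.client('ses')

TEMPLATE_NAME = 'auto-reply'  # illustrative template name

def send_auto_reply(received_at_address, sender_address, original_subject):
    # Look up the Amazon Pinpoint email template that holds the canned response.
    template = pinpoint.get_email_template(TemplateName=TEMPLATE_NAME)['EmailTemplateResponse']

    ses.send_email(
        Source=received_at_address,  # reply from the address the email was sent to
        Destination={'ToAddresses': [sender_address]},
        Message={
            'Subject': {'Data': template['Subject'] + ' - ' + original_subject},
            'Body': {'Html': {'Data': template['HtmlPart']}}
        }
    )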

To complete the setup, all you must do is follow these steps:

      1. Confirm (Sub-) Domain setup in SES (even if you use Amazon Pinpoint to send your emails out, the SES portion of the console should show the validated domain as well). See SES Developer Guide.
      2. Ensure that your SES domain is verified and you are out of the sandbox. If still in the sandbox, you can only send emails to the Amazon SES mailbox simulator addresses and email addresses/domains that you have pre-verified. See Moving out of the Amazon SES sandbox.
      3. Configure SES to receive incoming emails. Please note that this must be done on the whole subdomain you use, not just a single email address. See Setting up Amazon SES email receiving.
      4. Create/add a new template you want to use via Amazon Pinpoint. Simply switch the console over to Amazon Pinpoint, select Message templates, click Create, select Email, and fill out the rest of the self-explanatory fields.
        1. Plaintext portion is optional – you can either skip it or fill it out and enable in the Lambda function we are deploying in the next step.
        2. Similarly, if you prefer to use the SES template, you can instead. Just use the associated line in that same code.
        3. Same with a hardcoded template, if you prefer that for some reason.
      5. Have this pre-defined CloudFormation template create the required SES receive rule and Lambda function. This processes the incoming email and sends back the response, all using the code shared in the dedicated portion of our GitHub AWS Digital User Engagement Reference Architectures repository. Specifically:
          1. Download the YAML from SES_Auto_Reply.yaml.
          2. Go to CloudFormation in the AWS Management Console. (Remember to choose the Region you want it deployed in.)
          3. Click Create Stack and then choose With new resources
          4. Leave the default "Template is ready", but switch to "Upload a template file" and choose the file you just downloaded.
          5. Follow the wizard to give the “stack” a new name and enter the name of the template you created in step 4.
          6. Optionally, you can also set the default response address, the addresses and/or domains you want to limit the auto-response to, and adjust the incoming email rule set it should be stored under (the default should be fine unless you have manually adjusted it in the past).
      6. Once deployed, the behavior is immediately active and you can further adjust any of these elements.

 

Conclusion and what’s next?

This architecture, once deployed, sends out the templated auto-response using the SES/Pinpoint domain/email address it received the original email on.

The new rule is added to the SES email receiving rule set to allow further customization:

  1. The rule can be limited to a specific email address or a specific domain, or be set to apply across all domains.
  2. It can also have the default response address set or reuse the address that the original rejected email was sent to.
  3. It can be moved down in priority, with other rules taking precedence and possibly even overriding it.
  4. It can have other actions added to it, like notifying SNS for additional tracking.

The Lambda function looks up the chosen Amazon Pinpoint template and uses it to reply. Here are some of the customizations you may want to consider within this function and the template:

  1. When sending the automated reply, by default, the template’s configured subject is appended with the original incoming email subject. You can adjust this to fit your company’s brand better.
  2. By default, the function supports the optional template tags %%NAME%% and %%ID%% (see the sketch after this list). If %%NAME%% appears in the template, it is automatically replaced with the original email’s FROM address. If %%ID%% appears in the template, it is replaced with the message ID that SES assigned to the original email, to help with any required audits.
  3. It is assumed that no additional tracking and actions are needed on such rejected and auto-replied emails, but you can further modify the flow by moving the rule around and adding more actions (as mentioned above), and even specify a particular/different SES Configuration Set for the outgoing emails.
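As an illustration, the tag handling described above amounts to something like the following; the function and variable names are ours, not the deployed function’s:

def fill_template(template_html, original_from_address, ses_message_id):
    # Replace the optional tags with the original sender address and the SES message ID.
    return (template_html
            .replace('%%NAME%%', original_from_address)
            .replace('%%ID%%', ses_message_id))

print(fill_template('Hello %%NAME%%, reference %%ID%%', 'customer@example.com', '[SES_MESSAGE_ID]'))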

Are you using this flow as a baseline for a more complex business flow, or do you have other questions about it? We want to hear back – please comment here or file an issue in the GitHub repository. If you want to file a pull request to make it even more useful for others, please do so; we appreciate community participation.

If you liked this article, note that we are continually expanding our Amazon Pinpoint and SES architecture references and publishing new solutions for these and other services. For the most recent SES documentation, please see the official SES documentation site, and for Amazon Pinpoint, please see the Amazon Pinpoint documentation site.

 

 

 

Send real-time alerts using Amazon Pinpoint

Post Syndicated from Dhiraj Thakur original https://aws.amazon.com/blogs/messaging-and-targeting/send-real-time-alerts-using-amazon-pinpoint/

Businesses need to send real-time notifications in order to take action when alerted of a critical situation. Examples could include anomaly detection, healthcare emergencies, operations failures, and fraud transactions. Email, SMS, and push notifications are often used to notify stakeholders in real-time. However, building a large-scale, real-time notification solution can be a complex and costly challenge for a business.

Amazon Pinpoint enables you to engage with your stakeholders in real-time by sending email, SMS and push notifications. Your app can use the Amazon Pinpoint API and the AWS SDKs to send direct messages. With transactional messages, you send alerts to specific recipients, as opposed to messages that you send to segments. There is no minimum fee, no setup cost, and no fixed monthly cost with Amazon Pinpoint.

In this blog, we explore a solution that notifies stakeholders of a large customer transaction. Such a transaction requires immediate attention, as our stakeholders want to ensure that there is enough inventory available to deliver the goods without delay.

Solution Overview

The solution that we build to handle this use case can be deployed in one hour. The following diagram illustrates the AWS services integrated in this solution:

At a high level, the solution uses the following workflow:

1. Define the large-value transaction threshold in a rule table in Amazon DynamoDB.
2. Set up Amazon Pinpoint and configure it to send email and SMS.
3. Set up AWS Lambda and implement the logic to send SMS and email if a customer makes a large-value order transaction.
4. Create a test transaction in an order table using Amazon DynamoDB.
5. Check the details of the SMS and email received.

Setting up the solution

Step 1: Set up Amazon Pinpoint

The first step in setting up this solution is to create a new Amazon Pinpoint project and configure the SMS and Email channel.
1. Navigate to Services -> Pinpoint.
2. Click on Create a project.
3. Provide a name and click on the Create button.
4. Select Email in the left panel and click on Edit.
5. Select "Enable the email channel for this project". Select "Verify a new email address" and provide a default sender address. Click on Verify email address, and then click on Save. You should get a verification mail. Verify the email by clicking on the verification link.

Once your email is verified, it should look like this:

6. Repeat the same step to verify the receiver email address as well.

Please note: By enabling the email channel, we can send up to 200 emails per day in sandbox mode. We must verify an email address or domain identity in order to send any email. While we remain in the sandbox, we must also verify all recipient email addresses before we proceed; this does not apply in production.

7. Select SMS and voice in the left panel and select "Enable the SMS channel for this project". Please make sure Transactional is selected for critical or time-sensitive messages. Amazon Pinpoint optimizes the delivery of these messages for highest reliability.

Please note: We do not need to verify any phone numbers for this channel. However, we can request dedicated codes (either short or long) for our own exclusive use. Otherwise AWS will automatically allocate codes when we send SMS.

8. Note down the project ID.

We have done all the configurations in Amazon Pinpoint. Now, it’s time to setup Amazon DynamoDB tables.

Step 2: Set up Amazon DynamoDB table

1. Navigate to Services -> DynamoDB.
2. Click Create table and name the table transaction_alert_rule (this is the rule table referenced later in this post).
3. Create a record similar to this:

This table stores your definition of a large-value order transaction.

{
    "transaction_type": "premium",
    "min_transaction_value": 10000,
    "max_transaction_value": 50000,
    "send_notification": "",
    "unit": "$",
    "email": "[email protected]",
    "phone": "+91123456789"
}

4. Create order_detail table. Provide order_id as partition key. You can select “String” as data type.
5. Now we must enable the stream on this table. Once we enable a stream on a table, Amazon DynamoDB captures information about every modification to data items in the table. We will integrate this table with AWS Lambda to validate the order value. Click on Manage Stream.
6. Select the appropriate view type and click on Enable. (A scripted equivalent of this step is shown below.)
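If you prefer to script this step, the equivalent call with the AWS SDK for Python (Boto3) looks roughly like this; NEW_IMAGE is one suitable view type for this use case:

import boto3

dynamodb = boto3.client('dynamodb')

# Enable a stream on the order_detail table so Lambda can react to new orders.
dynamodb.update_table(
    TableName='order_detail',
    StreamSpecification={
        'StreamEnabled': True,
        'StreamViewType': 'NEW_IMAGE'
    }
)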

Step 3: Set up AWS Lambda

In this step, we create an AWS Lambda function and then integrate it with the order_detail table. Afterward, the function checks the order and sends a notification if the order value is large enough.

1. Navigate to Services -> Lambda.
2. Click on Create function.
3. Select Python 3.6 as the runtime and select an execution role. Please make sure that your role gives read access to the Amazon DynamoDB table.
4. Click on Add Trigger.
5. Select Amazon DynamoDB from the trigger drop-down, then select the order_detail table. Keep everything else at the default. This integrates the order_detail stream with our AWS Lambda function.

Our Lambda function is invoked every time a transaction happens in order_detail table.

6. Now copy the source code.

The AWS Lambda code is available on GitHub here: https://github.com/aws-samples/send-real-time-notification-using-amazon-pinpoint-samples 
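The repository holds the full function. As a rough sketch of the logic it implements (the rule table name and attributes follow the examples in this post, and the partition key is assumed to be transaction_type), the handler reads each stream record, compares the order value against the rule, and sends an SMS through Amazon Pinpoint; email follows the same pattern:

import boto3

pinpoint = boto3.client('pinpoint')
dynamodb = boto3.resource('dynamodb')
rule_table = dynamodb.Table('transaction_alert_rule')   # rule table from Step 2
PROJECT_ID = '[PINPOINT_PROJECT_ID]'

def lambda_handler(event, context):
    rule = rule_table.get_item(Key={'transaction_type': 'premium'})['Item']
    for record in event['Records']:
        if record['eventName'] != 'INSERT':
            continue
        order = record['dynamodb']['NewImage']
        value = float(order['order_value']['N'])
        if rule['min_transaction_value'] <= value <= rule['max_transaction_value']:
            # Large-value order: notify the stakeholder by SMS.
            pinpoint.send_messages(
                ApplicationId=PROJECT_ID,
                MessageRequest={
                    'Addresses': {rule['phone']: {'ChannelType': 'SMS'}},
                    'MessageConfiguration': {
                        'SMSMessage': {
                            'Body': 'Large order {} for {}{}'.format(
                                order['order_id']['S'], rule['unit'], value),
                            'MessageType': 'TRANSACTIONAL'
                        }
                    }
                }
            )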

Step 4: Testing

1. That’s it! We have built the solution. Now it’s time to do an end-to-end test by creating an order transaction in the order_detail table.

2. Run the Amazon DynamoDB put-item command to create a new item over our threshold, or replace an old item with a new one. In our case, it is a new order transaction in the order_detail table. Please make sure you have configured the AWS CLI. You can refer to the quickstart guide to learn more.

aws dynamodb put-item --table-name order_detail --item "{""order_id"":{""S"":""O0001""},""customer_id"":{""S"":""C0001""},""customer_name"":{""S"":""JOHN MILLER""},""order_date"":{""S"":""2020-06-13T17:42:34Z""},""item_id"":{""S"":""P0001""},""item_quantity"":{""N"":""12""},""order_value"":{""N"":""20000""},""unit"":{""S"":""$""},""delivery_date"":{""S"":""2020-06-20""}}" --return-consumed-capacity TOTAL

3. It returns a success message similar to this:

4. The order value of $20,000 falls within the range of our large-value order transaction definition (between $10,000 and $50,000). You should receive an SMS on the mobile number you provided in the transaction_alert_rule table.

5. You will also receive an email as per configuration in the transaction_alert_rule table.

Step 5: Clean up

You’ve now successfully built a real-time notification solution using Amazon Pinpoint. Delete the resources you created in this blog, such as the Amazon DynamoDB tables, to avoid ongoing charges. You can use the AWS CLI, the AWS Management Console, or the AWS APIs to perform the cleanup.

Conclusion

Customers can use Amazon Pinpoint to help scale communications across use cases, including real-time notifications. Amazon Pinpoint is a flexible and scalable outbound and inbound marketing communications service. You can connect with customers and stakeholders over channels like email, SMS, push, or voice.

 

Building an electronic security lock using serverless

Post Syndicated from Moheeb Zara original https://aws.amazon.com/blogs/compute/building-an-electronic-security-lock-using-serverless/

In this guide I show how to build an electronic security lock for package delivery, securing physical documents, or granting access to a secret lab. This project uses AWS Serverless to create a touchscreen keypad lock that uses SMS to alert a recipient with a custom message and unlock code. Files are included for the lockbox shown, but the system can be installed in anything with a door.

CircuitPython is a lightweight version of Python that works on embedded hardware. It runs on an Adafruit PyPortal open-source IoT touch display. A relay wired to the PyPortal acts as an electronic switch to bridge power to an electronic solenoid lock.

I deploy the backend to the AWS Cloud using the AWS Serverless Application Repository. The code on the PyPortal makes REST calls to the backend to send a random four-digit code as a text message using Amazon Pinpoint. It also stores the lock state in AWS Systems Manager Parameter Store, a secure service for storing and retrieving sensitive information.

Prerequisites

You need the following to complete the project:

Deploy the backend application

An architecture diagram of the serverless backend.

The serverless backend consists of three Amazon API Gateway endpoints that invoke AWS Lambda functions. At boot, the PyPortal calls the FetchState function to access the lock state from a Parameter Store in AWS Systems Manager. For example, if the returned state is:

{ "locked": True, "code": "1234" }

the PyPortal leaves the relay open so that the solenoid lock remains locked. Once the matching “1234” code is entered, the relay circuit is closed and the solenoid lock is opened. When unlocked the PyPortal calls the UpdateState function to update the state to:

{ "locked": False, "code": "" }

In an unlocked state, the PyPortal requests a ten-digit phone number to be entered in order to lock. The SendCode function is called with the phone number so that it can generate a random four-digit code. A message is then sent to the recipient using Amazon Pinpoint, and the Parameter Store state is updated to “locked”. The state is returned in the response and the PyPortal opens the relay again and stores the unlock code locally.

Before deploying the backend, create an Amazon Pinpoint Project and request a long code. A long code is a dedicated phone number required for sending SMS.

  1. Navigate to the Amazon Pinpoint console.
  2. Ensure that you are in a Region where Amazon Pinpoint is supported. For the most up-to-date list, see AWS Service Endpoints.
  3. Choose Create Project.
  4. Name your project and choose Create.
  5. Choose Configure under SMS and Voice.
  6. Select Enable the SMS channel for this project and choose Save changes.

  7. Under Settings, SMS and Voice choose Request long codes.
  8. Enter the target country and select Transactional for Default call type. Choose Request long codes. This incurs a monthly cost of one dollar and can be canceled anytime. For a breakdown of costs, check out current pricing.
  9. Under Settings, General settings make a note of the Project ID.

I use the AWS Serverless Application Model (AWS SAM) to create the backend template. While it can be deployed using the AWS SAM CLI, you can also deploy from the AWS Management Console:

  1. Navigate to the aws-serverless-pyportal-lock application in the AWS Serverless Application Repository.
  2. Under Application settings, fill the parameters PinpointApplicationID and LockboxCustomMessage.
  3. Choose Deploy.
  4. Once complete, choose View CloudFormation Stack.
  5. Select the Outputs tab and make a note of the LockboxBaseApiUrl. This is required for configuring the PyPortal.
  6. Navigate to the URL listed as LockboxApiKey in the Outputs tab.
  7. Choose Show to reveal the API key. Make a note of this. This is required for authenticating requests from the PyPortal to the backend.

PyPortal setup

The following instructions walk through installing the latest version of the Adafruit CircuityPython libraries and firmware.

  1. Follow these instructions from Adafruit to install the latest version of the CircuitPython bootloader. At the time of writing, the latest version is 5.3.0.
  2. Follow these instructions to install the latest Adafruit CircuitPython library bundle. I use bundle version 5.x.
  3. Optionally install the Mu Editor, a multi-platform code editor and serial debugger compatible with Adafruit CircuitPython boards. This can help with troubleshooting issues.

Wiring

Electronic solenoid locks come in varying shapes, sizes, and voltages. Choose one that works for your needs and wire it according to the following instructions for the PyPortal.

  1. Gather the PyPortal, a solenoid lock, relay module, JST connectors, jumper wire, and a power source that matches the solenoid being used. For this project, a six-volt solenoid is used with a four AA battery holder.
  2. Wire the system following this diagram.
  3. Splice female jumper wires to the exposed leads of a JST connector to connect the relay module.
  4. Insert the JST connector end to the port labeled D4 on the PyPortal.
  5. Power the PyPortal using USB or by feeding a five-volt supply to the port labeled D3.

Code PyPortal

As with regular Python, CircuitPython does not need to be compiled to execute. You can flash new firmware on the PyPortal by copying a Python file and necessary assets to a mounted volume. The bootloader runs code.py anytime the device starts or any files are updated.

  1. Use a USB cable to plug the PyPortal into your computer and wait until a new mounted volume CIRCUITPY is available.
  2. Download the project from GitHub. Inside the project, copy the contents of /circuit-python on to the CIRCUITPY volume.
  3. Inside the volume, open and edit the secrets.py file. Include your Wi-Fi credentials along with the LockboxApiKey and LockboxBaseApiURL API Gateway endpoint. These can be found under Outputs in the AWS CloudFormation stack created by the AWS Serverless Application Repository.
  4. Save the file, and the device restarts. It takes a moment to connect to Wi-Fi and make the first request to the FetchState function.
  5. Test the system works by entering in a phone number when prompted. An SMS message with the unlock code is sent to the provided number.
  6. Mount the system to the desired door or container, such as a 3D printed safe (files included in the GitHub project).

    Optionally, if you installed the Mu Editor, you can choose “Serial” to follow along with the device log.

 

Understanding the code

See circuit-python/code.py from the GitHub project; this is the main code for the PyPortal. When the PyPortal connects to Wi-Fi, the first thing it does is make a GET request to the API Gateway endpoint for the FetchState function.

def getState():
    endpoint = secrets['base-api'] + "/state"
    headers = {"x-api-key": secrets['x-api-key']}
    response = wifi.get(endpoint, headers=headers, timeout=30)
    handleState(response.json())
    response.close()

The FetchState Lambda function code, written in Python, gets the state from the Parameter Store and returns it in the response to the PyPortal.

import os
import json
import boto3

client = boto3.client('ssm')
parameterName = os.environ.get('PARAMETER_NAME')

def lambda_handler(event, context):
    response = client.get_parameter(
        Name=parameterName,
        WithDecryption=False
    )

    state = json.loads(response['Parameter']['Value'])

    return {
        "statusCode": 200,
        "body": json.dumps(state)
    }

The getState function in the CircuitPython code passes the returned state to the handleState function, which determines whether to physically lock or unlock the device.

def handleState(newState):
    print(state)
    state['code'] = newState['code']
    state['locked'] = newState['locked']
    print(state)
    if state['locked'] == True:
        lock()
    if state['locked'] == False:
        unlock()

When the device is unlocked, and a phone number is entered to lock the device, the CircuitPython command function is called.

def command(action, num):
    if action == "unlock":
        if num == state["code"]:
            unlock()
        else:
            number_label.text = "Wrong code!"
            playBeep()
    if action == "lock":
        if validate(num) == True:
            data = sendCode(num)
            handleState(data)

The CircuitPython sendCode function makes a POST request with the entered phone number to the API Gateway endpoint for the SendCode Lambda function.

def sendCode(num):
    endpoint = secrets['base-api'] + "/lock"
    headers = {"x-api-key": secrets['x-api-key']}
    data = { "number": num }
    response = wifi.post(endpoint, json=data, headers=headers, timeout=30)
    data = response.json()
    print("Code received: ", data)
    response.close()
    return data

This Lambda function generates a random four-digit number and adds it to the custom message stored as an environment variable. It then sends a text message to the provided phone number using Amazon Pinpoint, and saves the new state in the Parameter Store. The new state is returned in the response and is used by the handleState function in the CircuitPython code.

import os
import json
import boto3
import random

pinpoint = boto3.client('pinpoint')
ssm = boto3.client('ssm')

applicationId = os.environ.get('APPLICATION_ID')
parameterName = os.environ.get('PARAMETER_NAME')
message = os.environ.get('MESSAGE')

def lambda_handler(event, context):
    print(event)
    body = json.loads(event['body'])

    number = "+1" + str(body['number'])
    code = str(random.randint(1111,9999))

    addresses = {}
    addresses[number] = {'ChannelType': 'SMS'}
    pinpoint.send_messages(
        ApplicationId=applicationId,
        MessageRequest={
            'Addresses': addresses,
            'MessageConfiguration': {
                'SMSMessage': {
                    'Body': message + code,
                    'MessageType': 'TRANSACTIONAL'
                }
            }
        }
    )

    state = { "locked": True, "code": code }

    response = ssm.put_parameter(
        Name=parameterName,
        Value=json.dumps(state),
        Type='String',
        Overwrite=True
    )

    return {
        "statusCode": 200,
        "body": json.dumps(state)
    }

Entering the correct unlock code from the SMS message calls the unlock function. The unlock function closes the relay circuit to open the solenoid lock. It plays a beep sound and then calls the updateState function, which makes a POST request to the API Gateway endpoint for the UpdateState Lambda function.

def updateState(newState):
    endpoint = secrets['base-api'] + "/state"
    headers = {"x-api-key": secrets['x-api-key']}
    response = wifi.post(endpoint, json=newState, headers=headers, timeout=30)
    data = response.json()
    print("Updated state to: ", data)
    response.close()
    return data

def unlock():
    print("Unlocked!")
    number_label.text = "Enter Phone# to Lock"
    time.sleep(1)
    btn = find_button("Unlock")
    if btn is not None:
        btn.selected = True
        btn.label = "Lock"
    lock_relay.value = True
    playBeep()
    updateState({"locked": False, "code": ""})

The UpdateState Lambda function updates the Parameter Store whenever the state is changed. When the PyPortal loses power or restarts, the last known state is fetched, preventing a false lock/unlocked position.

import os
import json
import boto3

client = boto3.client('ssm')
parameterName = os.environ.get('PARAMETER_NAME')

def lambda_handler(event, context):
    state = json.loads(event['body'])

    response = client.put_parameter(
        Name=parameterName,
        Value=json.dumps(state),
        Type='String',
        Overwrite=True
    )

    return {
        "statusCode": 200,
        "body": json.dumps(state)
    }

Conclusion

I show how to build an electronic keypad lock system using a basic relay circuit and a microcontroller. The system is managed by a serverless backend API deployed using the AWS Serverless Application Repository. The backend uses API Gateway to provide a REST API for Lambda functions that handle fetching lock state, updating lock state, and sending a random four-digit code via SMS using Amazon Pinpoint. Language consistency is achieved by using CircuitPython on the PyPortal and Python 3.8 in the Lambda function code.

Use this project as a template to build out any solution that requires secure physical access control. It can be embedded in cabinet drawers to protect documents or can be used with a door solenoid to control room access. Try combining it with a serverless geohashing app to develop a treasure hunting experience. Explore how to further modify the serverless application in the GitHub project by learning about the AWS Serverless Application Model. Read my previous guide to learn how you can add voice to a CircuitPython project on a PyPortal.

 

Adding WhatsApp as an Amazon Pinpoint Channel

Post Syndicated from Edwin Bejarano original https://aws.amazon.com/blogs/messaging-and-targeting/adding-whatsapp-as-an-amazon-pinpoint-channel/

Amazon Pinpoint recently announced the general availability of custom channels. Custom channels enable you to extend the capabilities of Amazon Pinpoint via a webhook or AWS Lambda function. Among many other possibilities, you can use custom channels to send messages to your customers through any API-enabled service, for example WhatsApp. With these new channels, you have full control over the message delivery to the endpoints associated with each custom channel campaign.

In this post, we provide a quick overview of the features and capabilities of using a custom channel as part of campaigns. We also provide a blueprint that you can use to build your first integration with WhatsApp as a custom channel.

Note: Amazon Web Services isn’t responsible for any third-party service that you use to send messages with custom channels. Third-party services may be subject to additional terms. 

Prerequisites

Before creating your new custom channel, you must have the integration ready and an AWS Identity and Access Management (IAM) user created with the necessary permissions. First, set up the following:

  1. Create an IAM administrator. For more information, see Creating your first IAM admin user and group in the IAM User Guide. Specify the credentials of this IAM user when you set up the AWS Command Line Interface (CLI).
  2. Follow the steps at https://www.twilio.com/docs/whatsapp/api to create a free Twilio account and have your WhatsApp account join the sandbox to be able to receive messages. Note the account SID and Auth token as they are needed in a later step.
  3. Configure the AWS CLI. For more information about setting up the AWS CLI, see Configuring the AWS CLI.


Step 1: Create an Amazon Pinpoint project.

In this section, you create and configure a project in Amazon Pinpoint. Later, you use this data to create segments and campaigns.

To set up the Amazon Pinpoint project

  1. Sign in to the Amazon Pinpoint console at http://console.aws.amazon.com/pinpoint/.
  2. On the All projects page, choose Create a project. Enter a name for the project, and then choose Create.
  3. On the Configure features page, under SMS and Voice, choose Configure.
  4. Under General settings, select Enable the SMS channel for this project, and then choose Save changes.
  5. In the navigation pane, under Settings, choose General settings. In the Project details section, copy the value under Project ID. You need this value for later.

Step 2: Create an endpoint.

In Amazon Pinpoint, an endpoint represents a specific method of contacting a customer. This could be their email address (for email messages) or their phone number (for SMS messages) or a custom endpoint type. Endpoints can also contain custom attributes, and you can associate multiple endpoints with a single user. In this step, we create an SMS endpoint that is used to send a WhatsApp message.

To create an endpoint using AWS CLI, at the command line, enter the following command:

aws pinpoint update-endpoint --application-id <project-id> \
--endpoint-id 12456 --endpoint-request "Address='<mobile-number>', \
ChannelType='SMS',Attributes={username=['testUser'],integrations=['WhatsApp']}"

In the preceding example, replace <project-id> with the Amazon Pinpoint Project ID that you copied in step 1. Replace <mobile-number> with your phone number, formatted in E.164 format (for example, +12065550142). For the WhatsApp integration to work, you must use the mobile number that you registered in the WhatsApp sandbox in the prerequisite steps.

Step 3: Create an AWS Lambda.

You must create an AWS Lambda that has the code that calls Twilio and sends a message to the endpoint.

1. Open the AWS Lambda console at https://console.aws.amazon.com/lambda, and then click on Create Function.

2. Choose Author from scratch.

3. For Function Name, enter ‘WhatsAppTest’.

4. For Runtime, select Python 3.7.

5. Click Create Function.

6. For the function code, copy the following and paste into the code browser in your AWS Lambda function:

import base64
import json
import os
import urllib
from urllib import request, parse
 
TWILIO_SMS_URL = "https://api.twilio.com/2010-04-01/Accounts/{}/Messages.json" 
TWILIO_ACCOUNT_SID = os.environ.get("TWILIO_ACCOUNT_SID")
TWILIO_AUTH_TOKEN = os.environ.get("TWILIO_AUTH_TOKEN")
TWILIO_FROM_NUMBER = os.environ.get("TWILIO_FROM_NUMBER")
 
def lambda_handler(event, context):
	if not TWILIO_ACCOUNT_SID:
		return "Unable to access Twilio Account SID."
	elif not TWILIO_AUTH_TOKEN:
		return "Unable to access Twilio Auth Token."
	elif not TWILIO_FROM_NUMBER:
		return "Unable to access Twilio WhatsApp From Number."
	# Lets print out the event for our logs 
	print("Received event: {}".format(event))
	# insert Twilio Account SID into the REST API URL 
	populated_url = TWILIO_SMS_URL.format(TWILIO_ACCOUNT_SID)
	
	for key in event['Endpoints'].keys(): 
		to_number = event['Endpoints'][key]['Address']
		# Example body and using an attribute from the endpoint
		username = event['Endpoints'][key]['Attributes']['username'][0]
		body = "Hello {}, here is your weekly 10% discount coupon: SAVE10".format(username)
		post_params = {"To": "whatsapp:" + to_number, "From": "whatsapp:" + TWILIO_FROM_NUMBER, "Body": body}
		# encode the parameters for Python's urllib 
		data = parse.urlencode(post_params).encode() 
		req = request.Request(populated_url)
		# add authentication header to request based on Account SID + Auth Token 
		authentication = "{}:{}".format(TWILIO_ACCOUNT_SID, TWILIO_AUTH_TOKEN) 
		base64string = base64.b64encode(authentication.encode('utf-8')) 
		req.add_header("Authorization", "Basic %s" % base64string.decode('ascii')) 
		print("Request: " + str(request))
		try:
			# perform HTTP POST request
			with request.urlopen(req, data) as f:
				print("Twilio returned {}".format(str(f.read().decode('utf-8')))) 
		except Exception as e:
			# something went wrong!
			return e
	return "WhatsApp messages sent successfully"

7. Add environment variables for your Twilio Account SID, Auth token, and From Number that you acquired from the prerequisite steps.


8. Add permissions to your AWS Lambda to allow Amazon Pinpoint to invoke it:

aws lambda add-permission \
--function-name WhatsAppTest \
--statement-id sid \
--action lambda:InvokeFunction \
--principal pinpoint.us-east-1.amazonaws.com \
--source-arn arn:aws:mobiletargeting:us-east-1:<account-id>:apps/*

Step 4: Create a segment and campaign in Amazon Pinpoint.

Now that we have an endpoint, we must add it to a segment so that we can use it within a campaign. By sending a campaign, we can verify that our Amazon Pinpoint project is configured correctly, and that we created the endpoint correctly.

To create the segment and campaign:

1. Open the Amazon Pinpoint console at http://console.aws.amazon.com/pinpoint, and then choose the project that you created in step 1.

2. In the navigation pane, choose Segments, and then choose Create a segment.

3. Name the segment “WhatsAppTest.” Under Segment group 1, on the Add a filter menu, choose Filter by endpoint.

4. For Choose an endpoint attribute, choose integrations, then for values, choose WhatsApp.


5. Confirm that the Segment estimate section shows that there is one eligible endpoint, and then choose Create segment.

6. In the navigation pane, choose Campaigns, and then choose Create a campaign.

7. Name the campaign “WhatsAppTest.” Under Choose a channel for this campaign, choose Custom, and then choose Next.

8. On the Choose a segment page, choose the “WhatsAppTest” segment that you just created, and then choose Next.

9. In Create your message, choose the AWS Lambda function we just created, ‘WhatsAppTest.’ Select SMS in the Endpoint Options. On the Choose when to send the campaign page, keep all of the default values, and then choose Next. On the Review and launch page, choose Launch campaign.

Within a few seconds, you should receive a WhatsApp message at the phone number that you specified when you created the endpoint.


Next steps

Your Custom channel solution for WhatsApp is now ready to use. There are several ways you can make this solution your own:

  • Customize your messaging: This post used an example message to be sent to your endpoints within the AWS Lambda. You can customize that message to fit your needs.
  • Expand endpoints in your application: This post only used one endpoint for the integration. You can use your WhatsApp integration with new endpoints by importing a segment that can be used with a new campaign. Learn how to import a segment here: https://docs.aws.amazon.com/pinpoint/latest/userguide/segments-importing.html
  • Use new integrations: This post focused on integrating your custom channel with WhatsApp but there are many other integrations that are possible when using AWS Lambda.

Amazon Pinpoint is a flexible and scalable outbound and inbound marketing communications service. Learn more here: https://aws.amazon.com/pinpoint/

Retrying Undelivered Voice Messages with Amazon Pinpoint

Post Syndicated from Heidi Gloudemans original https://aws.amazon.com/blogs/messaging-and-targeting/retrying-undelivered-voice-messages-with-amazon-pinpoint/

Note: This post was written by Murat Balkan, an AWS Senior Solutions Architect.


Many of our customers use voice notifications to deliver mission-critical and time-sensitive messages to their users. Customers often configure their systems to retry delivery when these voice messages aren’t delivered the first time around. Other customers set up their systems to fall back to another channel in this situation.

This blog post shows you how to retry the delivery of a voice message if the initial attempt wasn’t successful.

Architecture

By completing the steps in this post, you can create a system that uses the architecture illustrated in the following image:

 

First, a Lambda function calls the SendMessage operation in the Amazon Pinpoint API. The SendMessage operation then initiates a phone call to the recipient and generates a unique message ID, which is returned to the Lambda function. The Lambda function then adds this message ID to a DynamoDB table.
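For illustration, the following is a minimal Python sketch of that first step. It assumes a DynamoDB table named call_attempts and environment variables for the long code; the table name, attribute names, and SSML content are placeholders rather than the exact code from the repository.

import os
import boto3

voice = boto3.client('pinpoint-sms-voice')
table = boto3.resource('dynamodb').Table('call_attempts')  # assumed table name

def lambda_handler(event, context):
    # Place the call through the Pinpoint SMS and Voice API
    response = voice.send_voice_message(
        Content={'SSMLMessage': {
            'LanguageCode': 'en-US',
            'VoiceId': 'Joanna',
            'Text': event['Message']}},
        DestinationPhoneNumber=event['PhoneNumber'],
        OriginationPhoneNumber=os.environ['LONG_CODE'],      # assumed variable name
        ConfigurationSetName=os.environ['CONFIG_SET_NAME'])

    # Record the message ID, the payload, and the retry count so the retry
    # function can look the call up later (attribute names are illustrative)
    table.put_item(Item={
        'message_id': response['MessageId'],
        'message': event['Message'],
        'phone_number': event['PhoneNumber'],
        'retry_count': event.get('RetryCount', 0)})

    return response['MessageId']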

While Amazon Pinpoint attempts to deliver a message to a recipient, it emits several event records. These records indicate when the call is initiated, when the phone is ringing, when the call is answered, and so forth. Amazon Pinpoint publishes these events to an Amazon SNS topic. In this example, we’re only interested in the BUSY, FAILED, and NO_ANSWER event types, so we add some filtering criteria.

An Amazon SQS queue then subscribes to the Amazon SNS topic and receives the incoming events. The queue's Delivery Delay attribute is set at the queue level, which provides a back-off retry mechanism for failed voice messages.

When the Delivery Delay period elapses, another Lambda function polls the queue and extracts the MessageId attribute from the polled message. It uses this attribute to locate the DynamoDB record for the original call. This record also tells us how many times we have attempted to deliver the message.

The Lambda function compares the number of retries to a MAX_RETRY environment variable to determine whether it should attempt to send the message again.
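A sketch of that decision logic might look like the following, assuming the SQS queue triggers the function and the call_attempts table is keyed on the message ID. The event field names, table attributes, and the CALL_GENERATOR_FUNCTION variable are illustrative, not the exact code from the repository.

import json
import os
import boto3

table = boto3.resource('dynamodb').Table('call_attempts')  # assumed table name
lambda_client = boto3.client('lambda')

MAX_RETRY = int(os.environ.get('MAX_RETRY', '2'))

def lambda_handler(event, context):
    for record in event['Records']:
        # The SNS notification is wrapped in the SQS message body
        notification = json.loads(record['body'])
        pinpoint_event = json.loads(notification['Message'])
        message_id = pinpoint_event.get('message_id')  # field name depends on the event payload

        # Look up the original call and how many times it has been attempted
        item = table.get_item(Key={'message_id': message_id}).get('Item')
        if not item:
            continue

        retry_count = int(item['retry_count'])
        if retry_count >= MAX_RETRY:
            print('Giving up on {} after {} attempts'.format(message_id, retry_count))
            continue

        # Re-initiate the call by invoking the CallGenerator function asynchronously
        lambda_client.invoke(
            FunctionName=os.environ['CALL_GENERATOR_FUNCTION'],  # assumed variable name
            InvocationType='Event',
            Payload=json.dumps({
                'Message': item['message'],
                'PhoneNumber': item['phone_number'],
                'RetryCount': retry_count + 1}))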

Prerequisites

To start sending transactional voice messages, create an Amazon Pinpoint project and then request a phone number. Next, clone this GitHub repository to your local machine.

After you add a long code to your account, use AWS SAM to deploy the remaining parts of this serverless architecture. You provide this long code as an input parameter to the template.

The AWS SAM template creates the following resources:

  • A Lambda function (CallGenerator) that initiates a voice call using the Amazon Pinpoint API.
  • An Amazon SNS topic that collects state change events from Amazon Pinpoint.
  • An Amazon SQS queue that queues the messages.
  • A Lambda function (RetryCallGenerator) that polls the Amazon SQS queue and re-initiates the previously failed call attempt by calling the CallGenerator function.
  • A DynamoDB table that contains information about previous attempts to deliver the call.

The template also defines a custom Lambda resource, CustomResource, which creates a configuration set in Amazon Pinpoint. This configuration set specifies the events to send to Amazon SNS. A Lambda environment variable, CONFIG_SET_NAME, contains the name of the configuration set.
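If you're curious what that custom resource does behind the scenes, creating a configuration set and pointing its event destination at the SNS topic looks roughly like the following sketch; the configuration set name and topic ARN shown here are placeholders.

import boto3

sms_voice = boto3.client('pinpoint-sms-voice')

CONFIG_SET = 'voice-retry-config-set'  # placeholder; the template generates the real name

# Create the configuration set, then add an SNS event destination that only
# forwards the failure events the retry logic cares about
sms_voice.create_configuration_set(ConfigurationSetName=CONFIG_SET)
sms_voice.create_configuration_set_event_destination(
    ConfigurationSetName=CONFIG_SET,
    EventDestinationName='sns-failures',
    EventDestination={
        'Enabled': True,
        'MatchingEventTypes': ['BUSY', 'FAILED', 'NO_ANSWER'],
        'SnsDestination': {'TopicArn': 'arn:aws:sns:us-east-1:123456789012:voice-events'}})  # placeholder ARN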

This architecture consists of two Lambda functions, which are represented as two different apps in the AWS SAM template. These functions are named CallGenerator and RetryCallGenerator. The CallGenerator function initiates the voice message with Amazon Pinpoint. The SendMessage API in Amazon Pinpoint returns a MessageId. The architecture uses this ID as a key to connect messages to the various events that they generate. The CallGenerator function also retains this ID in a DynamoDB table called call_attempts. The RetryCallGenerator function looks up the MessageId in the call_attempts table. If necessary, the function tries to send the message again by invoking the CallGenerator function.

 

Deploying and Testing

Start by downloading the template from the GitHub repository. AWS SAM requires you to specify an Amazon Simple Storage Service (Amazon S3) bucket to hold the deployment artifacts. If you haven’t already created a bucket for this purpose, create one now. The bucket should be reachable by an AWS Identity and Access Management (IAM) user.

At the command line, enter the following command to package the application:

sam package --template template.yaml --output-template-file output_template.yaml --s3-bucket BUCKET_NAME_HERE

In the preceding command, replace BUCKET_NAME_HERE with the name of the Amazon S3 bucket that should hold the deployment artifacts.

AWS SAM packages the application and copies it into the Amazon S3 bucket. This AWS SAM template requires you to specify three parameters: longCode, the phone number that’s used to make the outbound calls; maxRetry, which is used to set the MAX_RETRY environment variable for the RetryCallGenerator application; and retryDelaySeconds, which sets the delivery delay time for the Amazon SQS queue.

When the AWS SAM package command finishes running, enter the following command to deploy the package:

sam deploy --template-file output_template.yaml --stack-name blogstack --capabilities CAPABILITY_IAM --parameter-overrides maxRetry=2 longCode=LONG_CODE retryDelaySeconds=60

In the preceding command, replace LONG_CODE with the dedicated phone number that you acquired earlier.

When you run this command, AWS SAM shows the progress of the deployment. When the deployment finishes, you can test it by sending a sample event to the CallGenerator Lambda function. Use the following sample event to test the Lambda function:

{
"Message": "<speak>Thank you for visiting the AWS <emphasis>Messaging and Targeting Blog</emphasis>.</speak>",
"PhoneNumber": "DESTINATION_PHONE_NUMBER",
"RetryCount": 0
}

In the preceding event, replace DESTINATION_PHONE_NUMBER with the phone number to which you want to send a test message.

Important: Telecommunication providers take several steps to limit unsolicited voice messages. For example, providers in the United States only deliver a certain number of automated voice messages to each recipient per day. For this reason, you can only use Amazon Pinpoint to send 10 calls per day to each recipient. Keep this limit in mind during the testing process.

Conclusion

This architecture shows how Amazon Pinpoint can deliver state change events to Amazon SNS and how a serverless application can consume those events. You can adapt this architecture to other use cases, such as call auditing, advanced call analytics, and more.

Creating a Seamless Handoff Between Amazon Pinpoint and Amazon Connect

Post Syndicated from Brent Meyer original https://aws.amazon.com/blogs/messaging-and-targeting/creating-a-seamless-handoff-between-amazon-pinpoint-and-amazon-connect/

Note: This post was written by Ilya Pupko, Senior Consultant for the AWS Digital User Engagement team.


Time to read: 5 minutes
Learning level: Intermediate (200)
Services used: Amazon Pinpoint, Amazon SNS, AWS Lambda, Amazon Lex, Amazon Connect

Your customers deserve to have helpful communications with your brand, regardless of the channel that you use to interact with them. There are many situations in which you might have to move customers from one channel to another—for example, when a customer is interacting with a chatbot over SMS, but their needs suddenly change to require voice assistance. To create a great customer experience, your communications with your customers should be seamless across all communication channels.

Welcome aboard Customer Obsessed Airlines

In this post, we look at a scenario that involves our fictitious airline, Customer Obsessed Airlines. Severe storms in one area of the country have caused Customer Obsessed Airlines to cancel a large number of flights. Customer Obsessed Airlines has to notify all of the affected customers of the cancellations right away. But most importantly, to keep customers as happy as possible in this unfortunate and unavoidable situation, Customer Obsessed Airlines has to make it easy for customers to rebook their flights.

Fortunately, Customer Obsessed Airlines has implemented the solution that’s outlined later in this post. This solution uses Amazon Pinpoint to send messages to a targeted segment of customers—in this case, the specific customers who were booked on the affected flights. Some of these customers might have straightforward travel itineraries that can simply be rebooked through interactions with a chatbot. Other customers who have more complex itineraries, or those who simply prefer to interact with a human over the phone, can be handed off to an agent in your call center.

About the solution

The solution that we’ll build to handle this scenario can be deployed in under an hour. The following diagram illustrates the interactions in this solution.

At a high level, this solution uses the following workflow:

  1. An event occurs. Automated impact analysis systems trigger the creation of custom segments—in this case, all passengers whose flights were cancelled.
  2. Amazon Pinpoint sends a message to the affected passengers through their preferred channels. Amazon Pinpoint supports the email, SMS, push, and voice channels, but in this example, we focus exclusively on SMS.
  3. Passengers who receive the message can respond. When they do, they interact with a chatbot that helps them book a different flight.
  4. If a passenger requests a live agent, or if their situation can’t be handled by a chatbot, then Amazon Pinpoint passes information about the customer’s situation and communication history to Amazon Connect. The passenger is entered into a queue. When the passenger reaches the front of the queue, they receive a phone call from an agent.
  5. After being re-booked, the passenger receives a written confirmation of the changes to their itinerary through their preferred channel. Passengers are also given the option of providing feedback on their interaction when the process is complete.

To build this solution, we use Amazon Pinpoint to segment our customers based on their attributes (such as which flight they’ve booked), and to deliver messages to those segments.

We also use Amazon Connect to manage the voice calling part of the solution, and Amazon Lex to power the chatbot. Finally, we connect these services using logic that’s defined in AWS Lambda functions.

Setting up the solution

Step 1: Set up Amazon Pinpoint and link it with Amazon Lex

The first step in setting up this solution is to create a new Amazon Pinpoint project and configure the SMS channel. When that’s done, you can create an Amazon Lex chatbot and link it to the Amazon Pinpoint project.

We described this process in detail in an earlier blog post. Complete the procedures in Create an SMS Chatbot with Amazon Pinpoint and Amazon Lex, and then proceed to step 2.

Step 2: Set up Amazon Connect and link it with your Amazon Lex chatbot

By completing step 1, we’ve created a system that can send messages to our passengers and receive messages from them. The next step is to create a way for passengers to communicate with our call center.

The Amazon Connect Administrator Guide provides instructions for linking an Amazon Lex bot to an Amazon Connect instance. For complete procedures, see Add an Amazon Lex Bot.

When you complete these procedures, link your Amazon Connect instance to the same Amazon Lex bot that you created in step 1. This step is intended to provide customers with a consistent, cohesive experience across channels.

Step 3: Set up an Amazon Connect callback queue and use Amazon Pinpoint keyword logic to trigger it

Now that we’ve configured Amazon Pinpoint and Amazon Connect, we can connect them.

Linking the two services makes it possible for passengers to request additional assistance. Traditionally, passengers in this situation would have to call a call center themselves and then wait on hold for an agent to become available. However, in this solution, our call center calls the passenger directly as soon as an agent is available. When the agent calls the passenger, the agent has all of the information about the passenger’s issue, as well as a transcript of the passenger’s interactions with your chatbot.

To implement an automatic callback mechanism, use the Amazon Pinpoint Connect Callback Requestor, which is available on the AWS GitHub page.
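The callback requestor ultimately needs Amazon Connect to place an outbound call to the passenger. The following is a minimal sketch of one way to do that with the StartOutboundVoiceContact API; the instance, contact flow, and queue IDs are placeholders, and the published solution also handles details such as passing the chat context to the agent.

import boto3

connect = boto3.client('connect')

def request_callback(phone_number, booking_reference):
    # Queue an outbound call and attach context the agent sees when the call connects
    return connect.start_outbound_voice_contact(
        InstanceId='REPLACE_WITH_CONNECT_INSTANCE_ID',   # placeholder
        ContactFlowId='REPLACE_WITH_CONTACT_FLOW_ID',    # placeholder
        QueueId='REPLACE_WITH_QUEUE_ID',                 # placeholder
        DestinationPhoneNumber=phone_number,
        Attributes={'bookingReference': booking_reference})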

Next steps

By completing the preceding three steps, you can send messages to a subset of your users based on the criteria you choose and the type of message you want to send. Your customers can interact with your message by replying with questions. When they do, a chatbot responds intelligently and appropriately.

You can add to this solution by expanding it to cover other communication channels, such as push notifications. You can also automate the initial communication by integrating the solution with your systems of record.

We’re excited to see what you build using the solution that we outlined in this post. Let us know of your ideas and your successes in the comments.

Building a serverless weather bot with two-way SMS, AWS SAM, and AWS Lambda

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/building-a-serverless-weather-bot-with-two-way-sms-aws-sam-and-aws-lambda/

People love being able to send text messages to companies to change appointments, get support, or receive order updates. Short message service (SMS) is ubiquitous around the world and supported in almost every mobile phone that you can find today. It can also be a good interface for a variety of business applications.

Many developers know that Amazon SNS can send one-way text messages. Fewer know about handling two-way conversations with the other services available. In the example covered in this post, you can set up and deploy two-way SMS in about 10 minutes.

This example creates a weather bot that responds to a text message from a user, providing weather information for the requested ZIP code. This demo only works for US ZIP codes, but the principles of the design apply anywhere. You receive a response back in a few seconds with a current weather report.

The SMS weatherbot responds to a request.

This post covers the following walkthroughs:

  • Setting up a phone number in Amazon Pinpoint
  • Deploying the serverless application using AWS SAM to respond to the text message
  • Reviewing the code used in the AWS Lambda function

The final architecture looks like the following diagram:

Architecture diagram for the weatherbot

Setting up Amazon Pinpoint

Amazon Pinpoint provides a range of different ways to send personalized messages to customers. This makes it easy to coordinate a large number of messages for web or mobile applications. It’s designed to help you drive engagement and make applications more useful to your users. This project uses the two-way text messaging feature, but Amazon Pinpoint has a broad range of other capabilities.

First, set up a phone number for this project. Amazon Pinpoint provides a dedicated number, which currently costs $1/month and is not covered by the Free Tier allowance. You are also charged for text messages, so be sure to review the current pricing before launching any application into production.

To reserve your dedicated phone number, follow these steps:

1. Sign in to the Amazon Pinpoint console.

2. Ensure that you are in a Region where Amazon Pinpoint is supported. For the most up-to-date list, see AWS Service Endpoints. This walkthrough uses us-east-1 (US East – N. Virginia).

3. On the Get started page, for Project name, enter weatherApp, and choose Create a project.

4. On the Configure features page, for SMS and voice, choose Configure.


5. Select Enable the SMS channel for this project, and choose Save changes.


6. Choose Settings, SMS and voice.


7. Under Number settings, choose Request long codes.


For Target country or Region, choose United States. For Default call type, choose Promotional and then choose Request long codes. The confirmation page shows that a new phone number has been added to the account.


8. At the top of the page, choose All projects, and note the Project ID. You need this in the next section.


You now have a dedicated phone number ready to receive SMS messages. At the moment, the messages are not routed anywhere. You configure this in the next section.

Setting up the serverless application

Before deploying the code, you need an API key from the OpenWeatherMap service. For a free account, sign up on the Create New Account page. This service provides an API where you can post a zip code and receive the current weather conditions for that location.

Make sure that you have the AWS CLI and the AWS SAM CLI installed before proceeding. You are using these tools from the command line to automate the deployment of this application. The code for this walkthrough is stored in the aws-serverless-sar-pinpoint-weather-bot GitHub repo. You use the AWS SAM template in the repo to coordinate the deployment of the Lambda function and the SNS topic.

1. Create a new, empty directory on your local machine and clone the repository:

git clone https://github.com/jbesw/aws-serverless-sar-pinpoint-weather-bot


2. Create a bucket for the deployment (specify a unique bucket name):

aws s3 mb s3://your_bucket_name

Change into the cloned directory:

cd .\aws-serverless-sar-pinpoint-weather-bot\

3. Run the AWS SAM build process and create the AWS SAM package:

sam build
sam package --output-template-file packaged.yaml --s3-bucket your_bucket_name

4. Deploy the AWS SAM application:

  • Replace yourAPIkey with the OpenWeatherMap API key
  • Replace yourApplicationId with the Amazon Pinpoint project ID from the first section.
sam deploy --template-file packaged.yaml \
  --stack-name myWeatherBot \
  --capabilities CAPABILITY_IAM \
  --region us-east-1 \
  --parameter-overrides APIkey=<<yourAPIkey>> ApplicationId=<<yourApplicationId>>

After running these commands, the console shows the following message:

Successfully created/updated stack – myWeatherBot.


At this point, you have deployed the Lambda function to process the core application logic and an SNS topic to receive the text messages. The final step is to connect the Amazon Pinpoint service with the SNS topic that has been created by this AWS SAM template.

Connect Amazon Pinpoint to Amazon SNS

Browse to the SNS console to find the topic created by the deployment, and copy the ARN to the clipboard.


To add the SNS topic to the Amazon Pinpoint project:

1. In the Amazon Pinpoint console, under All projects, select your weatherApp project.

2. In the left navigation pane, choose Settings, SMS and voice.


3. Under Number settings, choose the phone number. Expand the Two-way SMS section, and check Enable two-way SMS.

4. Under Incoming message destination, select Choose an existing SNS topic, and then select the ARN that you copied previously.


5. Choose Save.

Now you can test your deployment. Text "weather" followed by a five-digit ZIP code (for example, "weather 98101") to your dedicated phone number. The service responds with the weather summary.

Reviewing the code

When Amazon Pinpoint receives the incoming text message to the dedicated phone number, it publishes the message to the SNS topic. The Lambda function subscribes to this topic and is invoked every time a new message arrives.

App.js contains the entry point for the Lambda handler, providing a top-level error handler and iterating through the event object in case multiple messages are received. Each message is sent to the smsResponder function. This is wrapped in await Promise.all so processing happens in parallel, because the messages are not dependent on each other.

const { smsResponder }  = require('./smsResponder')

// Calls the SMS responder function for each text message passed in the event parameter.

exports.lambdaHandler = async (event, context) => {
  console.log('Starting handler')
  
  await Promise.all(
    event.Records.map(async (record) => {
      try {
        await smsResponder(record)
      } catch (err) {
        console.error(err)
        return err
      }
    })
  )

  return  {
    'statusCode': 200
  }
}

smsResponder.js checks that the text message begins with the keyword (weather), followed by a valid zip code. After requesting the weather summary, it sends the response back to Amazon Pinpoint to send the SMS back to the user.

When the params object is built to create the responding text message, this function reverses the destination and origination phone numbers from the incoming message. It marks the message as PROMOTIONAL, and sets the response channel to SMS.

const AWS = require('aws-sdk')
AWS.config.update({ region: process.env.AWS_REGION || 'us-east-1' })

const { getWeather } = require('./getWeather')
const KEYWORD = 'weather'

const validateZipCode = function (elementValue){
  let zipCodePattern = /^\d{5}$|^\d{5}-\d{4}$/
   return zipCodePattern.test(elementValue)
}

const sendSMS = async function (params) {
  const pinpoint = new AWS.Pinpoint()
  console.log('sendSMS called: ', JSON.stringify(params, null, 2))

  return new Promise((resolve, reject) => {
    pinpoint.sendMessages(params, function(err, data) {
      if(err) {
        console.error('sendSMS error:', err)
        reject(err)
      } else {
        console.log("Message sent. Data: ", data)
        resolve(data)
      }
    })
  })
}

const smsResponder = async (event) => {

  const msg = JSON.parse(event.Sns.Message)
  const msgWords = msg.messageBody.split(" ")

  // Check the first word of the text message is the keyword
  if (msgWords[0].toLowerCase() !== KEYWORD) return console.log('No keyword found - exiting')

  // Validate zip code and get the weather
  let message =''
  const zipCode = msgWords[1]

  if (validateZipCode(zipCode)) {
    message = await getWeather(zipCode)
  } else {
    message = 'Invalid zip code - text me in the format "weather 00000".'
  }

  // Send the SMS response
  var params = {
    ApplicationId: process.env.ApplicationId,
    MessageRequest: {
      Addresses: {
        [msg.originationNumber]: {
          ChannelType: 'SMS'
        }
      },
      MessageConfiguration: {
        SMSMessage: {
          Body: message,
          MessageType: 'PROMOTIONAL',
          OriginationNumber: msg.destinationNumber
        }
      }
    }
  }

  // Send the SMS reply back through Amazon Pinpoint
  return await sendSMS(params)
}

module.exports = { smsResponder }

Finally, getWeather.js takes a zip code and queries the OpenWeatherMap API for the weather summary. It performs some minimal processing to convert the result into a text message.

const axios = require('axios')
// OpenWeatherMap current weather endpoint (imperial units); the exact base URL is defined in the repo
const weatherURL = 'https://api.openweathermap.org/data/2.5/weather?units=imperial'

const getWeather = async function (zipCode) {

  try {
    // Get weather for the zip code provided
    const response = await axios({
      url: `${weatherURL}&zip=${zipCode}&APPID=${process.env.APIkey}`,
      method: 'get',
      responseType: 'json'
    })

    // Build natural response
    const weather = `Weather in ${response.data.name}: ${response.data.weather[0].description}, currently ${parseInt(response.data.main.temp)} degrees with a low of ${parseInt(response.data.main.temp_min)} and a high of ${parseInt(response.data.main.temp_max)}.`
    console.log('getWeather response: ', weather)
    return weather

  } catch (err) {
    console.error('getWeather error: ', err)
    return 'Sorry, there was a problem with the weather service.'
  }
}

module.exports = { getWeather }

Conclusion

Amazon Pinpoint simplifies handling two-way SMS to customer phones. A Lambda function can inspect incoming text messages, process the data, and send a response, all within 100 lines of code. Although this example only checks the weather one time, the functionality could be extended to any of the following tasks:

  • Sending daily weather reports.
  • Providing alerts for significant weather events.
  • Adding additional keywords to support different types of queries, such as weather averages.

Alternatively, this flow can be used to help support order processing, appointment management, or create marketing campaigns. Adding two-way SMS provides your customers with new ways to interact with your business applications.

Building Your First Journey in Amazon Pinpoint

Post Syndicated from Brent Meyer original https://aws.amazon.com/blogs/messaging-and-targeting/building-your-first-journey-in-amazon-pinpoint/

Note: This post was written by Zach Barbitta, the Product Lead for Pinpoint Journeys, and Josh Kahn, an AWS Solution Architect.


We recently added a new feature called journeys to Amazon Pinpoint. With journeys, you can create fully automated, multi-step customer engagements through an easy-to-use, drag-and-drop interface.

In this post, we’ll provide a quick overview of the features and capabilities of Amazon Pinpoint journeys. We’ll also provide a blueprint that you can use to build your first journey.

What is a journey?

Think of a journey as being similar to a flowchart. Every journey begins by adding a segment of participants to it. These participants proceed through steps in the journey, which we call activities. Sometimes, participants split into different branches. Some participants end their journeys early, while some proceed through several steps. The experience of going down one path in the journey can be very different from the experience of going down a different path. In a journey, you determine the outcomes that are most relevant for your customers, and then create the structure that leads to those outcomes.

Your journey can contain several different types of activities, each of which does something different when journey participants arrive on it. For example, when participants arrive on an Email activity, they receive an email. When they land on a Random Split activity, they’re randomly separated into one of up to five different groups. You can also separate customers based on their attributes, or based on their interactions with previous steps in the journey, by using a Yes/no Split or a Multivariate Split activity. There are six different types of activities that you can add to your journeys. You can find more information about these activity types in our User Guide.

The web-based Amazon Pinpoint management console includes an easy-to-use, drag-and-drop interface for creating your journeys. You can also create journeys programmatically by using the Amazon Pinpoint API, the AWS CLI, or an AWS SDK. Best of all, you can use the API to modify journeys that you created in the console, and vice-versa.
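For example, a minimal Boto3 call that creates an empty draft journey might look like the following; the project ID and journey name are placeholders, and a real request would also define activities, an entry condition, and a schedule.

import boto3

pinpoint = boto3.client('pinpoint')

response = pinpoint.create_journey(
    ApplicationId='REPLACE_WITH_PROJECT_ID',   # your Amazon Pinpoint project ID
    WriteJourneyRequest={
        'Name': 'New customer onboarding'      # new journeys start out as drafts
    })

print(response['JourneyResponse']['Id'])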

Planning our journey

The flexible nature of journeys makes it easy to create customer experiences that are timely, personalized, and relevant. In this post, we’ll assume the role of a music streaming service. We’ll create a journey that takes our customers through the following steps:

  1. When customers sign up, we’ll send them an email that highlights some of the cool features they’ll get by using our service.
  2. After 1 week, we’ll divide customers into two separate groups based on whether or not they opened the email we sent them when they signed up.
  3. To the group of customers who opened the first message, we’ll send an email that contains additional tips and tricks.
  4. To the group who didn’t open the first message, we’ll send an email that reiterates the basic information that we mentioned in the first message.

The best part about journeys in Amazon Pinpoint is that you can set them up in just a few minutes, without having to do any coding, and without any specialized training (unless you count reading this blog post as training).

After we launch this journey, we’ll be able to view metrics through the same interface that we used to create the journey. For example, for each split activity, we’ll be able to see how many participants were sent down each path. For each Email activity, we’ll be able to view the total number of sends, deliveries, opens, clicks, bounces, and unsubscribes. We’ll also be able to view these metrics as aggregated totals for the entire journey. These metrics can help you discover what worked and what didn’t work in your journey, so that you can build more effective journeys in the future.

First steps

There are a few prerequisites that we have to complete before we can start creating the journey.

Create Endpoints

When a new user signs up for your service, you have to create an endpoint record for that user. One way to do this is by using AWS Amplify (if you use Amplify’s built-in analytics tools, Amplify creates Amazon Pinpoint endpoints). You can also use AWS Lambda to call Amazon Pinpoint’s UpdateEndpoint API. The exact method that you use to create these endpoints is up to your engineering team.

When your app creates new endpoints, they should include a user attribute that identifies them as someone who should participate in the journey. We’ll use this attribute to create our segment of journey participants.
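For example, a hypothetical registration function might call UpdateEndpoint like this; the attribute name (OnboardingJourney), the IDs, and the channel shown here are illustrative, so use whatever attribute your segment will filter on.

import boto3

pinpoint = boto3.client('pinpoint')

def register_user(project_id, endpoint_id, email, user_id):
    # Create or update the endpoint and tag the user as a journey participant
    pinpoint.update_endpoint(
        ApplicationId=project_id,
        EndpointId=endpoint_id,
        EndpointRequest={
            'ChannelType': 'EMAIL',
            'Address': email,
            'User': {
                'UserId': user_id,
                'UserAttributes': {
                    'OnboardingJourney': ['true']  # hypothetical attribute used to build the segment
                }}})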

If you just want to get started without engaging your engineering team, you can import a segment of endpoints to use in your journey. To learn more, see Importing Segments in the Amazon Pinpoint User Guide.

Verify an Email Address

In Amazon Pinpoint, you have to verify the email addresses that you use to send email. By verifying an email address, you prove that you own it, and you grant Amazon Pinpoint permission to send your email from that address. For more information, see Verifying Email Identities in the Amazon Pinpoint User Guide.

Create a Template

Before you can send email from a journey, you have to create email templates. Email templates are pre-defined message bodies that you can use to send email messages to your customers. You can learn more about creating email templates in the Amazon Pinpoint User Guide.

To create the journey that we discussed earlier, we’ll need at least three email templates: one that contains the basic information that we want to share with new customers, one with more advanced content for users who opened the first email, and another that reiterates the content in the first message.

Create a Segment

Finally, you need to create the segment of users who will participate in the journey. If your service creates an endpoint record that includes a specific user attribute, you can use this attribute to create your segment. You can learn more about creating segments in the Amazon Pinpoint User Guide.

Building the Journey

Now that we’ve completed the prerequisites, we can start building our journey. We’ll break the process down into a few quick steps.

Set it up

Start by signing in to the Amazon Pinpoint console. In the navigation pane on the left side of the page, choose Journeys, and then choose Create a journey. In the upper left corner of the page, you’ll see an area where you can enter a name for the journey. Give the journey a name that helps you identify it.

Add participants

Every journey begins with a Journey entry activity. In this activity, you define the segment that will participate in the journey. On the Journey entry activity, choose Set entry condition. In the list, choose the segment that contains the participants for the journey.

On this activity, you can also control how often new participants are added to the journey. Because new customers could sign up for our service at any time, we want to update the segment regularly. For this example, we’ll set up the journey to look for new segment members once every hour.

Send the initial email

Now we can configure the first message that we’ll send in our journey. Under the Journey entry activity that you just configured, choose Add activity, and then choose Send email. Choose the email template that you want to use in the email, and then specify the email address that you want to send the email from.

Add the split

Under the Email activity, choose Add activity, and then choose Yes/no split. This type of activity looks for all journey participants who meet a condition and sends them all down a path (the “Yes” path). All participants who don’t meet that condition are sent down a “No” path. This is similar to the Multivariate split activity. The difference between the two is that the Multivariate split can produce up to four branches, each with its own criteria, plus an “Else” branch for everyone who doesn’t meet the criteria in the other branches. A Yes/no split activity, on the other hand, can only ever have two branches, “Yes” and “No”. However, unlike the Multivariate split activity, the “Yes” branch in a Yes/no split activity can contain more than one matching criteria.

In this case, we’ll set up the “Yes” branch to look for email open events in the Email activity that occurred earlier in the journey. We also know that the majority of recipients who open the email aren’t going to do so the instant they receive the email. For this reason, we’ll adjust the Condition evaluation setting for the activity to wait for 7 days before looking for open events. This gives our customers plenty of time to receive the message, check their email, open the message, and, if they’re interested in the content of the message, click a link in the message. When you finish setting up the split activity, it should resemble the example shown in the following image.

Continue building the branches

From here, you can continue building the branches of the journey. In each branch, you’ll send a different email template based on the needs of the participants in that branch. Participants in the “Yes” branch will receive an email template that contains more advanced information, while those in the “No” branch will receive a template that revisits the content from the original message.

To learn more about setting up the various types of journey activities, see Setting Up Journey Activities in the Amazon Pinpoint User Guide.

Review the journey

When you finish adding branches and activities to your journey, choose the Review button in the top right corner of the page. When you do, the Review your journey pane opens on the left side of the screen. The Review your journey pane shows you errors that need to be fixed in your journey. It also gives you some helpful recommendations, and provides some best practices that you can use to optimize your journey. You can click any of the notifications in the Review your journey page to move immediately to the journey activity that the message applies to. When you finish addressing the errors and reviewing the recommendations, you can mark the journey as reviewed.

Testing and publishing the journey

After you review the journey, you have a couple options. You can either launch the journey immediately, or test the journey before you launch it.

If you want to test the journey, close the Review your journey pane. Then, on the Actions menu, choose Test. When you test your journey, you have to choose a segment of testers. The segment that you choose should only contain the internal recipients on your team who you want to test the journey before you publish it. You also have the option of changing the wait times in your journey, or skipping them altogether. If you choose a wait time of 1 hour here, it makes it much easier to test the Yes/no split activity (which, in the production version of the journey, contains a 7 day wait). To learn more about testing your journey, including some tips and best practices, see Testing Your Journey in the Amazon Pinpoint User Guide.

When you finish your testing, complete the review process again. On the final page of content in the Review your journey pane, choose Publish. After a short delay, your journey goes live. That’s it! You’ve created and launched your very first journey!

Wrapping up

We’re very excited about Pinpoint Journeys and look forward to seeing what you build with it. If you have questions, leave us a message on the Amazon Pinpoint Developer Forum and check out the Amazon Pinpoint documentation.

Visit the AWS Digital User Engagement team at AWS re:Invent 2019

Post Syndicated from Brent Meyer original https://aws.amazon.com/blogs/messaging-and-targeting/visit-the-aws-digital-user-engagement-team-at-aws-reinvent-2019/

AWS re:Invent 2019 is less than 50 days away, and that means it’s time to start planning your agenda. The Digital User Engagement team is hosting several builders sessions, chalk talks, and workshops this year. Come join us and learn more about using Amazon Pinpoint and Amazon SES to engage with and delight your customers.

Visit our booth

You’ll find our booth in the Expo Hall in the Venetian. Stop by to meet the team, see a demo, and pick up some swag!

Leadership session

EUC206: How AWS is defining the future of engagement and messaging

  • What: Simon Poile, the General Manager of the AWS Digital User Engagement team, talks about how AWS is building on Amazon’s customer-centric culture of innovation to help you better engage your customers. You’ll also hear from AWS customer Coinbase, which uses Amazon Pinpoint to delight its customers while growing its business.
  • When: Wednesday, Dec 4, 10:45 AM – 11:45 AM
  • Where: MGM, Level 3, Chairman’s Ballroom 368

Sessions

EUC207: Build high-volume email applications with Amazon SES

  • What: Companies in many industries use AWS to send millions of emails every day, including Amazon.com. In this session, learn how to build applications using the highly scalable, highly reliable, and multi-tenant-capable email infrastructure of Amazon Simple Email Service (Amazon SES). You also learn how to monitor delivery rates and other important metrics, and how to use this data to improve deliverability. Members of the Amazon.com team discuss the architecture of their multi-tenant email-sending platform, the historical challenges they faced, and the ways Amazon Pinpoint and Amazon SES helped them meet their goals around Prime Day, Cyber Monday, and other retail events.
  • When: Monday, Dec 2, 1:00 PM – 2:00 PM
  • Where: MGM, Level 1, Grand Ballroom 119

Chalk talks

EUC328: Engage with your customers using SMS text messages

  • What: Text messages form a vital part of customer-engagement strategy for organizations around the world. In this workshop, learn how to use Amazon Pinpoint to send promotional, transactional, and two-way SMS messages. You also see demonstrations of how other AWS customers use SMS messaging to engage with their customers.
  • When: Wednesday, Dec 4, 2:30 PM – 3:30 PM
  • Where: Bellagio, Bellagio Ballroom 5

EUC336: Surprise and delight customers with location-based notifications

  • What: In this chalk talk, learn how to use AWS Amplify, AWS AppSync, and Amazon Pinpoint to geo-target customers. We teach you how to build and configure geofences to trigger location-based mobile-app notifications. We also walk you through the published solution and provide dedicated time for Q&A with an AWS solutions architect.
  • When: Thursday, Dec 5, 2:30 PM – 3:30 PM
  • Where: Aria, Plaza Level East, Orovada 3

AIM346-R and AIM346-R1: Personalized user engagement with machine learning

  • What: In this chalk talk, we discuss how to use Amazon Personalize and Amazon Pinpoint to provide a personalized, omni-channel experience starting in your mobile application. We discuss best practices for real-time updates, personalized notifications (push), and messaging (email and text) that drives user engagement and product discovery. We also demonstrate how other mobile services can be used to facilitate rapid prototyping.
  • AIM346-R: Monday, Dec 2, 11:30 AM – 12:30 PM, Bellagio, Bellagio Ballroom 7
  • AIM346-R1: Tuesday, Dec 3, 4:00 PM – 5:00 PM, Aria, Plaza Level East, Orovada 3

ENT315: Improve message deliverability to ensure customer reach

  • What: Do you have outbound and inbound email requirements? Is email a critical workload for your enterprise? Several factors determine whether your email messages reach your recipients. In this chalk talk, learn how to safely migrate your outbound and inbound email volumes over to AWS and Amazon Simple Email Service (Amazon SES). Learn how to onboard, safely ramp up, and ensure that business continues without disruption. Also learn best practices for delivering email messages into your customers’ inboxes rather than their spam folders, and receive guidance on scaling and improving the deliverability of your email campaigns.
  • When: Thursday, Dec 5, 11:30 AM – 12:30 PM
  • Where: Aria, Plaza Level East, Orovada 3

Builder’s sessions

EUC308-R and EUC308-R1: Build and deploy your own two-way text chatbot

  • What: In this builders session, you build an AI-powered chatbot that your customers can engage with by sending SMS messages. Your chatbot can help customers quickly ask questions, get answers, book appointments, check order status, and much more.
  • EUC308-R: Tuesday, Dec 3, 1:00 PM – 2:00 PM, Mirage, Grand Ballroom B – Table 9
  • EUC308-R1: Wednesday, Dec 4, 4:00 PM – 5:00 PM, Aria, Level 1 West, Bristlecone 2 – Table 2

EUC309-R and EUC309-R1: Build your own omnichannel e-commerce experience

  • What: In this hands-on session, you learn how to integrate AWS Amplify and Amazon Pinpoint to create a retail website. You use the event data that’s generated by customers’ activities on your site to send custom-tailored emails and push notifications, creating a curated, omnichannel experience. This session is intended for builders who want to expand the user-engagement capabilities of their sites and apps.
  • EUC309-R: Monday, Dec 2, 12:15 PM – 1:15 PM, Aria, Level 1 West, Bristlecone 4 – Table 8
  • EUC309-R1: Tuesday, Dec 3, 11:30 AM – 12:30 PM, Aria, Level 1 West, Bristlecone 2 – Table 1

EUC322: Improve customer engagement by predicting user behavior

  • What: In this hands-on session, you learn how to use Amazon SageMaker and Amazon Pinpoint to create customer engagement scenarios powered by machine learning. You use cross-channel customer-activity and demographic data to train your own behavioral models. After you use your model to categorize your customers, you use Amazon Pinpoint to send engagement campaigns that are optimized to reengage users. This session is intended for builders, marketers, or data scientists who want to improve user engagement using machine learning.
  • When: Monday, Dec 2, 10:45 AM – 11:45 AM
  • Where: Aria, Level 1 West, Bristlecone 4 – Table 2

Predictive User Engagement using Amazon Pinpoint and Amazon Personalize

Post Syndicated from Brent Meyer original https://aws.amazon.com/blogs/messaging-and-targeting/predictive-user-engagement-using-amazon-pinpoint-and-amazon-personalize/

Note: This post was written by John Burry, a Solution Architect on the AWS Customer Engagement team.


Predictive User Engagement (PUE) refers to the integration of machine learning (ML) and customer engagement services. By implementing a PUE solution, you can combine ML-based predictions and recommendations with real-time notifications and analytics, all based on your customers’ behaviors.

This blog post shows you how to set up a PUE solution by using Amazon Pinpoint and Amazon Personalize. Best of all, you can implement this solution even if you don’t have any prior machine learning experience. By completing the steps in this post, you’ll be able to build your own model in Personalize, integrate it with Pinpoint, and start sending personalized campaigns.

Prerequisites

Before you complete the steps in this post, you need to set up the following:

  • Create an admin user in AWS Identity and Access Management (IAM). For more information, see Creating Your First IAM Admin User and Group in the IAM User Guide. You need to specify the credentials of this user when you set up the AWS Command Line Interface.
  • Install Python 3 and the pip package manager. Python 3 is installed by default on recent versions of Linux and macOS. If it isn’t already installed on your computer, you can download an installer from the Python website.
  • Use pip to install the following modules:
    • awscli
    • boto3
    • jupyter
    • matplotlib
    • sklearn
    • sagemaker

    For more information about installing modules, see Installing Python Modules in the Python 3.X Documentation.

  • Configure the AWS Command Line Interface (AWS CLI). During the configuration process, you have to specify a default AWS Region. This solution uses Amazon SageMaker to build a model, so the Region that you specify has to be one that supports Amazon SageMaker. For a complete list of Regions where SageMaker is supported, see AWS Service Endpoints in the AWS General Reference. For more information about setting up the AWS CLI, see Configuring the AWS CLI in the AWS Command Line Interface User Guide.
  • Install Git. Git is installed by default on most versions of Linux and macOS. If Git isn’t already installed on your computer, you can download an installer from the Git website.

Step 1: Create an Amazon Pinpoint Project

In this section, you create and configure a project in Amazon Pinpoint. This project contains all of the customers that we will target, as well as the recommendation data that’s associated with each one. Later, we’ll use this data to create segments and campaigns.

To set up the Amazon Pinpoint project

  1. Sign in to the Amazon Pinpoint console at http://console.aws.amazon.com/pinpoint/.
  2. On the All projects page, choose Create a project. Enter a name for the project, and then choose Create.
  3. On the Configure features page, under SMS and voice, choose Configure.
  4. Under General settings, select Enable the SMS channel for this project, and then choose Save changes.
  5. In the navigation pane, under Settings, choose General settings. In the Project details section, copy the value under Project ID. You’ll need this value later.

Step 2: Create an Endpoint

In Amazon Pinpoint, an endpoint represents a specific method of contacting a customer, such as their email address (for email messages) or their phone number (for SMS messages). Endpoints can also contain custom attributes, and you can associate multiple endpoints with a single user. In this example, we use these attributes to store the recommendation data that we receive from Amazon Personalize.

In this section, we create a new endpoint and user by using the AWS CLI. We’ll use this endpoint to test the SMS channel, and to test the recommendations that we receive from Personalize.

To create an endpoint by using the AWS CLI

  1. At the command line, enter the following command:
    aws pinpoint update-endpoint --application-id <project-id> \
    --endpoint-id 12456 --endpoint-request "Address='<mobile-number>', \
    ChannelType='SMS',User={UserAttributes={recommended_items=['none']},UserId='12456'}"

    In the preceding example, replace <project-id> with the Amazon Pinpoint project ID that you copied in Step 1. Replace <mobile-number> with your phone number, formatted in E.164 format (for example, +12065550142).

Note that this endpoint contains hard-coded UserId and EndpointId values of 12456. These IDs match an ID that we’ll create later when we generate the Personalize data set.

Step 3: Create a Segment and Campaign in Amazon Pinpoint

Now that we have an endpoint, we need to add it to a segment so that we can use it within a campaign. By sending a campaign, we can verify that our Pinpoint project is configured correctly, and that we created the endpoint correctly.

To create the segment and campaign

  1. Open the Pinpoint console at http://console.aws.amazon.com/pinpoint, and then choose the project that you created in Step 1.
  2. In the navigation pane, choose Segments, and then choose Create a segment.
  3. Name the segment “No recommendations”. Under Segment group 1, on the Add a filter menu, choose Filter by user.
  4. On the Choose a user attribute menu, choose recommended_items. Set the value of the filter to “none”.
  5. Confirm that the Segment estimate section shows that there is one eligible endpoint, and then choose Create segment.
  6. In the navigation pane, choose Campaigns, and then choose Create a campaign.
  7. Name the campaign “SMS to users with no recommendations”. Under Choose a channel for this campaign, choose SMS, and then choose Next.
  8. On the Choose a segment page, choose the “No recommendations” segment that you just created, and then choose Next.
  9. In the message editor, type a test message, and then choose Next.
  10. On the Choose when to send the campaign page, keep all of the default values, and then choose Next.
  11. On the Review and launch page, choose Launch campaign. Within a few seconds, you receive a text message at the phone number that you specified when you created the endpoint.

Step 4: Load sample data into Amazon Personalize

At this point, we’ve finished setting up Amazon Pinpoint. Now we can start loading data into Amazon Personalize.

To load the data into Amazon Personalize

  1. At the command line, enter the following command to clone the sample data and Jupyter Notebooks to your computer:
    git clone https://github.com/markproy/personalize-car-search.git

  2. At the command line, change into the directory that contains the data that you just cloned. Enter the following command:
    jupyter notebook

    A new window opens in your web browser.

  3. In your web browser, open the first notebook (01_generate_data.ipynb). On the Cell menu, choose Run all. Wait for the commands to finish running.
  4. Open the second notebook (02_make_dataset_group.ipynb). In the first step, replace the value of the account_id variable with the ID of your AWS account. Then, on the Cell menu, choose Run all. This step takes several minutes to complete. Make sure that all of the commands have run successfully before you proceed to the next step.
  5. Open the third notebook (03_make_campaigns.ipynb). In the first step, replace the value of the account_id variable with the ID of your AWS account. Then, on the Cell menu, choose Run all. This step takes several minutes to complete. Make sure that all of the commands have run successfully before you proceed to the next step.
  6. Open the fourth notebook (04_use_the_campaign.ipynb). In the first step, replace the value of the account_id variable with the ID of your AWS account. Then, on the Cell menu, choose Run all. This step takes several minutes to complete.
  7. After the fourth notebook is finished running, choose Quit to terminate the Jupyter Notebook. You don’t need to run the fifth notebook for this example.
  8. Open the Amazon Personalize console at http://console.aws.amazon.com/personalize. Verify that Amazon Personalize contains one dataset group named car-dg.
  9. In the navigation pane, choose Campaigns. Verify that it contains all of the following campaigns, and that the status for each campaign is Active:
    • car-popularity-count
    • car-personalized-ranking
    • car-hrnn-metadata
    • car-sims
    • car-hrnn

Step 5: Create the Lambda function

We’ve loaded the data into Amazon Personalize, and now we need to create a Lambda function to update the endpoint attributes in Pinpoint with the recommendations provided by Personalize.

The version of the AWS SDK for Python that’s included with Lambda doesn’t include the libraries for Amazon Personalize. For this reason, you need to download these libraries to your computer, put them in a .zip file, and upload the entire package to Lambda.

To create the Lambda function

  1. In a text editor, create a new file. Paste the following code.
    # Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
    #
    # This file is licensed under the Apache License, Version 2.0 (the "License").
    # You may not use this file except in compliance with the License. A copy of the
    # License is located at
    #
    # http://aws.amazon.com/apache2.0/
    #
    # This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS
    # OF ANY KIND, either express or implied. See the License for the specific
    # language governing permissions and limitations under the License.
    
    AWS_REGION = "<region>"
    PROJECT_ID = "<project-id>"
    CAMPAIGN_ARN = "<car-hrnn-campaign-arn>"
    USER_ID = "12456"
    endpoint_id = USER_ID
    
    from datetime import datetime
    import json
    import boto3
    import logging
    from botocore.exceptions import ClientError
    
    DATE = datetime.now()
    
    personalize           = boto3.client('personalize')
    personalize_runtime   = boto3.client('personalize-runtime')
    personalize_events    = boto3.client('personalize-events')
    pinpoint              = boto3.client('pinpoint')
    
    def lambda_handler(event, context):
        itemList = get_recommended_items(USER_ID,CAMPAIGN_ARN)
        response = update_pinpoint_endpoint(PROJECT_ID,endpoint_id,itemList)
    
        return {
            'statusCode': 200,
            'body': json.dumps('Lambda execution completed.')
        }
    
    def get_recommended_items(user_id, campaign_arn):
        response = personalize_runtime.get_recommendations(campaignArn=campaign_arn, 
                                                           userId=str(user_id), 
                                                           numResults=10)
        itemList = response['itemList']
        return itemList
    
    def update_pinpoint_endpoint(project_id,endpoint_id,itemList):
        itemlistStr = []
        
        for item in itemList:
            itemlistStr.append(item['itemId'])
    
        pinpoint.update_endpoint(
        ApplicationId=project_id,
        EndpointId=endpoint_id,
        EndpointRequest={
                            'User': {
                                'UserAttributes': {
                                    'recommended_items': 
                                        itemlistStr
                                }
                            }
                        }
        )    
    
        return
    

    In the preceding code, make the following changes:

    • Replace <region> with the name of the AWS Region that you want to use, such as us-east-1.
    • Replace <project-id> with the ID of the Amazon Pinpoint project that you created earlier.
    • Replace <car-hrnn-campaign-arn> with the Amazon Resource Name (ARN) of the car-hrnn campaign in Amazon Personalize. You can find this value in the Amazon Personalize console.
  2. Save the file as pue-get-recs.py.
  3. Create and activate a virtual environment. In the virtual environment, use pip to download the latest versions of the boto3 and botocore libraries. For complete procedures, see Updating a Function with Additional Dependencies With a Virtual Environment in the AWS Lambda Developer Guide. Also, add the pue-get-recs.py file to the .zip file that contains the libraries.
  4. Open the IAM console at http://console.aws.amazon.com/iam. Create a new role. Attach the following policy to the role:
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "logs:CreateLogStream",
                    "logs:DescribeLogGroups",
                    "logs:CreateLogGroup",
                    "logs:PutLogEvents",
                    "personalize:GetRecommendations",
                    "mobiletargeting:GetUserEndpoints",
                    "mobiletargeting:GetApp",
                    "mobiletargeting:UpdateEndpointsBatch",
                    "mobiletargeting:GetApps",
                    "mobiletargeting:GetEndpoint",
                    "mobiletargeting:GetApplicationSettings",
                    "mobiletargeting:UpdateEndpoint"
                ],
                "Resource": "*"
            }
        ]
    }
    
  5. Open the Lambda console at http://console.aws.amazon.com/lambda, and then choose Create function.
  6. Create a new Lambda function from scratch. Choose the Python 3.7 runtime. Under Permissions, choose Use an existing role, and then choose the IAM role that you just created. When you finish, choose Create function.
  7. Upload the .zip file that contains the Lambda function and the boto3 and botocore libraries.
  8. Under Function code, change the Handler value to pue-get-recs.lambda_handler. Save your changes.

When you finish creating the function, you can test it to make sure it was set up correctly.

To test the Lambda function

  1. On the Select a test event menu, choose Configure test events. On the Configure test events window, specify an Event name, and then choose Create.
  2. Choose the Test button to execute the function.
  3. If the function executes successfully, open the Amazon Pinpoint console at http://console.aws.amazon.com/pinpoint.
  4. In the navigation pane, choose Segments, and then choose the “No recommendations” segment that you created earlier. Verify that the number under total endpoints is 0. This is the expected value; the segment is filtered to only include endpoints with no recommendation attributes, but when you ran the Lambda function, it added recommendations to the test endpoint.
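
If you'd rather confirm the update programmatically, you can read the endpoint back with the GetEndpoint API. The following is only a minimal sketch: the region, project ID, and endpoint ID are placeholders, and it assumes your credentials allow the mobiletargeting:GetEndpoint action.

    import boto3

    # Use the same region, project ID, and endpoint ID as in pue-get-recs.py.
    pinpoint = boto3.client('pinpoint', region_name='<region>')

    response = pinpoint.get_endpoint(
        ApplicationId='<project-id>',
        EndpointId='12456'
    )

    # Print the recommended item IDs that the Lambda function wrote to the endpoint.
    attributes = response['EndpointResponse'].get('User', {}).get('UserAttributes', {})
    print(attributes.get('recommended_items', 'No recommendations found.'))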

Step 7: Create segments and campaigns based on recommended items

In this section, we’ll create a targeted segment based on the recommendation data provided by our Personalize dataset. We’ll then use that segment to create a campaign.

To create a segment and campaign based on personalized recommendations

  1. Open the Amazon Pinpoint console at http://console.aws.amazon.com/pinpoint. On the All projects page, choose the project that you created earlier.
  2. In the navigation pane, choose Segments, and then choose Create a segment. Name the new segment “Recommendations for product 26304”.
  3. Under Segment group 1, on the Add a filter menu, choose Filter by user. On the Choose a user attribute menu, choose recommended_items (the attribute that the Lambda function added to the endpoint). Set the value of the filter to “26304”. Confirm that the Segment estimate section shows that there is one eligible endpoint, and then choose Create segment.
  4. In the navigation pane, choose Campaigns, and then choose Create a campaign.
  5. Name the campaign “SMS to users with recommendations for product 26304”. Under Choose a channel for this campaign, choose SMS, and then choose Next.
  6. On the Choose a segment page, choose the “Recommendations for product 26304” segment that you just created, and then choose Next.
  7. In the message editor, type a test message, and then choose Next.
  8. On the Choose when to send the campaign page, keep all of the default values, and then choose Next.
  9. On the Review and launch page, choose Launch campaign. Within a few seconds, you receive a text message at the phone number that you specified when you created the endpoint.

Next steps

Your PUE solution is now ready to use. From here, there are several ways that you can make the solution your own:

  • Expand your usage: If you plan to continue sending SMS messages, you should request a spending limit increase.
  • Extend to additional channels: This post showed the process of setting up an SMS campaign. You can add more endpoints—for the email or push notification channels, for example—and associate them with your users. You can then create new segments and new campaigns in those channels.
  • Build your own model: This post used a sample data set, but Amazon Personalize makes it easy to provide your own data. To start building a model with Personalize, you have to provide a data set that contains information about your users, items, and interactions. To learn more, see Getting Started in the Amazon Personalize Developer Guide.
  • Optimize your model: You can enrich your model by sending your mobile, web, and campaign engagement data to Amazon Personalize. In Pinpoint, you can use event streaming to move data directly to S3, and then use that data to retrain your Personalize model. To learn more about streaming events, see Streaming App and Campaign Events in the Amazon Pinpoint User Guide.
  • Update your recommendations on a regular basis: Use the create-campaign API to create a new recurring campaign. Rather than sending messages, include the hook property with a reference to the ARN of the pue-get-recs function. By completing this step, you can configure Pinpoint to retrieve the most up-to-date recommendation data each time the campaign recurs. For more information about using Lambda to modify segments, see Customizing Segments with AWS Lambda in the Amazon Pinpoint Developer Guide.
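
The last suggestion in the list above can be scripted with the create-campaign API. The following is a rough sketch only: the segment ID, schedule, and function ARN are assumptions, and depending on your configuration the API may also require a MessageConfiguration, so confirm the exact settings for your account before relying on it.

    import boto3

    pinpoint = boto3.client('pinpoint', region_name='<region>')

    # Hypothetical values; replace with your own project, segment, and function details.
    response = pinpoint.create_campaign(
        ApplicationId='<project-id>',
        WriteCampaignRequest={
            'Name': 'WeeklyRecommendationsRefresh',
            'SegmentId': '<segment-id>',
            'Schedule': {
                'StartTime': '2019-09-01T00:00:00Z',
                'Frequency': 'WEEKLY',
                'Timezone': 'UTC'
            },
            # The hook tells Amazon Pinpoint to invoke the pue-get-recs function
            # each time the campaign runs, so the endpoint attributes stay current.
            'Hook': {
                'LambdaFunctionName': '<pue-get-recs-function-arn>',
                'Mode': 'FILTER'
            }
        }
    )
    print(response['CampaignResponse']['Id'])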

Sending Push Notifications to iOS 13 and watchOS 6 Devices

Post Syndicated from Brent Meyer original https://aws.amazon.com/blogs/messaging-and-targeting/sending-push-notifications-to-ios-13-and-watchos-6-devices/

Last week, we made some changes to the way that Amazon Pinpoint sends Apple Push Notification service (APNs) push notifications.

In June, Apple announced that push notifications sent to iOS 13 and watchOS devices would require the new apns-push-type header. APNs uses this header to determine whether a notification should be shown on the display of the recipient’s device, or if it should be sent to the background.

When you use Amazon Pinpoint to send an APNs message, you can choose whether to send it as a standard message or as a silent notification. Amazon Pinpoint uses your selection to determine which value to apply to the apns-push-type header: standard messages are sent with the header set to alert, and silent notifications are sent with the header set to background. Amazon Pinpoint applies these settings automatically; you don't have to do any additional work in order to send messages to recipients with iOS 13 and watchOS 6 devices.

One last thing to keep in mind: if you specify the raw content of an APNs push notification, the message payload has to include the content-available key. The value of the content-available key has to be an integer, and can only be 0 or 1. If you’re sending an alert, set the value of content-available to 0. If you’re sending a background notification, set the value of content-available to 1. Additionally, background notification payloads can’t include the alert, badge, or sound keys.
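
If you do specify raw content, the payload for a background notification might look like the sketch below. This is an illustration only: the project and endpoint IDs are placeholders, and you should verify the RawContent and SilentPush settings against the Amazon Pinpoint API reference for your SDK version.

    import json
    import boto3

    pinpoint = boto3.client('pinpoint', region_name='<region>')

    # A background (silent) notification: content-available is 1, and the payload
    # contains no alert, badge, or sound keys.
    raw_payload = {
        "aps": {
            "content-available": 1
        },
        "data": {
            "refresh": "inbox"
        }
    }

    response = pinpoint.send_messages(
        ApplicationId='<project-id>',
        MessageRequest={
            'Endpoints': {
                '<endpoint-id>': {}
            },
            'MessageConfiguration': {
                'APNSMessage': {
                    'RawContent': json.dumps(raw_payload),
                    'SilentPush': True
                }
            }
        }
    )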

To learn more about sending APNs notifications, see Generating a Remote Notification and Pushing Background Updates to Your App on the Apple Developer website.

Create an SMS Chatbot with Amazon Pinpoint and Lex

Post Syndicated from Brent Meyer original https://aws.amazon.com/blogs/messaging-and-targeting/create-an-sms-chatbox-with-amazon-pinpoint-and-lex/

Note: This post was written by Ilya Pupko, Senior Consultant for the AWS Digital User Engagement team, and by Gopinath Srinivasan, an AWS Enterprise Solution Architect.


A major advantage of using Amazon Pinpoint for your customer engagement workflows is its ability to tightly integrate with other AWS services. These integrations make it possible to engage in deeper conversations with your customers, as opposed to simply sending one-directional, one-size-fits-all messages.

In this tutorial, we look at the process of creating an SMS-based chatbot using Amazon Lex. This chatbot will help our customers schedule appointments. We’ll use Amazon Pinpoint to send responses from the chatbot over the SMS channel, and we’ll use AWS Lambda to connect the two services together. The following image illustrates the architecture that we’ll create in this tutorial.

The steps in this post are intended to provide general guidance, rather than specific procedures. If you’ve used other AWS services in the past, most of the concepts here will be familiar. If not, don’t worry—we’ve included links to the documentation to make things easier.

Step 1: Set up a project in Amazon Pinpoint

The first step in setting up the chatbot is to create a new Amazon Pinpoint project that can send and receive SMS messages. To be able to receive incoming SMS messages, you also need to obtain a dedicated phone number.

To create a new SMS project

  1. Sign in to the Amazon Pinpoint console at https://console.aws.amazon.com/pinpoint.
  2. Create a new Amazon Pinpoint project, and enable the SMS channel for the project. For more information, see https://docs.aws.amazon.com/pinpoint/latest/userguide/channels-sms-setup.html.
  3. Request a long code for your country. For more information, see https://docs.aws.amazon.com/pinpoint/latest/userguide/channels-voice-manage.html#channels-voice-manage-request-phone-numbers.
  4. Enable the two-way SMS feature for the dedicated long code that you just purchased. Under Incoming message destination, choose Create a new SNS topic, and name it LexPinpointIntegrationDemo. For more information about setting up two-way SMS, see https://docs.aws.amazon.com/pinpoint/latest/userguide/channels-sms-two-way.html.

Step 2: Create a Lex chatbot

Now it’s time to create your bot. For the purposes of this example, we’ll use a bot that’s pre-configured to handle appointment requests. Later, you can customize this bot to fit your needs by specifying additional intents.

To create your bot

  1. Sign in to the Lex console at https://console.aws.amazon.com/lex.
  2. Create a new bot. On the Create your bot page, choose the ScheduleAppointment sample. Use the default IAM role. For COPPA, choose No. Note the name that you specified for the bot—you need to refer to this name in the Lambda function that you create later. For more information about creating bots in Lex, see https://docs.aws.amazon.com/lex/latest/dg/gs-console.html.
  3. When the bot finishes building, choose Publish. For Create an alias, enter Latest. Choose Publish.

Step 3: Set up the Lambda backend

After you create your Lex bot, you have to create a Lambda function that allows your Lex bot to send messages through Amazon Pinpoint.

To create the Lambda function

  1. Sign in to the Lambda console at https://console.aws.amazon.com/lambda.
  2. Create a new Node.js 10.x function from scratch. Create a new IAM role with the default permissions. For more information about creating functions, see https://docs.aws.amazon.com/lambda/latest/dg/getting-started-create-function.html.
  3. In the Designer section, choose Add trigger. Add a new SNS trigger, and then choose the LexPinpointIntegrationDemo topic that you created earlier.
  4. In the Lambda code editor, paste the following code:
    "use strict";
    
    const AWS = require('aws-sdk');
    AWS.config.update({
        region: process.env.Region
    });
    const pinpoint = new AWS.Pinpoint();
    const lex = new AWS.LexRuntime();
    
    var AppId = process.env.PinpointApplicationId;
    var BotName = process.env.BotName;
    var BotAlias = process.env.BotAlias;
    
    exports.handler = (event, context)  => {
        /*
        * Event info is sent via the SNS subscription: https://console.aws.amazon.com/sns/home
        * 
        * - PinpointApplicationId is your Pinpoint Project ID.
        * - BotName is your Lex Bot name.
        * - BotAlias is your Lex Bot alias (aka Lex function/flow).
        */
        console.log('Received event: ' + event.Records[0].Sns.Message);
        var message = JSON.parse(event.Records[0].Sns.Message);
        var customerPhoneNumber = message.originationNumber;
        var chatbotPhoneNumber = message.destinationNumber;
        var response = message.messageBody.toLowerCase();
        var userId = customerPhoneNumber.replace("+1", "");
    
        var params = {
            botName: BotName,
            botAlias: BotAlias,
            inputText: response,
            userId: userId
        };
        // Forward the customer's message to the Lex bot, then relay the bot's
        // reply back to the customer over SMS.
        lex.postText(params, function (err, data) {
            if (err) {
                console.log(err, err.stack);
            }
            else if (data != null && data.message != null) {
                console.log("Lex response: " + data.message);
                sendResponse(customerPhoneNumber, chatbotPhoneNumber, data.message);
            }
            else {
                console.log("Lex did not send a message back!");
            }
        });
    }
    
    function sendResponse(custPhone, botPhone, response) {
        var paramsSMS = {
            ApplicationId: AppId,
            MessageRequest: {
                Addresses: {
                    [custPhone]: {
                        ChannelType: 'SMS'
                    }
                },
                MessageConfiguration: {
                    SMSMessage: {
                        Body: response,
                        MessageType: "TRANSACTIONAL",
                        OriginationNumber: botPhone
                    }
                }
            }
        };
        pinpoint.sendMessages(paramsSMS, function (err, data) {
            if (err) {
                console.log("An error occurred.\n");
                console.log(err, err.stack);
            }
            else if (data['MessageResponse']['Result'][custPhone]['DeliveryStatus'] != "SUCCESSFUL") {
                console.log("Failed to send SMS response:");
                console.log(data['MessageResponse']['Result']);
            }
            else {
                console.log("Successfully sent response via SMS from " + botPhone + " to " + custPhone);
            }
        });
    }
  5. In the Environment variables section, add the following variables:

    Key                    Value
    PinpointApplicationId  The ID of the Amazon Pinpoint project that you created earlier.
    BotName                The name of the Lex bot that you created earlier.
    BotAlias               Latest
    Region                 The AWS Region that you created the Amazon Pinpoint project and Lex bot in.
  6. Under Execution role, choose View the LexPinpointIntegrationDemoLambda role.
  7. In the IAM console, add an inline policy. Paste the following into the policy editor:
    {
       "Version":"2012-10-17",
       "Statement":[
          {
             "Sid":"Logs",
             "Effect":"Allow",
             "Action":[
                "logs:CreateLogStream",
                "logs:CreateLogGroup",
                "logs:PutLogEvents"
             ],
             "Resource":[
                "arn:aws:logs:*:*:*"
             ]
          },
          {
             "Sid":"Pinpoint",
             "Effect":"Allow",
             "Action":[
                "mobiletargeting:SendMessages"
             ],
             "Resource":[
                "arn:aws:mobiletargeting:<REGION>:<ACCOUNTID>:apps/*"
             ]
          },
          {
             "Sid":"Lex",
             "Effect":"Allow",
             "Action":[
                "lex:PostContent",
                "lex:PostText"
             ],
             "Resource":[
                "arn:aws:lex:<REGION>:<ACCOUNTID>:bot/<BOTNAME>"
             ]
          }
       ]
    }

    In the preceding code, make the following changes:

    • Replace <REGION> with the name of the AWS Region that you created the Amazon Pinpoint project and the Lex bot in (such as us-east-1).
    • Replace <ACCOUNTID> with your AWS account ID.
    • Replace <BOTNAME> with the name of your Lex bot.
  8. When you finish, save the policy as PinpointLexFunctionRole.

Step 4: Test the chatbot

Your SMS chatbot is now set up and ready to use! You can test it by sending a message (such as “Schedule an appointment”) to the long code that you obtained earlier. The chatbot responds, asking what type of appointment you want to book, and at what time.
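
If the SMS flow doesn't behave as expected, it can help to test the Lex bot on its own, bypassing Amazon Pinpoint and the Lambda function. The following is a minimal sketch that uses the Lex runtime API; the bot name and alias are placeholders for the values you published in step 2.

    import boto3

    lex = boto3.client('lex-runtime', region_name='<region>')

    # Simulate the text that a customer would send over SMS.
    response = lex.post_text(
        botName='<your-bot-name>',
        botAlias='Latest',
        userId='test-user-1',
        inputText='Schedule an appointment'
    )

    # The bot's reply is what the Lambda function would forward back over SMS.
    print(response['message'])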

Conclusion

Now that you’ve created your chatbot, you can start to customize it to fit your specific use case. For example, you can enhance the chatbot’s conversational abilities by adding intents, or you could expand on the Lambda function to integrate it with a third-party scheduling tool.

To learn more about configuring Amazon Lex, see the Amazon Lex Developer Guide.

Finally, you can find the latest updates to the code that’s associated with this tutorial in the amazon-pinpoint-lex-bot repository on Github.

Creating custom Pinpoint dashboards using Amazon QuickSight, part 3

Post Syndicated from Brent Meyer original https://aws.amazon.com/blogs/messaging-and-targeting/creating-custom-pinpoint-dashboards-using-amazon-quicksight-part-3/

Note: This post was written by Manan Nayar and Aprajita Arora, Software Development Engineers on the AWS Digital User Engagement team.


This is the third and final post in our series about creating custom visualizations of your Amazon Pinpoint metrics using Amazon QuickSight.

In our first post, we used the Metrics APIs to retrieve specific Key Performance Indicators (KPIs), and then created visualizations using QuickSight. In the second post, we used the event stream feature in Amazon Pinpoint to enable more in-depth analyses.

The examples in the first two posts used Amazon S3 to store the metrics that we retrieved from Amazon Pinpoint. This post takes a different approach, using Amazon Redshift to store the data. By using Redshift to store this data, you gain the ability to create visualizations on large data sets. This example is useful in situations where you have a large volume of event data, and situations where you need to store your data for long periods of time.

Step 1: Provision the storage

The first step in setting up this solution is to create the destinations where you’ll store the Amazon Pinpoint event data. Since we’ll be storing the data in Amazon Redshift, we need to create a new Redshift cluster. We’ll also create an S3 bucket, which will house the original event data that’s streamed from Amazon Pinpoint.

To create the Redshift cluster and the S3 bucket

  1. Create a new Redshift cluster. To learn more, see the Amazon Redshift Getting Started Guide.
  2. Create a new table in the Redshift cluster that contains the appropriate columns. Use the following query to create the table:
    create table if not exists pinpoint_events_table(
      rowid varchar(255) not null,
      project_key varchar(100) not null,
      event_type varchar(100) not null,
      event_timestamp timestamp not null,
      campaign_id varchar(100),
      campaign_activity_id varchar(100),
      treatment_id varchar(100),
      PRIMARY KEY (rowid)
    );
  3. Create a new Amazon S3 bucket. For complete procedures, see Create a Bucket in the Amazon S3 Getting Started Guide.

Step 2: Set up the event stream

This example uses the event stream feature of Amazon Pinpoint to send event data to S3. Later, we’ll create a Lambda function that sends the event data to your Redshift cluster when new event data is added to the S3 bucket. This method lets us store the original event data in S3, and transfer a subset of that data to Redshift for analysis.

To set up the event stream

  1. Sign in to the Amazon Pinpoint console at http://console.aws.amazon.com/pinpoint. In the list of projects, choose the project that you want to enable event streaming for.
  2. Under Settings, choose Event stream.
  3. Choose Edit, and then configure the event stream to use Amazon Kinesis Data Firehose. If you don’t already have a Kinesis Data Firehose stream, follow the link to create one in the Kinesis console. Configure the stream to send data to an S3 bucket. For more information about creating streams, see Creating an Amazon Kinesis Data Firehose Delivery Stream.
  4. Under IAM role, choose Automatically create a role. Choose Save.
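
If you prefer to configure the event stream programmatically rather than through the console, the PutEventStream API performs roughly the same configuration as steps 3 and 4. The following is a sketch only; the Firehose delivery stream ARN and IAM role ARN are placeholders for resources that you create separately, and the role must allow Amazon Pinpoint to write to the stream.

    import boto3

    pinpoint = boto3.client('pinpoint', region_name='<region>')

    # Point the project's event stream at an existing Kinesis Data Firehose
    # delivery stream.
    response = pinpoint.put_event_stream(
        ApplicationId='<project-id>',
        WriteEventStream={
            'DestinationStreamArn': 'arn:aws:firehose:<region>:<account-id>:deliverystream/<stream-name>',
            'RoleArn': 'arn:aws:iam::<account-id>:role/<pinpoint-event-stream-role>'
        }
    )
    print(response['EventStream'])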

Step 3: Create the Lambda function

In this section, you create a Lambda function that processes the raw event stream data, and then writes it to a table in your Redshift cluster.

To create the Lambda function

  1. Download the psycopg2 binary from https://github.com/jkehler/awslambda-psycopg2. This Python library lets you interact with PostgreSQL databases, such as Amazon Redshift. It contains certain libraries that aren’t included in Lambda.
    • Note: This Github repository is not an official AWS-managed repository.
  2. Within the awslambda-psycopg2-master folder, you’ll find a folder called psycopg2-37. Rename the folder to psycopg2 (you may need to delete the existing folder with that name), and then compress the entire folder to a .zip file.
  3. Create a new Lambda function from scratch, using the Python 3.7 runtime.
  4. Upload the psycopg2.zip file that you created in step 1 to Lambda.
  5. In the Lambda code editor, create a new file named lambda_function.py. Paste the following code into the file:
    import datetime
    import json
    import re
    import uuid
    import os
    import boto3
    import psycopg2
    from psycopg2 import Error
    
    cluster_redshift = "<clustername>"
    dbname_redshift = "<dbname>"
    user_redshift = "<username>"
    password_redshift = "<password>"
    endpoint_redshift = "<endpoint>"
    port_redshift = "5439"
    table_redshift = "pinpoint_events_table"
    
    # Get the file that contains the event data from the appropriate S3 bucket.
    def get_file_from_s3(bucket, key):
        s3 = boto3.client('s3')
        obj = s3.get_object(Bucket=bucket, Key=key)
        text = obj["Body"].read().decode()
    
        return text
    
    # If the object that we retrieve contains newline-delineated JSON, split it into
    # multiple objects.
    def clean_and_split(json_raw):
        json_delimited = re.sub(r'}\s{', '}---X-DELIMITER---{', json_raw)
        json_clean = re.sub(r'\s+', '', json_delimited)
        data = json_clean.split("---X-DELIMITER---")
    
        return data
    
    # Set all of the variables that we'll use to create the new row in Redshift.
    def set_variables(in_json):
    
        for line in in_json:
            content = json.loads(line)
            app_id = content['application']['app_id']
            event_type = content['event_type']
            event_timestamp = datetime.datetime.fromtimestamp(content['event_timestamp'] / 1e3).strftime('%Y-%m-%d %H:%M:%S')
    
            if (content['attributes'].get('campaign_id') is None):
                campaign_id = ""
            else:
                campaign_id = content['attributes']['campaign_id']
    
            if (content['attributes'].get('campaign_activity_id') is None):
                campaign_activity_id = ""
            else:
                campaign_activity_id = content['attributes']['campaign_activity_id']
    
            if (content['attributes'].get('treatment_id') is None):
                treatment_id = ""
            else:
                treatment_id = content['attributes']['treatment_id']
    
            write_to_redshift(app_id, event_type, event_timestamp, campaign_id, campaign_activity_id, treatment_id)
                
    # Write the event stream data to the Redshift table.
    def write_to_redshift(app_id, event_type, event_timestamp, campaign_id, campaign_activity_id, treatment_id):
        row_id = str(uuid.uuid4())
    
        # Use a parameterized query so that values containing quotes can't break
        # the INSERT statement.
        query = ("INSERT INTO " + table_redshift + " (rowid, project_key, event_type, "
                + "event_timestamp, campaign_id, campaign_activity_id, treatment_id) "
                + "VALUES (%s, %s, %s, %s, %s, %s, %s);")
        values = (row_id, app_id, event_type, event_timestamp, campaign_id,
                  campaign_activity_id, treatment_id)
    
        conn = None
        try:
            conn = psycopg2.connect(user = user_redshift,
                                    password = password_redshift,
                                    host = endpoint_redshift,
                                    port = port_redshift,
                                    database = dbname_redshift)

            cur = conn.cursor()
            cur.execute(query, values)
            conn.commit()
            print("Updated table.")

        except (Exception, psycopg2.DatabaseError) as error:
            print("Database error: ", error)
        finally:
            # conn stays None if the connection attempt failed, so check before closing.
            if conn:
                cur.close()
                conn.close()
                print("Connection closed.")
    
    # Handle the event notification that we receive when a new item is sent to the 
    # S3 bucket.
    def lambda_handler(event,context):
        print("Received event: \n" + str(event))
    
        bucket = event['Records'][0]['s3']['bucket']['name']
        key = event['Records'][0]['s3']['object']['key']
        data = get_file_from_s3(bucket, key)
    
        in_json = clean_and_split(data)
    
        set_variables(in_json)

    In the preceding code, make the following changes:

    • Replace <clustername> with the name of the cluster.
    • Replace <dbname> with the name of the database.
    • Replace <username> with the user name that you specified when you created the Redshift cluster.
    • Replace <password> with the password that you specified when you created the Redshift cluster.
    • Replace <endpoint> with the endpoint address of the Redshift cluster.
  6. In IAM, update the execution role that’s associated with the Lambda function to include the GetObject permission for the S3 bucket that contains the event data. For more information, see Editing IAM Policies in the AWS IAM User Guide.

Step 4: Set up notifications on the S3 bucket

Now that we’ve created the Lambda function, we’ll set up a notification on the S3 bucket. In this case, the notification will refer to the Lambda function that we created in the previous section. Every time a new file is added to the bucket, the notification will cause the Lambda function to run.

To create the event notification

  1. In S3, create a new bucket notification. The notification should be triggered when PUT events occur, and should trigger the Lambda function that you created in the previous section. For more information about creating notifications, see Configuring Amazon S3 Event Notifications in the Amazon S3 Developer Guide. (A rough API-based sketch of this step appears after this list.)
  2. Test the event notification by sending a test campaign. If you send an email campaign, your Redshift database should contain events such as _campaign.send, _email.send, _email.delivered, and others. You can check the contents of the Redshift table by running the following query in the Query Editor in the Redshift console:
    select * from pinpoint_events_table;
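
For step 1 above, the equivalent API call looks roughly like the following sketch. The bucket name and function ARN are placeholders, and you also need to grant Amazon S3 permission to invoke the function (for example, with the Lambda AddPermission API) before the notification will work.

    import boto3

    s3 = boto3.client('s3')

    # Invoke the Lambda function whenever a new object is PUT into the bucket.
    s3.put_bucket_notification_configuration(
        Bucket='<bucket-name>',
        NotificationConfiguration={
            'LambdaFunctionConfigurations': [
                {
                    'Id': 'pinpoint-events-to-redshift',
                    'LambdaFunctionArn': 'arn:aws:lambda:<region>:<account-id>:function:<function-name>',
                    'Events': ['s3:ObjectCreated:Put']
                }
            ]
        }
    )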

Step 5: Add the data set in Amazon QuickSight

If your Lambda function is sending event data to Redshift as expected, you can use your Redshift database to create a new data set in Amazon QuickSight. QuickSight includes an automatic database discovery feature that helps you add your Redshift database as a data set with only a few clicks. For more information, see Creating a Data Set from a Database in the Amazon QuickSight User Guide.

Step 6: Create your visualizations

Now that QuickSight is retrieving information from your Redshift database, you can use that data to create visualizations. To learn more about creating visualizations in QuickSight, see Creating an Analysis in the Amazon QuickSight User Guide.

This brings us to the end of our series. While these posts focused on using Amazon QuickSight to visualize your analytics data, you can also use these same techniques to create visualizations using 3rd party applications. We hope you enjoyed this series, and we can’t wait to see what you build using these examples!

Creating custom Pinpoint dashboards using Amazon QuickSight, part 2

Post Syndicated from Brent Meyer original https://aws.amazon.com/blogs/messaging-and-targeting/creating-custom-pinpoint-dashboards-using-amazon-quicksight-part-2/

Note: This post was written by Manan Nayar and Aprajita Arora, Software Development Engineers on the AWS Digital User Engagement team.


In our previous post, we discussed the process of visualizing specific, pre-aggregated Amazon Pinpoint metrics—such as delivery rate or open rate—using the Amazon Pinpoint Metrics APIs. In that example, we showed how to create a Lambda function that retrieves your metrics, and then make those metrics available for creating visualizations in Amazon QuickSight.

This post shows a different approach to exporting data from Amazon Pinpoint and using it to build visualizations. Rather than retrieve specific metrics, you can use the event stream feature in Amazon Pinpoint to export raw event data. You can use this data in Amazon QuickSight to create in-depth analyses of your data, as opposed to visualizing pre-calculated metrics. As an added benefit, when you use this solution, you don’t have to modify any code, and the underlying data is updated every few minutes.

Step 1: Configure the event stream in Amazon Pinpoint

The Amazon Pinpoint event stream includes information about campaign events (such as _campaign.send) and application events (such as _session.start). It also includes response information related to all of the emails and SMS messages that you send from your Amazon Pinpoint project, regardless of whether they were sent from campaigns or on a transactional basis. When you enable event streams, Amazon Pinpoint automatically sends this data to your S3 bucket (via Amazon Kinesis Data Firehose) every few minutes.

To set up the event stream

  1. Sign in to the Amazon Pinpoint console at http://console.aws.amazon.com/pinpoint. In the list of projects, choose the project that you want to enable event streaming for.
  2. Under Settings, choose Event stream.
  3. Choose Edit, and then configure the event stream to use Amazon Kinesis Data Firehose. If you don’t already have a Kinesis Data Firehose stream, follow the link to create one in the Kinesis console. Configure the stream to send data to an S3 bucket. For more information about creating streams, see Creating an Amazon Kinesis Data Firehose Delivery Stream.
  4. Under IAM role, choose Automatically create a role. Choose Save.

Step 2: Add a data set in Amazon QuickSight

Now that you’ve started streaming your Amazon Pinpoint data to S3, you can set Amazon QuickSight to look for data in the S3 bucket. You connect QuickSight to sources of data by creating data sets.

To create a data set

    1. In a text editor, create a new file. Paste the following code:
      {
          "fileLocations": [
              {
                  "URIPrefixes": [ 
                      "s3://<bucketName>/"          
                  ]
              }
          ],
          "globalUploadSettings": {
              "format": "JSON"
          }
      }

      In the preceding code, replace <bucketName> with the name of the S3 bucket that you’re using to store the event stream data. Save the file as manifest.json.

    2. Sign in to the QuickSight console at https://quicksight.aws.amazon.com.
    3. Create a new S3 data set. When prompted, choose the manifest file that you created in step 1. For more information about creating S3 data sets, see Creating a Data Set Using Amazon S3 Files in the Amazon QuickSight User Guide.
    4. Create a new analysis. From here, you can start creating visualizations of your data. To learn more, see Creating an Analysis in the Amazon QuickSight User Guide.

Step 3: Set the refresh rate for the data set

You can configure your data sets in Amazon QuickSight to automatically refresh on a certain schedule. In this section, you configure the data set to refresh every day, one minute before midnight.

To set the refresh schedule

  1. Go to the QuickSight start page at https://quicksight.aws.amazon.com/sn/start. Choose Manage data.
  2. Choose the data set that you created in the previous section.
  3. Choose Schedule refresh. Follow the prompts to set up a daily refresh schedule.

Step 4: Create your visualizations

From this point, you can start creating visualizations of your data. To learn more about creating visualizations, see Creating an Analysis in the Amazon QuickSight User Guide.

Creating custom Pinpoint dashboards using Amazon QuickSight, part 1

Post Syndicated from Brent Meyer original https://aws.amazon.com/blogs/messaging-and-targeting/creating-custom-pinpoint-dashboards-using-amazon-quicksight-part-1/

Note: This post was written by Manan Nayar and Aprajita Arora, Software Development Engineers on the AWS Digital User Engagement team.


Amazon Pinpoint helps you create customer-centric engagement experiences across the mobile, web, and other messaging channels. It also provides a variety of Key Performance Indicators (KPIs) that you can use to track the performance of your messaging programs.

You can access these KPIs through the console, or by using the Amazon Pinpoint API. In some cases, you might want to create custom dashboards that aren’t included by default, or even combine these metrics with other data. Over the next few days, we’ll discuss several different methods that you can use to create your own custom dashboards.

In this post, you’ll learn how to use the Amazon Pinpoint API to retrieve metrics, and then display them in visualizations that you create in Amazon QuickSight. This option is ideal for creating custom dashboards that highlight a specific set of metrics, or for embedding these metrics in your existing application or website.

In the next post (which we’ll post on Monday, August 19), you’ll learn how to export raw event data to an S3 bucket, and use that data to create dashboards by using QuickSight’s Super-fast, Parallel, In-memory Calculation Engine (SPICE). This option enables you to perform in-depth analyses and quickly update visualizations. It’s also cost-effective, because all of the event data is stored in an S3 bucket.

The final post (which we’ll post on Wednesday, August 21) will also discuss the process of creating visualizations from event stream data. However, in this solution, the data will be sent from Amazon Kinesis to a Redshift cluster. This option is ideal if you need to process very large volumes of event data.

Creating a QuickSight dashboard that uses specific metrics

You can use the Amazon Pinpoint API to programmatically access many of the metrics that are shown on the Analytics pages of the Amazon Pinpoint console. You can learn more about using the API to obtain specific KPIs in our recent blog post, Tracking Campaign Performance Using the Metrics APIs.

The following sections show you how to parse and store those results in Amazon S3, and then create custom dashboards by using Amazon QuickSight. The steps below are meant to provide general guidance, rather than specific procedures. If you’ve used other AWS services in the past, most of the concepts here will be familiar. If not, don’t worry—we’ve included links to the documentation to make things easier.

Step 1: Package the Dependencies

Lambda currently uses a version of the AWS SDK that is a few versions behind the current version. However, the ability to retrieve Pinpoint metrics programmatically is a relatively new feature. For this reason, you have to download the latest version of the SDK libraries to your computer, create a .zip archive, and then upload that archive to Lambda.

To package the dependencies

    1. Paste the following code into a text editor:
      from datetime import datetime
      import boto3
      import json
      
      AWS_REGION = "<us-east-1>"
      PROJECT_ID = "<projectId>"
      BUCKET_NAME = "<bucketName>"
      BUCKET_PREFIX = "quicksight-data"
      DATE = datetime.now()
      
      # Get today's value for the specified KPI.
      def get_kpi(kpi_name):
      
          client = boto3.client('pinpoint',region_name=AWS_REGION)
      
          response = client.get_application_date_range_kpi(
              ApplicationId=PROJECT_ID,
              EndTime=DATE.strftime("%Y-%m-%d"),
              KpiName=kpi_name,
              StartTime=DATE.strftime("%Y-%m-%d")
          )
          rows = response['ApplicationDateRangeKpiResponse']['KpiResult']['Rows'][0]['Values']
      
          # Create a JSON object that contains the values we'll use to build QuickSight visualizations.
          data = construct_json_object(rows[0]['Key'], rows[0]['Value'])
      
          # Send the data to the S3 bucket.
          write_results_to_s3(kpi_name, json.dumps(data).encode('UTF-8'))
      
      # Create the JSON object that we'll send to S3.
      def construct_json_object(kpi_name, value):
          data = {
              "applicationId": PROJECT_ID,
              "kpiName": kpi_name,
              "date": str(DATE),
              "value": value
          }
      
          return data
      
      # Send the data to the designated S3 bucket.
      def write_results_to_s3(kpi_name, data):
          # Create a file path with folders for year, month, date, and hour.
          path = (
              BUCKET_PREFIX + "/"
              + DATE.strftime("%Y") + "/"
              + DATE.strftime("%m") + "/"
              + DATE.strftime("%d") + "/"
              + DATE.strftime("%H") + "/"
              + kpi_name
          )
      
          client = boto3.client('s3')
      
          # Send the data to the S3 bucket.
          response = client.put_object(
              Bucket=BUCKET_NAME,
              Key=path,
              Body=bytes(data)
          )
      
      def lambda_handler(event, context):
          get_kpi('email-open-rate')
          get_kpi('successful-delivery-rate')
          get_kpi('unique-deliveries')

      In the preceding code, make the following changes:

      • Replace <us-east-1> with the name of the AWS Region that you use Amazon Pinpoint in.
      • Replace <projectId> with the ID of the Amazon Pinpoint project that the metrics are associated with.
      • Replace <bucketName> with the name of the Amazon S3 bucket that you want to use to store the data. For more information about creating S3 buckets, see Create a Bucket in the Amazon S3 Getting Started Guide.
      • Optionally, modify the lambda_handler function so that it calls the get_kpi function for the specific metrics that you want to retrieve.

      When you finish, save the file as retrieve_pinpoint_kpis.py.

    2. Use pip to download the latest versions of the boto3 and botocore libraries. Add these libraries to a .zip file. Also add retrieve_pinpoint_kpis.py to the .zip file. You can learn more about all of these tasks in Updating a Function with Additional Dependencies With a Virtual Environment in the AWS Lambda Developer Guide.

Step 2: Set up the Lambda function

In this section, you upload the package that you created in the previous section to Lambda.

To set up the Lambda function

  1. In the Lambda console, create a new function from scratch. Choose the Python 3.7 runtime.
  2. Choose a Lambda execution role that contains the following permissions (a sample policy sketch appears after this list):
    • Allows the action mobiletargeting:GetApplicationDateRangeKpi for the resource arn:aws:mobiletargeting:<awsRegion>:<yourAwsAccountId>:apps/*/kpis/*/*, where <awsRegion> is the Region where you use Amazon Pinpoint, and <yourAwsAccountId> is your AWS account number.
    • Allows the action s3:PutObject for the resource arn:aws:s3:::<my_bucket>/*, where <my_bucket> is the name of the S3 bucket where you want to store the metrics.
  3. Upload the .zip file that you created in the previous section.
  4. Change the Handler value to retrieve_pinpoint_kpis.lambda_handler.
  5. Save your changes.
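
Putting the two permissions from step 2 together, the execution role's policy might look something like the following sketch. The Region, account ID, and bucket name are placeholders; adjust the resources to match your own environment.

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": "mobiletargeting:GetApplicationDateRangeKpi",
                "Resource": "arn:aws:mobiletargeting:<awsRegion>:<yourAwsAccountId>:apps/*/kpis/*/*"
            },
            {
                "Effect": "Allow",
                "Action": "s3:PutObject",
                "Resource": "arn:aws:s3:::<my_bucket>/*"
            }
        ]
    }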

Step 3: Schedule the execution of the function

At this point, the Lambda function is ready to run. The next step is to set up the trigger that will cause it to run. In this case, since we’re retrieving an entire day’s worth of data, we’ll set up a scheduled trigger that runs every day at 11:59 PM.

To set up the trigger

  1. In the Lambda console, in the Designer section, choose Add trigger.
  2. Create a new CloudWatch Events rule that uses the Schedule expression rule type.
  3. For the schedule expression, enter cron(59 23 ? * * *).
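
If you'd rather script the schedule than click through the console, the same trigger can be created with the CloudWatch Events API. The sketch below is illustrative only: the rule name is arbitrary, the function ARN is a placeholder, and you must also grant events.amazonaws.com permission to invoke the function (shown here with the AddPermission call).

    import boto3

    events = boto3.client('events')
    lambda_client = boto3.client('lambda')

    FUNCTION_ARN = 'arn:aws:lambda:<awsRegion>:<yourAwsAccountId>:function:<function-name>'

    # Run every day at 11:59 PM (UTC, since CloudWatch Events cron expressions use UTC).
    rule = events.put_rule(
        Name='retrieve-pinpoint-kpis-daily',
        ScheduleExpression='cron(59 23 ? * * *)'
    )

    # Allow CloudWatch Events to invoke the Lambda function.
    lambda_client.add_permission(
        FunctionName=FUNCTION_ARN,
        StatementId='allow-cloudwatch-events',
        Action='lambda:InvokeFunction',
        Principal='events.amazonaws.com',
        SourceArn=rule['RuleArn']
    )

    # Point the rule at the Lambda function.
    events.put_targets(
        Rule='retrieve-pinpoint-kpis-daily',
        Targets=[{'Id': '1', 'Arn': FUNCTION_ARN}]
    )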

Step 4: Create QuickSight Analyses

Once the data is populated in S3, you can start creating analyses in Amazon QuickSight. The process of creating new analyses involves a couple of tasks: creating a new data set, and creating your visualizations.

To create analyses in QuickSight

    1. In a text editor, create a new file. Paste the following code:
      {
          "fileLocations": [
              {
                  "URIPrefixes": [ 
                      "s3://<bucketName>/quicksight-data/"          
                  ]
              }
          ],
          "globalUploadSettings": {
              "format": "JSON"
          }
      }

      In the preceding code, replace <bucketName> with the name of the S3 bucket that you’re using to store the metrics data. Save the file as manifest.json.

    2. Sign in to the QuickSight console at https://quicksight.aws.amazon.com.
    3. Create a new S3 data set. When prompted, choose the manifest file that you created in step 1. For more information about creating S3 data sets, see Creating a Data Set Using Amazon S3 Files in the Amazon QuickSight User Guide.
    4. Create a new analysis. From here, you can start creating visualizations of your data. To learn more, see Creating an Analysis in the Amazon QuickSight User Guide.

Send text messages in Amazon Connect by integrating Amazon Pinpoint

Post Syndicated from Brent Meyer original https://aws.amazon.com/blogs/messaging-and-targeting/send-text-messages-in-amazon-connect-by-integrating-amazon-pinpoint/

Because Amazon Pinpoint is a member of the AWS family, you can integrate it seamlessly with other AWS services. In the past, this blog has looked at the process of integrating Amazon Pinpoint with Amazon Comprehend and Amazon Redshift.

Earlier this week, Michael Woodward, a Solution Architect here at AWS, published a blog post about integrating Amazon Pinpoint with Amazon Connect, our cloud-based contact center service.

Integrating Amazon Pinpoint into Amazon Connect lets you expand the capabilities of your call center systems in several interesting ways. For example, you can use Amazon Pinpoint to send more information after a call ends, or to send a link to an after-call survey.

To learn more about this solution, see Michael’s post on the AWS Contact Center blog.

New Regions, New Features, and a New Web Site

Post Syndicated from Brent Meyer original https://aws.amazon.com/blogs/messaging-and-targeting/new-regions-new-features-and-a-new-web-site/

It’s a busy time here on the Digital User Engagement Team at AWS!

Last week, we made Amazon Pinpoint available in the Asia Pacific (Mumbai) and Asia Pacific (Sydney) AWS Regions. This is great news for new Pinpoint customers in these areas of the globe who were previously concerned with issues related to latency and data residency. Existing Amazon Pinpoint customers can also use these new Regions to increase availability and create geographical redundancy.

On Tuesday of this week, we also launched two exciting improvements to the Amazon Pinpoint console. The first improvement is a tool that you can use to import customer segments in just a few clicks. Previously, if you wanted to import customer data into Pinpoint, you had to save the data in a CSV or JSON file, upload it to an S3 bucket, create a segment in Pinpoint, and enter the full path to the S3 bucket. Now, you can drag and drop files right into the segment importer. To learn more, see the Pinpoint User Guide.

The other new feature that we released this week is an improved email editor. Our previous email editor only allowed you to include a limited set of HTML tags in your emails. With our new editor, however, you can include any HTML tags that you want. The new editor also includes a helpful side-by-side view that renders your message in real-time, as shown in the following image.

Users who don’t want to work with HTML code can also use the Design view to create and modify emails in an intuitive, WYSIWYG interface. For more information, see the Pinpoint User Guide.

Finally, we launched a new website for Amazon Pinpoint at https://aws.amazon.com/pinpoint. On our new site, you can learn more about the capabilities of Amazon Pinpoint. You’ll find in-depth information about all of the features, channels, and use cases that Amazon Pinpoint supports.

Every day, we’re amazed by the things that our customers do with Amazon Pinpoint. We hope these changes help you do even more incredible things!

Learn About Amazon Pinpoint at Upcoming Events Around the World

Post Syndicated from Hannah Nilsson original https://aws.amazon.com/blogs/messaging-and-targeting/learn-about-amazon-pinpoint-at-upcoming-events/

Connect with the AWS Customer Engagement team at events around the world to learn how our technology can help you better engage with your customers. Get demos on recent feature releases, discover how you can use Pinpoint for your specific use case, and attend informative sessions to hear how companies around the world are using AWS Customer Engagement solutions to deliver better experiences for their customers. Plus, read below to find out how Amazon Pinpoint and Amazon SES both enable you to create innovative email experiences with the recent AMP Project launch.

AWS Customer Engagement in the news: Amazon SES and Amazon Pinpoint support build the future of email with AMP

The AMP Project’s mission is to enable more user-first experiences on the web, including web-based technology like email. On March 26, the AMP Project announced that they are bringing AMP technology to email in order to give users an interactive, real-time experience that also keeps inboxes safe.

Amazon Pinpoint and Amazon SES both provide out-of-the-box support for AMP for email with no additional configuration. This allows you to easily create experiences for your customers such as  submitting RSVPs to events, filling out questionnaires, browsing catalogs, or responding to comments right within the email.

Read the AMP announcement for more information about these new capabilities. To learn how to use the AMP format with Amazon SES, visit the SES Developer Guide. To learn how to use the AMP format with Amazon Pinpoint, read this Amazon Pinpoint API Reference. View these instructions for more information on how to add AMP to an existing email.

Amazon Pinpoint has been busy building. You can now:

  • Learn how to set up an email preference management web page that enables customers to manage their email subscription preferences. Read now.
  • Learn how to set up a web form that collects information from new customers, and then sends them an SMS message to confirm that they want to receive content from you. Read now.
  • Use Amazon Pinpoint in the US West (Oregon), EU (Frankfurt), and EU (Ireland) regions in addition to the US East (Virginia) region. Learn more.
  • Deliver voice messages to your users with Amazon Pinpoint Voice. Learn more.
  • Set up campaigns that auto-send messages to your customers when they take specific actions. Learn more.
  • Detect and understand issues impacting your email deliverability with the Amazon Pinpoint Deliverability Dashboard. Learn more. 

Meet an Amazon Pinpoint expert at these upcoming events. We will teach you how to take advantage of recent updates so that you can create better engagement experiences for your customers. Plus, we can give you an inside look on what’s on our roadmap, and we’ll be giving out custom Pinpoint swag!

AWS Summit, Singapore 

April 10, 2019
Singapore Expo Convention & Exhibition Centre
Amazon Pinpoint will host an informative session about our Customer Engagement solutions at the AWS Singapore Summit. In this session, we will describe how AWS enables companies to better understand and engage their customers with personalized, timely, and relevant communications on multiple channels. You will also learn how Disney Streaming Services is using Amazon Pinpoint to engage their users.
Register for the Summit here.

“Mobile Days” at the AWS San Francisco Loft   

April 24, 2019 
AWS San Francisco Loft
Join us for an engaging day of discussion and education. Amazon Pinpoint experts will host the following sessions:

  • 2:30pm – 3:30pm: How Do You Measure Customer Success? Featuring Amazon Pinpoint. 
  • 3:30pm – 4:30pm: Using ML to Enhance Your Marketing. Featuring Amazon Pinpoint and Amazon Personalize.

Space for this event is limited, please reserve your seat here.

AWS Summit, Sydney

May 1-2, 2019
International Convention Centre (ICC), Darling Harbour, Sydney
Don’t miss the customer engagement session on April 30th. This session, part of Amazon’s Innovation Day event, features a keynote address by Neil Lindsay, Vice President of Global Marketing at Amazon. The session explores how AWS technologies power organizations that deliver customer-centric innovations. Learn about how Australia’s largest brands and digital agencies use AWS technologies to engage customers, build new business models, and transform customer experiences.
Register for the Summit here

AWS Summit, Mumbai

May 15, 2019
Bombay Exhibition Center, Mumbai
The Amazon Pinpoint team will be at the “Ask an Expert” booth. Stop by to meet the team, ask questions, and pick up Amazon Pinpoint swag!
Register for the summit here