Tag Archives: Amazon Pinpoint

Deploy Amazon QuickSight dashboard for Amazon Pinpoint engagement events.

Post Syndicated from Pavlos Ioannou Katidis original https://aws.amazon.com/blogs/messaging-and-targeting/deploy-amazon-quicksight-dashboard-for-amazon-pinpoint-engagement-events/

Abstract

Business intelligence (BI) dashboards provide a graphical representation of metrics and key performance indicators (KPIs) to monitor the health of your business. By leveraging BI dashboards to analyze the performance of your customer communications, you gain valuable insights into how they are engaging with your messages and can make data-driven decisions to improve your marketing and communication strategies.

In this blog post we introduce a solution that automates the deployment of an Amazon QuickSight dashboard that enables marketers to analyze their long-term Amazon Pinpoint customer engagement data. These dashboards can be customized further depending on the use case. This solution removes the need to create data pipelines for storage and analysis of Amazon Pinpoint’s engagement data, while offering a greater variety of widgets and views across email, SMS, campaigns, journeys and transactional messages compared to Amazon Pinpoint’s native dashboards.

Amazon Pinpoint is a flexible, scalable marketing communications service that connects you with customers over email, SMS, push notifications, or voice. The service offers ready-to-use dashboards to view key performance indicators (KPIs) for the various messaging channels, and it provides 90 days of events for analysis. However, the raw events used to populate Amazon Pinpoint’s dashboards can be streamed using Amazon Kinesis Data Firehose to a destination of your choice. This blog will walk you through leveraging this feature to create a data lake to store and analyze data beyond the initial 90 days.

Amazon QuickSight is a cloud-scale business intelligence (BI) service that you can use to deliver easy-to-understand insights in an interactive visual environment.

The solution leverages the AWS Cloud Development Kit (CDK) to deploy the needed infrastructure and dashboards.

Use Case(s)

The Amazon QuickSight dashboards deployed through this solution are designed to serve several use cases. Here are just a few examples:

  • View email and SMS costs per Campaign and Journey.
  • Deep dive into engagement insights and performance (e.g., SMS events, email events, campaign events, journey events).
  • Schedule reports to various business stakeholders.
  • Track individual email & SMS statuses to specific endpoints.
  • Analyze open and click rates based on the message send time.

These are just some of the ways you can use these dashboards; with all the data points available in Amazon QuickSight, you can create your own views and widgets based on your specific requirements.

Solution Overview

This solution builds upon the Digital User Engagement (DUE) Event Database AWS Solution. It creates a long-term Amazon Pinpoint event data lake. This solution also builds a QuickSight dashboard to visualize and analyze this data. It leverages several other AWS services to tie it all together: AWS Lambda for processing AWS CloudTrail data, Amazon Athena to build views using SQL for Amazon QuickSight, AWS CloudTrail to record any new campaign, journey and segment updates, and Amazon DynamoDB to store the campaign, journey and segment metadata. This solution can be segmented into three logical portions: 1) Pinpoint campaign/journey/segment lookup tables, 2) Amazon Athena views, and 3) Amazon QuickSight resources.

The AWS Cloud Development Kit (CDK) is used to deploy this solution to your account. AWS CDK is an open-source software development framework for defining cloud infrastructure as code with modern programming languages and deploying it through AWS CloudFormation.

Pinpoint campaign/journey/segment lookup tables

architecture-diagram

  1. A CloudFormation AWS Lambda-backed custom resource function adds the current Pinpoint campaign, journey and segment metadata to Amazon DynamoDB lookup tables. An AWS CloudFormation custom resource is managed by a Lambda function that runs only upon the deployment, update and deletion of the AWS CloudFormation stack.
  2. AWS CloudTrail logs recording API actions are delivered to an S3 bucket approximately every 5 minutes.
  3. When an AWS CloudTrail log is written to the S3 bucket, an AWS Lambda function is invoked and checks for Amazon Pinpoint campaign/journey/segment management events such as create, update and delete.
  4. For every Amazon Pinpoint action the AWS Lambda function finds, it queries Amazon Pinpoint to get the respective resource details.
  5. The AWS Lambda function creates or updates records in the Amazon DynamoDB tables to reflect the changes (a minimal sketch of steps 3–5 follows this list).
  6. This solution also deploys an Amazon Athena DynamoDB connector. Amazon Athena uses this to query the Amazon DynamoDB lookup tables to enrich the data in the Amazon Pinpoint event data lake.
  7. The Amazon Athena to Amazon DynamoDB connector requires an Amazon S3 spill bucket for any data that exceeds the AWS Lambda function limits.
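
A minimal sketch of steps 3–5 follows, assuming an S3-triggered Node.js (TypeScript) Lambda function and the AWS SDK for JavaScript v3. The CloudTrail field names, the table name environment variable, and the item attributes are illustrative assumptions rather than the solution’s actual code; journey and segment events would be handled the same way as campaigns.

// Hypothetical sketch of steps 3-5: read a CloudTrail log file from S3, filter for Pinpoint
// campaign management events, look up the campaign in Pinpoint, and upsert a lookup record.
import { S3Client, GetObjectCommand } from "@aws-sdk/client-s3";
import { PinpointClient, GetCampaignCommand } from "@aws-sdk/client-pinpoint";
import { DynamoDBClient } from "@aws-sdk/client-dynamodb";
import { DynamoDBDocumentClient, PutCommand } from "@aws-sdk/lib-dynamodb";
import { gunzipSync } from "zlib";
import type { S3Event } from "aws-lambda";

const s3 = new S3Client({});
const pinpoint = new PinpointClient({});
const ddb = DynamoDBDocumentClient.from(new DynamoDBClient({}));

export const handler = async (event: S3Event): Promise<void> => {
  for (const record of event.Records) {
    // CloudTrail delivers gzipped JSON log files to the S3 bucket.
    const obj = await s3.send(new GetObjectCommand({
      Bucket: record.s3.bucket.name,
      Key: decodeURIComponent(record.s3.object.key.replace(/\+/g, " ")),
    }));
    const log = JSON.parse(gunzipSync(Buffer.from(await obj.Body!.transformToByteArray())).toString());

    for (const trailEvent of log.Records ?? []) {
      // Only campaign create/update events are shown; deletes would remove the item instead.
      if (trailEvent.eventSource !== "pinpoint.amazonaws.com") continue;
      if (!/^(Create|Update)Campaign$/.test(trailEvent.eventName)) continue;

      const applicationId = trailEvent.requestParameters?.["application-id"]; // field names assumed
      const campaignId = trailEvent.responseElements?.Id ?? trailEvent.requestParameters?.["campaign-id"];
      if (!applicationId || !campaignId) continue;

      // Step 4: query Pinpoint for the resource details. Step 5: upsert the lookup record.
      const campaign = await pinpoint.send(new GetCampaignCommand({
        ApplicationId: applicationId,
        CampaignId: campaignId,
      }));
      await ddb.send(new PutCommand({
        TableName: process.env.CAMPAIGN_TABLE_NAME!, // assumed environment variable
        Item: {
          campaign_id: campaignId,
          name: campaign.CampaignResponse?.Name,
          last_modified: campaign.CampaignResponse?.LastModifiedDate,
        },
      }));
    }
  }
};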

Amazon Athena views

Amazon Athena views are crucial for querying and organizing the data. These views allow QuickSight to interact with the Pinpoint event data lake through standard SQL queries and views. Here’s how they’re set up:

The application creates several named queries (called saved queries in the Amazon Athena console). Each named query uses a SQL statement to create a database view containing a subset of the data from the Pinpoint event data lake (or joins data from a previous view with the Amazon DynamoDB tables created above). The views themselves are created by an AWS Lambda-backed custom resource.
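
For illustration, here is what one such named query could look like if defined directly with the CDK CfnNamedQuery construct. The view name, column list, and source table are simplified assumptions rather than the solution’s actual SQL; in the deployed solution, the Lambda-backed custom resource is what executes the statements to materialize the views.

// Illustrative CDK sketch: a named query whose SQL creates one view over the event data lake.
import { aws_athena as athena } from "aws-cdk-lib";
import { Construct } from "constructs";

export function addEmailEventsView(scope: Construct, dataLakeDbName: string, workGroupName: string) {
  new athena.CfnNamedQuery(scope, "EmailEventsViewQuery", {
    database: dataLakeDbName,   // e.g. "due_eventdb"
    workGroup: workGroupName,   // e.g. "primary"
    name: "pinpoint_analytics_email_events",
    description: "Email engagement events from the Pinpoint event data lake",
    queryString: `
      CREATE OR REPLACE VIEW pinpoint_analytics_email_events AS
      SELECT event_type,
             event_timestamp,
             client.client_id AS endpoint_id,
             attributes['campaign_id'] AS campaign_id
      FROM ${dataLakeDbName}.all_events   -- source table name is an assumption
      WHERE event_type LIKE '_email.%'
    `,
  });
}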

Amazon QuickSight resources

quicksight-resources-diagram

  1. This solution creates several Amazon QuickSight resources to support the deployed dashboard. These include data sources, datasets, refresh schedules, and an analysis. The refresh schedule determines the frequency that Amazon QuickSight queries the Amazon Athena views to update the datasets.
  2. Amazon Athena retrieves live data from the DUE event database data lake and the Athena DynamoDB Connector whenever the Amazon QuickSight refresh schedule runs.

Prerequisites

  • Deploy the Digital User Engagement (DUE) Event Database solution before continuing
    • After you have deployed this solution, gather the following data from the stack’s Resources section.
      • DUES3DataLake: You will need the bucket name
      • PinpointProject: You will need the project Id
      • PinpointEventDatabase: This is the name of the Glue Database. You will only need this if you used something other than the default of due_eventdb

Note: If you are installing the DUE event database for the first time as part of these instructions, your dashboard will not have any data to show until new events start to come in from your Amazon Pinpoint project.

Once you have the DUE event database installed, you are ready to begin your deployment.

Implementation steps

Step 1 – Ensure that Amazon Athena is set up to store query results

Amazon Athena uses workgroups to separate users, teams, applications, or workloads, to set limits on the amount of data each query or the entire workgroup can process, and to track costs. There is a default workgroup called “primary”. However, before you can use this workgroup, it needs to be configured with an Amazon S3 bucket for storing the query results.

  1. If you do not have an existing S3 bucket you can use for the output, create a new Amazon S3 bucket.
  2. Navigate to the Amazon Athena console and from the menu select workgroups > primary > Edit > Query result configuration
    1. Select the Amazon S3 bucket and any specific directory for the Athena query result location

Note: If you choose to use a workgroup other than the default “primary” workgroup, please take note of the workgroup name; it will be used later.
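
If you prefer to script this configuration, the equivalent of the console steps above can be applied with the AWS SDK for JavaScript v3. This is a sketch with a placeholder bucket name, not a required part of the walkthrough.

// Sketch: set the query result location on the "primary" workgroup programmatically.
import { AthenaClient, UpdateWorkGroupCommand } from "@aws-sdk/client-athena";

const athena = new AthenaClient({});

export async function setQueryResultLocation(): Promise<void> {
  await athena.send(new UpdateWorkGroupCommand({
    WorkGroup: "primary", // or the custom workgroup name you noted above
    ConfigurationUpdates: {
      ResultConfigurationUpdates: {
        OutputLocation: "s3://my-athena-query-results-bucket/pinpoint-analytics/", // placeholder
      },
    },
  }));
}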

Step 2 – Enable Amazon QuickSight

Amazon QuickSight offers two types of data sets: Direct Query data sets, which provide real-time access to data sources, and SPICE (Super-fast, Parallel, In-memory Calculation Engine) data sets, which are pre-aggregated, cached for faster performance and scalability, and can be refreshed on a schedule.

This solution uses SPICE datasets set to incrementally refresh on a cycle of your choice (Daily or Hourly). If you have already set up Amazon QuickSight, please navigate to Amazon QuickSight in the AWS Console and skip to step 3.

  1. Navigate to Amazon QuickSight on the AWS console
  2. Set up an Amazon QuickSight account by clicking the “Sign up for QuickSight” button.
    1. You will need to set up an Enterprise account for this solution.
    2. To complete the Amazon QuickSight account setup, follow the instructions at this link
  3. Ensure you have the Admin Role
    1. Choose the profile icon in the top right corner, select Manage QuickSight and click on Manage Users
    2. Subscription details should display on the screen.
  4. Ensure you have enough SPICE capacity for the datasets
    1. Choose the profile icon, and then select Manage QuickSight
    2. Click on SPICE Capacity
  5. Make sure you have enough SPICE capacity for all three datasets
    1. If you are still in the free tier, you should have enough for initial testing.
    2. You will need about 2 GB of capacity for every 1,000,000 Pinpoint events that will be ingested into SPICE
    3. Note: If you do not have enough SPICE capacity, deployment will fail
  6. Please note the Amazon QuickSight username. You can find this by clicking the profile icon. Example username: Admin/user-name

Step 3 – Collect the Amazon QuickSight Service Role name in IAM

For Amazon Athena, Amazon S3, and Athena Query Federation connections, Amazon QuickSight uses the following IAM “consumer” role by default: aws-quicksight-s3-consumers-role-v0

If the “consumer” role is not present, then QuickSight uses the following “service” role instead: aws-quicksight-service-role-v0.

The version number at the end of the role could be different in your account. Please validate your role name with the following steps.

  1. Navigate to the Identity and Access Management (IAM) console
  2. Go to Roles and search QuickSight
  3. If the consumer role exists, please note its full name
  4. If you only find the service role, please note its full name

Note: For more details on these service roles, please see the QuickSight User Guide

Step 4 – Prepare the CDK Application

Deploying this solution requires no previous experience with the AWS CDK toolkit. If you would like to familiarize yourself with CDK, the AWS CDK Workshop is a great place to start.

  1. Setup your integrated development environment (IDE)
    1. Option 1 (recommended for first time CDK users): Use AWS Cloud9 – a cloud-based IDE that lets you write, run, and debug your code with just a browser
      1. Navigate to Cloud9 in the AWS console and click the Create Environment button
      2. Provide a descriptive name to your environment (e.g. PinpointAnalysis)
      3. Leave the rest of the values as their default values and click Create
      4. Open the Cloud9 IDE
        1. Node, TypeScript, and CDK should come pre-installed. Test this by running the following commands in your terminal.
          1. node --version
          2. tsc --version
          3. cdk --version
          4. If dependencies are not installed, follow the Step 1 instructions from this article
        2. Using AWS Cloud9 will incur a nominal charge if you are no longer Free Tier eligible. However, AWS Cloud9 simplifies setup if you do not already have a local environment with the AWS CDK and the AWS CLI installed
    2. Option 2: local IDE such as VS Code
      1. Setup CDK locally using this documentation
      2. Install Node, TypeScript and the AWS CLI
        1. Once the CLI is installed, configure your AWS credentials
          1. aws configure
  2. Clone the Pinpoint Dashboard Solution from your terminal by running the command below:
    1. git clone https://github.com/aws-samples/digital-user-engagement-events-dashboards.git
  3. Install the required npm packages from package.json by running the commands below:
    1. cd digital-user-engagement-events-dashboards
    2. npm install

Open the file at digital-user-engagement-events-dashboards/bin/pinpoint-bi-analysis.ts for editing in your IDE.

Edit the following code block in your solution with the information you have gathered in the previous steps. Please reference Table 1 for a description of each editable field.

const resourcePrefix = "pinpoint_analytics_";

...

new MainApp(app, "PinpointAnalytics", {
  env: {
    region: "us-east-1",
  },
  
  //Attributes to change
  dueDbBucketName: "{bucket-name}",
  pinpointProjectId: "{pinpoint-project-id}",
  qsUserName: "{quicksight-username}",

  //Default settings
  athenaWorkGroupName: "primary",
  dataLakeDbName: "due_eventdb",
  dateRangeNumberOfMonths: 6,
  qsUserRegion: "us-east-1", 
  qsDefaultServiceRole: "aws-quicksight-service-role-v0", 
  spiceRefreshInterval: "HOURLY",

  //Constants
  athena_util: athena_util,
  qs_util: qs_util,
});
Table 1 – Editable fields

Attribute | Definition | Example
resourcePrefix | The prefix for all created Athena and QuickSight resources | pinpoint_analytics_
region | The Region where new resources will be deployed. This must be the same Region where the DUE event database solution was deployed | us-east-1
dueDbBucketName | The name of the DUE event database S3 bucket | due-database-xxxxxxxxxxus-east-1
qsUserName | The name of your QuickSight user | Admin/my-user
athenaWorkGroupName | The Athena workgroup that was previously configured | primary
dataLakeDbName | The Glue database created by the DUE event database solution. By default the database name is “due_eventdb” | due_eventdb
dateRangeNumberOfMonths | The number of months of data the Athena views will contain. QuickSight SPICE datasets will contain this many months of data initially and on full refresh. The QuickSight dataset will add new data incrementally without deleting historical data | 6
qsUserRegion | The Region where your QuickSight user exists. By default, new users are created in us-east-1. You can check your user location with the AWS CLI: aws quicksight list-users --aws-account-id {account-id} --namespace default and look for the Region in the ARN | us-east-1
qsDefaultServiceRole | The service role collected during Step 3 | aws-quicksight-service-role-v0
spiceRefreshInterval | Options include HOURLY and DAILY. This is how often the SPICE 7-day incremental window will be refreshed | DAILY

Step 5 – Deploy

  1. CDK requires you to bootstrap in each region of an account. This creates an S3 bucket used for deployment. You only need to bootstrap once per account/region
    1. cdk bootstrap
  2. Deploy the application
    1. cdk deploy

Step 6 – Explore

Once your solution deploys, look for the Outputs provided by the CDK CLI. You will find a link to your new Amazon QuickSight analysis, or dashboard, as well as a few other key resources. Also, explore the resources sections of the deployed stacks in AWS CloudFormation for a complete list of deployed resources. In the AWS CloudFormation console, you should see two stacks: the main stack, called PinpointAnalytics, and a nested stack.

Pricing

The total cost to run this solution will depend on several factors. To help explore what the costs might look like for you, please look at the following examples.

All costs outlined below will assume the following:

  • 1 Amazon QuickSight author
  • 100 Amazon QuickSight analysis reader sessions
  • 100k write API actions for all services in AWS account
  • A total of 1k Amazon Pinpoint campaigns, journeys, and segments resulting in 1k Amazon DynamoDB records
  • 5 million monthly Amazon Pinpoint events – email send, email delivered, etc.

Base Costs:

  • 1 Amazon QuickSight author – can edit all Amazon QuickSight resources
    • $24 – There is a 30-day trial for 4 authors in the free tier
  • 100 Amazon QuickSight analysis reader sessions OR 6 readers with unlimited access – max $5 per month per reader
    • $30
  • Total Monthly Costs: $54 / month

Variable Costs:

Even with the assumptions listed above, the costs will vary depending on the chosen data retention window as well as the refresh schedule.

  • SPICE data storage costs.
    • Total size of storage will depend on how many months you choose to display in the dashboard
    • For the above assumptions, the SPICE datasets will cost roughly $3.25 for each month stored in the datasets.
  • Amazon Athena data volume costs
    • With Athena you are charged for the total number of bytes scanned in a query. The solution implements incremental data refreshes in SPICE. Amazon QuickSight queries and updates only the most recent 7 days of data during each refresh cycle. This can be adjusted as needed.

Scenario 1 – 6-month data analysis with daily refresh:

  • Fixed costs: $57
  • SPICE datasets: $19.50
  • Athena Scans: $1.25
  • Total Costs: $77.75 / Month

Scenario 2 – 12-month data analysis with daily refresh:

  • Fixed costs: $57
  • SPICE datasets: $39
  • Athena Scans: $1.25
  • Total Costs: $97.25 / Month

Scenario 3 – 12-month data analysis with hourly refresh:

  • Fixed costs: $57
  • SPICE datasets: $39
  • Athena Scans: $27.50
  • Total Costs: $123.50 / Month

Note: Several services were not mentioned in the above scenarios (e.g., DynamoDB, CloudTrail, Lambda). The limited usage of these services results in a combined cost of less than a few US dollars per month. Even at greater scale, the costs from these services will not increase in any significant way.

Clean up

  • Delete the CDK stack by running the following from your command line
    • cdk destroy
  • Delete QuickSight account
  • Delete Athena views
    • Go to Glue > Data Catalog > Databases > Your Database Name
    • Delete the Athena views that are no longer needed. Views created by this solution start with the resourcePrefix specified in the bin/athena-quicksight-cdk.ts file
  • Delete S3 buckets
    • DynamoDB CloudWatch log bucket
    • DynamoDB Athena connector spill bucket
    • Athena workgroup output bucket
  • Delete DynamoDB tables
    • This solution creates two DynamoDB lookup tables prefixed with the Stack name

Conclusion

In this blog, you have deployed a solution that visualizes Amazon Pinpoint’s email and SMS engagement data using Amazon QuickSight. This solution provides you with a functional Amazon QuickSight dashboard as well as a foundation to design and build new Amazon QuickSight dashboards that meet your bespoke requirements. Parts of the solution, such as the Amazon Athena views, can also be consumed by other business intelligence tools that your business might already be using.

Next steps

This solution can be expanded to include Amazon Pinpoint engagement events from other channels such as push notifications, Amazon Connect outbound calls, in-app, and custom events. This will require certain updates to the Amazon Athena views and consequently to the Amazon QuickSight dashboards. Furthermore, the Amazon DynamoDB tables store only campaign, journey and segment metadata. You can extend this part of the solution to include message template metadata, which will help you analyze performance per message template.

Considerations / Troubleshooting

  • An Amazon QuickSight Standard edition account can be upgraded to an Enterprise edition account. Enterprise accounts cannot be downgraded to a Standard account.
  • SPICE capacity is allocated separately for each AWS Region. Default SPICE capacity is automatically allocated to your home AWS Region. For each AWS account, SPICE capacity is shared by all the people using QuickSight in a single AWS Region. The other AWS Regions have no SPICE capacity unless you choose to purchase some.
  • The QuickSight analysis event rates are calculated at the Pinpoint message_id and endpoint_id grain – the click rate is the same whether a user clicks an email link once or multiple times
  • All timestamps are in UTC. To display data in another timezone, edit the event_timestamp_timezone calculated field in every dataset
  • Data inside Amazon QuickSight will refresh depending on the schedule set during deployment. Current options include hourly and daily refreshes.
  • AWS CloudTrail has a limit of five trails per AWS account.

About the Authors

Spencer Harrison

Spencer Harrison

Spencer was a 2023 WWPS Solution Architect intern at Amazon Web Services. He will graduate with his Masters of Information Systems Management from Brigham Young University in the spring of 2024. After graduation he is aspiring to find opportunities as a solution architect, cloud engineer, or DevOps engineer. Outside of work, Spencer loves going outdoors to wake surf, downhill ski, and play pickle ball.

Daniel Wells

Daniel Wells

With over 20 years of IT experience, Daniel has held many architecture and director positions supporting a wide variety of technologies. He currently works as an AWS Solutions Architect supporting Education Technology companies striving to make a difference for learners and educators worldwide. Daniel’s interests outside of work include music, family, health, education and anything that allows him to express himself creatively.

Pavlos Ioannou Katidis

Pavlos Ioannou Katidis

Pavlos Ioannou Katidis is an Amazon Pinpoint and Amazon Simple Email Service Senior Specialist Solutions Architect at AWS. He enjoys diving deep into customers’ technical issues and helping them design communication solutions. In his spare time, he enjoys playing tennis, watching crime TV series, playing FPS PC games, and coding personal projects.

Prime Day 2023 Powered by AWS – All the Numbers

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/prime-day-2023-powered-by-aws-all-the-numbers/

As part of my annual tradition to tell you about how AWS makes Prime Day possible, I am happy to be able to share some chart-topping metrics (check out my 2016, 2017, 2019, 2020, 2021, and 2022 posts for a look back).

This year I bought all kinds of stuff for my hobbies including a small drill press, filament for my 3D printer, and irrigation tools. I also bought some very nice Alphablock books for my grandkids. According to our official release, the first day of Prime Day was the single largest sales day ever on Amazon and for independent sellers, with more than 375 million items purchased.

Prime Day by the Numbers
As always, Prime Day was powered by AWS. Here are some of the most interesting and/or mind-blowing metrics:

Amazon Elastic Block Store (Amazon EBS) – The Amazon Prime Day event resulted in an incremental 163 petabytes of EBS storage capacity allocated – generating a peak of 15.35 trillion requests and 764 petabytes of data transfer per day. Compared to the previous year, Amazon increased peak usage on EBS by only 7% yet delivered 35% more traffic per day, thanks to efficiency efforts including workload optimization using Amazon Elastic Compute Cloud (Amazon EC2) AWS Graviton-based instances.

AWS CloudTrail – AWS CloudTrail processed over 830 billion events in support of Prime Day 2023.

Amazon DynamoDB – DynamoDB powers multiple high-traffic Amazon properties and systems including Alexa, the Amazon.com sites, and all Amazon fulfillment centers. Over the course of Prime Day, these sources made trillions of calls to the DynamoDB API. DynamoDB maintained high availability while delivering single-digit millisecond responses and peaking at 126 million requests per second.

Amazon Aurora – On Prime Day, 5,835 database instances running the PostgreSQL-compatible and MySQL-compatible editions of Amazon Aurora processed 318 billion transactions, stored 2,140 terabytes of data, and transferred 836 terabytes of data.

Amazon Simple Email Service (SES) – Amazon SES sent 56% more emails for Amazon.com during Prime Day 2023 vs. 2022, delivering 99.8% of those emails to customers.

Amazon CloudFront – Amazon CloudFront handled a peak load of over 500 million HTTP requests per minute, for a total of over 1 trillion HTTP requests during Prime Day.

Amazon SQS – During Prime Day, Amazon SQS set a new traffic record by processing 86 million messages per second at peak. This is a 22% increase from Prime Day 2022, when SQS supported 70.5 million messages per second.

Amazon Elastic Compute Cloud (EC2) – During Prime Day 2023, Amazon used tens of millions of normalized AWS Graviton-based Amazon EC2 instances, 2.7x more than in 2022, to power over 2,600 services. By using more Graviton-based instances, Amazon was able to get the compute capacity needed while using up to 60% less energy.

Amazon Pinpoint – Amazon Pinpoint sent tens of millions of SMS messages to customers during Prime Day 2023 with a delivery success rate of 98.3%.

Prepare to Scale
Every year I reiterate the same message: rigorous preparation is key to the success of Prime Day and our other large-scale events. If you are preparing for a similar chart-topping event of your own, I strongly recommend that you take advantage of AWS Infrastructure Event Management (IEM). As part of an IEM engagement, my colleagues will provide you with architectural and operational guidance that will help you to execute your event with confidence!

Jeff;

Building Generative AI into Marketing Strategies: A Primer

Post Syndicated from nnatri original https://aws.amazon.com/blogs/messaging-and-targeting/building-generative-ai-into-marketing-strategies-a-primer/

Introduction

Artificial Intelligence has undoubtedly shaped many industries and is poised to be one of the most transformative technologies in the 21st century. Among these is the field of marketing where the application of generative AI promises to transform the landscape. This blog post explores how generative AI can revolutionize marketing strategies, offering innovative solutions and opportunities.

According to Harvard Business Review, marketing’s core activities, such as understanding customer needs, matching them to products and services, and persuading people to buy, can be dramatically enhanced by AI. A 2018 McKinsey analysis of more than 400 advanced use cases showed that marketing was the domain where AI would contribute the greatest value. The ability to leverage AI can not only help automate and streamline processes but also deliver personalized, engaging content to customers. It enhances the ability of marketers to target the right audience, predict consumer behavior, and provide personalized customer experiences. AI allows marketers to process and interpret massive amounts of data, converting it into actionable insights and strategies, thereby redefining the way businesses interact with customers.

Generating content is just one part of the equation. AI-generated content, no matter how good, is useless if it does not reach the intended audience at the right time. Integrating the generated content into an automated marketing pipeline that not only understands the customer profile but also delivers a personalized experience at the right point of interaction is crucial to getting the intended action from the customer.

Amazon Web Services (AWS) provides a robust platform for implementing generative AI in marketing strategies. AWS offers a range of AI and machine learning services that can be leveraged for various marketing use cases, from content creation to customer segmentation and personalized recommendations. Two services that are instrumental to delivering customer content and can be easily integrated with other generative AI services are Amazon Pinpoint and Amazon Simple Email Service. By integrating generative AI with Amazon Pinpoint and Amazon SES, marketers can automate the creation of personalized messages for their customers, enhancing the effectiveness of their campaigns. This combination allows for a seamless blend of AI-powered content generation and targeted, data-driven customer engagement.

As we delve deeper into this blog post, we’ll explore the mechanics of generative AI, its benefits and how AWS services can facilitate its integration into marketing communications.

What is Generative AI?

Generative AI is a subset of artificial intelligence that leverages machine learning techniques to generate new data instances that resemble your training data. It works by learning the underlying patterns and structures of the input data, and then uses this understanding to generate new, similar data. This is achieved through the use of models like Generative Adversarial Networks (GANs), Variational Autoencoders (VAEs), and Transformer models.

What do Generative AI buzzwords mean?

In the world of AI, buzzwords are abundant. Terms like “deep learning”, “neural networks”, “machine learning”, “generative AI”, and “large language models” are often used interchangeably, but they each have distinct meanings. Understanding these terms is crucial for appreciating the capabilities and limitations of different AI technologies.

Machine Learning (ML) is a subset of AI that involves the development of algorithms that allow computers to learn from and make decisions or predictions based on data. These algorithms can be ‘trained’ on a dataset and then used to predict or classify new data. Machine learning models can be broadly categorized into supervised learning, unsupervised learning, semi-supervised learning, and reinforcement learning.

Deep Learning is a subset of machine learning that uses neural networks with many layers (hence “deep”) to model and understand complex patterns. These layers of neurons process different features, and their outputs are combined to produce a final result. Deep learning models can handle large amounts of data and are particularly good at processing images, speech, and text.

Generative AI refers specifically to AI models that can generate new data that mimic the data they were trained on. This is achieved through the use of models like Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs). Generative AI can create anything from written content to visual designs, and even music, making it a versatile tool in the hands of marketers.

Large Language Models (LLMs) are a type of generative AI that are trained on a large corpus of text data and can generate human-like text. They predict the probability of a word given the previous words used in the text. They are particularly useful in applications like text completion, translation, summarization, and more. While they are a type of generative AI, they are specifically designed for handling text data.

Simply put, large language models are a subset of generative AI, which is in turn a subset of machine learning, and all of these ultimately fall under the umbrella term of artificial intelligence.

What are the problems with generative AI and marketing?

While generative AI holds immense potential for transforming marketing strategies, it’s important to be aware of its limitations and potential pitfalls, especially when it comes to content generation and customer engagement. Here are some common challenges that marketers should be aware of:

Bias in Generative AI: Generative AI models learn from the data they are trained on. If the training data is biased, the AI model will likely reproduce these biases in its output. For example, if a model is trained primarily on data from one demographic, it may not accurately represent other demographics, leading to marketing campaigns that are ineffective or offensive. Imagine you are trying to generate an image for a campaign targeting women: a generative AI model might not generate images of women in jobs like doctor, lawyer, or judge, leading your campaign to suffer from bias and a lack of inclusiveness.

Insensitivity to Cultural Nuances: Generative AI models may not fully understand cultural nuances or sensitive topics, which can lead to content that is insensitive or even harmful. For instance, a generative AI model used to create social media posts for a global brand may inadvertently generate content that is seen as disrespectful or offensive by certain cultures or communities.

Potential for Inappropriate or Offensive Content: Generative AI models can sometimes generate content that is inappropriate or offensive. This is often because the models do not fully understand the context in which certain words or phrases should be used. It’s important to have safeguards in place to review and approve content before it’s published. A common problem with LLMs is hallucination, whereby the model states false information as if it were accurate. A marketing team might mistakenly publish auto-generated promotional content that contains a 20% discount on an item when no such promotion was approved. This could have a disastrous effect if safeguards are not in place and could erode customers’ trust.

Intellectual Property and Legal Concerns: Generative AI models can create new content, such as images, music, videos, and text, which raises questions of ownership and potential copyright infringement. Being a relatively new field, legal discussions are still ongoing about the implications of using generative AI, e.g. who should own AI-generated content and what constitutes copyright infringement.

Not a Replacement for Human Creativity: Finally, while generative AI can automate certain aspects of marketing campaigns, it cannot replace the creativity or emotional connections that marketers use in crafting compelling campaigns. The most successful marketing campaigns touch the hearts of customers, and while generative AI is very capable of replicating human content, it still falls short of that “human touch”.

In conclusion, while generative AI offers exciting possibilities for marketing, it’s important to approach its use with a clear understanding of its limitations and potential pitfalls. By doing so, marketers can leverage the benefits of generative AI while mitigating risks.

How can I use generative AI in marketing communications?

Amazon Web Services (AWS) provides a comprehensive suite of services that facilitate the use of generative AI in marketing. These services are designed to handle a variety of tasks, from data processing and storage to machine learning and analytics, making it easier for marketers to implement and benefit from generative AI technologies.

Overview of Relevant AWS Services

AWS offers several services that are particularly relevant for generative AI in marketing:

  • Amazon Bedrock: This service makes FMs accessible via an API. Bedrock offers the ability to access a range of powerful FMs for text and images, including Amazon’s Titan FMs. With Bedrock’s serverless experience, customers can easily find the right model for what they’re trying to get done, get started quickly, privately customize FMs with their own data, and easily integrate and deploy them into their applications using the AWS tools and capabilities they are familiar with.
  • Amazon Titan Models: These are two new large language models (LLMs) that AWS is announcing. The first is a generative LLM for tasks such as summarization, text generation, classification, open-ended Q&A, and information extraction. The second is an embeddings LLM that translates text inputs into numerical representations (known as embeddings) that contain the semantic meaning of the text. In response to the pitfalls mentioned above around Generative AI hallucinations and inaccurate information, AWS is actively working on improving accuracy and ensuring its Titan models produce high-quality responses, said Bratin Saha, an AWS vice president.
  • Amazon SageMaker: This fully managed service enables data scientists and developers to build, train, and deploy machine learning models quickly. SageMaker includes modules that can be used for generative AI, such as Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs).
  • Amazon Pinpoint: This flexible and scalable outbound and inbound marketing communications service enables businesses to engage with customers across multiple messaging channels. Amazon Pinpoint is designed to scale with your business, allowing you to send messages to a large number of users in a short amount of time. It integrates with AWS’s generative AI services to enable personalized, AI-driven marketing campaigns.
  • Amazon Simple Email Service (SES): This cost-effective, flexible, and scalable email service enables marketers to send transactional emails, marketing messages, and other types of high-quality content to their customers. SES integrates with other AWS services, making it easy to send emails from applications being hosted on services such as Amazon EC2. SES also works seamlessly with Amazon Pinpoint, allowing for the creation of customer engagement communications that drive user activity and engagement.

How to build Generative AI into marketing communications

Dynamic Audience Targeting and Segmentation: Generative AI can help marketers to dynamically target and segment their audience. It can analyze customer data and behavior to identify patterns and trends, which can then be used to create more targeted marketing campaigns. Using Amazon SageMaker or the soon-to-be-available Amazon Bedrock and Amazon Titan models, generative AI can suggest labels for customers based on unstructured data. According to McKinsey, generative AI can analyze data and identify consumer behavior patterns to help marketers create appealing content that resonates with their audience.

Personalized Marketing: Generative AI can be used to automate the creation of marketing content. This includes generating text for blogs, social media posts, and emails, as well as creating images and videos. This can save marketers a significant amount of time and effort, allowing them to focus on other aspects of their marketing strategy. Where it really shines is the ability to productionize marketing content creation, reducing the need for marketers to create multiple copies for different customer segments. Previously, marketers would need to generate many different copies for each granularity of customers (e.g., attriting customers between the ages of 25 and 34 who love food). Generative AI can automate this process, providing the opportunity to dynamically create this content programmatically and automatically send it to the most relevant segments via Amazon Pinpoint or Amazon SES.

Marketing Automation: Generative AI can automate various aspects of marketing, such as email marketing, social media marketing, and search engine marketing. This includes automating the creation and distribution of marketing content, as well as analyzing the performance of marketing campaigns. Amazon Pinpoint currently automates customer communications using journeys, which are customized, multi-step engagement experiences. Generative AI could create a Pinpoint journey based on customer engagement data, engagement parameters and a prompt. This enables generative AI to not only personalize the content but create a personalized omnichannel experience that can extend throughout a period of time. It then becomes possible for journeys to be created dynamically by generative AI and A/B tested on the fly to achieve an optimal pre-defined key performance indicator (KPI).

A Sample Generative AI Use Case in Marketing Communications

AWS services are designed to work together, making it easy to implement generative AI in your marketing strategies. For instance, you can use Amazon SageMaker to build and train your generative AI models which assist with automating marketing content creation, and Amazon Pinpoint or Amazon SES to deliver the content to your customers.

Companies using AWS can theoretically supplement their existing workloads with generative AI capabilities without the need for migration. The following reference architecture outlines a sample use case and showcases how generative AI can be integrated into your customer journeys built on the AWS cloud. An e-commerce company can potentially receive many complaint emails a day. Companies spend a lot of money to acquire customers, so it’s important to think about how to turn that negative experience into a positive one.

GenAIMarketingSolutionArchitecture

When an email is received via Amazon SES (1), its content can be passed through to generative AI models using GANs to help with sentiment analysis (2). An article published by Amazon Science utilizes GANs for sentiment analysis for cases where a lack of data is a problem. Alternatively, one can also use Amazon Comprehend at this step and run A/B tests between the two models. The limitation of Amazon Comprehend is the limited customization you can perform on the model to fit your business needs.

Once the email’s sentiment is determined, the sentiment event is logged into Pinpoint (3), which then triggers an automatic winback journey (4).
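
As a rough illustration of step 3, a custom event like the following could be written to Pinpoint with the PutEvents API; a journey whose event-based entry condition matches the event type would then start the winback flow in step 4. The event type, endpoint ID handling, and attribute names are assumptions for illustration only.

// Hypothetical sketch: record a negative-sentiment event against an endpoint so that an
// event-triggered journey can pick it up.
import { PinpointClient, PutEventsCommand } from "@aws-sdk/client-pinpoint";

const pinpoint = new PinpointClient({});

export async function logSentimentEvent(applicationId: string, endpointId: string, sentiment: string) {
  await pinpoint.send(new PutEventsCommand({
    ApplicationId: applicationId,
    EventsRequest: {
      BatchItem: {
        [endpointId]: {
          Endpoint: {}, // the endpoint already exists; no attribute updates needed here
          Events: {
            "sentiment-event-1": {
              EventType: "complaint.negative_sentiment", // the journey's entry condition matches this name
              Timestamp: new Date().toISOString(),
              Attributes: { sentiment },
            },
          },
        },
      },
    },
  }));
}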

Generative AI (e.g. HuggingFace’s Bloom Text Generation Models) can again be used here to dynamically create the content without needing to wait for the marketer’s input (5). Whereas marketers would previously need to generate many different copies for each granularity of customers (e.g., attriting customers between the ages of 25 and 34 who love food), generative AI provides the opportunity to dynamically create this content on the fly given the above inputs.

Once the campaign content has been generated, the model pumps the template back into Amazon Pinpoint (6), which then sends the personalized copy to the customer (7).

Result: Another customer is saved from attrition!

Conclusion

The landscape of generative AI is vast and ever-evolving, offering a plethora of opportunities for marketers to enhance their strategies and deliver more personalized, engaging content. AWS plays a pivotal role in this landscape, providing a comprehensive suite of services that facilitate the implementation of generative AI in marketing. From building and training AI models with Amazon SageMaker to delivering personalized messages with Amazon Pinpoint and Amazon SES, AWS provides the tools and infrastructure needed to harness the power of generative AI.

The potential of generative AI in relation to the marketer is immense. It offers the ability to automate content creation, personalize customer interactions, and derive valuable insights from data, among other benefits. However, it’s important to remember that while generative AI can automate certain aspects of marketing, it is not a replacement for human creativity and intuition. Instead, it should be viewed as a tool that can augment human capabilities and free up time for marketers to focus on strategy and creative direction.

Get started with Generative AI in marketing communications

As we conclude this exploration of generative AI and its applications in marketing, we encourage you to:

  • Brainstorm potential Generative AI use cases for your business. Consider how you can leverage generative AI to enhance your marketing strategies. This could involve automating content creation, personalizing customer interactions, or deriving insights from data.
  • Start leveraging generative AI in your marketing strategies with AWS today. AWS provides a comprehensive suite of services that make it easy to implement generative AI in your marketing strategies. By integrating these services into your workflows, you can enhance personalization, improve customer engagement, and drive better results from your campaigns.
  • Watch out for the next part in the series of integrating Generative AI into Amazon Pinpoint and SES. We will delve deeper into how you can leverage Amazon Pinpoint and SES together with generative AI to enhance your marketing campaigns. Stay tuned!

The journey into the world of generative AI is just beginning. As technology continues to evolve, so too will the opportunities for marketers to leverage AI to enhance their strategies and deliver more personalized, engaging content. We look forward to exploring this exciting frontier with you.

About the Author

Tristan (Tri) Nguyen

Tristan (Tri) Nguyen

Tristan (Tri) Nguyen is an Amazon Pinpoint and Amazon Simple Email Service Specialist Solutions Architect at AWS. At work, he specializes in technical implementation of communications services in enterprise systems and architecture/solutions design. In his spare time, he enjoys chess, rock climbing, hiking and triathlon.

How to Manage Global Sending of SMS with Amazon Pinpoint

Post Syndicated from Tyler Holmes original https://aws.amazon.com/blogs/messaging-and-targeting/how-to-manage-global-sending-of-sms-with-amazon-pinpoint/

Amazon Pinpoint has a global SMS reach of 240 countries and regions around the world, enabling companies of all sizes to send SMS globally. Unlike the process of sending a personal message from your phone to someone in another country, sending Application to Person (A2P) messages, also known as bulk SMS, involves many more regulations and requirements that vary from country to country. In this post we will review best practices for sending global SMS and share a selection of AWS resources to help you send SMS globally.

The first thing to understand about delivering SMS around the world is that it takes a vast network of components working seamlessly together to deliver each message. The image below gives a simple example of delivering an SMS in the United States. Mobile devices are at the center of this, connecting to mobile carriers or operators, who operate the infrastructure necessary for SMS transmission. Once you hit that send button from AWS, your message travels to an Aggregator, who has connections to Operators, Partners, and/or other Aggregators. The reason for this is that there is no one vendor who delivers globally. AWS uses many Aggregators that both enable us to send globally and improve the resiliency and deliverability of your messages. The last stop on the journey is the Short Message Service Center (SMSC), a central hub that receives, stores, and forwards text messages. The SMSC acts as a gateway, routing your message to the recipient’s carrier or operator through a series of interconnected networks, thanks to agreements between different carriers known as interconnection agreements. The entire process is facilitated by Signaling System 7 (SS7), a set of protocols that enables the exchange of information between telecommunication networks, ensuring messages reach their intended recipients.
Diagram showing how SMS is delivered using aggregators
Every country has its own regulations and processes that you need to comply with in order to successfully deliver SMS to handsets that are registered to a particular country. There are some countries with little regulation and others that will block all SMS traffic unless it has been registered with the proper authorities.

Each country’s requirements include the origination identities (OIDs) that their networks support; these include long codes (standard phone numbers that typically have 10 or more digits), short codes (phone numbers that contain between four and seven digits), and Sender IDs (names that contain 6–11 alphanumeric characters). Each of these types of origination identities has unique benefits and drawbacks, and you will need one for each use case and country you plan on supporting. Here is a list of the countries that AWS currently sends to and the OIDs that are supported.

Pre-Planning and Country Selection
The first step to planning a global roll out of SMS is to know what countries you want to send to and what each of your use cases are. Put together a spreadsheet for each unique use case you have and the countries you plan on sending to with the below key details:

  • The volumes you expect to send to each country
  • The throughput (Also referred to as Messages per Second, MPS, Transactions per Second, or TPS) at which you expect to deliver these messages
  • Whether your use case is one-way or two-way
    • Not all countries support 2-way communication, which is the ability to have the recipient send a message back to the OID. Sender IDs also do not support 2-way communication, so if you are planning on using a Sender ID, you will need to account for how to opt recipients out of future communications.
  • Leave a column for the Origination Identity you will use for each country
  • Leave a column for whether this country requires advanced registration
  • Leave a column for any country specific limitations or requirements such as language limitations
  • Leave a column for the estimated time it takes to register
    • This chart has estimates for common countries, but there are others that also have lead time for procuring an OID, so please open a support case for review

Selecting an Origination Identity

Now that you have these details all in one place, consult this table to determine what OIDs each country supports and, if your use case requires it, which countries support two-way.

In countries where there are multiple options for OIDs there are several guidelines to consider when you’re deciding what type of origination identity to use:

  • Sender IDs are a great option for one-way use cases. However, they’re not available in all countries, and because they are one-way only, you will need to provide your customers another way to opt out.
    • In some countries (such as India and Saudi Arabia), long codes can be used to receive incoming messages, but can’t be used to send outgoing messages. You can use these inbound-only long codes to provide your recipients with a way to opt out of messages that you send using a Sender ID.
  • Short codes are a great option for two-way use cases and have the highest throughput of all OIDs.
    • While short codes have higher throughput, they also come at a much higher cost than other OIDs, so weigh the cost against your use case requirements.
  • In some countries, we maintain a pool of shared origination identities. If you send messages to recipients in a particular country, but you don’t have a dedicated origination identity in that country, we make an effort to deliver your message using one of these shared identities.
    • Shared identities are unavailable in some countries, including the United States and China.
    • Shared identities cannot be 2-way, so make sure you have a way of opting customers out of communication.

With these in mind, consult this guide to help you decide which OID to use for each country and use case. Update your sheet as you review each country. Many of our customers opt for a phased roll-out, enabling SMS for the countries that do not require registration and can be put into production swiftly, while working through the registration process for those countries that require it and bringing those to production as they are approved. A phased approach is also preferred as it allows customers to monitor for any problems with deliverability at a smaller volume than their full production workload.

Procurement and Registration of Origination Identities

In countries where registration is onerous, it is important to have a few things about your process all in one place. Some registrations are very similar in the information that they ask for, while others have special processes that you need to follow.

Once you have decided on your OIDs for each of your countries, you can begin the process of procuring them. Depending on where you plan on sending, you may need to open a case to procure them. For short codes you also need to open a case, but the process is slightly different, so review the documentation here. If you are having trouble making a decision on OIDs, you may have the option of engaging AWS Support or your Account Manager, depending on the support level you have opted for on your account.

Testing SMS Sending

Once you have procured OIDs and are ready to begin testing, it is essential that you set up a way of monitoring the events that Pinpoint generates. Pay attention to the delivery receipts (DLRs) that are returned to the event stream; these provide details on the success or failure of your sends. Pinpoint delivers all events via Amazon Kinesis, which needs to be enabled within each project you are using. A common solution among our customers enables the stream, sends the events to a user-specified S3 bucket, and sets up tables and views within Amazon Athena, our serverless SQL query engine. Kinesis can stream to many different destinations, including Redshift and HTTP endpoints, among many others, which gives you flexibility in how you deliver the events to their required locations. Monitoring SMS events is an important part of sending globally; these are the SMS events that you can receive in your stream.
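
If you have not yet enabled the event stream on a project, a minimal sketch of doing so with the PutEventStream API (AWS SDK for JavaScript v3) is shown below; the Kinesis Data Firehose delivery stream and IAM role ARNs are placeholders for resources you would create separately.

// Sketch: enable the Pinpoint event stream for a project, pointing it at an existing
// Kinesis Data Firehose delivery stream.
import { PinpointClient, PutEventStreamCommand } from "@aws-sdk/client-pinpoint";

const pinpoint = new PinpointClient({});

export async function enableEventStream(applicationId: string): Promise<void> {
  await pinpoint.send(new PutEventStreamCommand({
    ApplicationId: applicationId,
    WriteEventStream: {
      DestinationStreamArn: "arn:aws:firehose:us-east-1:123456789012:deliverystream/sms-events-to-s3", // placeholder
      RoleArn: "arn:aws:iam::123456789012:role/PinpointEventStreamRole", // placeholder
    },
  }));
}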

TPS limits can vary depending on the countries you’re sending to and the OIDs you’re using. If there’s a risk of exceeding these limits and triggering rate limiting errors, it’s crucial to devise a strategy for queuing your messages. Keep in mind, Amazon Pinpoint doesn’t offer queueing capabilities. Therefore, message queueing must be incorporated at your application level or by leveraging AWS services. For instance, you could deploy this commonly used architecture that’s adjustable according to your specific use case.

Once you have your monitoring solution in place, you are ready to begin testing sends to real destination phone numbers. Keep in mind that at this point you are likely still in the sandbox for SMS. This means you have much lower quotas for sending and can only send to verified phone numbers or the SMS simulator numbers. Pinpoint includes an SMS simulator, which you can use to send text messages and receive realistic event records for 51 commonly sent-to countries. Messages sent to these destination phone numbers are not sent over the carrier network but do incur the standard outbound SMS messaging rate for the country that the simulated phone number is based in.

Best Practices for Sending
There are two common ways of sending SMS via Pinpoint. The first option is the Pinpoint API using the SendMessages action, with which you can send a direct message to as many as 100 recipients at a time. The second option is to use the SMS and Voice v2 API and the SendTextMessage action, which has more options available to configure your sends and sends to a single recipient with each call. The V2 API is the preferred way of sending, as it allows for more fine-grained control over your messages and is the API upon which new functionality will be built. Keep in mind that sending via the API does not attribute any metrics back to an endpoint unless you specify an endpoint ID in your call, so if you are using other features of Pinpoint, such as campaigns or journeys, or sending via other channels, such as email, you will need to consider your strategy for measuring success and how you will tie all of your communication efforts together.
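
A minimal sketch of a single send with the V2 API is shown below (AWS SDK for JavaScript v3, @aws-sdk/client-pinpoint-sms-voice-v2). The origination identity and configuration set names are placeholders you would replace with your own values.

// Sketch: send one SMS with the SMS and Voice v2 API (SendTextMessage).
import {
  PinpointSMSVoiceV2Client,
  SendTextMessageCommand,
} from "@aws-sdk/client-pinpoint-sms-voice-v2";

const smsClient = new PinpointSMSVoiceV2Client({});

export async function sendOneTimePasscode(destinationPhoneNumber: string, code: string) {
  const response = await smsClient.send(new SendTextMessageCommand({
    DestinationPhoneNumber: destinationPhoneNumber,  // E.164 format, e.g. "+14255550123"
    OriginationIdentity: "pool-abc123",              // phone number, Sender ID, or pool ID/ARN (placeholder)
    MessageBody: `Your verification code is ${code}`,
    MessageType: "TRANSACTIONAL",                    // or "PROMOTIONAL"
    ConfigurationSetName: "sms-events",              // placeholder; routes delivery events to your stream
  }));
  return response.MessageId;
}

The OriginationIdentity value here could just as well be a pool ID, which is how the pool-based approach described in the following paragraphs plugs in.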

When sending SMS, Pinpoint includes logic for selecting the best OID to send from based on the country code. If there are multiple OIDs available for a particular country, Pinpoint will default to the highest-throughput OID available in your account/Region. If there are no OIDs specific to the destination country, Pinpoint will default to a Sender ID or to a shared OID owned by Pinpoint, in that order, if the country allows these OIDs to be used. Given this functionality, the best practice for sending SMS is not to specify the OID for a specific country and to allow Pinpoint to select it. You can restrict Pinpoint to sending only to those countries that you have OIDs for by using pools and turning off shared routes; more on this below.

If you have multiple use cases and need to specify the correct OID for each, this is where the V2 API is useful. OIDs can be attached to pools, which can be configured to serve a particular use case, and the pool can be specified in your SendTextMessage call. Sending using a pool ID and allowing Pinpoint to select the right OID from that pool for the destination phone number simplifies your sending process. This blog post details the process for creating pools and using them to send SMS.

As mentioned above, pools also serve an additional use case, which is to limit message sending to specific countries. Some countries allow messages without an OID. If you don’t modify your settings to disable this feature, Pinpoint will attempt to deliver messages to these countries even if you don’t have an explicit OID for them. Restricting SMS sends to only those countries that you have OIDs for can be accomplished by using pools and setting “SharedRoutesEnabled” to false with the UpdatePool action. Once configured, you will receive an error back if you attempt to send to a destination phone number that you do not have an OID for in the pool. This configuration gives you the ability to control your costs while simplifying your process.
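
A minimal sketch of that configuration with the V2 SDK, assuming a placeholder pool ID, looks like this:

// Sketch: turn off shared routes so sends are limited to countries covered by the pool's OIDs.
import { PinpointSMSVoiceV2Client, UpdatePoolCommand } from "@aws-sdk/client-pinpoint-sms-voice-v2";

const smsClient = new PinpointSMSVoiceV2Client({});

export async function disableSharedRoutes(poolId: string): Promise<void> {
  await smsClient.send(new UpdatePoolCommand({
    PoolId: poolId,              // e.g. "pool-abc123" (placeholder)
    SharedRoutesEnabled: false,  // sends to countries without an OID in this pool will now return an error
  }));
}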

Managing Opt-Outs

As we have seen, managing SMS in an environment of increasing global regulation is challenging. An area of importance that needs to be configured is how you plan on managing the ability for recipients to opt out of your communications. Pinpoint can automatically opt your customers out of SMS communications using predefined keywords such as “stop” or “unsubscribe.” However, this results in an account-wide opt-out, which is not ideal for customers that have multiple use cases such as OTP and marketing communications. This blogpost details the process of managing opt-outs for multiple use cases. The configuration is enabled through the V2 API and is another reason to standardize your process on this API.

Monitoring Sending

The last step in ensuring success for SMS sending is having a solid platform for monitoring your sending. SMS is not a guaranteed delivery channel. You will always receive an event in the event stream for a successful send, but there is no guarantee of a subsequent status event if the carrier does not return a delivery receipt (DLR). A list of SMS events and possible statuses can be found here.

The first Event you should see returned when watching the Event Stream for an SMS send activity is the “PENDING” event. This means we’ve sent the message to the carrier, where it’s buffered, and we’re waiting for the carrier to return a status message. There are no status messages between the “PENDING” state and the “whatever happens next” state, so if the carrier is retrying, we simply stay in PENDING and do not create more events. If a message is successfully delivered and a DLR is sent back from the carrier then a new event will be generated with a status of “SUCCESSFUL/DELIVERED.”
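For reference, an abridged and illustrative event record for a buffered message might look like the following; the values are placeholders, and the full schema is described in the SMS events documentation linked above:

{
  "event_type": "_SMS.BUFFERED",
  "attributes": {
    "message_id": "example-message-id",
    "record_status": "PENDING",
    "iso_country_code": "US"
  }
}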

Make sure to review all of the possible values for the record_status attribute so that you are aware of varying issues with your sending that can arise. For example, statuses such as “Blocked,” “Spam,” and “Carrier_Blocked“ can indicate systemic issues that should be investigated.

Updates sent from a carrier via a DLR can be delayed for up to 72 hours or never sent at all. This varies based on the carrier and the country being sent to. Should you require a higher level of reliability, you need to establish business logic around monitoring SMS messages. If messages remain in a PENDING status longer than your business requirements permit, you must make a decision on how to handle them. You need to consider whether missed or duplicated messages are acceptable, or if it’s preferable to retry messages that are stuck in pending. The following is an example architecture for failed SMS retries that you can adjust to your needs.

Conclusion

This post covers the general process for getting started with global SMS, but as you have learned, each country presents a different challenge and the regulatory environment is constantly evolving. It’s important to make sure that you are receiving messages from AWS that detail new regulations, new feature launches, and other major announcements so you can continually improve your process and make sure your SMS messages are delivering at the highest rate possible.

Take the time to plan out your approach, follow the steps outlined in this blog, and take advantage of any resources available to you within your support tier.

Decide what origination IDs you will need here
Review the documentation for the V2 SMS and Voice API here
Review the Pinpoint API and SendMessage here
Check out the support tiers comparison here

Resources:
https://docs.aws.amazon.com/pinpoint/latest/userguide/channels-sms-countries.html
https://aws.amazon.com/blogs/messaging-and-targeting/how-to-utilise-amazon-pinpoint-to-retry-unsuccessful-sms-delivery/
https://datatracker.ietf.org/doc/html/draft-wilde-sms-uri-20#section-4
https://docs.aws.amazon.com/pinpoint/latest/developerguide/event-streams-data-sms.html
https://docs.aws.amazon.com/pinpoint/latest/userguide/channels-sms-limitations-opt-out.html
https://docs.aws.amazon.com/pinpoint/latest/userguide/channels-sms-simulator.html

How to send geofenced marketing messages using Amazon Pinpoint

Post Syndicated from Zach Elliott original https://aws.amazon.com/blogs/messaging-and-targeting/send-geofenced-marketing-messages-using-amazon-pinpoint/

Introduction

Geofencing, which creates a virtual geographical boundary that triggers a marketing action to a mobile device when a user enters or exits that boundary, can be used in marketing messages to drive more traffic and increase conversions. Amazon Pinpoint, AWS’ multichannel communication tool, can be used to create mobile notifications using geofencing technology, so customers receive notifications about a business when they’re close by that physical location.

Ways retailers can use geofencing:

There are a number of different use cases that retail or location-based businesses can use geofencing to drive customer conversions:

  1. Target the customer with real-time offers and promotions when the customer is near the store: Detecting and establishing an interaction with the customer while they are in the store improves the customer experience. Using geofencing, retailers can detect a customer’s presence and send coupon or promotional notifications.
  2. Improve product search in the store: As the consumer enters the geofenced store, activate the product search for that store to help the consumer search and navigate easily within the store.
  3. Get more information about the customer in the store: Retailers can collect more accurate consumer behavior data inside the store by recording interactions with product search, and by using geofencing and position to calculate dwell time inside the store or how long the consumer waits in the queue.

In this blog we will talk about how you can use Amazon Location Service to trigger a notification using Amazon Pinpoint when a consumer enters a geofenced store.

Architecture Overview

Fig. 1: Geofencing and Pinpoint – Sample Architecture

Figure 1 depicts the solution architecture and resources deployed by the AWS CloudFormation Template, described in more detail in later sections. In the solution workflow:

  1.  Store Management defines a Geofence around store locations they wish to enroll using Amazon Location Service Geofencing and circular geofences.
  2. A customer who has opted into location tracking using the app will update an Amazon Location Service Tracker Resource. This tracker will be evaluated against the store geofences.
  3. If a geofence ENTER event is triggered, a message is sent to Amazon EventBridge.
  4. EventBridge will trigger an AWS Lambda function.
  5. The Lambda function looks up the Store Information in an Amazon DynamoDB table that matches the geofence ID in order to enrich the email.
  6. Event is sent to a Pinpoint Journey with information from the Geofence event as well as store info.
  7. Personalized email is sent to the customer via Pinpoint.

Configuring AWS Cloudformation

To deploy the Amazon Location Service resources as well as EventBridge, DynamoDB, and Lambda, we have created an AWS Cloudformation Template.

Use this link to launch the CloudFormation stack in the US-West-2 Region. Select the checkbox next to “I acknowledge that AWS CloudFormation might create IAM resources.” and then click Create stack.

Fig. 2: AWS CloudFormation Console showing stack options.

Once the stack is complete, we can begin configuring Pinpoint.

Configuring Pinpoint

Our project was created for us via the CloudFormation template, but we still need to configure some items in Pinpoint. First, we’ll set up our email identity to send and receive messages from; for the purposes of this blog, you’ll use the same email address for sending and receiving the email, but in a production environment, your sending identity could either be a specific email address you’ve verified for messaging, or an entire email domain you’ve verified via DNS.

Configuring email channel

Adding an email

  1. On the left-side Pinpoint menu, expand the Email option and choose Email identities
  2. Select Verify email identity
  3. Enter an email address you have access to for the confirmation step
  4. Select Verify email address
Fig. 3: Verifying email identity in the AWS Console

Fig. 4: Email verification options

Now, check your inbox for a verification email. It should look something like this:

Fig. 5: Email Verification message from Amazon Pinpoint

Click the link to verify your email address. Now we can begin sending and receiving messages at this address.

Now that we have a verified email, we can configure the email channel.

Configuring the email channel

  1. On the left-side Pinpoint menu, navigate to All projects and select CoffeeShop
  2. Navigate to Settings and select Email
  3. Select Edit next to Identity details
  4. Select the checkbox for Enable the email channel for this project
  5. Select Use an existing email address and select the address you verified in the previous step.
  6. Select Save
Fig. 6: Configuring the email channel

Configuring email template

Next, we need to define what the email sent to our customers when they enter a geofence looks like. We’ve provided HTML code for a basic Coffee Shop template here.

Configure email template

  1. On the left-side Pinpoint menu, navigate to Message templates, select Create template
  2. Name the template CoffeeShopGeoTarget and set the subject to “We haven’t seen you in a while”
  3. Paste the contents of the HTML template into the Message field.
  4. Select Create
Fig. 7: Configuring the email template

You can see multiple attributes are used in the template. These attributes come from our segment in the case of FirstName, and DynamoDB in the case of the store name and address.
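As a rough illustration of how such attributes are referenced, a Pinpoint message template fragment might look like the following; the attribute names here are assumptions for illustration and must match what your segment and the DynamoDB enrichment actually provide:

<p>Hi {{User.UserAttributes.FirstName}},</p>
<p>Stop by {{Attributes.StoreName}} at {{Attributes.StoreAddress}} for 10% off your next coffee.</p>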

Configuring email segment

Now we need to define who we are going to send an email to. For this, we need to set up our segment within Pinpoint. We’ve provided a sample segment file here. Download this file and open it in a text editor.

Fig. 8: Configuring the email segment

Replace all the values with your own information. The email needs to be the same email we verified in an earlier step. Create a UserID for the user that can be used to uniquely identify them. Leave ChannelType as “EMAIL” to indicate we are using the email channel in Pinpoint, and leave OptOut as “NONE”, which indicates the user would like to receive all communications and has not opted out of receiving notifications. Once the information is edited, save the file.
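For orientation, a segment import row generally looks something like the following; this is an illustrative sketch, and the exact column set in the provided sample file may differ:

ChannelType,Address,User.UserId,User.UserAttributes.FirstName,OptOut
EMAIL,you@example.com,user-001,Alex,NONE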

Importing the segment

  1. On the left-side Pinpoint menu, navigate to All projects, and select your CoffeeShop Project
  2. Navigate to Segments and select Import a segment
  3. Drag the downloaded csv file into the Drop files here box.
  4. Select Create Segment
Fig. 9: Importing a segment

Configuring Journey

In this post, we will be setting up a very simple Journey that sends an email anytime a user enters a geofence. If we wanted to go a step further, we could add additional activities later in the Journey, such as determining whether the customer purchased something after receiving the email, and sending them targeted emails based on the drink they ordered.
Now that we’ve added the email channel, we can set up our journey.

Configuring journey entry

  1. On the left-side Pinpoint menu, navigate to the CoffeeShop Project and select Journeys
  2. Select Create journey
  3. Name the journey “CoffeeShopGeoTarget”
  4. Set the entry condition to “geofence enter”
  5. Select Save
Fig. 10: Journey event configuration

Configuring journey activity

  1. Select the Add activity icon
  2. Select Send an email from the dropdown
  3. Choose the email template we created earlier
  4. Enter the verified email we configured earlier
  5. Select Save
Fig. 11: Journey email destination configuration

Reviewing Journey

  1. Select Review
  2. Select Mark as reviewed
  3. Select Publish
Fig. 12: Reviewing the Journey

Once we publish our journey, a 5-minute timer will start, which gives us time to set up our tracking environment.

Configuring Amazon Location Resources

Now that we’ve configured Pinpoint to send geotargeted emails, we need to set up our geofences as well as emulate a person passing nearby our coffee shops. To do that, we will use the AWS CLI and AWS CloudShell.

To open AWS CloudShell, select it in the upper right near the region selection.

Fig. 13: Location of AWS CloudShell in the AWS Console

AWS CloudShell will now open in the bottom half of the AWS Console; note that it may take up to a minute on first launch. First, we’ll create our geofences. For this, we will use circular geofences around a point location. In this case, we will create two geofences, one for a coffee shop at Amazon’s Doppler office, and one for a shop at Amazon’s Nitro North office. These correlate with the DynamoDB store information table.

aws location put-geofence --collection-name StoreCollection --geofence-id store_1508 --geometry 'Circle={Center=[-122.33826293063228, 47.61530011310656], Radius=100}'

Successful Geofence creation will create output similar to the below:

{
"CreateTime": "2023-04-21T19:31:57.807000+00:00",
"GeofenceId": "store_1508",
"UpdateTime": "2023-04-21T19:31:57.807000+00:00"
}

Next we create our second geofence:

aws location put-geofence --collection-name StoreCollection --geofence-id store_1509 --geometry 'Circle={Center=[-122.34051934099395, 47.61751544952795], Radius=100}'

Successful Geofence creation will create output similar to the below:

{
"CreateTime": "2023-04-21T19:32:41.980000+00:00",
"GeofenceId": "store_1509",
"UpdateTime": "2023-04-21T19:32:41.980000+00:00"
}

Now that our geofences are created, we can emulate a person walking by and triggering a geofence. We will do this using Amazon Location Service Trackers. In CloudShell, enter the following command:

aws location batch-update-device-position --tracker-name CustomerDevices --updates Accuracy={Horizontal=0},DeviceId=111,Position=-122.33811005706218,47.61541094771129,SampleTime=$(date +%s)

When this command is issued, a geofence is then evaluated which will trigger an event sent to Amazon EventBridge. This event then triggers a Lambda, which creates an event with Pinpoint. This triggers the Journey, which sends an email.
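For context, the geofence event that Amazon Location Service publishes to EventBridge has roughly the following shape (abridged and illustrative; field values are placeholders):

{
  "source": "aws.geo",
  "detail-type": "Location Geofence Event",
  "detail": {
    "EventType": "ENTER",
    "GeofenceId": "store_1508",
    "DeviceId": "111",
    "SampleTime": "2023-04-21T19:35:00Z",
    "Position": [-122.33811005706218, 47.61541094771129]
  }
}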

Now check your email; you should see a customized email with your name and the store you were close to. Note that because we are not using domain verification, you may receive a warning on the email message. See our documentation on how to use domain verification.

Fig. 14: Email received from Amazon Pinpoint

Next Steps

For this blog, we used the default Journey configuration. However, we can further optimize our Journey by following Tips and best practices for journeys. You can also set up push notifications or in-app notifications to further optimize the customer experience to catch them in the moment they walk by, instead of when they may check their email next. You can read more about push notifications here.

Clean up

Deleting CloudFormation template

  1. In the AWS Console, navigate to the AWS CloudFormation console. Select the PinpointGeotarget stack
  2. Select Delete Stack

Deleting Pinpoint resources

  1. In the AWS Console, navigate to the Pinpoint Console
  2. Select Message templates
  3. Select the CoffeeShop template
  4. Select Delete then confirm you wish to delete it

Removing email identity

  1. In the AWS Console, navigate to the Pinpoint Console
  2. Navigate to Email, and select Email identities
  3. Select the radio button next to the verified email you configured
  4. Select Remove email identity
  5. Type Delete to confirm the removal

Conclusion

In this post, we explored how you can detect a customer’s presence near a geofenced physical store using Amazon Location Service. Amazon EventBridge receives the geofence event and triggers an AWS Lambda function, which in turn triggers a Journey in Amazon Pinpoint to send the customer a notification with a coupon.

Furthermore, integrating this solution with your customer data platform and with Amazon Personalize will help you personalize promotions and vouchers to fit the tastes and tendencies of your customers.

Zach Elliott works as a Solutions Architect focusing on Amazon Location Service at AWS. He is passionate about helping customers build geospatial solutions on AWS. He is also part of the IoT Subject Matter Expert community at AWS and loves helping customers develop unique IoT-based solutions.

With an illustrious track record as a technology thought-leader, Anshul joined AWS in 2016 and is the EMEA technology leader for retail. He is responsible for defining and executing the company’s retail technology strategy, which includes building retail-focused solutions with services like Amazon Forecast and Amazon Personalize, as well as experiences like Frictionless Shopping with AI/ML and IoT services from AWS. Anshul also works very closely with AWS global retail customers to help transform their businesses with cutting-edge AWS technologies.

How to create a WhatsApp custom channel with Amazon Pinpoint

Post Syndicated from Sparsh Wadhwa original https://aws.amazon.com/blogs/messaging-and-targeting/whatsapp-with-amazon-pinpoint/

How to add WhatsApp as an Amazon Pinpoint Custom Channel

WhatsApp now reports over 2 billion users in 180 countries, making it a prime place for businesses to communicate with their customers. In addition to native channels like SMS, push notifications, and email, Amazon Pinpoint’s custom channels enable you to extend the capabilities of Amazon Pinpoint and send messages to customers through any API-enabled service, like WhatsApp. With these new channels, you have full control over the message delivery to the endpoints associated with each custom channel campaign.

In this post, we provide a quick overview of the features and capabilities of using a custom channel as part of campaigns. We also provide a blueprint that you can use to build your first sandbox integration with WhatsApp as a custom channel.

Note: WhatsApp is a third-party service subject to additional terms and charges. Amazon Web Services isn’t responsible for any third-party service that you use to send messages with custom channels. 

How to add WhatsApp as a custom channel:

Prerequisites

Before creating your new custom channel, you must have the integration ready and an AWS Identity and Access Management (IAM) user created with the necessary permissions. First set up the following:

  1. Create an IAM administrator. For more information, see Creating your first IAM admin user and group in the IAM User Guide. Specify the credentials of this IAM User when you set up the AWS Command Line Interface (CLI).
  2. Configure the AWS CLI. For more information about setting up the AWS CLI, see Configuring the AWS CLI.
  3. Follow the steps in the Meta documentation – https://developers.facebook.com/docs/whatsapp/cloud-api/get-started – to register as a Meta Developer and get started with the WhatsApp Business Cloud API provided directly by Meta. By completing steps 1 and 2 of that documentation, you should be able to:
    1. Register as a Meta Developer,
    2. Claim a test phone for sending messages on WhatsApp,
    3. Verify a recipient phone number (since you’re currently in the sandbox, you can send WhatsApp messages only to verified phone numbers; you can verify up to 5 phone numbers),
    4. and finally send a test message on WhatsApp using a provided sample POST request. Remember to review the terms of use for WhatsApp.
  4. In the test message sent above, you used temporary Access Token credentials, which expire in 23 hours. In order to get a permanent Access Token, generate a ‘System User Access Token’ by following the steps mentioned here – https://developers.facebook.com/docs/whatsapp/business-management-api/get-started/

Screenshot of WhatsApp test message sent from Meta Console.

Procedure:

Step 1: Create an Amazon Pinpoint project.

In this section, you create and configure a project in Amazon Pinpoint. Later, you use this data to create segments and campaigns.

To set up the Amazon Pinpoint project

  1. Sign in to the Amazon Pinpoint console at http://console.aws.amazon.com/pinpoint/.
  2. On the All projects page, choose Create a project. Enter a name for the project, and then choose Create.
  3. On the Configure features page, under SMS and Voice, choose Configure.
  4. Under General settings, select Enable the SMS channel for this project, and then choose Save changes.
  5. In the navigation pane, under Settings, choose General settings. In the Project details section, copy the value under Project ID. You need this value for later.

Step 2: Create an endpoint.

In Amazon Pinpoint, an endpoint represents a specific method of contacting a customer. This could be their email address (for email messages) or their phone number (for SMS messages) or a custom endpoint type. Endpoints can also contain custom attributes, and you can associate multiple endpoints with a single user. In this step, we create an SMS endpoint that is used to send a WhatsApp message.

To create an endpoint using AWS CLI, at the command line, enter the following command:

aws pinpoint update-endpoint --application-id <project-id> \
--endpoint-id 12456 --endpoint-request "Address='<mobile-number>', \
ChannelType='SMS',Attributes={username=['testUser'],integrations=['WhatsApp']}"

In the preceding example, replace <project-id> with the Amazon Pinpoint Project ID that you copied in step 1.

Replace <mobile-number> with your phone number, including the country code (for example, 12065550142). For the WhatsApp integration to work, you must use a mobile number that is registered on WhatsApp and already verified on the Meta Developer Portal (since your Meta account is currently in sandbox).

Note: The WhatsApp Business Cloud message API doesn’t require the ‘+’ symbol in front of the phone number. So if you plan to use this segment for both SMS and the custom channel, you may configure the phone number in E.164 format (for example, +12065550142) and remove the ‘+’ symbol in the Lambda function code that we create in step 4.
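For example, inside the endpoint loop of the Lambda function we create in step 4, the address could be normalized with a single line:

# Strip a leading '+' from an E.164-formatted address before calling the WhatsApp API
to_number = event['Endpoints'][key]['Address'].lstrip('+')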

Step 3: Storing WHATSAPP_AUTH_TOKEN and WHATSAPP_FROM_NUMBER_ID in AWS Secrets Manager.

We can securely store the WhatsApp Auth Token and WhatsApp From Number Id, which we received in the previous steps, in AWS Secrets Manager.

  1. Open the AWS Secrets Manager console at https://us-east-1.console.aws.amazon.com/secretsmanager/listsecrets?region=us-east-1 (in the required AWS region), and then click on “Store a new Secret”.
  2. Under “Secret Type”, choose Other type of secret.
  3. Under Key/value Pair, add the following Key-Value pairs:
    1. WHATSAPP_AUTH_TOKEN: <Pass the Auth Token generated previously>
    2. WHATSAPP_FROM_NUMBER_ID : <Pass the From Number Id>.
      AWS Secret Manager Console screenshot storing WHATSAPP_AUTH_TOKEN and WHATSAPP_FROM_NUMBER_ID secrets.
  4. Click Next
  5. Provide the Secret name “MetaWhatsappCreds” and provide a suitable description.
  6. Click Next twice and finally click “Store” button.

Step 4: Create an AWS Lambda.

You must create an AWS Lambda function that calls the Meta WhatsApp Business Cloud API and sends a message to the endpoint.

  1. Open the AWS Lambda console at http://console.aws.amazon.com/AWSLambda, and then click on Create Function.
  2. Choose Author from scratch.
  3. For Function Name, enter ‘WhatsAppTest’.
  4. For Runtime, select Python 3.9.
  5. Click Create Function.
  6. For the function code, copy the following and paste into the code editor in your AWS Lambda function:
import base64
import json
import os
import urllib
from urllib import request, parse
import boto3
from botocore.exceptions import ClientError

WhatsApp_messageAPI_URL = "https://graph.facebook.com/v15.0/" 

def get_secret():

    secret_name = "MetaWhatsappCreds"
    region_name = "us-east-1"
    # Pass the required AWS Region in which Secret is stored

    # Create a Secrets Manager client
    session = boto3.session.Session()
    client = session.client(
        service_name='secretsmanager',
        region_name=region_name
    )

    try:
        get_secret_value_response = client.get_secret_value(
            SecretId=secret_name
        )
    except ClientError as e:
        # For a list of exceptions thrown, see
        # https://docs.aws.amazon.com/secretsmanager/latest/apireference/API_GetSecretValue.html
        raise e

    # Decrypts secret using the associated KMS key.
    secret = get_secret_value_response['SecretString']
    return secret
   
def lambda_handler(event, context):
    credentials = get_secret()
    WhatsApp_AUTH_TOKEN = json.loads(credentials)["WHATSAPP_AUTH_TOKEN"]
    WhatsApp_FROM_NUMBER_ID = json.loads(credentials)["WHATSAPP_FROM_NUMBER_ID"]
    if not WhatsApp_AUTH_TOKEN:
        return "Unable to access WhatsApp Auth Token."
    elif not WhatsApp_FROM_NUMBER_ID:
        return "Unable to access WhatsApp From Number Id."
    # Lets print out the event for our logs 
    print("Received event: {}".format(event))

    populated_url = WhatsApp_messageAPI_URL + WhatsApp_FROM_NUMBER_ID + "/messages"

    for key in event['Endpoints'].keys(): 
        to_number = event['Endpoints'][key]['Address']
        # Example body and using an attribute from the endpoint
        username = event['Endpoints'][key]['Attributes']['username'][0]
        body = "Hello {}, here is your weekly 10% discount coupon: SAVE10".format(username)
        post_params = {"messaging_product":"whatsapp","to": to_number ,"recipient_type": "individual","type": "text", "text":{"preview_url": "false","body": body}}
        print(post_params)
        # The WhatsApp Cloud API expects a JSON body, so serialize the payload with json.dumps
        data = json.dumps(post_params).encode('utf-8')
        req = request.Request(populated_url)
        req.add_header("Authorization", WhatsApp_AUTH_TOKEN ) 
        req.add_header("Content-Type","application/json")
        try:
            # perform HTTP POST request
            with request.urlopen(req, data) as f:
                print("WhatsApp returned {}".format(str(f.read().decode('utf-8')))) 
        except Exception as e:
            # something went wrong!
            print(e)

    return "WhatsApp messages sent successfully"
  7. Add permissions to your AWS Lambda function to allow Amazon Pinpoint to invoke it using the AWS CLI:

aws lambda add-permission \
--function-name WhatsAppTest \
--statement-id sid \
--action lambda:InvokeFunction \
--principal pinpoint.us-east-1.amazonaws.com \
--source-arn arn:aws:mobiletargeting:us-east-1:<account-id>:apps/<Pinpoint ProjectID>/*

Step 5: Create a segment and campaign in Amazon Pinpoint.

Now that we have an endpoint, we must add it to a segment so that we can use it within a campaign. By sending a campaign, we can verify that our Amazon Pinpoint project is configured correctly, and that we created the endpoint correctly.

To create the segment and campaign:

    1. Open the Amazon Pinpoint console at http://console.aws.amazon.com/pinpoint, and then choose the project that you created in step 1.
    2. In the navigation pane, choose Segments, and then choose Create a segment.
    3. Name the segment “WhatsAppTest.” Under Segment group 1, include all audiences in the Base Segment and add the following Criteria:
    4. For Choose an endpoint attribute, choose integrations, then for values, choose WhatsApp.
    5. Confirm that the Segment estimate section shows that there is one eligible endpoint, and then choose Create segment.
    6. In the navigation pane, choose Campaigns, and then choose Create a campaign.
    7. Name the campaign “WhatsAppTest.” Under Choose a channel for this campaign, choose Custom, and then choose Next.
    8. On the Choose a segment page, choose the “WhatsAppTest” segment that you just created, and then choose Next.
    9. In Create your message, choose the AWS Lambda function we just created, ‘WhatsAppTest.’ Select SMS in the Endpoint Options. On the Choose when to send the campaign page, keep all of the default values, and then choose Next. On the Review and launch page, choose Launch campaign.

Screenshot of Pinpoint console showing creation of message for Custom Channel.

Within a few seconds, you should receive a WhatsApp message at the phone number that you specified when you created the endpoint and verified on the Meta Developer portal.

Your Custom channel solution for WhatsApp is now ready to use. But first, review and upgrade your WhatsApp sandbox. This post is simply a walkthrough to show you how quickly you can prototype and start sending WhatsApp messages with Pinpoint and Meta. However, for production usage, you need to make sure to review all of the additional terms and charges. Start here to understand more: https://developers.facebook.com/docs/whatsapp/cloud-api/get-started

As a next step, you can go ahead and claim a phone number for sending WhatsApp messages in production. You can also configure a webhook, which can help you receive WhatsApp message delivery statuses and other WhatsApp-supported events.

There are several ways you can make this solution your own.

  • Customize your messaging: This post used an example message to be sent to your endpoints within the AWS Lambda. You can customize that message to fit your needs. See the various ways in which you can send WhatsApp messages here.
  • Expand endpoints in your application: This post only used one endpoint for the integration. You can use your WhatsApp integration with new endpoints by importing a segment that can be used with a new campaign. Learn how to import a segment here: https://docs.aws.amazon.com/pinpoint/latest/userguide/segments-importing.html
  • Use new integrations: This post focused on integrating your custom channel with WhatsApp but there are many other integrations that are possible when using AWS Lambda.

Amazon Pinpoint is a flexible and scalable outbound and inbound marketing communications service. Learn more here: https://aws.amazon.com/pinpoint/

Send WhatsApp messages via Amazon Pinpoint

Post Syndicated from Pavlos Ioannou Katidis original https://aws.amazon.com/blogs/messaging-and-targeting/send-whatsapp-messages-via-amazon-pinpoint/

In this blog you will deploy a solution that integrates Amazon Pinpoint with WhatsApp for outbound and inbound messages.

Amazon Pinpoint is a multichannel customer engagement platform allowing you to engage with your customers across 6 different channels (push notifications, email, SMS, voice, in-app messages and custom channel). Using Amazon Pinpoint’s custom channel you can extend its capabilities via a webhook or AWS Lambda function. Among many other possibilities, you can use custom channels to send messages to your customers through any API-enabled service, for example WhatsApp.

According to Statista, WhatsApp is one of the most used apps in the world and the most popular messaging app in over 100 countries. It reached 2.3 billion active users in 2022, and in January 2022 it was the most downloaded chat and messaging app worldwide, amassing approximately 40.6 million downloads across the Apple App Store and the Google Play Store.

Note: WhatsApp is a third-party service subject to additional terms and charges. Amazon Web Services isn’t responsible for any third-party service that you use to send messages with custom channels.

Solution & Architecture

An integration between Amazon Pinpoint and WhatsApp can be achieved for both outbound and inbound messages. The next section dives deeper into the architecture for both outbound and inbound messages. The solution uses Amazon Pinpoint custom channel, AWS Lambda, Amazon API Gateway, AWS Cloudformation and AWS Secrets Manager.

Outbound messages

For outbound messages Amazon Pinpoint integrates with WhatsApp via its custom channel allowing users to send WhatsApp messages using Pinpoint campaigns and journeys. Specifically, Pinpoint invokes an AWS Lambda function and performs an API call to WhatsApp. The API call contains the WhatsApp access token, the customer’s mobile number and the WhatsApp message template name.

outbound-message

  1. Amazon Pinpoint campaign or journey using endpoint type CUSTOM invokes an AWS Lambda function. The payload along with the endpoint data should contain the WhatsApp message template name as part of the Custom Data field.
  2. The AWS Lambda obtains the WhatsApp access token from the AWS Secrets Manager and performs a POST API call to the WhatsApp API.
  3. The WhatsApp message gets delivered to the customer.

Inbound messages

For inbound messages WhatsApp requires a Callback URL. This solution utilizes Amazon API Gateway to create the Callback URL and AWS Lambda to authorize and process inbound messages.

inbound-message

  1. Customer sends a message to your WhatsApp number.
  2. WhatsApp makes a GET API call to the Amazon API Gateway endpoint for verification purposes. All subsequent calls containing the customers’ messages are POST.
  3. If the API call method is GET, the AWS Lambda checks if the verify token matches the one stored as an AWS Lambda Environment Variable. If it’s TRUE, it returns a code called HubChallenge that WhatsApp is expecting in order to verify the connection. For POST API calls, the AWS Lambda loops through the customer messages and retrieves the customer’s phone number, timestamp, message_id and message_body. For each message processed, the AWS Lambda function performs an API call to WhatsApp to mark the message as read.
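As a simplified, hedged sketch of that verification logic (the environment variable name and response shape are assumptions; the Lambda deployed by the CloudFormation template may differ in detail), the GET handshake can be handled like this:

import json
import os

VERIFY_TOKEN = os.environ.get('VERIFY_TOKEN', '')

def lambda_handler(event, context):
    # GET: webhook verification handshake from WhatsApp/Meta
    if event.get('httpMethod') == 'GET':
        params = event.get('queryStringParameters') or {}
        if params.get('hub.verify_token') == VERIFY_TOKEN:
            # Echo back hub.challenge so Meta can confirm the Callback URL
            return {'statusCode': 200, 'body': params.get('hub.challenge', '')}
        return {'statusCode': 403, 'body': 'Verification token mismatch'}

    # POST: inbound message notifications (message processing omitted in this sketch)
    body = json.loads(event.get('body') or '{}')
    print(json.dumps(body))
    return {'statusCode': 200, 'body': 'ok'}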

Considerations

  • Message delivery/engagement events aren’t being recorded.
  • Messages sent aren’t personalized and they are currently using message templates hosted by WhatsApp.
  • It is recommended to use endpoint type CUSTOM and not SMS for the following reasons:
    • WhatsApp’s phone number format doesn’t contain the +, in contrast to Pinpoint’s SMS address format. If you decide to use the endpoint type SMS, you will need to process the endpoint Address by removing the +.
    • Using the endpoint type SMS forces you to send WhatsApp messages with the same throughput (messages per second) as your Pinpoint SMS channel.

Prerequisites

  1. AWS account.
  2. An Amazon Pinpoint project – How to create an Amazon Pinpoint project.
  3. An Amazon Pinpoint CUSTOM endpoint whose address is a mobile number associated with a WhatsApp account. See an example CUSTOM endpoint in a CSV here.
  4. A Meta (Facebook) developer account, for more details please go to the Meta for Developers console.

Implementation

Meta for Developers console

  1. Navigate and login into the Meta for Developers console, click My Apps and select Create App (or use an existing app of type Business).
  2. Select Business as an app type, which supports WhatsApp and click Next.
  3. Provide a display name, contact email, choose whether or not to attach Business Account (optional) and select Create App.
  4. Navigate to the Dashboard and select Set Up in the WhatsApp service in the Add product to your app section.
  5. Create or select an existing Meta Business Account and select Continue.
  6. Navigate to WhatsApp/Getting Started and take note of the Phone number ID, which will be needed in the AWS CloudFormation template later on.
  7. On the WhatsApp/Getting Started page, add your customer phone number you are going to use for testing in the Select a recipient phone number dropdown. Follow the instructions to add and verify your phone number. Note: You must have WhatsApp registered with the number and the WhatsApp client installed on your mobile device. Verification message could appear in the Archived list in your WhatsApp client and not in the main list of messages.

Create a new user to access WhatsApp via API

  1. Open Meta’s Business Manager and select the business you created or associated your app with earlier.
  2. Below Users, select System Users and choose Add to create a new system user.
  3. Give a name to the system user and set their role as Admin and click Create System User.
  4. Use the Add Assets button to associate the new user with your WhatsApp app. From the Select asset type list, select Apps, then in the Select assets, select your WhatsApp app’s name. Enable the Test app Partial access for the user, select Save Changes and Done.
  5. Click on the Generate new token button, select the WhatsApp app created earlier and choose Permanent as Token expiration.
  6. Select whatsapp_business_messaging and whatsapp_business_management from the list of Available Permissions and click Generate token at the bottom.
  7. Copy and save your access token. This will be needed in AWS CloudFormation template later on. Make sure you copied the token before clicking on OK.

For more details on creating the access token, you can navigate to WhatsApp/Configuration and click on Learn how to create a permanent token.

Solution deployment

  1. Download the AWS CloudFormation template and navigate to the AWS CloudFormation console under the AWS region you want to deploy the solution.
  2. Select Create stack and With new resources. Choose Template is ready as Prerequisite – Prepare template and Upload a template file as Specify template. Upload the template downloaded in step 1.
  3. Fill the AWS CloudFormation parameters as shown below:
    1. ApiGatewayName: This is the name of the Amazon API Gateway resource.
    2. PhoneNumberId: This is the WhatsApp phone number Id you obtained from the Meta for Developers console under WhatsApp/Getting Started.
    3. PinpointProjectId: Paste your Amazon Pinpoint’s project Id. This allows Amazon Pinpoint to invoke the AWS Lambda, which sends WhatsApp messages as part of a campaign or journey.
    4. VerifyToken: The verify token is an alphanumeric token that you provide to WhatsApp when setting up the Webhook Callback URL for inbound messages and notifications. You can decide the value of this token e.g. 123abc.
    5. WhatsAppAccessToken: The access token should start with Bearer EEAEAE… and you should have obtained it from the section of this blog Create a new user to access WhatsApp via API.
  4. Once the AWS CloudFormation stack is deployed, copy the Amazon API Gateway endpoint from the AWS CloudFormation Outputs tab. Navigate to the Meta for Developers App dashboard, choose Webhooks, select WhatsApp Business Account and subscribe to messages.
  5. Paste the Amazon API Gateway endpoint as the Callback URL. For the Verify token, provide the same value as the AWS CloudFormation template parameter VerifyToken and select Verify and save.

Testing

  • Sending messages: To test sending a message to WhatsApp using Amazon Pinpoint:
    • Navigate to the Amazon Pinpoint Campaigns
    • Create a new Campaign with WhatsAppCampaign as the Campaign name, select Standard campaign as the Campaign type, choose Custom as Channel and select Next.
    • Select a segment that includes the CUSTOM endpoint that you will send the message to
    • Choose the AWS Lambda Function containing the name WhatsAppSendMessageLambda. Under Custom data type hello_world, for Endpoint Options choose Custom and select Next. Note that the hello_world is the WhatsApp default message template.
    • In Step 4 leave everything with the default values, scroll to the bottom of the page and select Next.
    • Choose Launch campaign.
  • Receiving messages: Text or reply to the WhatsApp number. The inbound messages are printed in the Amazon CloudWatch logs of the AWS Lambda function whose name contains WhatsAppWebHookLambda.

Next steps

There are several ways to extend this solution’s functionality, see some of them below:

  • Instead of specifying the WhatsApp message template name, provide the text you want to send directly, using Pinpoint’s custom channel Custom data field. To do this, update the AWS Lambda function code responsible for sending messages with the one below:
    import os
    import json
    import boto3
    from urllib import request, parse
    from botocore.exceptions import ClientError
    phone_number_id = os.environ['PHONE_NUMBER_ID']
    secret_name = os.environ['SECRET_NAME']
    def handler(event, context):
        print("Received event: {}".format(event))
        session = boto3.session.Session()
        client = session.client(service_name='secretsmanager')
        try:
            get_secret_value_response = client.get_secret_value(SecretId=secret_name)
        except ClientError as e:
            raise e
        else:
            secret = get_secret_value_response['SecretString']
            url = 'https://graph.facebook.com/v15.0/'+ phone_number_id + '/messages'
            message = event['Data'] # Obtaining the message from the Custom Data field
            for key in event['Endpoints'].keys(): 
                to_number = str(event['Endpoints'][key]['Address'])
                send_message(secret, to_number, url, message)
    def send_message(secret, to_number, url, message):
        headers = {
            'content-type': 'application/json',
            'Authorization': secret
        }
        # Building the request body; instead of type = template, it's replaced with type = text.
        # The WhatsApp Cloud API expects a JSON body, so serialize it with json.dumps.
        data = json.dumps({
            'messaging_product': 'whatsapp',
            'to': to_number,
            'type': 'text',
            'text': {
                'body': message
            }
        }).encode()
        req =  request.Request(url, data=data, headers=headers)
        resp = request.urlopen(req)
  • Use WhatsApp’s message template components to populate variables dynamically. This requires an update to the respective WhatsApp message template and to the API request body sent to WhatsApp’s API. The message template should look like this:

PersonalizedMessageTemplate

And the API request body should look like this. Note that the value for each variable should be obtained from the Pinpoint endpoint or user attributes.

{
  "from": from_number,
  "to": to_number,
  "channel": "whatsapp",
  "content": {   
    "contentType": "template",
    "template": {
        "templateId" : "first_pinpoint_message",
        "templateLanguage" : "en",
        "components" : {
            "body" : [
                    {
                        "type": "text",
                        "text": "Pavlos"
                    }
            ]           
        }
   }
  }
}

Clean-up

To delete the solution, navigate to the AWS CloudFormation console and delete the stack deployed.

About the Authors

Pavlos Ioannou Katidis

Pavlos Ioannou Katidis is an Amazon Pinpoint and Amazon Simple Email Service Senior Specialist Solutions Architect at AWS. He enjoys diving deep into customers’ technical issues and helping them design communication solutions. In his spare time, he enjoys playing tennis, watching crime TV series, playing FPS PC games, and coding personal projects.

Customize marketing messages and promotions for personalized outreach

Post Syndicated from binpazho original https://aws.amazon.com/blogs/messaging-and-targeting/customize-marketing-messages-and-promotions-for-personalized-outreach/

Introduction

Amazon Pinpoint is widely used by many customers for their various user engagement use cases like marketing campaigns, scheduled communications (newsletters, reminders, etc.), and transactional messaging. By using the message template feature in Amazon Pinpoint, customers can design messages personalized to specific end users by using variable attributes. While Amazon Pinpoint enables customers to include up to 250 attributes for each user, there is often a need to pick and choose from a wide range of attributes about a user, which can lead to needing more than the allowed number of attributes.

The CampaignHook feature of Amazon Pinpoint can come to the rescue in a situation like this. Using the CampaignHook feature, we can filter out attributes that are not applicable to a specific user, while adding new attributes, right before sending the message. In this blog, I will walk you through how I have implemented the CampaignHook feature for a similar use case.

Sample Use-Cases

When setting up your Pinpoint campaign, following are the use cases where a CampaignHook can be enabled:

  • Retrieve data and perform custom compute logic in real time against third-party data stores.
  • Filter endpoints out of the send: This is useful if you need to do some type of custom logic that you can’t do in Segmentation (custom opt-out, quiet time, campaign prioritization, etc.)
  • Avoid costly and time-consuming Extract, Transform & Load (ETL) processes by accessing the data sources directly and applying custom compute logic in real time.

Solution overview

CampaignHook Demo Architecture

The diagram above shows the solution that we will set up in this blog. As you can see, the campaign event triggers the Amazon Pinpoint campaign. The event can be triggered from your web or mobile app that is accessed by your end users, and can be set up to fire when the user performs a certain action. You can read more about setting up an Amazon Pinpoint campaign in the user guide. With the CampaignHook enabled on your Amazon Pinpoint campaign, the Lambda function configured as the hook will be triggered. This function has access to the endpoint attributes passed by the campaign event, and performs additional logic to derive new attributes for the user. Once all the new fields are derived, the function updates the user endpoint. Amazon Pinpoint then performs the next steps in the campaign and substitutes the variables in the message template before the personalized message is sent to the end user.

Prerequisites

  • AWS Account with Console and Programmatic access
  • Access to AWS CloudShell
  • Email channel enabled in Amazon Pinpoint

Building the demo

Build the Amazon Pinpoint Project

From the AWS Management console, go to Amazon Pinpoint and create a new project called “PinpointCampaignHookDemo”, and choose the option to enable the email channel. For more information about creating a project see the user guide, and follow the instructions here to setup your email channel.

If your account is in the sandbox, you will need to verify the email address before you can send the email. You can follow the steps here to upgrade your account to Production status if you are ready to deploy this solution to production.

Create the segment.

A segment is a group of your users that share certain attributes. For example, a segment might contain all of your users who use version 2.0 of your app on an Android device, or all users who live in the city of Los Angeles. You can send multiple campaigns to a single segment, and you can send a single campaign to multiple segments.

For this demo, let’s create a Dynamic Segment. Let’s call it ‘CampaignHookDemoSegment’.  Follow the steps here to create your Dynamic Segment.

Create a Segment

Setup message template

Let’s create our first template and call it “CampaignHookDemoTemplate”. You can read more about Amazon Pinpoint templates in the user guide.

For this demo, I have used the HTML template shown below, and I have 3 endpoint attribute variables: 2 that are passed from the campaign event trigger, and the third one (Company) that will be generated by the CampaignHook lambda function. For the subject of the email, I used “Campaign Hook Demo Campaign“.

Create eMail Template

The email template can be found in this GitHub repository.

Create Campaign

Next, create your campaign and use the Segment and email Template that you created in the previous steps by following the instructions here.

Select the ‘when an event occurs’ option to trigger the campaign when a specific event occurs. You may also schedule your campaign to run on a scheduled basis, as available in the setup screen. I used ‘CampaignHookTrigger’ as my event name.

Create a campaign

Set your Campaign Start date, time and end date. I have left all the other settings to default and saved the campaign. Now that you have successfully created your first Campaign, you are ready for the next steps.

Set Campaign Start and End Times

Create the Lambda function

This is the function that we will configure to trigger the Amazon Pinpoint campaign event. From the Lambda console page, create a new function by clicking on the ‘Create function’ button. You can then pick the following options and create the function.

Name: Campaign_event_trigger_function

Runtime: Python 3.9 or higher.

Replace the default script with the code from the GitHub repository, and then deploy your code by clicking on the “Deploy” button.

Assign permissions

In order for the Lambda function to trigger the Pinpoint campaign, you will need to add an inline policy to the IAM role that is attached to your Lambda function, by selecting Pinpoint as the service and PutEvents from the Write options. Specify your Pinpoint project (application) ARN as the resource to which access is granted.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "mobiletargeting:PutEvents"
            ],
            "Resource": "ARN of your Pinpoint project goes here."
        }
    ]
}

Create the CampaignHook Lambda function

This is the function that will be triggered by the CampaignHook. From your Lambda console, click on “Create function” and enter the basic information as shown below to create your function.

Name: CampaignHookFunction

Runtime: Python 3.9 or higher.

Next, replace your default code with the sample GitHub code, and then deploy your code by clicking on the “Deploy” button.
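The GitHub sample contains the actual enrichment logic; as a rough, hedged sketch of the FILTER-mode contract it follows, the hook receives the campaign’s endpoints and returns the (possibly modified or filtered) endpoint map. The lookup_company helper below is a hypothetical stand-in for your own data source:

def lambda_handler(event, context):
    # FILTER-mode CampaignHook: Pinpoint passes the campaign's endpoints and
    # expects the modified (or filtered) endpoint map back.
    endpoints = event.get('Endpoints', {})
    for endpoint_id, endpoint in endpoints.items():
        attributes = endpoint.setdefault('Attributes', {})
        # Derive a new attribute right before the send; attribute values are lists of strings.
        attributes['Company'] = [lookup_company(endpoint_id)]
    return endpoints

def lookup_company(endpoint_id):
    # Placeholder: replace with a real lookup (DynamoDB, a third-party API, etc.).
    return 'AnyCompany'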

Assign permissions

Next, add permissions for Amazon Pinpoint to invoke the Lambda function by running the command below from AWS CloudShell. Replace the Lambda function name and account number with yours.

aws lambda add-permission \
--function-name [YourCampaignHookLambdaFunctionName] \
--statement-id my-hook-id1 \
--action lambda:InvokeFunction \
--principal pinpoint.us-east-1.amazonaws.com \
--source-arn 'arn:aws:mobiletargeting:us-east-1:[YourAccountNumber]:apps/*'

You can also do this from the Lambda console, by clicking on “Configuration” and then scrolling down to “Resource based Policy” and by clicking on “Add permissions“.

Update Campaign settings to add the Campaign Hook

Now that the Lambda function that acts as the hook is created, and the Amazon Pinpoint service has been granted permission to invoke it, run the command below to update the campaign settings and add the CampaignHook. You can also set a default CampaignHook for ALL campaigns in the project by setting the CampaignHook property on the Project Settings via this API.

Replace the application-id (project id), campaign-id, and the arn of the Campaign Hook lambda function and run the command below. (You can find the Project ID by clicking on All Projects at the top-left of the Pinpoint Console. The Campaign ID can be found by opening your Pinpoint Project and then clicking Campaigns in the Pinpoint Console.)

aws pinpoint update-campaign --application-id [your-application-id-goes-here] \
--campaign-id [your-campaign-id-goes-here] \
--cli-input-json '{"ApplicationId": "","CampaignId": "","WriteCampaignRequest": {"Hook": {"LambdaFunctionName": "your-CampaignHook-Function-goes-here","Mode": "FILTER","WebUrl": ""}}}'

You can optionally run the command below to make sure that the campaign settings have been updated:

aws pinpoint get-campaign --application-id [your-application-id-goes-here] --campaign-id [your-campaign-id-goes-here]

Test your Campaign.

Go back to the Lambda function that you created to trigger the campaign in the “Create the Lambda function” step above. I have used the test event shown below. Update the application_id to reflect your project ID, change the email address to the email you verified earlier, and click the “Test” button.

{
    "application_id": "your application id",
    "endpoint_id": "223",
    "event_type": "CampaignHookEvent",
    "nextTestDate": "12/15/2025",
    "FirstName": "Jack",
    "email": "[email protected]",
    "userid": "Jack123"
}

You should now receive an email with the variables replaced by the values that were passed from your JSON payload. Further, you can see that the Company name was added to the endpoint by the CampaignHook Lambda and passed into the email template. If you have not received the email, please check the following:

  • The Lambda function ran without any errors
  • The LambdaHook function has the proper rights assigned to be invoked from Pinpoint
  • The From and To email addresses that you have used are verified in SES.

Clean up resources

Once you are satisfied with your setup and testing, you can now clean up the resources by following the steps below:

  • Delete your Amazon Pinpoint Project, Campaign and Segment.
    • aws pinpoint delete-campaign --application-id [your app id] --campaign-id [your campaign id]
    • aws pinpoint delete-segment --application-id [your app id] --segment-id [your segment id]
    • aws pinpoint delete-app --application-id [your app id]
  • Delete your Lambda functions
    • aws lambda delete-function --function-name CampaignHookFunction
    • aws lambda delete-function --function-name Campaign_event_Trigger_Function

Conclusion

By dynamically generating the attributes in real time, customers can now add greater levels of personalization within a single user message template. By invoking a Lambda function, you can perform custom compute logic, calculate new attribute values, and access external data stores to modify the campaign’s segment right before Amazon Pinpoint sends the message. The CampaignHook feature makes this possible, as explained in this blog, by running a few basic CLI commands to enable the feature on your Amazon Pinpoint campaign. You can read more about Amazon Pinpoint campaigns in the user guide documentation.

How to build LINE messaging into business communications

Post Syndicated from nnatri original https://aws.amazon.com/blogs/messaging-and-targeting/how-to-build-line-messaging-into-business-communications/

In today’s interconnected world, businesses need to communicate with their customers through multiple channels. This means using a variety of messaging apps, social media platforms, and other communication tools to reach customers where they are. One such platform that has gained immense popularity in select Asian markets is LINE. As the biggest social network in Japan, LINE offers businesses a unique opportunity to connect with customers in this region. Within Japan alone, LINE’s 2021 data shows 86 million users, constituting approximately 85% of Japan’s adult population. However, managing communication through multiple channels can be challenging for businesses.

That’s where Amazon Pinpoint comes in. Amazon Pinpoint is a flexible communication service for businesses that simplifies the process of sending targeted messages to customers across multiple channels. In this blog post, we’ll focus on how to integrate LINE with Amazon Pinpoint. This post is part of a series on integrating different communication channels with Amazon Pinpoint, and it is intended for both marketing operations and communication developers.

If you are already using LINE, this blog post will help you centralize management within Amazon Pinpoint. Additionally, if you are looking to integrate another messaging service with an open API, the steps outlined here will provide a helpful guide. Finally, if you’re a business looking to tap into Asian markets, this blog post is essential reading. By integrating LINE with Amazon Pinpoint, you’ll be able to reach your customers on the platform they are already using, providing seamless end-to-end customer engagements that greatly enhance the customer experience.

Note
LINE is a third-party service that is subject to additional terms and charges. Amazon Web Services isn’t responsible for any third-party service that you use to send messages with custom channels.

Why Integrate LINE with Amazon Pinpoint?

Integrating LINE with Amazon Pinpoint has several benefits for businesses:

  • Centralized communication management: With LINE integrated into Amazon Pinpoint, businesses can centralize the management of outbound communication channels and simplify their communication workflows.
  • Increased flexibility for marketing campaigns: With LINE added as a custom channel in Amazon Pinpoint, businesses can create targeted messaging campaigns and reach customers through multiple channels, including LINE. Along with Pinpoint journeys, businesses can craft end-to-end customer engagement journeys that start from one channel and end in another.
  • Access to LINE’s popular messaging platform: With LINE integrated into Amazon Pinpoint, businesses can tap into the app’s massive user base in select Asian markets and engage with their customers through a popular and widely used messaging platform. Having access to LINE’s demographics of approximately 50% office workers with high penetration into 20s-30s age band, brands can tap into this high-spending power segment to drive revenue for their products.

Architecture

This solution uses Amazon Pinpoint, AWS Lambda, Amazon API Gateway, Amazon Simple Storage Service (Amazon S3), AWS Secrets Manager, and the LINE Messaging API.

Line Pinpoint Solution Architecture

The solution architecture can be broken up into two main sections:

  • Steps 1-4 cover handling inbound user events and managing user data within Amazon Pinpoint.
  • Steps 5-8 cover how to send outbound campaigns via Amazon Pinpoint Custom Channel.
  1. The customer subscribes to the business’ LINE channel.
  2. The subscribe/unsubscribe event is received and checked via Amazon API Gateway.
  3. The edge-optimized Amazon API Gateway passes valid requests via a proxy integration to the backend Lambda.
  4. The backend Lambda compares the request body with the x-line-signature request header to confirm that the request was sent from the LINE Platform, as recommended by the LINE API documentation (a minimal sketch of this check follows the list). Afterwards, the Lambda function processes the user events:
    1. If the user subscribes to the channel, a new endpoint will be added to Amazon Pinpoint’s user database.
    2. If the user unsubscribes from the channel, the corresponding endpoint (identified by the LINE User ID) is deleted from Amazon Pinpoint’s user database.
  5. Amazon Pinpoint initiates a call to a Lambda function via Custom Channel with a payload. Of particular importance would be the Data field contained within the payload, which can be specified within the Amazon Pinpoint console to modify the content of the message.
  6. If the message contains image/audio/video files, the Lambda will request the file from the corresponding Amazon S3 buckets to be included for step 7. Amazon S3 then sends back the presigned URL containing the requested file(s).
  7. The Lambda function puts the message in the correct format expected by the LINE Messaging API and sends it over to the LINE Platform.
  8. The LINE Messaging API receives the request and processes the message content. If necessary, it will retrieve and download the file from Amazon S3 using the presigned URLs generated in step 6 then finally send the message to the corresponding user on the LINE Mobile App.
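
For reference, here is a minimal Python sketch of the signature check described in step 4. The function name is illustrative; the Lambda in the repository may implement it differently.

import base64
import hashlib
import hmac

def is_valid_line_signature(channel_secret, request_body, signature_header):
    # Recompute the HMAC-SHA256 digest of the raw request body using the
    # channel secret, base64-encode it and compare it with x-line-signature.
    digest = hmac.new(channel_secret.encode("utf-8"),
                      request_body.encode("utf-8"),
                      hashlib.sha256).digest()
    expected_signature = base64.b64encode(digest).decode("utf-8")
    return hmac.compare_digest(expected_signature, signature_header)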

Step-by-Step Deployment Guide

Prerequisites

To deploy this solution, you must have the following:

  1. An AWS account, with the appropriate AWS CLI profile.
    • Named Profile: Run aws configure with the --profile option. The following steps assume you have created a profile called line-integration to use with AWS CDK.
  2. Minimum Python v3.7, with pip and venv
  3. AWS CDK v2 installed.
  4. Docker Engine installed. You can download and install the appropriate Docker Desktop Distribution for your system via this link
  5. A LINE Account.
    • If you have never worked with the LINE Messaging API before, you should log in to the LINE Developers Console using one of the following accounts.
      • LINE account
      • Business account
    • Afterwards, you should create a new provider. Create Line provider
    • Within the provider page, you can then choose to create a new channel. For our Integration purposes, we will be choosing Messaging API channel type.
      Create Line channel

Preparation

The source code can be found in this GitHub Repository.

  1. Fork the GitHub Repo into your account. This way you can experiment with changes as necessary to fit your workload.
  2. In your local compute environment, clone the GitHub Repository and cd into the project directory.
  3. Run the following commands to create a virtual environment, activate it and install required dependencies.
python3 -m venv env \
&& source env/bin/activate \
&& python -m pip install -r requirements.txt

Deploy the CDK

  1. We can set the AWS CLI profile in CDK commands by adding the --profile flag. Run the following commands to bootstrap your AWS environment, synthesize the CDK template and deploy to your environment.
cdk bootstrap --profile LINE-integration \
&& cdk synth --profile LINE-integration  \
&& cdk deploy --profile LINE-integration 

Note
Enter y when prompted with Do you wish to deploy these changes (y/n)?

  1. After the deployment is done, the CDK template will output the API Gateway endpoint URL which takes the form of https://[********].execute-api.[region].amazonaws.com/prod/. Copy down this information as you will need it to set up the webhook connection later on.

Getting LINE Official Account Credentials

  1. Log in to LINE developer console.
    Login to Line account
  2. Once inside, choose the channel you’d like to have integrated with Amazon Pinpoint. This assumes that you’ve created a provider and a channel as mentioned in the Prerequisite section.
    Inside Line account console
  3. In the Basic settings tab, scroll down and note down the Channel Secret.
  4. In the Messaging API tab, scroll down and click on Edit under Webhook URL and enter the API Gateway endpoint URL you have noted down in step 5. Click on Update to save the changes.
    Line Webhook settings
    NOTE Once you have finished entering your Channel Secret token in step 14, you can return to this page to verify that your webhook URL is set up correctly.
  5. Finally, issue a Channel Access Token (at the bottom of the Messaging API tab) and note it down.
    Line channel access token settings

Registering Secrets in AWS Secrets Manager

  1. Navigate to the AWS Secrets Manager console. Make sure you’re in the same region as your CDK deployment region.
  2. Click on Secrets in the left side pane. You should find a secret with the name LINE_secrets
  3. Click on Retrieve Secret Value.
    Set Line secrets in Secrets Manager
  4. Then click on Edit:
    • Replace YOUR_CHANNEL_SECRET secret value with the channel secret you issued in step 10.
    • Replace the YOUR_CHANNEL_ACCESS_TOKEN secret value with the access token you issued in step 10 (a retrieval sketch follows this list).
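
For reference, a minimal boto3 sketch of how a backend Lambda could read these credentials at runtime. The secret name LINE_secrets comes from the stack; the JSON key names shown are assumptions, so check the deployed secret for the exact names.

import json
import boto3

secrets_client = boto3.client("secretsmanager")

def get_line_credentials(secret_name="LINE_secrets"):
    # Fetch and parse the JSON secret created by the CDK stack.
    response = secrets_client.get_secret_value(SecretId=secret_name)
    return json.loads(response["SecretString"])

# credentials = get_line_credentials()
# channel_secret = credentials["CHANNEL_SECRET"]              # assumed key name
# channel_access_token = credentials["CHANNEL_ACCESS_TOKEN"]  # assumed key name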

Marketing Operations Demonstration

Once you’ve successfully deployed the CDK and configured your secrets, you can immediately get started sending communication campaigns to your customers.

LINE supports multimedia messaging formats, meaning that you can choose to send texts, images, audio and even video files to your customers as part of your campaigns. You just need to make sure that your customers have subscribed to your channel.

Create a segment of subscribed users

The deployed solution has integrated user database management with Amazon Pinpoint so once users start subscribing to your LINE channel, they will be added as endpoints. To start filtering out who we should send to, you can create segments of your subscribers.

  1. Navigate to the Amazon Pinpoint console.
  2. On the All projects page, a project named Line-Pinpoint-Project has been created for you.
  3. On the left-side pane, choose Segments and then Create a segment.Create Segment
  4. Give your segment a descriptive name and add the appropriate criteria to filter down to your target audience (E.g.: filter down to customers who have Custom channel type).Set segment attributes
  5. Confirm the number of endpoints that you will be sending in the Segment estimate section matches your expectations and then choose Create segment.

Upload media files for campaign

If you’d like to use your own image, audio and video files for the campaign, follow along with this section. Otherwise, proceed to the Create Campaigns section (step 9).

Note
Depending on the media type, there are restrictions imposed such as maximum file size and file format extensions. You can find more information here.

  1. Navigate to the Amazon S3 console.
  2. Here you will find a list of buckets which corresponds to the type of media files you want to upload:
    • part-1-stack-images3bucket...: contains image files.
    • part-1-stack-audios3bucket...: contains audio files.
    • part-1-stack-videos3bucket...: contains both video and image cover files.
  3. Upload the corresponding files that you want to use for your campaign by choosing Upload.
    Asset bucket image

Create campaigns

  1. In the navigation pane, choose Campaigns, and then choose Create a campaign.
  2. Give your campaign a descriptive name. Under Campaign Type choose Standard campaign and under Channel, choose Custom. Click Next to confirm.
    Campaign Creation
  3. On the Choose a segment page, choose the segment that you created in step 5, and then choose Next.
  4. In Create your message, depending on the type of message that you want to send, choose the corresponding Lambda function. Your function should be named part-1-stack-send[text/image/audio/video]lambda...
    Choose Lambda function
  5. In the custom data section, you can choose to leave it blank, which will trigger the campaign to send the sample message.
  6. Otherwise, depending on the type of message, you can customize your campaigns to send the content that you want by inputting the following values into Custom Data (a parsing sketch follows this list).
    • Text Campaign: Enter the Text Message that you want to send.
    • Image Campaign: Enter the name of the image file you’ve uploaded in step 8 including the extension name (E.g.: sample_image.png)
    • Audio Campaign: Enter the name of the audio file you’ve uploaded in step 8 including the extension name and the duration of the audio file in milliseconds separated by a comma (E.g.: sample_audio.mp3,5000)
    • Video Campaign: Enter the name of the video file you’ve uploaded in step 8 including the extension name and the name of the image file you’ve uploaded in step 8 including the extension name, separated by a comma (E.g.: sample_video.mp4,sample_image.png)
  7. Choose Next and configure when to send the campaign depending on your needs. Once done, choose Next again.
  8. On the Review and launch page, verify all your information is correct and then click on Launch campaign.
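
The following is a minimal sketch of how a custom-channel Lambda could interpret the Custom Data formats listed above and turn them into LINE Messaging API message objects. The presign helper is hypothetical (it stands in for the presigned URL generation in step 6 of the architecture); the Lambdas in the repository may structure this differently.

def build_line_message(message_type, custom_data, presign):
    # Translate the Custom Data conventions above into LINE message objects.
    if message_type == "text":
        return {"type": "text", "text": custom_data}
    if message_type == "image":
        url = presign(custom_data)
        return {"type": "image", "originalContentUrl": url, "previewImageUrl": url}
    if message_type == "audio":
        file_name, duration_ms = custom_data.split(",")
        return {"type": "audio", "originalContentUrl": presign(file_name),
                "duration": int(duration_ms)}
    if message_type == "video":
        video_name, cover_image_name = custom_data.split(",")
        return {"type": "video", "originalContentUrl": presign(video_name),
                "previewImageUrl": presign(cover_image_name)}
    raise ValueError("Unsupported message type: " + message_type)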

That’s it! Your message will be sent through LINE to the designated recipients.

Cleanup

To delete the sample application that you created, use the AWS CDK.


cdk destroy

You’ll be asked:


Are you sure you want to delete: part-1-stack (y/n)?

Hit “y” and you’ll see your stack being destroyed.

What’s Next?

In conclusion, integrating LINE with Amazon Pinpoint provides businesses with a powerful tool to centralize their communication management, create more flexible marketing campaigns, and tap into LINE’s massive user base. With the step-by-step guide and demo provided in this blog post, you can easily get started with integrating LINE with Pinpoint and start leveraging its benefits for your business.

The solution presented in this blog post serves as a template that you can develop and customize to make it your own:

  1. Adding additional message types: The LINE messaging platform is famous for its rich messaging types and format. The deployed solution only utilized a fraction of what is available. You can add additional Lambda functions to send Stickers, Locations, Image Maps, Buttons or Carousel and more.
  2. Orchestrate LINE with other channels: Using Amazon Pinpoint Journeys, you can now meet the customer where they are most likely to see and respond to your message. Create a journey that starts with an SMS, send targeted communications based on yes/no or multivariate splits via emails and seal the deal with LINE. With Pinpoint and journey custom channel input and response support, you can craft the perfect omni-channel journey for your customers.
  3. Watch this space: Do stay tuned for the next blog post in this series, where we’ll show you how to manage inbound communications through LINE using Amazon Connect and Amazon Lex bots.

Push notification engagement metrics tracking

Post Syndicated from Pavlos Ioannou Katidis original https://aws.amazon.com/blogs/messaging-and-targeting/push-notification-engagement-metrics-tracking/

In this blog you will learn how to track and attribute Amazon Pinpoint push notification events for Campaigns and Journeys via API.

Amazon Pinpoint is a multichannel customer engagement platform allowing you to engage with your customers across 6 different channels. Amazon Pinpoint’s push notification channel can send messages to your mobile app users via Firebase Cloud Messaging (FCM), Apple Push Notification service (APNs), Baidu Cloud Push, and Amazon Device Messaging (ADM).

Push notifications are a preferred channel of communication, as they notify your app users even when they are not in your app. This increases app engagement and the probability of customers converting. Additionally, users who download your app but don’t register can still be targeted and receive your messages.

Using Amazon Pinpoint’s push notification channel you can engage users with highly curated content. The messages can be personalized with customer data stored in Amazon Pinpoint, images, deep links and custom alert sounds – read more here. Amazon Pinpoint Campaigns and Journeys enable marketers to schedule communications and build multichannel experiences, and for developers Amazon Pinpoint offers a rich API to send messages. By default, all Amazon Pinpoint accounts are configured to send 25,000 messages per second, which can be increased by requesting a quota increase.

Measuring success of your communications is paramount for optimizing future customer engagements. Amazon Pinpoint push notifications offer the following three events:

  • _opened_notification – This event type indicates that the recipient tapped the notification to open it.
  • _received_foreground – This event type indicates that the recipient received the message as a foreground notification.
  • _received_background – This event type indicates that the recipient received the message as a background notification.

To track the above events from your mobile application, it is recommended to use AWS Amplify’s push notification library, which is currently available only for React Native.

Solution description

This blog provides an alternative to AWS Amplify for Amazon Pinpoint push notification tracking. Specifically, it utilizes Amazon Pinpoint’s Events API operation, which can be used to record events your customers generate on your mobile or web application. The same API operation can be used to record push notification engagement events.

The Events API operation request body is populated with the Campaign or Journey attributes received via the push notification payload metadata. These attributes help Amazon Pinpoint attribute the events back to the correct Campaign or Journey.

This blog provides examples of campaign, journey and transactional push notification payloads and how to correctly populate the Events API operation. Furthermore, it shares an architecture to securely call Amazon Pinpoint’s API from your application’s frontend.

Prerequisites

This post assumes that you already have an Amazon Pinpoint project that is correctly configured to send push notifications to your various endpoints using Campaigns or Journeys. Refer to the getting started guide and setting up Amazon Pinpoint mobile push channels for information on how to set up your Amazon Pinpoint project.

You will also need the AWS Mobile SDKs for the respective platform of your apps. The following are the repositories that can be used:

Implementation

The push notification payload received from the application differs between campaign, journey and transactional messages. This blog provides examples for campaign, journey and transactional message payloads as well as how to populate the Amazon Pinpoint Events API request body correctly to report push notification tracking data to Amazon Pinpoint.

Push notification message payload examples:

Campaign payload example:

{
   "pinpoint.openApp":"true",
   "pinpoint.campaign.treatment_id":"0",
   "pinpoint.notification.title":"Message title",
   "pinpoint.notification.body":"Message body",
   "data":"{\"pinpoint\":{\"endpointId\":\"endpoint_id1\",\"userId\":\"user_id1\"}}",
   "pinpoint.campaign.campaign_id":"5befa9dc28b1430cb0469554789e3f99",
   "pinpoint.notification.silentPush":"0",
   "pinpoint.campaign.campaign_activity_id":"613f918c7a4440b69b09c4806d1a9357",
   "receivedAt":"1671009494989",
   "sentAt":"1671009495484"
}

Journey payload example:

{
   "pinpoint.openApp":"true",
   "pinpoint.notification.title":"Message title",
   "pinpoint":{
      "journey":{
         "journey_activity_id":"ibcF4z9lsp",
         "journey_run_id":"5df6dd97f9154cb688afc0b41ab221c3",
         "journey_id":"dc893692ea9848faa76cceef197c5305"
      }
   },
   "pinpoint.notification.body":"Message body",
   "data":"{\"pinpoint\":{\"endpointId\":\"endpoint_id1\",\"userId\":\"user_id1\"}}",
   "pinpoint.notification.silentPush":"0"
}

Transactional payload example:

Note that the transactional payload is the same for messages sent to a push notification token and to an endpoint ID. Additionally, the pinpoint.campaign.campaign_id is always set to _DIRECT.

{
   "pinpoint.openApp":"true",
   "pinpoint.notification.title":"Message title",
   "pinpoint.notification.body":"Message body",
   "pinpoint.campaign.campaign_id":"_DIRECT",
   "pinpoint.notification.silentPush":"0",
   "receivedAt":"1671731433375",
   "sentAt":"1671731433565"
}

Recording push notification events

To record push notification events from your mobile or web application, we will leverage the AWS Mobile SDKs or the Amazon Pinpoint Events API. To prevent inaccurate metrics, such as double counting, it is recommended to use the appropriate endpoint_id, as Pinpoint uses it for de-duplication. Below you can find examples for both the Events REST API and the put_events AWS Python SDK (Boto3) operation. Visit this page for more information on how to create a signed AWS API request.

Campaign event example – REST API:

Required fields: endpoint_id1, EventType, Timestamp, campaign_id and campaign_activity_id

POST https://pinpoint.us-east-1.amazonaws.com/v1/apps/<Pinpoint-App-id>/events

{
   "BatchItem":{
      "<endpoint_id1>":{
         "Endpoint":{},
         "Events":{
            "<event_id>":{
               "EventType":"_campaign.opened_notification",
               "Timestamp":"2022-12-14T09:50:00.000Z",
               "Attributes":{
                  "treatment_id":"0",
                  "campaign_id":"5befa9dc28b1430cb0469554789e3f99",
                  "campaign_activity_id":"613f918c7a4440b69b09c4806d1a9357"
               }
            }
         }
      }
   }
}

Campaign event example – Python SDK:

Required fields: ApplicationId, endpoint_id, EventType, Timestamp, campaign_id and campaign_activity_id

import boto3

client = boto3.client("pinpoint")
response = client.put_events(
  ApplicationId = "<Pinpoint-App-id>",
  EventsRequest = {
    "BatchItem": {
      "<endpoint_id1>": {
        "Endpoint": {},
        "Events": {
          "<event_id>": {
            "EventType": "_campaign.opened_notification",
            "Timestamp": "2022-12-14T09:50:00.000Z",
            "Attributes": {
              "treatment_id": "0",
              "campaign_id": "5befa9dc28b1430cb0469554789e3f99",
              "campaign_activity_id": "613f918c7a4440b69b09c4806d1a9357"
            }
          }
        }
      }
    }
  }
)
print(response)

Journey event example – REST API:

Required fields: endpoint_id, EventType, Timestamp, journey_id and journey_activity_id

POST https://pinpoint.us-east-1.amazonaws.com/v1/apps/<Pinpoint-App-id>/events

{
   "BatchItem":{
      "<endpoint_id1>":{
         "Endpoint":{},
         "Events":{
            "<event_id>":{
               "EventType":"_journey.opened_notification",
               "Timestamp":"2022-12-14T09:50:00.000Z",
               "Attributes":{
                  "journey_id":"5befa9dc28b1430cb0469554789e3f99",
                  "journey_activity_id":"613f918c7a4440b69b09c4806d1a9357"
               }
            }
         }
      }
   }
}

Journey event example – Python SDK:

Required fields: ApplicationId, endpoint_id1, EventType, Timestamp, journey_id and journey_activity_id

import boto3 
client = boto3.client("pinpoint")
response = client.put_events(
  ApplicationId = "<Pinpoint-App-id>",
  EventsRequest = { 
    "BatchItem": {
      "<endpoint_id1>": {
        "Endpoint": {},
        "Events": { 
          "<event_id>": { 
            "EventType":"_journey.opened_notification",
            "Timestamp": "2022-12-14T09:50:00.000Z",
            "Attributes": {
              "journey_id":"5befa9dc28b1430cb0469554789e3f99",
              "journey_activity_id":"613f918c7a4440b69b09c4806d1a9357"
            }
          }
        }
      }
    }
  }
)
print(response)
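
To connect the payload examples with the Events API requests above, here is a minimal Python sketch (the function name is illustrative) that derives the endpoint ID, event type and attributes from a received campaign or journey payload:

import json

def build_open_event(payload):
    # The endpoint ID is embedded in the "data" field of both payload types.
    endpoint_id = json.loads(payload["data"])["pinpoint"]["endpointId"]

    if "pinpoint.campaign.campaign_id" in payload:
        event_type = "_campaign.opened_notification"
        attributes = {
            "campaign_id": payload["pinpoint.campaign.campaign_id"],
            "campaign_activity_id": payload["pinpoint.campaign.campaign_activity_id"],
            "treatment_id": payload.get("pinpoint.campaign.treatment_id", "0")
        }
    else:
        journey = payload["pinpoint"]["journey"]
        event_type = "_journey.opened_notification"
        attributes = {
            "journey_id": journey["journey_id"],
            "journey_activity_id": journey["journey_activity_id"]
        }
    return endpoint_id, event_type, attributes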

Transactional event:

Amazon Pinpoint doesn’t support push notification metrics for transactional messages. Specifically, transactional messages don’t offer a field that can be used to attribute engagement events. These engagement events can still be recorded using Amazon Pinpoint’s Events API. However, unlike Campaign and Journey events, the transactional push notification message payload doesn’t provide an identifier, such as a Campaign ID or Journey ID, that can be used as an Amazon Pinpoint event attribute for data reconciliation purposes.

Next steps

Requests to the Amazon Pinpoint Events API must be signed using AWS Signature version 4. We recommend using the AWS Mobile SDKs which handle request signing on your behalf. You can use the AWS Mobile SDKs with temporary limited-privilege Amazon Cognito credentials. For more information and examples, see Getting credentials.

 

About the Authors

Franklin Ochieng

Franklin Ochieng

Franklin Ochieng is a senior software engineer on the Amazon Pinpoint team. He has over 7 years of experience at AWS building highly scalable systems that solve complex problems for our customers. Outside of work, Frank enjoys getting out in nature and playing basketball or pool.

Pavlos Ioannou Katidis

Pavlos Ioannou Katidis

Pavlos Ioannou Katidis is an Amazon Pinpoint and Amazon Simple Email Service Senior Specialist Solutions Architect at AWS. He enjoys diving deep into customers’ technical issues and helping them design communication solutions. In his spare time, he enjoys playing tennis, watching crime TV series, playing FPS PC games, and coding personal projects.

How to send web push notifications using Amazon Pinpoint

Post Syndicated from arrohan original https://aws.amazon.com/blogs/messaging-and-targeting/how-to-send-web-push-notifications-using-amazon-pinpoint/

How to send push notifications on any website using AWS messaging tools

Web Push Notifications (also known as browser push notifications) are messages from a website you receive in your browser. These messages are intended to be rich, contextual, timely, personalized and best used to engage, re-engage, and retain website visitors. For instance, as a website owner you could use web push notifications to notify users about sales, important updates or new content on your website.

How are web push notifications different from native app push notifications?

Push notifications are short messages sent via mobile applications and displayed directly on the user’s screen, providing timely information such as order status, promotions, or relevant news in the application.

Web push notifications are simply push notifications sent via web browsers (the browser application on the device), and they work across platforms – desktop, mobile and tablet. They are a newer channel than native app push notifications, and have now become a part of the modern marketing strategy alongside native app push, email and SMS.

In the case of mobile apps, the user must install the application to receive push notifications. In the case of web push, there is no need to download any software—it just takes one click on your website.

Why are Web Push Notifications useful?

Let’s consider a real-world example. Suppose you run an e-commerce website where customers can purchase products. Once the purchase has been made, customers would be interested in getting real-time updates on where the package is in transit, when it is likely to be delivered, a confirmation that the shipment has been delivered, and so on. Web push notifications can be an excellent way of providing such updates. Accessing email on mobile devices is often unwieldy, and SMS messages cannot support images and are constrained in length (they also typically cost more money to send!). Push notifications are perfect for such a use case. Until now, the major constraint was that it would require users to install your app on their device. Web push notifications give website owners and customers the power of push notifications without any need for driving app installs.

Marketers in a variety of sectors like travel, publishing, restaurant & delivery, finance and insurance can use push notifications to improve their down-the-funnel conversions. From new content alerts to limited-time promotions to upcoming events, push messages are short, crisp and drive engagement, conversion, and retention. A short search on the AWS blogs website gives us a number of examples of businesses who have created value for their customers with the help of push notifications. Some of the key advantages of web push notifications are:

  • Easy opt-in model: Unlike other marketing channels like email or SMS, web push notifications offer users a seamless opt-in experience: users simply select ‘Allow’ on a browser permission prompt. Users do not have to worry about sharing their personal data, like their name, email, or phone number, nor do they have to go to the play store/app store and install an app on their device.
  • Increased Engagement:  Push notifications appear on a user’s desktop or mobile screen and are quick to grab attention. Since push messages are real time and have high visibility – they typically enjoy higher “Click Through Rate (CTR)” as compared to other channels like SMS or email.
  • Reach users even when they are not on your website: Web Push Notifications from your website are delivered and shown to the customer even if the user is visiting some other site or on some other app. In this respect (and most others), it is quite similar to app push notifications. Even if subscribers were offline when you sent your push campaign, they will get the push message delivered to them the next time they come online.
  • No need for users to install native apps: One of the most compelling reasons for installing mobile apps is that users could stay updated with the latest and the greatest – thanks to app push notifications. The additional cost of going to the play store/app store and installing the app is something that would often discourage users. This is especially true for countries and regions where users are still on lower-end phones with limited storage space. Users would often have to uninstall apps (which might include yours too) that they do not use frequently in order to make space for other things.
  • Makes websites richer and more memorable: If you ask a room of developers what mobile device features are missing from the web, push notifications are always high on the list. This is no longer the case since browsers are increasingly adding support for web push notifications and this has offered website owners a powerful cross platform (Desktop & Mobile, Android & iOS) alternative as against developing and maintaining different native apps for different platforms. Web push notifications even appear quite similar to native mobile push on most smartphones.
  • Lower Cost: Unlike channels like SMS, sending web push notifications is absolutely free as browsers themselves offer support for it by adhering to the web push protocol. The only costs incurred will be that of sending push notifications as per the Pinpoint pricing policy.
  • Popular browsers support web push: Google Chrome, Firefox, Opera and Edge support web push on both mobile and desktop. What’s more, the support for web push is continuously getting better. Refer to this link for the latest support status matrix across browsers and form factors.

What is Amazon Pinpoint?

Amazon Pinpoint is an AWS service that provides scalable, targeted multichannel communications. Amazon Pinpoint enables companies to send messages to customers through SMS, push notifications, in-app notifications, email, and voice channels. To learn more about Amazon Pinpoint, visit the website and documentation.

Web Push support on Firebase Cloud Messaging (FCM):

Firebase uses cloud services for its notification services on Android, iOS and the web. Firebase Cloud Messaging, or FCM, runs on the basic principle of tokens, which are uniquely generated for each device and later used for sending messages to the respective devices. There are two key advantages of using FCM for sending web push notifications:

  • Abstracts away the complexity of onboarding to the web push protocol for push messages: Sending web push notifications directly, without any third party in between, requires your website to add support for the web push protocol. Adherence to the web push protocol requires website owners to perform some steps specific to web push, like adding VAPID headers and payload encryption of push messages. This would be additional work for website owners, especially for businesses that are already onboarded to FCM for sending native app push notifications. FCM server-side APIs for sending web push notifications work pretty much the same way as they do for native apps. They abstract away the additional complexity of sending web push messages.
  • Send push notifications from Amazon Pinpoint via FCM: Amazon Pinpoint already supports integration with FCM, refer to documentation. Similar to how we add a FCM project in Pinpoint to send push messages to native android apps, in this blog post we will see how a similar integration can be leveraged to send web push notifications.

Advantages of sending Web Push Notifications with Amazon Pinpoint:

Now at this point, you might be thinking, Web push notifications can go a long way towards delighting customers and FCM already abstracts the complexities of sending web push. So why do I need Amazon Pinpoint?

Well, integration with Amazon Pinpoint offers a number of advantages. Here are a few:

  • Map FCM tokens to actual users and web app ‘installs’: FCM would give you tokens for each user on your website who subscribes to web push; roughly speaking, an FCM token for each web app install with permission to send push messages. To be able to send messages to these users, we would need to store the FCM tokens for each user/web app install/browser instance. Amazon Pinpoint treats each browser instance as an endpoint and enables you to save the push tokens in the same way in which we would store native push tokens/mobile numbers/email addresses, i.e., as a primary identifier for that endpoint. This enables us to send messages to Pinpoint endpoints without caring about the underlying complexity of storing and managing push tokens (a server-side sketch follows this list).
  • Intelligently send web push, map user attributes to push tokens: Along with the push token, each pinpoint endpoint can also store other attributes like device characteristics, user Id and user attributes. This helps us to create dynamic and complex segments which can be used to send targeted web push notifications.
  • It is essentially the same as sending android native push: Create an FCM project, create FCM tokens, Create Pinpoint endpoints with the tokens, send push campaigns to those endpoints. Swap out native android code with service workers, JavaScript on the client and you get web push. It really is that simple.
  • Web push, native push, SMS or emails. One stop shop for reaching out to users on all channels: Pinpoint becomes your single backend for reaching out to users across multiple channels. For app users, send them app push, for users who prefer the web, you have web push.
  • Leverage Pinpoint features like Campaign Management, Events, Analytics and Segments: Read up about Amazon Pinpoint. It has a lot of great features which can help you better engage your users.
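
For reference, a minimal server-side boto3 sketch of registering a web push FCM token as a Pinpoint endpoint (the demo app does this from the client via Amplify; the application ID, endpoint ID and user ID below are placeholders):

import boto3

pinpoint = boto3.client("pinpoint")

# Web push tokens are stored on the same GCM channel type as native Android tokens.
pinpoint.update_endpoint(
    ApplicationId="YOUR_PINPOINT_APP_ID",       # placeholder
    EndpointId="browser-install-id-123",        # hypothetical: one per browser install
    EndpointRequest={
        "ChannelType": "GCM",
        "Address": "FCM_WEB_PUSH_TOKEN",        # token returned by the FCM JS SDK
        "User": {"UserId": "example-user-id"},  # optional: map the token to a user
        "Attributes": {"Platform": ["web"]}     # optional attributes for segmentation
    }
)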

In this blog we will see how to send web push notifications using Amazon Pinpoint on a website built using AWS Amplify.

 Overview of solution

Enable web push by using FCM as an intermediary service and Pinpoint as an app server (map FCM tokens to actual users) and a push campaign management tool. Integrate web push protocol, FCM and Amazon Pinpoint.

Overview of how to setup Web Push - registering the customer

Overview of how to setup Web Push - sending push notifications

Walkthrough

In this blog post, we will create a simple demo website using Amplify which can be used to create web push subscriptions and also receive web push messages. We will integrate this website with FCM js sdk and Amazon Pinpoint to store the FCM push tokens on Pinpoint. Later we will see how to send web push notifications using Amazon Pinpoint with FCM acting as an intermediary.

The above can be broken down into the below simple and independent steps:

  • Create a project on FCM.
  • Generate web push notifications server keys on FCM.
  • Create a simple web app (website) using Amplify
  • Create an Amazon Pinpoint project. This is a one-line command which will be done as part of Amplify web app setup.
  • Make your amplify website web push capable. In this step we will also integrate with the FCM sdk for web push.
  • Configure the Pinpoint project and integrate it with FCM. It just involves adding the FCM server key to Pinpoint.
  • Go to the Amazon Pinpoint console and send a test web push message from your website. And we are done!

You can check out my demo website here.

The source code for this demo website (and the blog) is available here.

Prerequisites – Essentials

For this walkthrough, you should have the following prerequisites:

Prerequisites – Recommended

In addition to the necessary prerequisites mentioned above, I would highly recommend readers to go through the below in order to derive maximum value from this blog post.

  • Web Push fundamentals: Some basic reading up on web push notifications and going through a couple of relevant code samples. It is not compulsory to implement and understand everything, but it would be beneficial to have an elementary understanding of service workers, permissions, push subscriptions and notifications apis.
    • Introduction: Some of the sections are a bit detailed and complex, you need not go through all the sections completely at once. However, at least go through the overview and the how push works sections carefully.
    • Simple Code demo with explanations to help you get started.
  • FCM client-side code : You need not go through the send Message sections since we will not directly use FCM apis or the console. Instead, we will use the Pinpoint console to manage our push campaigns.
  • Building web apps with amplify: By the end of the tutorial, you should get clarity on how to build and host web apps using amplify. It will also help you become familiar with the amplify cli tool.
  • Read up on Amazon Pinpoint.

Setting up the demo web app

Let’s deploy the demo web app using AWS Amplify to see how all the parts come together.

Clone the code for the sample web app

git clone ssh://git.amazon.com/pkg/ArrohanWebPushPoc (branch: PinpointBlog) <github_link>

Create an FCM account and a project on the FCM developer console, on the FCM project add web push as a channel

It is possible that Firebase may change the UI of the console in the future so the given screenshots may not be exactly reflective of the UI, but the broad steps would remain the same.

  • Under “Engage”, click on the “Cloud Messaging” tab. The page URL should typically be of the form: https://console.firebase.google.com/u/0/project/<name_of_your_project>/notification.

Setting up web push - Getting the firebase push config

  • Under the option “Add an app to get started”, Click on the “web/javascript” (the one with the </> symbol) app.
  • Once you have created the project, go to project settings. Click on the General tab. Replace the values in firebaseConfig in main.js with the actual values for your project.

Setting up web push - Copying the firebase push config

Setting up web push - Code pointer for the firebase push config

Generate a public-private key pair for the FCM cloud messaging project

  • Under project settings, switch to the cloud messaging tab. Click generate key pair under web push certificates to generate a public-private key pair.

Setting up web push - Generate a public-private vapid key pair

  • Replace the <YOUR PUBLIC KEY> in the file main.js in the source code with the vapid public key you generated in the previous step.

Setting up web push - Copying the FCM server key

  • Note the Server key, you will need it during pinpoint project setup.

Setup an Amplify web app and integrate with pinpoint

  • Clone the code in the given repo, replace your FCM config and keys. Run npm install.
    • In case you face build errors due to package versions getting outdated (firebase, especially gets updated often, sometimes with breaking changes), please update the dependencies to the latest version. This post offers an easy way to identify outdated dependencies and update them.
  • Setup an Amplify web app. Note, for the purpose of this demo, you just need to setup a simple static website. Simply run amplify init. Enter the required details, the default config should work fine.

Setting up the example app for web push using Amazon Pinpoint

  • Create a pinpoint project and integrate with our web app through amplify cli:
    • Create a pinpoint project: Run amplify add analytics. Choose Amazon Pinpoint as the analytics provider and accept all defaults.
    • Please note when you add “analytics” to the project you will get a prompt which says something like – “Apps need authorization to send analytics events. Do you want to allow guests and unauthenticated users to send analytics events? (We recommend you allow this when getting started)” – Please accept and answer Yes when you get this.
    • Push to AWS: Run amplify push. A configuration file (aws-exports.js) will be added to the source directory. Notice, we are calling this file from our main.js file.

Setting up the example app for web push using Amazon Pinpoint

Pinpoint Project setup

  • Get the Server key of the FCM project you created earlier.

Setting up web push - Add FCM key to Pinpoint console

Run the web app and subscribe for push notifications

  • Run npm start, our web app will be running on http://localhost:8080/index.html
  • Click on enable push messages and click allow/accept on the browser permission prompt which follows. Once it is enabled, you will see an FCM token on the page; copy the token.

How the example app looks

Send a web push notification from the Amazon Pinpoint console

  • Open the pinpoint project you created in the previous step on the Pinpoint console. Click on your project and then go to test messaging. The process is exactly the same as the one for native apps described here. Under “destination type” select “Device Tokens” and paste the FCM token you copied in the previous step.

Sending a web push from the Pinpoint console

  • Fill in title, body and optionally URL (“Go to a URL” under “Actions”). Click on Send Message, you should get a push message on your browser.

How an example web push notification looks like on Desktop

Next steps

  • Host your app. Simply run amplify add hosting followed by amplify publish. Remember that for web push (service workers) to work, your site must be served over HTTPS.

Deploying the example app for web push using AWS Amplify

  • Create segments, campaigns and journeys on pinpoint and try sending web push messages through them.

Code Walkthrough

  • Gitfarmlink: https://code.amazon.com/packages/ArrohanWebPushPoc/trees/heads/PinpointBlog
  • File wise description:
    • package.json: Simple npm config file. It includes the list of dependencies and their versions used by our web app. For our use case, all we need is webpack and AWS Amplify.
    • package-lock.json: Auto generated config file generated by npm after resolving modules and package.json.
    • aws-exports.js: Auto generated configuration file created by Amplify cli. This file contains the configuration and endpoint metadata used to link your front end to your backend services. It will be structured similar to the sample config file.
    • webpack.config.js: Simple webpack configuration file
    • src: The folder which contains the source code for our web app. It contains:
      • service-worker.js: The service worker that we register for our website which is used to display push notifications. In the service worker we parse the notification payload sent through pinpoint and call the notification apis to display push notification with the appropriate fields.
      • index.html: The website html.
      • main.js: The heart of the web app. It does permission handling, push subscription management and communicates with FCM and Amazon Pinpoint.
      • images/icon-192x192.png: static icon that we display on our push messages. This would essentially be your website logo.

Conclusion

This small demo shows how we can send web push notifications using Amazon Pinpoint. As next steps to come up with an actual production ready solution, you can look into the following:

  • Develop deeper understanding and expertise on web push
  • Richer and smarter push notification: Add big images, action buttons, replace notifications using tags (for example, sports score updates) and explore other features in the show notifications api.
  • Smart push notifications: add custom business logic in the payload. Hint: use the “body” (“pinpoint.notification.body“) field on the pinpoint console to send a custom json string.
  • Driving more subscriptions: Leverage Amplify Analytics to track how users interact with the push subscribe UI. Think of where and how you might ask users to subscribe to drive maximum engagement.
  • Easy unsubscribe: Allow users an easy option to disable push notifications without having to block you from browser settings. Also, make sure that you are disabling that endpoint on Pinpoint. Hint: use the updateEndpoint API and pass OptOut as ‘ALL’ (see the sketch after this list).
  • Targeted and personalised push notifications: Leverage Pinpoint segments to send users push notifications according to their interests and requirements. Hint: add user data to endpoints and use it to filter and create targeted segments.
  • Campaign management: Leverage pinpoint features like segments, analytics, campaigns, journeys and more!
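
The unsubscribe hint above refers to the client-side Amplify updateEndpoint call; a minimal server-side boto3 equivalent looks like the following (IDs are placeholders):

import boto3

pinpoint = boto3.client("pinpoint")

# Mark the endpoint as opted out of all messages when the user disables web push.
pinpoint.update_endpoint(
    ApplicationId="YOUR_PINPOINT_APP_ID",  # placeholder
    EndpointId="browser-install-id-123",   # hypothetical endpoint ID
    EndpointRequest={"OptOut": "ALL"}
)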

Project Cleanup

In this section we will quickly go over the steps to delete the resources we created for this demo to make sure that we do not incur any charges.

  • Cleaning up all AWS resources including the Amazon Pinpoint project, S3 buckets for hosting (and any other resources you may have added): Simply run amplify delete from the project directory on your command line.
  • Cleaning up the FCM Project: Refer to the FCM support page for the steps to delete a project – https://support.google.com/firebase/answer/9137886 .
    • Open the project settings page: The URL will be of the form https://console.firebase.google.com/u/0/project/<your_project_identifier>/settings/general
    • Click on the delete project button at the bottom of the page.

Build AI and ML into Email & SMS for customer engagement

Post Syndicated from Vinay Ujjini original https://aws.amazon.com/blogs/messaging-and-targeting/build-ai-and-ml-into-email-sms-for-customer-engagement/

Build AI and ML into Email & SMS for customer engagement

Customers engage with businesses through various channels like email, SMS, push, and in-app. With the availability and ease of use of mobile phones, businesses can use 2-way Short Message Service (SMS) messaging to engage with their customers. Text messaging does not need applications and provides immediate interaction with your customers. Amazon Pinpoint enables businesses and organizations to interact in 2-way SMS conversations with their customers. Since it is not practical and scalable for organizations to have people responding to millions of their customers’ texts, we can leverage Amazon Lex, which helps build conversational AI into the 2-way SMS. Amazon Lex is a fully managed artificial intelligence (AI) AWS service with advanced natural language models to design, build, test, and deploy conversational interfaces in applications. Machine Learning (ML) is used in digital marketing to help businesses detect patterns in customer behavior.

Today, if customers want to know the latest status of their order, they have to send an email, which is hard for businesses to monitor and respond to; it is also time consuming for the customer to call regarding their order status and expensive for businesses to field those calls.

This blog post shows how you can elevate your customer’s experience using Amazon Pinpoint’s omni-channel capabilities, Amazon Lex’s AI powered chat, and ML-powered personalization using Amazon Personalize.

The solution presented in this blog helps resolve all the above issues. The example I have used to depict this is one where a customer orders a bike and, since the delivery has been delayed, wants to get timely updates on the progress. He has been given a phone number by the bike company to text with any questions. This solution elevates the customer’s experience by providing timely updates, checking the latest status from the database, and also sending additional product recommendations, predicting what the customer might need.

Architecture

This solution uses Amazon Pinpoint, Amazon Lex, AWS Lambda, Amazon DynamoDB, Amazon Simple Notification Service, and Amazon Personalize.

AWS architecture diagram AI/ML, Email, SMS.

  1. The customer sends a message to the number provided by the store asking about their order status.
  2. Pinpoint 2-way SMS has an SNS topic tied to it.
  3. The SNS topic relays the message to the Lex integration Lambda.
  4. This Lex integration Lambda handles the integration between Pinpoint and Lex.
  5. When the customer checks on their order status, Lex taps into the fulfillment lambda that is tied to it.
  6. That Lambda checks the order status in the DynamoDB table and sends it back to Lex (a simplified sketch follows this list).
  7. Lex sends the order details to Amazon Pinpoint and Amazon Pinpoint delivers the SMS with the order details to the customer’s phone number.
  8. Amazon Lex lets fulfillment Lambda know to send an email to the customer with the order details.
  9. The fulfillment Lambda creates an event called ‘OrderStatus’ for the Amazon Pinpoint Journey to consume.
  10. Amazon Pinpoint’s message template reaches out to Amazon Personalize to get the 3 recommendations.
  11. Amazon Pinpoint’s Journey triggers an email message to the customer with the order information and recommendations.
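
As referenced in step 6, here is a simplified Python sketch of the fulfillment Lambda. It assumes Order_Num is the partition key of the OrderStatus table and that the intent has an OrderNumber slot, as described later in this post; the RE_Order_Validation Lambda in the GitHub repository may differ, and the attribute access should match the exact item shape you store in DynamoDB.

import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("OrderStatus")  # table name used in this post

def lambda_handler(event, context):
    # Read the order number from the Lex V2 slot, look up the order and
    # return a fulfilled "Close" response with the status text.
    intent = event["sessionState"]["intent"]
    order_number = intent["slots"]["OrderNumber"]["value"]["interpretedValue"]

    item = table.get_item(Key={"Order_Num": order_number}).get("Item", {})
    message = ("Your order {} was shipped on {} and is expected to be delivered "
               "to your address on {}. Your order details have been emailed to you."
               .format(order_number, item.get("Shipping_Dt"), item.get("Delivery_Dt")))

    return {
        "sessionState": {
            "dialogAction": {"type": "Close"},
            "intent": {"name": intent["name"], "state": "Fulfilled"}
        },
        "messages": [{"contentType": "PlainText", "content": message}]
    }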

Prerequisites

To deploy this solution, you must have the following:

  1. An AWS account.
  2. An Amazon Pinpoint project.
  3. An originating identity that supports 2 way SMS in the country you are planning to send SMS to – Supported countries and regions (SMS channel).
  4. A mobile number to send and receive text messages.
  5. An SMS customer segment – Download the example CSV, which contains sample SMS and email endpoints. Replace the phone number (column C) with yours, and the email with your email, and import it to Amazon Pinpoint – How to import an Amazon Pinpoint segment.
  6. Add your mobile number in the Amazon Pinpoint SMS sandbox.
  7. Verify your email address that needs to receive messages from this account.
  8. Download the LexIntegration.zip & RE_Order_Validation.zip Lambda files from this Github location.

Preparation:

  1. Download the CloudFormation template.
  2. Go to Amazon S3 console and create a bucket. I created one for this example as ‘pinpointreinventaiml-code’. Under that S3 bucket, create a sub-folder and name it Lambda.
  3. Upload the 2 zip files you downloaded earlier from the Github.
  4. In Amazon Pinpoint > Phone numbers, Check to make sure the phone number you are using is enabled for SMS and its status is active.
  5. Add the machine learning generated product recommendations using Amazon Personalize.
Check if phone number is enabled & active in Pinpoint console

Phone numbers in Pinpoint console

Solution implementation

Create a Lex Chat bot:

  1. Now it’s time to create your bot. To create your bot, sign in to the Lex console at https://console.aws.amazon.com/lex.
  2. For more information about creating bots in Lex, see https://docs.aws.amazon.com/lex/latest/dg/gs-console.html.
  3. Click on Create bot button. Next steps:
    1. Select Create a blank bot radio button.
    2. Give a Bot name ‘Order Status’ under Bot name Configuration. (Use the same Bot name as mentioned here. If you change the Bot name here, your CloudFormation will fail)
    3. Under IAM permissions, select the radio button Create a role with basic Amazon Lex permissions.
    4. For COPPA, choose No. Click Next
    5. Under Language dropdown, choose the language of usage. I chose Language as English in my example.
    6. Click Done, to complete the Bot creation.
  4. You have to create an Intent within the Bot you just created
    1. Click on the Bot you just created. Click on Intents and click the dropdown Add intent and select Add empty intent.
    2. Give an intent name and click Ok.
  5. Once the intent is created, go to the intent, open the Conversation flow section in the intent, and create a flow that has the following info and looks like the below image:
    1. Click on Sample utterance and it takes you to Sample Utterance and type in Order status.
    2. Click on initial response and type in Okay, I can help with that. What is your order number?
    3. Click on the slot value and click on Add a slot. Name: OrderNumber and Slot type is AMAZON.AlphaNumeric. In the prompt, enter Please enter your order number.
    4. Click on Save Intent button. The conversation flow should look like the below screenshot:

Amazon Lex intent

6. Go back to the Intent you just created and click on the Build button that is to the right side of the page.

Build intent

7. Once the build is successfully completed, go back to the Bot you created and click on Aliases on the left frame. Click on the Alias that was created earlier, TestBotAlias.

Bot Alias

8. In the Languages section, click on the English language that we created earlier.
9. Open the Lambda function – optional section and point the source to RE_Order_Validation Lambda that we downloaded earlier.
10. For Lambda function version or alias, select $LATEST. Click on Save.

Add Lambda to Alias

11. Go to Intents, choose the intent you just built and click on Build button again. Once build is complete, you can test the intent.

Import and execute CloudFormation:

  1. Navigate to the Amazon CloudFormation console in the AWS region you want to deploy this solution.
  2. Select Create stack and With new resources. Choose Template is ready as Prerequisite – Prepare template and Upload a template file as Specify template. Upload the template downloaded in step 1 under Preparation section of this document. Click Next.
  3. Fill the AWS CloudFormation parameters as shown below:
  4. Stack name: Give a name to this stack.
    1. Under Parameters, for BotAlias: The Bot Alias that you created as part of Amazon Lex above.
    2. BotId: The Bot ID for the bot that you created as part of Amazon Lex above.
    3. CodeS3Bucket: Give the name of the S3 bucket you created in step3 of the Preparation topic above.
    4. OriginationNumber: This is the origination identity phone number you created in step4 of the Preparation topic above.
    5. PinpointProjectId: Use the ProjectID you have from step2 of the Prerequisites phase above.
  5. After entering all the parameter info, it would look something like this below:
  6. CloudFormation parameters
  7. Click Next. Leave the default options on the next page and click Next again.
  8. Check the box I acknowledge that AWS CloudFormation might create IAM resources with custom names. Click Submit.

Set up data in Amazon Dynamo DB

  1. We are using a DynamoDB table here as the transactional database that stores order information for the bike store.
  2. Once the solution has been successfully deployed, navigate to the Amazon DynamoDB console and access the OrderStatus DynamoDB table. Each row created in this table represents an order and its details. Each row should have a unique Order_Num that holds the order number and its related information. You can put additional information about the order, as in the example below:
  3. {
       "Order_Num":{
          "Value":"ABC123"
       },
       "Delivery_Dt":{
          "Value":"12/01/2022"
       },
       "Order_Dt":{
          "Value":"11/01/2022"
       },
       "Shipping_Dt":{
          "Value":"11/24/2022"
       },
       "UserId":{
          "Value":"example-user-id-3"
       }
    }
  4. Once you enter the data, it should look like the image below. Click on Create item.
  5. Dynamo DB values

Set up Amazon Simple Notification Service (SNS) topic

  1. We need Amazon Simple Notification Service here to provide internal message delivery from publishers (the customer’s text message) to subscribers (Amazon Lex in this example). This is used for internal notifications in this use case.
  2. As part of the CloudFormation above, check if you have an SNS topic created by the name LexPinpointIntegrationDemo.
  3. Now, we have successfully created an Amazon SNS topic.

Set up Lambda Functions

  1. Go to AWS Lambda console and open the Lambda function LexIntegration. Under the Function overview, click on the Add trigger. Under Trigger configuration dropdown, select SNS and under SNS Topic select LexPinpointIntegrationDemo topic. Click on Add.
  2. Note: In this example, I used Node.js in one Lambda and Python in another, to show that AWS Lambda functions are flexible enough to use the scripting language of your choice. A simplified Python sketch of the LexIntegration flow follows this list.
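
For reference, a simplified Python sketch of the LexIntegration flow: read the inbound SMS from the SNS record, forward the text to the Lex bot, and send the bot's reply back over Pinpoint SMS. The IDs are placeholders, and the deployed function in the GitHub location (which may be the Node.js one mentioned above) can differ.

import json
import boto3

lex = boto3.client("lexv2-runtime")
pinpoint = boto3.client("pinpoint")

APP_ID = "YOUR_PINPOINT_APP_ID"      # placeholder
BOT_ID = "YOUR_BOT_ID"               # placeholder
BOT_ALIAS_ID = "YOUR_BOT_ALIAS_ID"   # placeholder
LOCALE_ID = "en_US"

def lambda_handler(event, context):
    # The inbound SMS is delivered as a JSON string inside the SNS record.
    sms = json.loads(event["Records"][0]["Sns"]["Message"])
    sender = sms["originationNumber"]

    # Forward the customer's text to the Lex bot, one session per phone number.
    lex_response = lex.recognize_text(
        botId=BOT_ID,
        botAliasId=BOT_ALIAS_ID,
        localeId=LOCALE_ID,
        sessionId=sender,
        text=sms["messageBody"]
    )
    reply = lex_response["messages"][0]["content"]

    # Send the bot's reply back to the customer over the SMS channel.
    pinpoint.send_messages(
        ApplicationId=APP_ID,
        MessageRequest={
            "Addresses": {sender: {"ChannelType": "SMS"}},
            "MessageConfiguration": {
                "SMSMessage": {"Body": reply, "MessageType": "TRANSACTIONAL"}
            }
        }
    )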

Setting up 2-way SMS in Amazon Pinpoint

  1. Go to Amazon Pinpoint console and click on Phone numbers under SMS & Voice in the left frame. If you don’t see any phone numbers, please refer to #3 under prerequisites section above.
  2. This is how your screen should look:
  3. Phone numbers in Pinpoint
  4. Click on the number.
  5. On the right frame, expand Two-way SMS drop down arrow.
  6. Click on the check box ‘Enable two-way SMS’.
  7. In the ‘Incoming message destination’ select the radio button ‘Choose an existing SNS topic’ and in the drop down below, choose the SNS topic you built above.
  8. The result would look like the screenshot below:
  9. 2-way SMS
  10. Click on Save.

Import Machine Learning model into Pinpoint

  1. Go to Amazon Pinpoint.
  2. Click on Machine Learning Models. Click on Add recommender model.
  3. Give a recommender model name and description under model details.
  4. Under Model configuration, choose the radio button ‘Automatically create a role’ and give an IAM role name in the textbox below.
  5. Under recommender model, choose the recommender model campaign that you created in Amazon Personalize earlier in the project.
    1. If you did not create it, use this Pinpoint workshop to create a recommender model in Amazon Personalize.
    2. The data used in this example is for retail industry, please edit the data as needed for your use case and industry.
  6. Under the settings section:
    1. Select ‘User Id’ as identifier.
    2. Click on the drop down ‘Number of recommendations per message’ and select 3.
  7. For Processing method, choose ‘Use value returned by model’.
  8. Click on Next.
  9. In the Attributes section, enter ‘product_name’ as the display name for the attribute and click Next.
  10. On the next screen, you can review all the information provided and click on Publish.
  11. The completed model after publishing looks like the screen below:
  12. Personalize model in Pinpoint

Create a Message Template in Amazon Pinpoint

  1. Use chapter 6.4 in this workshop Amazon Pinpoint Workshop to create a message template.
  2. Once the template is created, add recommendations to the message template by following the Amazon Pinpoint Workshop. Change the type of data as needed for your use case and industry; I used sample retail data.
  3. To create an Amazon Pinpoint journey, navigate to the Amazon Pinpoint console, select Journeys, and click Create journey.
  4. Give the journey a name and click Set entry condition in the Journey entry block.
  5. Choose the radio button Add participants when they perform an activity.
  6. Click in the ‘Events’ text box and type in OrderStatus.
  7. Pinpoint Journey entry
  8. Click on Add activity and select Send an email.
  9. Click Choose an email template and select the email message template we created earlier in this blog. Click the Choose template button.
  10. Select a Sender email address from the drop down list.
  11. Choose sender email here
  12. Click Save. The final journey should look like this:
  13. This is the final journey
  14. Click Actions > Settings to review the journey settings. There you can set the start and end dates of the journey, if applicable, as well as other advanced settings. Configure your journey settings to match the screenshot below and click Save.
  15. Journey settings
  16. To publish your journey, click Review. On the Review your journey page, click Next > Mark as reviewed > Publish. A five-minute countdown begins, after which your journey will be live.
  17. Once the journey is live, pass the OrderStatus event for an endpoint; endpoints that report this event enter the journey and receive an email. A sketch of sending this event programmatically follows this list.
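
For reference, the following is a minimal sketch of how the OrderStatus event could be submitted with the Amazon Pinpoint PutEvents API. The application ID and endpoint ID are placeholders; in this solution the event is raised by the Lambda function rather than manually.

import datetime
import uuid
import boto3

pinpoint = boto3.client('pinpoint')

# Placeholder values: replace with your Pinpoint project and endpoint IDs
APPLICATION_ID = 'your-pinpoint-project-id'
ENDPOINT_ID = 'example-endpoint-id'

# Raise an OrderStatus event for the endpoint so it enters the journey
pinpoint.put_events(
    ApplicationId=APPLICATION_ID,
    EventsRequest={
        'BatchItem': {
            ENDPOINT_ID: {
                'Endpoint': {},  # keep the existing endpoint definition unchanged
                'Events': {
                    str(uuid.uuid4()): {
                        'EventType': 'OrderStatus',
                        'Timestamp': datetime.datetime.utcnow().isoformat(),
                    }
                },
            }
        }
    },
)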

Testing the solution

  1. Use a phone with a valid number (in this example, I used a US phone number) and send the text ‘Order Status’ to the number generated in Amazon Pinpoint above.
  2. You should get a response “Okay, I can help with that. What is your order number?”
  3. Type in the order number you created earlier and stored in the Amazon DynamoDB table.
  4. You should get a response “Your order <order number> was shipped on <shipped_dt> and is expected to be delivered to your address on <delivery_dt>. Your order details have been emailed to you.”
  5. Text message flow
  6. Alternatively, you can test this solution from the Lex bot.
  7. In Amazon Lex, go to the intent you created above and click on the Test button. Next steps:
    1. In the text box, enter Order Status.
    2. Bot should respond with Okay, I can help with that. What is your order number?
    3. You can respond with the order number you entered in the DynamoDB table.
    4. Bot should respond with Your order <Order_Num> was shipped on <Shipping_Dt> and is expected to be delivered to your address on <Delivery_Dt>. Your order details have been emailed to you.
    5. Testing the 2 way messaging in Lex console

Conclusion

Using this blog post, you can elevate your customers’ experience by combining Amazon Lex’s AI chat capabilities and Amazon Personalize’s ML recommendation models, and by triggering an Amazon Pinpoint journey. This blog highlights how organizations can interact with their customers over 2-way SMS and convert that engagement into a triggered email, with product recommendations, if needed.

Next Steps

You can easily modify the above solution to use it across different verticals and applicable use cases. You can also extend this solution to connect customers to an Amazon Connect agent via SMS chat, using this blog.

Clean-up

  1. To delete the solution, go to the CloudFormation stack you created as part of this project. Click on the stack and click Delete.
  2. Navigate to Amazon Pinpoint and stop the journey you ran in this solution. Delete the journey, machine learning model, and message templates you created for this solution. Delete the project you created for this solution.
  3. In Amazon Lex, delete the intent and bot you created for this solution.
  4. Delete the folder and bucket you created in S3 as part of this project.
  5. Amazon Personalize resources like Dataset groups, datasets, etc. are not created via AWS Cloudformation, thus you have to delete them manually. Please follow the instructions in the AWS documentation on how to clean up the created resources.

Additional resources

Retry delivering failed SMS using Pinpoint

How to target customers using ML, based on their interest in a product

About the Authors

Vinay Ujjini

Vinay Ujjini is an Amazon Pinpoint and Amazon Simple Email Service Principal Specialist Solutions Architect at AWS. He has been solving customers’ omni-channel challenges for over 15 years. In his spare time, he enjoys playing tennis & cricket.

Shift management using Amazon Pinpoint’s 2-way-SMS

Post Syndicated from Pavlos Ioannou Katidis original https://aws.amazon.com/blogs/messaging-and-targeting/shift-management-using-amazon-pinpoints-2-way-sms/

Businesses with physical presence such as retail, restaurants and airlines need reliable solutions for shift management when demand fluctuates, employees call in sick, or other unforeseen circumstances arise. Their linear dependence on staff forces them to create overtime shifts as a way to cope with demand.

The overtime shift communication between the business and employees needs to be immediate and scalable. Employees in such industries might not have access to the internet, which rules out communication channels like email and push notifications. Furthermore, once employees receive the available shifts, they need an easy way to book them and, if required, request further support.

In these situations, businesses need communication methods that are accessible from any mobile device, available without an internet connection, and able to accept replies. SMS fills all of these requirements and more. This blog showcases how, with the Amazon Pinpoint SMS channel and other AWS services, you can develop an application that communicates available shifts to interested employees and allows them to book shifts by replying.

The solution presented in this blog is for shift management, but with small changes it can also communicate available appointments to customers or patients. An example of such a use case is health care, where there is a waiting list to see a doctor and patients might cancel at the very last moment. The solution can communicate these available slots to all patients on the waiting list and allow them to book via SMS.

Architecture

The solution uses Amazon Pinpoint two way SMS, Amazon DynamoDB, AWS Lambda, Amazon Simple Notification Service (SNS) and Amazon Connect (optional). The next section dives deeper into the architecture diagram and logic flow.

shift_management_architecture

  1. The operator adds an item to the Shift’s campaigns Amazon DynamoDB table. The item consists of a unique key that is used as the Amazon Pinpoint Campaign Id and an attribute Campaign Message, which is used as the SMS message text that the campaign recipients will receive. The Campaign Message needs to include all available shifts that the operator wants to notify the employees about.
    1. Note: This can be done either from the AWS console or programmatically via API.
  2. Amazon DynamoDB streams invokes an AWS Lambda function upon creation of a new Amazon DynamoDB item. The AWS Lambda function uses the Amazon DynamoDB data to create and execute an Amazon Pinpoint SMS Campaign based on an existing customer segment of employees who are interested in overtime shifts. The customer segment is a prerequisite and it includes SMS endpoints of the employees who are interested in receiving shift updates.
  3. Employees who belong in that segment receive an SMS with the available shifts and they can reply to book the ones they are interested in.
  4. Any inbound SMS is published on an Amazon SNS topic.
  5. The 2-way SMS AWS Lambda function subscribes to the Amazon SNS topic and processes all inbound SMS based on their message body.
  6. The Shift’s status Amazon DynamoDB table stores the status of the shifts, which gets updated depending on the inbound SMS.
  7. If the employee requires further assistance, they can trigger an Amazon Connect outbound call via SMS.

The diagram below illustrates the four possible messages an employee can send to the application. To safeguard the application from outsiders and bad actors, the 2-way SMS AWS Lambda function checks whether the sender’s mobile number is in an allow list. In this solution, the allow list is hardcoded as an AWS Lambda environment variable, but it could be stored in a database such as Amazon DynamoDB. A minimal sketch of this handler follows the diagram.

shift management inbound-sms-business-logic
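
The following is a minimal sketch of that handler. It assumes the inbound SNS payload follows the Amazon Pinpoint two-way SMS format (originationNumber and messageBody fields) and that the allow list is a comma-separated APPROVED_NUMBERS environment variable; the deployed function also updates the shifts table and calls Amazon Connect.

import json
import os

def lambda_handler(event, context):
    # Each SNS record carries one inbound SMS published by Amazon Pinpoint
    for record in event['Records']:
        message = json.loads(record['Sns']['Message'])
        sender = message['originationNumber']
        body = message['messageBody'].strip().upper()

        # Ignore numbers that are not in the allow list
        allow_list = os.environ['APPROVED_NUMBERS'].split(',')
        if sender not in allow_list:
            continue

        if body == 'REQUEST':
            pass  # reply with all shifts whose shift_status is "available"
        elif body == 'AGENT':
            pass  # trigger an Amazon Connect outbound call
        else:
            pass  # treat the body as a shift_id and try to book the shift
    return {'statusCode': 200}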

Solution implementation

Prerequisites

To deploy this solution, you must have the following:

  1. An originating identity that supports 2 way SMS in the country you are planning to send SMS to – Supported countries and regions (SMS channel).
  2. A mobile phone to send and receive SMS.
  3. An AWS account.
  4. An Amazon Pinpoint project – How to create an Amazon Pinpoint project.
  5. An SMS customer segment – Download the example CSV that contains one SMS endpoint. Replace the phone number (column C) with yours and import it to Amazon Pinpoint – How to import an Amazon Pinpoint segment.
  6. Add your mobile number in the Amazon Pinpoint SMS sandbox – Amazon Pinpoint SMS sandbox.
  7. An Amazon Connect instance, number & contact flow if you want your employees to be able to request an agent call back. Download the example Connect contact flow that you can import to your Amazon Connect instance.

Note: UK numbers with a +447 prefix are not allowed by default. Before you can dial these UK mobile numbers, you must submit a service quota increase request. For more information, see Amazon Connect Service Quotas in the Amazon Connect Administrator Guide.

Deploy the solution

  1. Download the CloudFormation template and navigate to the AWS CloudFormation console in the AWS region you want to deploy the solution.
  2. Select Create stack and With new resources. Choose Template is ready as Prerequisite – Prepare template and Upload a template file as Specify template. Upload the template downloaded in step 1.
  3. Fill the AWS CloudFormation parameters as shown below:
    1. ApprovedNumbers: The mobile numbers that are allowed to use this service. The format should be E164 and, if there is more than one number, separate them with commas, e.g. +4457434243,+432434324.
    2. OriginationNumber: The mobile number that you have in your Amazon Pinpoint account in E164 format e.g. +44384238975.
    3. PinpointProjectId: The existing Amazon Pinpoint project Id.
    4. SegmentId: The Amazon Pinpoint existing segment Id that you want to send the SMS notifications to.
    5. ConnectEnable: Select YES if you already have an Amazon Connect instance with a Contact Flow and Queue. If you select NO ignore all the fields below, the solution will still be deployed but employees won’t be able to request a call back.
    6. InstanceId: The Amazon Connect InstanceId. Follow this link to learn how to find your Amazon Connect InstanceId.
    7. ContactFlowID: The Amazon Connect Contact Flow Id that you want this solution to use. Follow this link to learn how to find your Amazon Connect ContactFlow id.
    8. QueueID: The Amazon Connect Queue Id. To obtain the Amazon Connect Queue Id navigate to your Amazon Connect instance > Routing > Queues and it should appear on the browser URL, see example: https://your-instance.awsapps.com/connect/queues/edit?id=0c7fed63-815b-4040-8dbc-255800fca6d7.
    9. SourcePhoneNumber: The Amazon Connect number in E164 format that is connected to the Contact Flow provided in step 7.
  4. Once the solution has been successfully deployed, navigate to the Amazon DynamoDB console and access the ShiftsStatus DynamoDB table. Each item created represents a shift and should have a unique shift_id that employees use to book the shifts, a column shift_status with value = available and a column shift_info where you can put additional information about the shift – see example below:
    {
       "shift_id":{
          "S":"XYZ1234"
       },
       "shift_info":{
          "S":"15/08 5h nightshift"
       },
       "shift_status":{
          "S":"available"
       }
    }

    Pinpoint_shift_status_dynamoDB

  5. Navigate to Amazon Pinpoint console > SMS and voice > Phone numbers, select the phone number that you used as OriginationNumber for this solution and enable Two-way SMS. Under the Incoming messages destination section, select Choose an existing SNS topic and select the one containing the name TwoWaySMSSNSTopic.
  6. Navigate to the Amazon DynamoDB console and access the ShiftsCampaignDynamoDB table. Each item you create represents an Amazon Pinpoint SMS campaign. Create an Amazon DynamoDB item and provide a unique campaign_id, which will be used as the Amazon Pinpoint campaign name. Create a new attribute (string) with the name campaign_message and type all available shifts that you want to communicate via this campaign. It is important to include the shift id(s) for each of the shifts you want your employees to be able to request – see the example below. A sketch of the Lambda function that reacts to this item follows this section.
    • Note: By completing this step, you will trigger an Amazon Pinpoint SMS Campaign. You can access the campaign information and analytics from the Amazon Pinpoint console.
{
  "campaign_id": {
    "S": "campaign_id1"
  },
  "campaign_message": {
    "S": "15/08 5h nightshift XYZ123, 18/08 3h dayshift XYZ124"
  }
}

shift_campaign_dynamoDB
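
For illustration, below is a minimal sketch of the kind of logic the DynamoDB Streams-triggered Lambda function could use to turn a new ShiftsCampaignDynamoDB item into an Amazon Pinpoint SMS campaign. The environment variable names are assumptions; the deployed function may differ.

import os
import boto3

pinpoint = boto3.client('pinpoint')

def lambda_handler(event, context):
    for record in event['Records']:
        if record['eventName'] != 'INSERT':
            continue  # only react to newly created campaign items
        new_item = record['dynamodb']['NewImage']
        campaign_id = new_item['campaign_id']['S']
        campaign_message = new_item['campaign_message']['S']

        # Create a one-off SMS campaign targeting the employee segment
        pinpoint.create_campaign(
            ApplicationId=os.environ['PINPOINT_PROJECT_ID'],
            WriteCampaignRequest={
                'Name': campaign_id,
                'SegmentId': os.environ['SEGMENT_ID'],
                'Schedule': {'StartTime': 'IMMEDIATE', 'Frequency': 'ONCE'},
                'MessageConfiguration': {
                    'SMSMessage': {
                        'Body': campaign_message,
                        'MessageType': 'TRANSACTIONAL',
                    }
                },
            },
        )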

Testing the solution

  • Make sure you have created the shifts in the ShiftsStatusDynamoDB Amazon DynamoDB table.
  • To test the SMS Campaign, replicate step 6 under Deploy the solution section.
  • Reply to the SMS received with the options below:
    1. Send a shift_id that doesn’t exist to receive an automatic response “This is not a valid shift code, please reply by typing REQUEST to view the available shifts”.
    2. Send a valid & available shift_id to book the shift and then check the ShiftsStatusDynamoDB Amazon DynamoDB table, where the shift_status should change to taken and there should be a new column employee with the mobile number of the employee who has requested it.
    3. Send REQUEST to receive all shifts with shift_status = available.
    4. If you have deployed the solution along with Amazon Connect, send AGENT and wait for the call.

Next steps

This solution can be extended to support SMS sending to multiple countries by acquiring the respective originating identities. Using the Amazon Pinpoint phone number validate service API, the application can identify the country for each recipient and choose the correct originating identity accordingly.
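
The snippet below is a small sketch of that lookup; the mapping from country code to origination number is a hypothetical example.

import boto3

pinpoint = boto3.client('pinpoint')

# Hypothetical mapping of ISO country codes to owned origination identities
ORIGINATION_NUMBERS = {'GB': '+44XXXXXXXXXX', 'US': '+1XXXXXXXXXX'}

def origination_number_for(recipient):
    """Return the origination number that matches the recipient's country."""
    response = pinpoint.phone_number_validate(
        NumberValidateRequest={'PhoneNumber': recipient}  # E.164 format expected
    )
    country = response['NumberValidateResponse']['CountryCodeIso2']
    return ORIGINATION_NUMBERS[country]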

Depending on your business requirements, you might want to change the agent call-back option to chat via SMS. You can extend this solution to connect the agent via SMS chat by following the steps in this blog.

Clean-up

To delete the solution, navigate to the AWS CloudFormation console and delete the stack deployed.

About the Authors

Pavlos Ioannou Katidis

Pavlos Ioannou Katidis

Pavlos Ioannou Katidis is an Amazon Pinpoint and Amazon Simple Email Service Senior Specialist Solutions Architect at AWS. He loves diving deep into his customer’s technical issues and help them design communication solutions. In his spare time, he enjoys playing tennis, watching crime TV series, playing FPS PC games, and coding personal projects.

Retry delivering failed SMS using Amazon Pinpoint

Post Syndicated from satyaso original https://aws.amazon.com/blogs/messaging-and-targeting/how-to-utilise-amazon-pinpoint-to-retry-unsuccessful-sms-delivery/

Organizations in many sectors and verticals have user bases to whom they send transactional SMS messages such as OTPs (one-time passwords), notices, or transaction/purchase confirmations, among other things. Amazon Pinpoint enables customers to send transactional SMS messages to a global audience through a single API endpoint, and the messages are routed to recipients by the service. Amazon Pinpoint relies on downstream SMS providers and telecom operators to deliver the messages to end users’ devices. Most of the time the SMS messages get delivered to recipients, but sometimes they cannot be delivered due to carrier or telecom-related issues that are transient in nature, which impacts the customer’s brand. As a result, customers need a solution that allows them to retry the transmission of SMS messages that fail due to transitory problems caused by downstream SMS providers or telecom operators.

In this blog post, you will discover how to retry sending unsuccessfully delivered SMS messages caused by transitory problems at the downstream SMS provider or telecom operator side.

Prerequisites

For this post, you should be familiar with the following:

Managing an AWS account
Amazon Pinpoint
Amazon Pinpoint SMS events
AWS Lambda
AWS CloudFormation
Amazon Kinesis Firehose
Kinesis Streams
Amazon DynamoDB WCU and RCU accordingly

Architecture Overview

The architecture depicted below is a potential architecture for re-sending unsuccessful SMS messages in real time. The application sends the SMS message to Amazon Pinpoint for delivery using the SendMessages API. Pinpoint receives the message and returns a receipt notification with the message ID; the application records the message content and ID in a data store such as DynamoDB. Amazon Pinpoint delivers messages to users and then receives SMS engagement events. The same SMS engagement events are provided to Amazon Kinesis Data Streams, which acts as an event source for a Lambda function that validates the event type. If the event type indicates that the SMS message could not be sent and it makes sense to retry, the Lambda function retrieves the respective message ID from the SMS event and then retrieves the message body from the database. It then sends the SMS message to Amazon Pinpoint for redelivery; you can choose the same or an alternative origination number as the origination identity when resending the SMS to end users. We recommend configuring the number of retries and adding a retry message tag within Pinpoint to analyze retries and also to avoid infinite loops. All events are also sent to Amazon Kinesis Data Firehose, which saves them to your S3 data lake for later audit and analytics purposes.

Note: The Lambda concurrency and DynamoDB WCUs/RCUs need to be provisioned accordingly. The AWS CloudFormation template provided in this post automatically sets up the different architecture components required to retry unsuccessful SMS messages.

Retry delivering failed SMS using Amazon Pinpoint

At the same time, if you use an Amazon Kinesis Data Firehose delivery stream instead of a Kinesis data stream to stream data to a storage location, you might consider using a transformation Lambda as part of the Kinesis Data Firehose delivery stream to retry unsuccessful messages. The architecture is as follows: the application sends the SMS payload to Amazon Pinpoint using the SendMessages API/SDK while also writing the message body to a persistent data store, in this case a DynamoDB table. The SMS-related events are then sent to Amazon Kinesis Data Firehose, where a transformation Lambda is set up. In essence, if the SMS event type returns no errors, the event is returned to Firehose as-is. However, if an event type indicates failure and it makes sense to retry, the Lambda logic sends another SendMessages call until the retry count (set to 5 within the code) is reached. Retried events are not loaded into S3 storage (thus the result=Dropped). Since Pinpoint events do not contain the actual SMS content, a call to DynamoDB is made to get the message body for the new SendMessages call.

Retry SMS diagram

Amazon Pinpoint provides an event response for each transactional SMS communication. For retrying unsuccessful SMS deliveries, there are primarily two factors to consider in this architecture: 1/ the type of event (event_type) and 2/ the record status (record_status). Whenever the event_type is “_SMS.FAILURE” and the record_status is any of “UNREACHABLE”, “UNKNOWN”, “CARRIER_UNREACHABLE”, or “TTL_EXPIRED”, the application needs to retry the SMS message delivery. The following code snippet illustrates the conditional flow for the failed-SMS retry logic within the Lambda function.

Code Sample:
RETRYABLE_STATUSES = {'UNREACHABLE', 'UNKNOWN', 'CARRIER_UNREACHABLE', 'TTL_EXPIRED'}

if event['event_type'] == '_SMS.FAILURE' and event['record_status'] in RETRYABLE_STATUSES:
    # Resend the SMS using the stored message body (send_message is sketched below),
    # then drop the record so the failed event is not written to S3
    send_message(message_id, destination)
    output_record = {'recordId': record['recordId'], 'result': 'Dropped',
                     'data': base64.b64encode(payload.encode('utf-8'))}
else:
    output_record = {'recordId': record['recordId'], 'result': 'Ok',
                     'data': base64.b64encode(payload.encode('utf-8'))}
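
For completeness, below is a sketch of the send_message helper referenced above. It assumes the original message body was stored in a DynamoDB table keyed by message_id and that the Pinpoint project ID is available as an environment variable; the table and attribute names are illustrative.

import os
import boto3

pinpoint = boto3.client('pinpoint')
dynamodb = boto3.resource('dynamodb')
message_table = dynamodb.Table('PinpointMessageState')  # assumed table name

def send_message(message_id, destination_number):
    # Pinpoint events do not carry the SMS body, so look it up first
    item = message_table.get_item(Key={'message_id': message_id})['Item']
    pinpoint.send_messages(
        ApplicationId=os.environ['PINPOINT_APPLICATION_ID'],
        MessageRequest={
            'Addresses': {destination_number: {'ChannelType': 'SMS'}},
            'MessageConfiguration': {
                'SMSMessage': {
                    'Body': item['message_body'],
                    'MessageType': 'TRANSACTIONAL',
                }
            },
        },
    )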

Getting started with solution deployment

Prerequisite tasks to be completed before deploying the logging solution

  1. Go to CloudFormation Console and Click Create Stack.
  2. Select the Amazon S3 URL radio button and provide the CloudFormation link.
     AWS console creating a Pinpoint template
  3. Click Next on Create Stack screen.
  4. Specify the Stack name, for example “SMS-retry-stack”.
  5. Specify the event stream configuration option; this will trigger the respective child CloudFormation stack. There are three event stream configuration options you can choose from:
    • No Existing event stream setup – Select this option if you don’t have any event stream setup for Amazon Pinpoint.
    • Event stream setup with Amazon Kinesis Stream – Select this option if your Amazon Pinpoint project already have Amazon Kinesis as event stream destination.
    • Event stream setup with Amazon Kinesis Firehose – Select this option if you have configured a Kinesis Data Firehose delivery stream as the event stream destination.
      AWS console specifying Pinpoint stack details
  6. Specify the Amazon Pinpoint project app ID (Pinpoint project ID), and click Next.
  7. Click Next on Configure stack options screen.
  8. Select “I acknowledge that AWS CloudFormation might create IAM resources” and click Create Stack.
  9. Wait for the CloudFormation template to complete, then verify that the resources in the CloudFormation stack have been created. Click on individual resources and verify.
    • Parent stack – SMS retry parent stack
    • Child stack – SMS retry child stack
  10. As described in the architecture overview section, the maxRetries configuration inside the “RetryLambdaFunction” ensures that unsuccessful SMS messages are retried repeatedly. This number is set to 3 by default. If you want to adjust the maxRetry count, go to the “RetryLambdaFunction” settings and change it to the desired number.
      SMS retry lambda

Note: The CloudFormation link in the blog points specifically to the parent CloudFormation template, which has links to the child CloudFormation stacks; these child stacks are deployed automatically as you go through the parent stack.

Testing the solution

You can test the solution using the PinpointDDBProducerLambdaFunction and SMS simulator numbers. PinpointDDBProducerLambdaFunction contains sample code that triggers the SMS using Amazon Pinpoint.

testing SMS retry solution

Now follow the steps below to test the solution.

  1. Go to the environment variables for PinpointDDBProducerLambdaFunction.
  2. Update “destinationNumber” and “pinpointApplicationID”, where the destination number is the recipient number to which you wish to send the SMS (as a failed attempt) and the Amazon Pinpoint application ID is the Pinpoint project ID for which the Pinpoint SMS channel has already been configured.
  3. Deploy and test the Lambda function.
  4. Check the “Pinpoint Message state” DynamoDB table and open the latest table item.
  5. If you observe the table items, you will see retry_count=2 (the SMS send retry has been attempted 2 times) and all_retries_failed=true (the SMS could not be delivered on either attempt).
Notes:
  • If the existing Kinesis stream has a pre-defined destination Lambda, the current stack will not replace it but will exit gracefully.
  • If the existing Kinesis Data Firehose delivery stream has a pre-existing transformation Lambda, the current stack will not replace it.

Remarks

This SMS retry solution is based on best effort. This means that the solution is dependent on event response data from SMS aggregators. If the SMS aggregator data is incorrect, this solution may not produce the desired effect.

Cost

Assuming the retry mechanism is applied to 1,000,000 unsuccessful SMS messages per month, this solution will cost approximately $20 per month. Here is the AWS calculator link for reference.

Clean up

When you’re done with this exercise, complete the following steps to delete your resources and stop incurring costs:

  • On the CloudFormation console, select your stack and choose Delete.
  • This cleans up all the resources created by the stack.

Conclusion

In this blog post, we demonstrated how customers can retry sending undelivered or failed SMS messages via Amazon Pinpoint. We explained how to leverage Amazon Kinesis Data Streams and AWS Lambda functions to assess the status of unsuccessful SMS messages and retry delivering them automatically.

Extending the solution

This blog provides a framework to implement a solution that retries sending failed SMS messages. You can download the AWS CloudFormation templates, code, and scripts for this solution from our GitHub repository and modify them to fit your needs.


About the Authors
Satyasovan Tripathy works as a Senior Specialist Solution Architect at AWS. He is situated in Bengaluru, India, and focuses on the AWS Digital User Engagement product portfolio. He enjoys reading and travelling outside of work.

Nikhil Khokhar is a Solutions Architect at AWS. He specializes in building and supporting data streaming solutions that help customers analyze and get value out of their data. In his free time, he makes use of his 3D printing skills to solve everyday problems.

Amazon Personalize customer outreach on your ecommerce platform

Post Syndicated from Sridhar Chevendra original https://aws.amazon.com/blogs/architecture/amazon-personalize-customer-outreach-on-your-ecommerce-platform/

In the past, brick-and-mortar retailers leveraged native marketing and advertisement channels to engage with consumers. They have promoted their products and services through TV commercials, and magazine and newspaper ads. Many of them have started using social media and digital advertisements. Although marketing approaches are beginning to modernize and expand to digital channels, businesses still depend on expensive marketing agencies and inefficient manual processes to measure campaign effectiveness and understand buyer behavior. The recent pandemic has forced many retailers to take their businesses online. Those who are ready to embrace these changes have embarked on a technological and digital transformation to connect to their customers. As a result, they have begun to see greater business success compared to their peers.

Digitizing a business can be a daunting task, due to lack of expertise and high infrastructure costs. By using Amazon Web Services (AWS), retailers are able to quickly deploy their products and services online with minimal overhead. They don’t have to manage their own infrastructure. With AWS, retailers have no upfront costs, have minimal operational overhead, and have access to enterprise-level capabilities that scale elastically, based on their customers’ demands. Retailers can gain a greater understanding of customers’ shopping behaviors and personal preferences. Then, they are able to conduct effective marketing and advertisement campaigns, and develop and measure customer outreach. This results in increased satisfaction, higher retention, and greater customer loyalty. With AWS you can manage your supply chain and directly influence your bottom line.

Building a personalized shopping experience

Let’s dive into the components involved in building this experience. The first step in a retailer’s digital transformation journey is to create an ecommerce platform for their customers. This platform enables the organization to capture their customers’ actions, also referred to as ‘events’. Some examples of events are clicking on the shopping site to browse product categories, searching for a particular product, adding an item to the shopping cart, and purchasing a product. Each of these events gives the organization information about their customer’s intent, which is invaluable in creating a personalized experience for that customer. For instance, if a customer is browsing the “baby products” category, it indicates their interest in that category even if a purchase is not made. These insights are typically difficult to capture in an in-store experience. Online shopping makes gaining this knowledge much more straightforward and scalable.

The proposed solution outlines the use of AWS services to create a digital experience for a retailer and consumers. The three key areas are: 1) capturing customer interactions, 2) making real-time recommendations using AWS managed Artificial Intelligence/Machine Learning (AI/ML) services, and 3) creating an analytics platform to detect patterns and adjust customer outreach campaigns. Figure 1 illustrates the solution architecture.

Digital shopping experience architecture

Figure 1. Digital shopping experience architecture

For this use case, let’s assume that you are the owner of a local pizzeria, and you manage deliveries through an ecommerce platform like Shopify or WooCommerce. We will walk you through how to best serve your customer with a personalized experience based on their preferences.

The proposed solution consists of the following components:

  1. Data collection
  2. Promotion campaigns
  3. Recommendation engine
  4. Data analytics
  5. Customer reachability

Let’s explore each of these components separately.

Data collection with Amazon Kinesis Data Streams

When a customer uses your web/mobile application to order a pizza, the application captures their activity as click-stream ‘events’. These events provide valuable insights about your customers’ behavior. You can use these insights to understand the trends and browsing pattern of prospects who visited your web/mobile app, and use the data collected for creating promotion campaigns. As your business scales, you’ll need a durable system to preserve these events against system failures, and scale based on unpredictable traffic on your platform.

Amazon Kinesis is a Multi-AZ, managed streaming service that provides resiliency, scalability, and durability to capture an unlimited number of events without any additional operational overhead. Using Kinesis producers (Kinesis Agent, Kinesis Producer Library, and the Kinesis API), you can configure applications to capture your customer activity. You can ingest these events from the frontend, and then publish them to Amazon Kinesis Data Streams.
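
As a simple illustration, the snippet below sketches how a frontend backend could publish a single click-stream event to a Kinesis data stream with the AWS SDK; the stream name and event fields are placeholders.

import json
import boto3

kinesis = boto3.client('kinesis')

# Hypothetical click-stream event captured by the web/mobile frontend
event = {
    'user_id': '42',
    'event_type': 'add_to_cart',
    'item_id': 'margherita-large',
    'timestamp': '2022-08-01T12:34:56Z',
}

kinesis.put_record(
    StreamName='ecommerce-clickstream',       # placeholder stream name
    Data=json.dumps(event).encode('utf-8'),
    PartitionKey=event['user_id'],            # keeps a user's events together
)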

Let us start by setting up Amazon Kinesis Data Streams to capture the real-time sales transactions from online channels like a portal or mobile app. For this blog post, we have used a public Kaggle dataset as a reference. Figure 2 illustrates a snapshot of sample data used to build personalized recommendations for a customer.

Sample sales transaction data

Figure 2. Sample sales transaction data

Promotion campaigns with AWS Lambda

One way to increase customer conversion is by offering discounts. When the customer adds a pizza to their cart, you want to make sure they are receiving the best deal. Let’s assume that by adding an additional item, your customer will receive the best possible discount. Just by knowing the total cost of added items to the cart, you can provide these relevant promotions to this customer.

For this scenario, AWS Lambda polls Amazon Kinesis Data Streams to read all the events in the stream. It then matches the events against your criteria for items in the cart. These events are processed by the Lambda function, which reads your up-to-date promotions stored in Amazon DynamoDB (a lookup sketch follows Figure 3). As an option, caching recent or most popular promotions will improve your application response time, as well as the customer experience on your platform. Amazon DynamoDB Accelerator (DAX) is an integrated caching layer for DynamoDB that caches the most recent or popular promotions or items.

For example, when the customer adds items to their shopping cart, Lambda will send promotion details to them based on the purchase amount. This can be free shipping or a discount of a certain percentage. Figure 3 illustrates a snapshot of sample promotions.

Promotions table in DynamoDB

Figure 3. Promotions table in DynamoDB
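
The following is a minimal sketch of that promotions lookup, assuming a hypothetical Promotions table whose items carry a numeric minimum_order_value attribute; the actual table layout shown in Figure 3 may differ.

import boto3
from boto3.dynamodb.conditions import Attr

dynamodb = boto3.resource('dynamodb')
promotions = dynamodb.Table('Promotions')  # assumed table name

def best_promotion(cart_total):
    """Return the promotion with the highest qualifying threshold for this cart."""
    response = promotions.scan(
        FilterExpression=Attr('minimum_order_value').lte(cart_total)
    )
    items = response['Items']
    if not items:
        return None
    return max(items, key=lambda item: item['minimum_order_value'])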

Recommendations engine with Amazon Personalize

In addition to sharing these promotions with your customer, you may also want to share the recommended add-ons. In order to understand your customer preferences, you must gather historical datasets to determine patterns and generate relevant recommendations. Since web activity consists of millions of events, this would be a daunting task for humans to review, determine the patterns, and make recommendations. And since user preferences change, you need a system that can use all this volume of data and provide accurate predictions.

Amazon Personalize is a managed AI/ML service that helps you train an ML model based on your datasets, without requiring prior ML experience. It provides an inference endpoint for real-time recommendations. Based on the datasets, Amazon Personalize also provides recipes to generate recommendations. As your customers interact on the ecommerce platform, your frontend application calls the Amazon Personalize inference endpoint and retrieves a set of personalized recommendations based on your customer’s preferences.

Here is the sample Python code to display the list of available recommenders, and associated recommendations.

import boto3
import json
client = boto3.client('personalize')

# List the available recommenders
for recommender in client.list_recommenders()['recommenders']:
    print(recommender['name'], recommender['recommenderArn'])

# Connect to the personalize runtime for the customer recommendations

recomm_endpoint = boto3.client('personalize-runtime')
response = recomm_endpoint.get_recommendations(itemId='79323P',
  recommenderArn='arn:aws:personalize:us-east-1::recommender/my-items',
  numResults=5)

print(json.dumps(response['itemList'], indent=2))

[
  {
    "itemId": "79323W"
  },
  {
    "itemId": "79323GR"
  },
  {
    "itemId": "79323LP"
  },
  {
    "itemId": "79323B"
  },
  {
    "itemId": "79323G"
  }
]

You can use Amazon Kinesis Data Firehose to read the data in near real time from the Amazon Kinesis data stream that collected the data from the front-end applications. Then you can store this data in Amazon Simple Storage Service (S3). Amazon S3 is petabyte-scale storage that helps you scale and acts as a repository and single source of truth. We use the S3 data as seed data to build a personalized recommendation engine using Amazon Personalize. As your customers interact on the ecommerce platform, call the Amazon Personalize inference endpoint to make personalized recommendations based on user preferences.

Customer reachability with Amazon Pinpoint

If a customer adds products to their cart but never checks out, you may want to send them a reminder. You can set up an email to suggest they re-order after a period of time after their first order. Or you may want to send them promotions based on their preferences. And as your customers’ behavior changes, you probably want to adapt your messaging accordingly.

Your customer may have a communication preference, such as phone, email, SMS, or in-app notifications. If an order has an issue, you can inform the customer as soon as possible using their preferred method of communication, and perhaps follow it up with a discount.

Amazon Pinpoint is a flexible and scalable outbound and inbound marketing communications service. You can add users to Audience Segments, create reusable content templates integrated with Amazon Personalize, and run scheduled campaigns. With Amazon Pinpoint journeys, you can send action or time-based notifications to your users.

The workflow shown in Figure 4 illustrates a customer communication workflow for a promotion. A journey is created for a cohort of college students: a “Free Drink” promotion is offered with a new order. You can send this promotion over email. If the student opens the email, you can immediately send them a push notification reminding them to place an order. But if they don’t open the email, you could wait three days and follow up with a text message.

Promotion workflow in Amazon Pinpoint

Figure 4. Promotion workflow in Amazon Pinpoint

Data analytics with Amazon Athena and Amazon QuickSight

To understand the effectiveness of your campaigns, you can use S3 data as a source for Amazon Athena. Athena is an interactive query service that analyzes data in Amazon S3 using standard SQL. Athena is serverless, so there is no infrastructure to manage, and you pay only for the queries that you run.

There are different ways to create visualizations in Amazon QuickSight. For instance, you can use Amazon S3 as a data lake. One option is to import your data into SPICE (Super-fast, Parallel, In-memory Calculation Engine) to provide high performance and concurrency. You can also create a direct connection to the underlying data source. For this use case, we chose to import into SPICE, which provides faster visualization in a production setup. Schedule consistent refreshes to help ensure that dashboards refer to the most current data.

Once your data is imported into SPICE, review the QuickSight visualization dashboard. Here, you’ll be able to choose from a wide variety of charts and tables, while adding interactive features like drill-downs and filters.

The following process illustrates how to create a customer outreach strategy using ZIP codes, and allocate budgets to the marketing campaigns accordingly. First, we use this sample SQL command in Athena to query for the top 10 pizza providers. The results are shown in Figure 5.

SELECT name, count(*) as total_count FROM "analyticsdemodb"."fooddatauswest2"
group by name
order by total_count desc
limit 10

Athena query results for top 10 pizza providers

Figure 5. Athena query results for top 10 pizza providers

Second, here is the sample SQL command that we ran in Athena to find total pizza counts by postal code (ZIP code). Figure 6 shows a visualization that helps create a customer outreach strategy per ZIP code and budget the marketing campaigns accordingly.

SELECT postalcode, count(*) as total_count FROM "analyticsdemodb"."fooddatauswest2"
where postalcode is not null
group by postalcode
order by total_count desc limit 50;

QuickSight visualization showing pizza orders by zip codes

Figure 6. QuickSight visualization showing pizza orders by zip codes

Conclusion

AWS enables you to build an ecommerce platform and scale your existing business with minimal operational overhead and no upfront costs. You can augment your ecommerce platform by building personalized recommendations and effective marketing campaigns based on your customers’ needs. The solution approach provided in this blog will help organizations build a reusable architecture pattern and personalization using AWS managed services.

Target your customers with ML based on their interest in a product or product attribute.

Post Syndicated from Pavlos Ioannou Katidis original https://aws.amazon.com/blogs/messaging-and-targeting/use-machine-learning-to-target-your-customers-based-on-their-interest-in-a-product-or-product-attribute/

Customer segmentation allows marketers to better tailor their efforts to specific subgroups of their audience. Businesses who employ customer segmentation can create and communicate targeted marketing messages that resonate with specific customer groups. Segmentation increases the likelihood that customers will engage with the brand, and reduces the potential for communications fatigue—that is, the disengagement of customers who feel like they’re receiving too many messages that don’t apply to them. For example, if your business wants to launch an email campaign about business suits, the target audience should only include people who wear suits.

This blog presents a solution that uses Amazon Personalize to generate highly personalized Amazon Pinpoint customer segments. Using Amazon Pinpoint, you can send messages to those customer segments via campaigns and journeys.

Personalizing Pinpoint segments

Marketers first need to understand their customers by collecting customer data such as key characteristics, transactional data, and behavioral data. This data helps to form buyer personas, understand how they spend their money, and what type of information they’re interested in receiving.

You can create two types of customer segments in Amazon Pinpoint: imported and dynamic. With both types of segments, you need to perform customer data analysis and identify behavioral patterns. After you identify the segment characteristics, you can build a dynamic segment that includes the appropriate criteria. You can learn more about dynamic and imported segments in the Amazon Pinpoint User Guide.

Businesses selling their products and services online could benefit from segments based on known customer preferences, such as product category, color, or delivery options. Marketers who want to promote a new product or inform customers about a sale on a product category can use these segments to launch Amazon Pinpoint campaigns and journeys, increasing the probability that customers will complete a purchase.

Building targeted segments requires you to obtain historical customer transactional data, and then invest time and resources to analyze it. This is where the use of machine learning can save time and improve the accuracy.

Amazon Personalize is a fully managed machine learning service, which requires no prior ML knowledge to operate. It offers ready to use models for segment creation as well as product recommendations, called recipes. Using Amazon Personalize USER_SEGMENTATION recipes, you can generate segments based on a product ID or a product attribute.

About this solution

The solution is based on the following reference architectures:

Both of these architectures are deployed as nested stacks along with the main application to showcase how contextual segmentation can be implemented by integrating Amazon Personalize with Amazon Pinpoint.

High level architecture

Architecture Diagram

Once the training data and training configuration are uploaded to the Personalize data bucket (1), an AWS Step Functions state machine is executed (2). This state machine implements a training workflow that provisions all required resources within Amazon Personalize. It trains a recommendation model (3a) based on the Item-Attribute-Affinity recipe. Once the solution is created, the workflow creates a batch segment job to get user segments (3b). The job configuration focuses on providing segments of users that are interested in action-genre movies (a sketch of the corresponding API call follows the configuration below):

{ "itemAttributes": "ITEMS.genres = \"Action\"" }

When the batch segment job finishes, the result is uploaded to Amazon S3 (3c). The training workflow state machine publishes Amazon Personalize state changes on a custom event bus (4). An Amazon EventBridge rule listens for events describing that a batch segment job has finished (5). Once this event is put on the event bus, a batch segment post-processing workflow is executed as an AWS Step Functions state machine (6). This workflow reads and transforms the segment job output from Amazon Personalize (7) into a CSV file that can be imported as a static segment into Amazon Pinpoint (8). The CSV file contains only the Amazon Pinpoint endpoint IDs that refer to the corresponding users from the Amazon Personalize recommendation segment, in the following format:

Id
hcbmnng30rbzf7wiqn48zhzzcu4
tujqyruqu2pdfqgmcgkv4ux7qlu
keul5pov8xggc0nh9sxorldmlxc
lcuxhxpqh/ytkitku2zynrqb2ce

The mechanism to resolve an Amazon Pinpoint endpoint ID relies on the user ID that is set in Amazon Personalize also being referenced in each endpoint within Amazon Pinpoint via the user ID attribute.

State machine for getting Amazon Pinpoint endpoints

The workflow ensures that the segment file has a unique filename so that the segments within Amazon Pinpoint can be identified independently. Once the segment CSV file is uploaded to S3 (7), the segment import workflow creates a new imported segment within Amazon Pinpoint (8).
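
For reference, a minimal sketch of the segment import call from step 8 is shown below; the application ID, role ARN, S3 URL, and segment name are placeholders, and in the solution this call is made by the import workflow.

import boto3

pinpoint = boto3.client('pinpoint')

# Placeholder identifiers; the deployed workflow supplies real values
pinpoint.create_import_job(
    ApplicationId='your-pinpoint-project-id',
    ImportJobRequest={
        'Format': 'CSV',
        'S3Url': 's3://segment-import-bucket/pinpoint/ITEMSgenresAction_2022-08-01.csv',
        'RoleArn': 'arn:aws:iam::111122223333:role/PinpointSegmentImportRole',
        'DefineSegment': True,
        'SegmentName': 'ITEMSgenresAction_2022-08-01',
    },
)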

Datasets

The solution uses an artificially generated movies dataset called Bingewatch for demonstration purposes. The data is pre-processed to make it usable in the context of Amazon Personalize and Amazon Pinpoint. The pre-processed data consists of the following:

  • Interactions’ metadata created out of the Bingewatch ratings.csv
  • Items’ metadata created out of the Bingewatch movies.csv
  • Users’ metadata created out of the Bingewatch ratings.csv, enriched with invented data about e-mail address and age
  • Amazon Pinpoint endpoint data

Interactions’ dataset

The interaction dataset describes movie ratings from Bingewatch users. Each row describes a single rating by a user identified by a user id.

The EVENT_VALUE describes the actual rating from 1.0 to 5.0 and the EVENT_TYPE specifies that the rating resulted because a user watched this movie at the given TIMESTAMP, as shown in the following example:

USER_ID,ITEM_ID,EVENT_VALUE,EVENT_TYPE,TIMESTAMP
1,1,4.0,Watch,964982703 
2,3,4.0,Watch,964981247
3,6,4.0,Watch,964982224
...

Items’ dataset

The item dataset describes each available movie using a TITLE, RELEASE_YEAR, CREATION_TIMESTAMP and a pipe concatenated list of GENRES, as shown in the following example:

ITEM_ID,TITLE,RELEASE_YEAR,CREATION_TIMESTAMP,GENRES
1,Toy Story,1995,788918400,Adventure|Animation|Children|Comedy|Fantasy
2,Jumanji,1995,788918400,Adventure|Children|Fantasy
3,Grumpier Old Men,1995,788918400,Comedy|Romance
...

Users’ dataset

The users dataset contains all known users identified by a USER_ID. This dataset contains artificially generated metadata that describe the users’ GENDER and AGE, as shown in the following example:

USER_ID,GENDER,E_MAIL,AGE
1,Female,[email protected],21
2,Female,[email protected],35
3,Male,[email protected],37
4,Female,[email protected],47
5,Agender,[email protected],50
...

Amazon Pinpoint endpoints

To map Amazon Pinpoint endpoints to users in Amazon Personalize, it is important to have a consistent user identifier. The mechanism to resolve an Amazon Pinpoint endpoint ID relies on the user ID in Amazon Personalize also being referenced in each endpoint within Amazon Pinpoint via the userId attribute, as shown in the following example:

User.UserId,ChannelType,User.UserAttributes.Gender,Address,User.UserAttributes.Age
1,EMAIL,Female,[email protected],21
2,EMAIL,Female,[email protected],35
3,EMAIL,Male,[email protected],37
4,EMAIL,Female,[email protected],47
5,EMAIL,Agender,[email protected],50
...

Solution implementation

Prerequisites

To deploy this solution, you must have the following:

Note: This solution creates an Amazon Pinpoint project with the name personalize. If you want to deploy this solution on an existing Amazon Pinpoint project, you will need to perform changes in the YAML template.

Deploy the solution

Step 1: Deploy the SAM solution

Clone the GitHub repository to your local machine (how to clone a GitHub repository). Navigate to the GitHub repository location in your local machine using SAM CLI and execute the command below:

sam deploy --stack-name contextual-targeting --guided

Fill in the fields as displayed below. Change the AWS Region to the AWS Region of your preference, where Amazon Pinpoint and Amazon Personalize are available. The Email parameter is used by Amazon Simple Notification Service (SNS) to send you an email notification when the Amazon Personalize job is completed.

Configuring SAM deploy
======================
        Looking for config file [samconfig.toml] :  Not found
        Setting default arguments for 'sam deploy'     =========================================
        Stack Name [sam-app]: contextual-targeting
        AWS Region [us-east-1]: eu-west-1
        Parameter Email []: [email protected]
        Parameter PEVersion [v1.2.0]:
        Parameter SegmentImportPrefix [pinpoint/]:
        #Shows you resources changes to be deployed and require a 'Y' to initiate deploy
        Confirm changes before deploy [y/N]:
        #SAM needs permission to be able to create roles to connect to the resources in your template
        Allow SAM CLI IAM role creation [Y/n]:
        #Preserves the state of previously provisioned resources when an operation fails
        Disable rollback [y/N]:
        Save arguments to configuration file [Y/n]:
        SAM configuration file [samconfig.toml]:
        SAM configuration environment [default]:
        Looking for resources needed for deployment:
        Creating the required resources...
        [...]
        Successfully created/updated stack - contextual-targeting in eu-west-1
======================

Step 2: Import the initial segment to Amazon Pinpoint

We will import some initial and artificially generated endpoints into Amazon Pinpoint.

Execute the command below with the AWS CLI on your local machine.

The command below is compatible with Linux:

SEGMENT_IMPORT_BUCKET=$(aws cloudformation describe-stacks --stack-name contextual-targeting --query 'Stacks[0].Outputs[?OutputKey==`SegmentImportBucket`].OutputValue' --output text)
aws s3 sync ./data/pinpoint s3://$SEGMENT_IMPORT_BUCKET/pinpoint

For Windows PowerShell use the command below:

$SEGMENT_IMPORT_BUCKET = (aws cloudformation describe-stacks --stack-name contextual-targeting --query 'Stacks[0].Outputs[?OutputKey==`SegmentImportBucket`].OutputValue' --output text)
aws s3 sync ./data/pinpoint s3://$SEGMENT_IMPORT_BUCKET/pinpoint

Step 3: Upload training data and configuration for Amazon Personalize

Now we are ready to train our initial recommendation model. This solution provides dummy training data as well as a training and inference configuration, which need to be uploaded to the Amazon Personalize S3 bucket. Training the model can take between 45 and 60 minutes.

Execute the command below with the AWS CLI on your local machine.

The command below is compatible with Linux:

PERSONALIZE_BUCKET=$(aws cloudformation describe-stacks --stack-name contextual-targeting --query 'Stacks[0].Outputs[?OutputKey==`PersonalizeBucketName`].OutputValue' --output text)
aws s3 sync ./data/personalize s3://$PERSONALIZE_BUCKET

For Windows PowerShell use the command below:

$PERSONALIZE_BUCKET = (aws cloudformation describe-stacks --stack-name contextual-targeting --query 'Stacks[0].Outputs[?OutputKey==`PersonalizeBucketName`].OutputValue' --output text)
aws s3 sync ./data/personalize s3://$PERSONALIZE_BUCKET

Step 4: Review the inferred segments from Amazon Personalize

Once the training workflow is completed, you should receive an email on the email address you provided when deploying the stack. The email should look like the one in the screenshot below:

SNS notification for Amazon Personalize job

Navigate to the Amazon Pinpoint console > Your Project > Segments, and you should see two imported segments: one named endpoints.csv that contains all imported endpoints from Step 2, and one named ITEMSgenresAction_<date>-<time>.csv that contains the IDs of endpoints interested in action movies, as inferred by Amazon Personalize.

Amazon Pinpoint segments created by the solution

You can engage with Amazon Pinpoint customer segments via Campaigns and Journeys. For more information on how to create and execute Amazon Pinpoint Campaigns and Journeys visit the workshop Building Customer Experiences with Amazon Pinpoint.

Next steps

Contextual targeting is not bound to a single channel, such as email in this solution. You can extend the batch-segmentation-postprocessing workflow to fit your engagement and targeting requirements.

For example, you could implement several branches based on the referenced endpoint channel types and create Amazon Pinpoint customer segments that can be engaged via Push Notifications, SMS, Voice Outbound and In-App.

Clean-up

To delete the solution, run the following command in the AWS CLI.

The command below is compatible with Linux:

SEGMENT_IMPORT_BUCKET=$(aws cloudformation describe-stacks --stack-name contextual-targeting --query 'Stacks[0].Outputs[?OutputKey==`SegmentImportBucket`].OutputValue' --output text)
PERSONALIZE_BUCKET=$(aws cloudformation describe-stacks --stack-name contextual-targeting --query 'Stacks[0].Outputs[?OutputKey==`PersonalizeBucketName`].OutputValue' --output text)
aws s3 rm s3://$SEGMENT_IMPORT_BUCKET/ --recursive
aws s3 rm s3://$PERSONALIZE_BUCKET/ --recursive
sam delete

For Windows PowerShell use the command below:

$SEGMENT_IMPORT_BUCKET=$(aws cloudformation describe-stacks --stack-name contextual-targeting --query 'Stacks[0].Outputs[?OutputKey==`SegmentImportBucket`].OutputValue' --output text)
$PERSONALIZE_BUCKET=$(aws cloudformation describe-stacks --stack-name contextual-targeting --query 'Stacks[0].Outputs[?OutputKey==`PersonalizeBucketName`].OutputValue' --output text)
aws s3 rm s3://$SEGMENT_IMPORT_BUCKET/ --recursive
aws s3 rm s3://$PERSONALIZE_BUCKET/ --recursive
sam delete

Amazon Personalize resources like dataset groups, datasets, etc. are not created via AWS CloudFormation, thus you have to delete them manually. Please follow the instructions in the official AWS documentation on how to clean up the created resources.

About the Authors

Pavlos Ioannou Katidis

Pavlos Ioannou Katidis

Pavlos Ioannou Katidis is an Amazon Pinpoint and Amazon Simple Email Service Specialist Solutions Architect at AWS. He loves to dive deep into his customer’s technical issues and help them design communication solutions. In his spare time, he enjoys playing tennis, watching crime TV series, playing FPS PC games, and coding personal projects.

Christian Bonzelet

Christian Bonzelet

Christian Bonzelet is an AWS Solutions Architect at DFL Digital Sports. He loves those challenges to provide high scalable systems for millions of users. And to collaborate with lots of people to design systems in front of a whiteboard. He uses AWS since 2013 where he built a voting system for a big live TV show in Germany. Since then, he became a big fan on cloud, AWS and domain driven design.

Updated requirements for US toll-free phone numbers

Post Syndicated from Brent Meyer original https://aws.amazon.com/blogs/messaging-and-targeting/updated-requirements-for-us-toll-free-phone-numbers/

Many Amazon Pinpoint customers use toll-free phone numbers to send messages to their customers in the United States. A toll-free number is a 10-digit number that begins with one of the following three-digit codes: 800, 888, 877, 866, 855, 844, or 833. You can use toll-free numbers to send both SMS and voice messages to recipients in the US.

What’s changing

Historically, US toll-free numbers have been available to purchase with no registration required. To prevent spam and other types of abuse, the United States mobile carriers now require new toll-free numbers to be registered as well. The carriers also require all existing toll-free numbers to be registered by September 30, 2022. The carriers will block SMS messages sent from unregistered toll-free numbers after this date.

If you currently use toll-free numbers to send SMS messages, you must complete this registration process for both new and existing toll-free numbers. We’re committed to helping you comply with these changing carrier requirements.

Information you provide as part of this registration process will be provided to the US carriers through our downstream partners. It can take up to 15 business days for your registration to be processed. To help prevent disruptions of service with your toll-free number, you should submit your registration no later than September 12th, 2022.

Requesting new toll-free numbers

Starting today, when you request a United States toll-free number in the Amazon Pinpoint console, you’ll see a new page that you can use to register your use case. Your toll-free number registration must be completed and verified before you can use it to send SMS messages. For more information about completing this registration process, see US toll-free number registration requirements and process in the Amazon Pinpoint User Guide.

Registering existing toll-free numbers

You can also use the Amazon Pinpoint console to register toll-free numbers that you already have in your account. For more information about completing the registration process for existing toll-free numbers, see US toll-free number registration requirements and process in the Amazon Pinpoint User Guide.

In closing

Change is a constant factor in the SMS and voice messaging industry. Carriers often introduce new processes in order to protect their customers. The new registration requirements for toll-free numbers are a good example of these kinds of changes. We’ll work with you to help make sure that these changes have minimal impact on your business. If you have any concerns about these changing requirements, open a ticket in the AWS Support Center.

How ERGO built an on-call support solution in a week

Post Syndicated from Sid Singh original https://aws.amazon.com/blogs/architecture/how-ergo-built-an-on-call-support-solution-in-a-week/

ERGO’s Technology & Services S.A. (ET&S) Cloud Solutions Department is a specialist team of cloud engineers who provide technical support for business owners, project managers, and engineering leads. The support team deals with complex issues, such as failed deployments, security vulnerabilities, environment availability, etc.

When an issue arises, it’s categorized as Priority 1 (P1) or Priority 2 (P2). For urgent P1 incidents, users contact the support team directly via phone. For P2 incidents, the workflow sends an issue description to the support team via SMS.

Originally, the SMS and voice forwarding systems were manually updated every Monday. For SMS, an operator manually updated the phone numbers in the system for the assigned support team members. For voice forwarding, support team members used physical phones, which were handed off from engineer to engineer per the support team roster.

These manual processes were time-consuming and occasionally error-prone. Additionally, with COVID-19 physical distancing measures in place, handing off physical devices was complicated. To keep up with the increasing number of support cases and the growth of their Cloud Solutions Department, ERGO worked with AWS to modernize and automate their manual workflow. We’ll show you how ERGO implemented a production-ready, on-call support solution with SMS and voice features in just one week using Amazon Connect and Amazon Pinpoint.

Automating the SMS on-call system

Let’s look at how we automated the SMS on-call support system, as shown in Figure 1 and summarized as follows:

  1. We use an open-source orchestration tool, Red Hat Ansible Automation Platform (Ansible), as a frontend to run the template “Assign to On-call SMS”.
  2. The template sets the parameter to a subset of support team members who are assigned to support P1/P2 cases. The assignment is based on the on-call shift schedule.
  3. Next, support team members are subscribed to the Amazon Simple Notification Service (Amazon SNS) topic’s subscriber list using an Ansible playbook.

Now the support team will receive SMS alerts.
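
Step 3 ultimately relies on the SNS Subscribe API: each on-call engineer’s phone number is subscribed to the topic over the SMS protocol. A minimal boto3 sketch of that call, with a hypothetical topic ARN and roster, looks like this:

import boto3

sns = boto3.client("sns")

ON_CALL_TOPIC_ARN = "arn:aws:sns:eu-central-1:123456789012:on-call-p1-p2"  # hypothetical
on_call_numbers = ["+491700000001", "+491700000002"]  # this week's roster (hypothetical)

for number in on_call_numbers:
    # Each subscription delivers topic messages to the engineer as an SMS.
    sns.subscribe(TopicArn=ON_CALL_TOPIC_ARN, Protocol="sms", Endpoint=number)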


Figure 1. Assign to on-call SMS workflow

Next, we integrated the SMS workflow with our ZIS IT monitoring tool to capture critical events and forward them via SMS to the support team, as shown in Figure 2:

  1. The Amazon Pinpoint phone number is set as the SMS destination in our monitoring tool.
  2. The monitoring tool then sends the SMS to Amazon Pinpoint, where:
    • Amazon Pinpoint publishes the payload it prepared to the Amazon SNS topic “Before Processing Message”, to which our AWS Lambda function “Extract messageBody” is subscribed; the function extracts the messageBody from the payload.
    • The extracted message is then published to the Amazon SNS topic “After Processing Message”, and the Amazon Pinpoint “Two-way SMS” feature sends the SMS to the support team members assigned to that topic.

Figure 2. On-call SMS workflow integration with Amazon Pinpoint

Also shown in Figure 2, we track our monthly SMS spending using Amazon CloudWatch. The SMSMonthToDateSpentUSD metric shows the amount spent sending SMS messages during the current month.
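
If you want to pull that figure programmatically, for example into your own reporting, a minimal boto3 sketch looks like the following. We assume here that the metric is published under the AWS/SNS namespace, so verify the namespace that applies in your account:

from datetime import datetime, timedelta

import boto3

cloudwatch = boto3.client("cloudwatch")

stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/SNS",                    # assumption: verify the namespace in your account
    MetricName="SMSMonthToDateSpentUSD",
    StartTime=datetime.utcnow() - timedelta(days=1),
    EndTime=datetime.utcnow(),
    Period=3600,
    Statistics=["Maximum"],                 # the metric is cumulative, so the latest maximum is the spend
)
if stats["Datapoints"]:
    latest = max(stats["Datapoints"], key=lambda d: d["Timestamp"])
    print(f"SMS spend this month so far: ${latest['Maximum']:.2f}")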

Why extract the messageBody before sending the SMS to the support team?

Amazon Pinpoint captures SMS from the monitoring tool in JSON format, which includes additional information, such as the origin and destination numbers, the message ID and related data, as shown in the following example:

{
  "originationNumber": "+14255550182",
  "destinationNumber": "+12125550101",
  "messageKeyword": "JOIN",
  "messageBody": "EXAMPLE",
  "inboundMessageId": "cae173d2-66b9-564c-8309-21f858e9fb84",
  "previousPublishedMessageId": "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
}

The support team only needs the messageBody, and the JSON format makes it difficult to read on a mobile phone. Therefore, we use a Lambda function for the “messageBody” extraction.
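
The extraction itself is small. The following is a minimal Python sketch of such a Lambda function, assuming the “Before Processing Message” topic invokes it and the ARN of the “After Processing Message” topic is supplied through a hypothetical environment variable:

import json
import os

import boto3

sns = boto3.client("sns")
# ARN of the "After Processing Message" topic (hypothetical environment variable name).
AFTER_PROCESSING_TOPIC_ARN = os.environ["AFTER_PROCESSING_TOPIC_ARN"]

def handler(event, context):
    for record in event["Records"]:
        # The SNS record's Message field carries the JSON payload shown above.
        payload = json.loads(record["Sns"]["Message"])
        # Forward only the human-readable text so the on-call engineer receives a clean SMS.
        sns.publish(TopicArn=AFTER_PROCESSING_TOPIC_ARN, Message=payload["messageBody"])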

Automating the voice forwarding system

The other half of our on-call solution is voice forwarding. As mentioned in the introduction, we had a physical phone and updated the call forwarding every Monday. This allowed us to forward calls to a single number, but this system had two main problems: it wasn’t scalable and it was prone to human error.

In our automated system, shown in Figure 3, all calls to the physical phone are forwarded to Amazon Connect, so we do not need to change the phone number itself.

This is how it’s set up:

  • The assigned phone numbers in Amazon Connect are attached to the Contact Flow “ERGO On-call Forwarding Voice”, which starts at the “Entry point” rectangle on the left side of the diagram.
  • In the next step, “Set logging behavior” captures the calling number, so we can see the number and return any missed calls.
  • Finally, the “Set working queue” step points to our routing profiles (in this case, we use a main line and a secondary line). The main line has support team members who are assigned to address P1 cases. The secondary line is for managers who will take the call if the support team members are not available.

When a customer is in a queue, the Amazon Connect contact flow tries to route the call to a support team member. If there’s no answer, the service re-routes the call to the next available support team member. After 30 seconds, if there is no answer on the first line (and no other support team members have become available), the service tries the secondary line.

To set this up:

  • Every support team member requires an Amazon Connect account. You can import their data via CSV to automate provisioning (see the provisioning sketch after this list).
  • If a support team member is shown as online but does not answer a call, Amazon Connect changes their status to offline. This way, an Amazon Connect admin can see the time and number of the missed call in the Amazon Connect Real-time metrics reports and can return the call when another team member or supervisor is available.
  • Figure 3 shows how Amazon Connect and CloudWatch monitor contact center health metrics like “MissedCalls” and generate alerts via Amazon Simple Notification Service (Amazon SNS), which sends email notifications to ensure calls are returned promptly. For more details on this integration pattern, refer to the Monitor and trigger alerts using Amazon CloudWatch for Amazon Connect blog post.
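
For the provisioning step mentioned above, the Amazon Connect CreateUser API can be scripted against a CSV export of the team roster. Here is a minimal boto3 sketch; the CSV layout and the instance, routing profile, and security profile IDs are hypothetical placeholders:

import csv

import boto3

connect = boto3.client("connect")

INSTANCE_ID = "your-connect-instance-id"        # hypothetical
ROUTING_PROFILE_ID = "your-routing-profile-id"  # hypothetical
SECURITY_PROFILE_ID = "your-agent-profile-id"   # hypothetical

# CSV columns assumed: username,first,last,email
with open("support_team.csv") as f:
    for row in csv.DictReader(f):
        connect.create_user(
            InstanceId=INSTANCE_ID,
            Username=row["username"],
            IdentityInfo={"FirstName": row["first"], "LastName": row["last"], "Email": row["email"]},
            PhoneConfig={"PhoneType": "SOFT_PHONE", "AutoAccept": False},
            SecurityProfileIds=[SECURITY_PROFILE_ID],
            RoutingProfileId=ROUTING_PROFILE_ID,
        )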

Figure 3. On-call voice forwarding workflow with Amazon Connect

Lessons learned

After creating an Amazon Connect instance, we claimed a phone number to place or receive calls. Requesting phone numbers from Amazon Connect to serve different customers in different countries was the most time-intensive part of the setup. Be aware that some countries have regulatory requirements, and this can increase the time and effort required. For example, requesting a German number and a Polish number will require different documents. To save time, we used international toll-free numbers. This allows us to provide support to people in all other countries without the caller incurring additional charges.

To help you with your implementation, you can find the list of ID requirements by country or AWS Region here, and AWS Support can provide more information.

Conclusion

Using managed services like Amazon Connect and Amazon Pinpoint allowed us to implement a scalable and pay-as-you-go on-call solution for technical support. The new automated setup is a huge improvement over the previous manual and error-prone workflow and enables us to easily onboard customers from new countries.

Looking ahead, we plan to explore using the Amazon Connect APIs to automate the management of an agent’s online/offline status, as well as building a skills-based routing workflow to accommodate a multi-lingual support team. You can read more about AWS Customer Engagement services here.

AWS Week In Review – July 11, 2022

Post Syndicated from Sébastien Stormacq original https://aws.amazon.com/blogs/aws/aws-week-in-review-july-11/

This post is part of our Week in Review series. Check back each week for a quick roundup of interesting news and announcements from AWS!

In France, we know summer has started when you see the Tour de France bike race on TV or in a city nearby. This year, the tour stopped in the city where I live, and I was blocked on my way back home from a customer conference to let the race pass through.

It’s Monday today, so let’s make another tour—a tour of the AWS news, announcements, or blog posts that captured my attention last week. I selected these as being of interest to IT professionals and developers: the doers, the builders that spend their time on the AWS Management Console or in code.

Last Week’s Launches
Here are some launches that got my attention during the previous week:

Amazon EC2 Mac M1 instances are generally available – this new EC2 instance type allows you to deploy Mac mini computers with M1 Apple Silicon running macOS using the same console, API, SDK, or CLI you are used to for interacting with EC2 instances. You can start and stop them, assign a security group or an IAM role, snapshot their EBS volume, and create an AMI from it, just like with Linux-based or Windows-based instances. It lets iOS developers create full CI/CD pipelines in the cloud without requiring someone on your team to reinstall various combinations of macOS and Xcode versions on on-prem machines. Some of you had the chance to enter the preview program for EC2 Mac M1 instances when we announced it last December. EC2 Mac M1 instances are now generally available.

AWS IAM Roles Anywhere – this is one of those incremental changes that has the potential to unlock new use cases on the edge or on-prem. AWS IAM Roles Anywhere enables you to use IAM roles for your applications outside of AWS to access AWS APIs securely, the same way that you use IAM roles for workloads on AWS. With IAM Roles Anywhere, you can deliver short-term credentials to your on-premises servers, containers, or other compute platforms. It requires an on-prem Certificate Authority registered as a trusted source in IAM. IAM Roles Anywhere exchanges certificates issued by this CA for a set of short-term AWS credentials limited in scope by the IAM role associated with the session. To make it easy to use, we provide a CLI-based signing helper tool that can be integrated into your CLI configuration.

A streamlined deployment experience for .NET applications – the new deployment experience focuses on the type of application you want to deploy instead of individual AWS services by providing intelligent compute recommendations. You can find it in the AWS Toolkit for Visual Studio using the new “Publish to AWS” wizard. It is also available via the .NET CLI by installing AWS Deploy Tool for .NET. Together, they help you transition easily from a prototyping phase in Visual Studio to automated deployments. The new deployment experience supports ASP.NET Core, Blazor WebAssembly, console applications (such as long-lived message processing services), and tasks that need to run on a schedule.

For a full list of AWS announcements, be sure to keep an eye on the What’s New at AWS page.

Other AWS News
This week, I also learned from these blog posts:

TLS 1.2 to become the minimum TLS protocol level for all AWS API endpoints – this article was published at the end of June, and it deserves more exposure. Starting in June 2022, we will progressively transition all our API endpoints to TLS 1.2 only. The good news is that 95 percent of the API calls we observe are already using TLS 1.2, and only five percent of the applications are impacted. If you have applications developed before 2014 (using a Java JDK before version 8 or .NET before version 4.6.2), it is worth checking them and updating them to use TLS 1.2. When we detect that your application is still using TLS 1.0 or TLS 1.1, we inform you by email and in the AWS Health Dashboard. The blog article goes into detail about how to analyze AWS CloudTrail logs to detect any API call that would not use TLS 1.2.

How to implement automated appointment reminders using Amazon Connect and Amazon Pinpoint – this blog post guides you through the steps to implement a system that automatically calls your customers to remind them of their appointments. This automated outbound campaign for appointment reminders checks the campaign list against a “do not call” list before making an outbound call. Your customers are able to confirm automatically or reschedule by speaking to an agent. You monitor the results of the calls on a dashboard in near real time using Amazon QuickSight. It provides you with AWS CloudFormation templates for the parts that can be automated and detailed instructions for the manual steps.

Using Amazon CloudWatch metrics math to monitor and scale resources – AWS Auto Scaling is one of those capabilities that may look like magic at first glance. It uses metrics to make scale-out or scale-in decisions. Most customers I talk with struggle a bit at first to define the correct combination of metrics that allows them to scale at the right moment. Scaling out too late impacts your customer experience, while scaling out too early impacts your budget. This article explains how to use metric math, a way to query multiple Amazon CloudWatch metrics and use math expressions to create new time series based on these metrics. These math metrics may, in turn, be used to trigger scaling decisions. The typical use case would be to mathematically combine CPU, memory, and network utilization metrics to decide when to scale in or to scale out.
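
To illustrate the idea, here is a minimal boto3 sketch (not from the original article) that combines CPU and network utilization into a single series with a metric math expression; the instance ID and the weighting formula are hypothetical:

from datetime import datetime, timedelta

import boto3

cloudwatch = boto3.client("cloudwatch")

response = cloudwatch.get_metric_data(
    MetricDataQueries=[
        {"Id": "cpu", "MetricStat": {
            "Metric": {"Namespace": "AWS/EC2", "MetricName": "CPUUtilization",
                       "Dimensions": [{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}]},
            "Period": 300, "Stat": "Average"}, "ReturnData": False},
        {"Id": "net", "MetricStat": {
            "Metric": {"Namespace": "AWS/EC2", "MetricName": "NetworkIn",
                       "Dimensions": [{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}]},
            "Period": 300, "Stat": "Average"}, "ReturnData": False},
        # Metric math: weight CPU against network traffic (bytes scaled to MB) into one signal.
        {"Id": "load", "Expression": "0.7 * cpu + 0.3 * (net / 1000000)", "ReturnData": True},
    ],
    StartTime=datetime.utcnow() - timedelta(hours=1),
    EndTime=datetime.utcnow(),
)
print(response["MetricDataResults"][0]["Values"])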

How to use Amazon RDS and Amazon Aurora with a static IP address – in the cloud, it is better to access network resources by referencing their DNS name instead of IP addresses. IP addresses come and go as resources are stopped, restarted, scaled out, or scaled in. However, when integrating with older, more rigid environments, you might need, for a limited period of time, to authorize access through a static IP address. You have probably heard that scary phrase: “I have to authorize your IP address in my firewall configuration.” This new blog post explains how to do so for an Amazon Relational Database Service (Amazon RDS) database. It uses a Network Load Balancer and traffic forwarding at the Linux-kernel level to proxy your actual database server.

Amazon S3 Intelligent-Tiering significantly reduces storage costs – we estimate our customers have saved up to $250 million in storage costs since we launched S3 Intelligent-Tiering in 2018. A recent blog post describes how Amazon Photos, a service that provides unlimited photo storage and 5 GB of video storage to Amazon Prime members in eight marketplaces worldwide, uses S3 Intelligent-Tiering to significantly save on storage costs while storing hundreds of petabytes of content and billions of images and videos on S3.

Upcoming AWS Events
Check your calendars and sign up for these AWS events:

AWS re:Inforce is the premier cloud security conference, July 26-27. This year it is hosted at the Boston Convention and Exhibition Center, Massachusetts, USA. The conference agenda is available and there is still time to register.

AWS Summit Chicago, August 25, at McCormick Place, Chicago, Illinois, USA. You may register now.

AWS Summit Canberra, August 31, at the National Convention Center, Canberra, Australia. Registrations are already open.

That’s all for this week. Check back next Monday for another tour of AWS news and launches!

— seb

New – High Volume Outbound Communication with Amazon Connect Outbound Campaigns

Post Syndicated from Sébastien Stormacq original https://aws.amazon.com/blogs/aws/new-high-volume-outbound-communication-with-amazon-connect-outbound-campaigns/

The new high-volume outbound communication capability in Amazon Connect, which was announced at Enterprise Connect last year, is now generally available. It is named Amazon Connect outbound campaigns.

If you haven’t heard about Amazon Connect, it is an easy-to-use cloud contact center service that helps companies of any size deliver superior customer service at lower cost. You can read the original blog post Jeff wrote at launch in 2017, with amazing Lego art 🙂

Contact centers not only receive calls and communications, but they also send outbound communications to customers. There are a variety of reasons to send outbound communication: appointment reminders, telemarketing, subscription renewals, and billing reminders. The vast majority of these communications are phone calls, and in many contact centers, agents make the calls manually using customer contact lists in external systems. Since customers only answer about ten percent of calls, these agents can spend nearly half of their time dialing and waiting. This can result in millions of dollars in lost productivity each year for a contact center with as few as 200 agents.

To help you address this challenge, today we are adding Amazon Connect outbound campaigns, a set of high-volume outbound communication capabilities that allows you to proactively reach more of your customers across voice, SMS, and email. With this capability, you have a scalable way to proactively reach hundreds to millions of your customers, increase your agents’ productivity, and lower your operational costs.

Amazon Connect outbound campaigns delivers a predictive phone dialer. The dialer includes an answering machine detection system powered by machine learning. It automatically detects answering machines for voice calls and passes calls to agents only when the call is answered by a human. The dialer also adjusts the call rate depending on factors such as the percentage of calls answered by a human, call duration, and agent availability. No integration is required to benefit from existing Amazon Connect features, such as automated workflows, routing, and machine learning capabilities like Contact Lens. You now have a single system for inbound and outbound communications.

To further refine the customer experience or use multiple channels in your campaigns, for example, to send an SMS or email message to your customers when they do not answer calls, you have the option to use Amazon Pinpoint. Amazon Pinpoint is a flexible and scalable outbound and inbound marketing communications service. It allows you to define customer segments, define the customer journey, define the contact strategy, and more. Amazon Pinpoint is the system handling high-volume SMS and email campaigns.

To better understand how Amazon Connect, Amazon Pinpoint, and other AWS services work together, you can refer to this very detailed blog post.

Let’s show you how it works
Imagine I am a contact center manager, and I want to create an outbound call campaign to target a selected list of customers.

I first import my customer contact list from a spreadsheet on Amazon S3. I may also import it from popular customer relationship management (CRM) and marketing automation applications, such as Marketo, Salesforce, Twilio’s Segment, ServiceNow, Shopify, Zendesk, and Amazon Pinpoint itself.

Amazon Connect outbound campaigns - import contact 2
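
If you prefer to script this step, one option is to turn the same S3 file into an Amazon Pinpoint segment with an import job. A minimal boto3 sketch, with a hypothetical project ID, bucket, and IAM role:

import boto3

pinpoint = boto3.client("pinpoint")

pinpoint.create_import_job(
    ApplicationId="your-pinpoint-project-id",  # hypothetical
    ImportJobRequest={
        "Format": "CSV",
        "S3Url": "s3://my-contact-lists/outbound/campaign-contacts.csv",        # hypothetical
        "RoleArn": "arn:aws:iam::123456789012:role/PinpointSegmentImportRole",  # hypothetical
        "DefineSegment": True,
        "SegmentName": "outbound-campaign-contacts",
    },
)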

Then I create a campaign and define some journey parameters: the communication channel, the start time, and the corresponding content, such as a call script, email template, or SMS message. At the scheduled start time, the journey is executed using Amazon Connect for calls or Amazon Pinpoint for SMS or emails, as specified.

Amazon Connect outbound campaigns - create campaign

When I configure the campaign to run in Predictive dial mode, as I mentioned before, the dialer automatically adjusts the dial rate based on the duration of calls and the real-time availability of agents. Once a call is answered, Amazon Connect distinguishes whether it is a live voice or a recorded message and routes the live customer to an available agent in the Amazon Connect agent application, where the agent can see the call script that I specified during setup, along with relevant customer information.

As explained earlier, I may use Amazon Pinpoint to define the customer journey. By doing so, I can combine voice, email, and SMS channels in the same outbound communication campaign to improve the efficiency of my agents and my customer’s experience. For example, a financial institution can use Amazon Connect to send an SMS notification to remind a customer of a missed payment and include a link to request a call back from an agent. When a call is requested, Amazon Connect automatically queues the call, dials the customer’s number, detects their voice, and connects an available agent to the customer.

Amazon Connect outbound campaigns - journey workflow

Amazon Pinpoint allows you to define the details of the customer journey.

Amazon Connect outbound campaigns - setup quiet times

As usual with AWS services, I can analyze contact events sent via Amazon EventBridge. EventBridge is a serverless event bus that makes it easier to build event-driven applications at scale using events generated from your applications, integrated software-as-a-service (SaaS) applications, and AWS services. When filtering or analyzing events posted to EventBridge, I can create metrics such as time to connect to an agent, duration of the contact, and call abandonment rate.

These metrics help me understand the status of my campaign and ensure compliance with applicable regulations, such as maximum call abandonment rates. I also can use historical reports of these metrics to understand the effectiveness of all my communications campaigns over time.
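
As a sketch of how those events could be consumed, the following boto3 snippet creates an EventBridge rule that forwards Amazon Connect contact events to a metrics Lambda function. The event pattern and the target ARN are assumptions; verify the source and detail-type your instance actually emits:

import json

import boto3

events = boto3.client("events")

events.put_rule(
    Name="outbound-campaign-contact-events",
    EventPattern=json.dumps({
        "source": ["aws.connect"],                        # assumption
        "detail-type": ["Amazon Connect Contact Event"],  # assumption
    }),
    State="ENABLED",
)
events.put_targets(
    Rule="outbound-campaign-contact-events",
    Targets=[{
        "Id": "metrics-fn",
        "Arn": "arn:aws:lambda:us-east-1:123456789012:function:campaign-metrics",  # hypothetical
    }],
)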

Amazon Connect outbound campaigns - journey metrics

Speaking of compliance, we do not want anyone to abuse the system, intentionally or not, or to break any local compliance rules.

Access and Compliance
Using automated services to drive outbound communication campaigns is strictly regulated in several countries and territories. For example, the US adopted the Telephone Consumer Protection Act (TCPA) in 1991, and the United Kingdom’s Office of Communications has similar rules.

Amazon Connect outbound campaigns gives you the tools to stay compliant with these regulations and many others. However, just like with traditional IT security, it is a shared responsibility. It is your responsibility to use the service in a compliant manner. We are happy to assist you in addressing specific use cases.

Let’s share two examples to illustrate how Amazon Connect outbound campaigns can help you meet your compliance obligations: respecting quiet times and monitoring the call abandonment rate.

The use of quiet times allows contact center managers to configure a schedule for channel communications based on the day of the week and the hours of the day. More precise delivery times mean your customers are more likely to engage with the communication, which increases metrics such as open rates for SMS and email, as well as pick-up rates for voice calls. It also allows contact center managers to follow country and state-level voice dialing legislation. The following screenshot shows how you can configure quiet times using Amazon Pinpoint.

Amazon Connect outbound campaigns - quiet times
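
Quiet time can also be set programmatically. Here is a minimal boto3 sketch that creates an Amazon Pinpoint SMS campaign with a quiet-time window; the project ID, segment ID, and message body are placeholders:

import boto3

pinpoint = boto3.client("pinpoint")

pinpoint.create_campaign(
    ApplicationId="your-pinpoint-project-id",  # hypothetical
    WriteCampaignRequest={
        "Name": "missed-payment-reminders",
        "SegmentId": "your-segment-id",        # hypothetical
        "MessageConfiguration": {
            "SMSMessage": {
                "Body": "You missed a payment. Reply CALL to talk to an agent.",
                "MessageType": "TRANSACTIONAL",
            },
        },
        "Schedule": {
            "StartTime": "IMMEDIATE",
            "Frequency": "ONCE",
            "Timezone": "UTC",
            # Do not deliver between 21:00 and 08:00 local time for each endpoint.
            "QuietTime": {"Start": "21:00", "End": "08:00"},
        },
    },
)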

According to TCPA, the call abandonment rate is the percentage of calls picked up by a live customer but not connected to a live agent within two seconds after the customer greeting. I found it interesting that in the UK, the time is measured from the start of the customer’s greeting, while in the US, it is measured from the end of the greeting. Amazon Connect outbound campaigns provides you with metrics, such as customerGreetingStart, customerGreetingStop, and connectedToAgent, for each outbound communication. Contact center managers can use these to compute the abandonment rate and dial up or down the outgoing communication channel accordingly.
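
To make the computation concrete, here is a minimal Python sketch of the abandonment-rate calculation based on those three timestamps. The event shape is an assumption, so adapt the field handling to the events you actually collect:

from datetime import datetime

def is_abandoned(event: dict, threshold_seconds: float = 2.0) -> bool:
    """A call counts as abandoned if no agent connected within the threshold
    after the end of the customer's greeting (US-style measurement)."""
    greeting_stop = datetime.fromisoformat(event["customerGreetingStop"])
    connected = event.get("connectedToAgent")
    if connected is None:
        return True
    delay = (datetime.fromisoformat(connected) - greeting_stop).total_seconds()
    return delay > threshold_seconds

def abandonment_rate(events: list[dict]) -> float:
    # Only calls picked up by a live customer count toward the rate.
    answered = [e for e in events if "customerGreetingStart" in e]
    if not answered:
        return 0.0
    return sum(is_abandoned(e) for e in answered) / len(answered)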

Other metrics, configuration parameters, and AWS Lambda API integrations allow contact center managers to consult a Do-Not-Call (DNC) registry, perform list scrubbing, and verify a customer’s local time zone or bank holiday calendar, to name a few.

Pricing and Availability
Amazon Connect outbound campaigns is available in US East (N. Virginia), US West (Oregon), Asia Pacific (Sydney), and Europe (London) AWS Regions. This allows you to start your outbound campaigns for customers in the USA, UK, Australia, and New Zealand.

As usual, pricing is based on your usage; you only pay for what you use, with no upfront costs or minimum engagement. The key metric used for pricing is minutes of outbound calls. The pricing page has all the details.

And now, go build your contact centers.

— seb