Tag Archives: Amazon Pinpoint

Reaching your global players – message on a single, scalable communication hub

Post Syndicated from Richard Perez original https://aws.amazon.com/blogs/messaging-and-targeting/reaching-your-global-players-message-on-a-single-scalable-communication-hub/

Introduction

AWS provides solutions to industry challenges, enabling customers to mitigate the risk of potentially significant financial and reputational damage. This blog discusses how customers that have adopted Amazon Pinpoint have been able to deliver time-sensitive messages reliably and securely. Specifically, Pinpoint can mitigate the risk of reaching players who have voluntarily self-excluded from marketing communications.

Regulated workloads in the betting and gaming industry refer to specific activities, systems, and processes that are subject to government regulations, oversight, and compliance requirements. The role of the gambling regulator is to ensure adherence to these requirements and maintain fairness, security, and responsible behavior within the industry. Regulators also exist to protect players in their jurisdiction by regulating the businesses that provide gambling services, imposing controls on operators, Independent Software Vendors (ISVs), and players themselves. By way of example, the UK Gambling Commission (GC) is one such organization, with responsibility for regulating the individuals and businesses that provide gambling services in Great Britain. The UK GC and its counterparts across the globe regulate and license betting and gaming businesses to ensure they adhere to standards, and investigate and prosecute those that breach regulations and the conditions of their licenses.

Player engagement

Player engagement in the Betting and Gaming industry refers to the level of involvement, enjoyment, and interaction that individuals experience when participating in various gambling activities. It encompasses aspects like personalized gaming experiences, social interactions, rewards, and challenges. Maintaining a strong relationship with players is vital for operators as it fosters loyalty, increases player retention, and drives engagement. Engaged players are more likely to return, deposit, and recommend the brand to others. Additionally, a positive operator-regulator relationship that promotes responsible gambling and regulatory compliance creates a trustworthy reputation, a key component of an operator’s long-term success in the industry. Player expectations and betting options are constantly changing and require operators to develop more engaging and sophisticated games to maintain participation and loyalty. At their core, player engagement strategies typically include:

Responsible Gaming

Proactively preventing messages from being delivered to players that have self-excluded, mitigating the risk of incurring fines; sending an SMS message to interrupt gambling behavior; or sending an email with guidance or support.

Transactional history

Sending statements of fact about completed transactions, and providing information to the player about bets placed and suspicious account activity. Real-time notifications can include game results or changes in betting odds. These can be delivered in multi-channel formats, subject to the player’s preferences.

Player Engagement

Deliver personalized notifications that are tailored to the gaming experience of the player. This can include game suggestions based on the player’s preferences, customized bonuses, or recommendations for in-game actions or new games. After confirming that a player is not on an exclusion list and has opted in to receive messaging, you can send notifications with bonuses and promotions offering attractive incentives to maintain player engagement. These can include welcome bonuses, free game access, and loyalty programs.

Account settings & Security messaging

This can take the form of SMS to provide seamless account creation and onboarding verification, and to deliver one-time passwords that accelerate player identification. Access codes can also be delivered, where a unique, time-sensitive code grants the player access to a particular game.

Responsible Gaming

Operators face several challenges when it comes to maintaining player engagement while adhering to regulator-mandated responsible gambling standards, requiring them to balance engagement and retention strategies with player well-being and safe-play requirements. A common challenge faced by operators is ingesting multiple self-exclusion lists to ensure self-excluded users do not receive engagement and retention offers. Operators employ various measures, such as educational initiatives that provide players with information about responsible gambling, responsive customer support that addresses player issues and queries promptly, and machine learning to identify problematic gambling and take proactive action. It is also worth noting that operators collaborate with problem gambling support organizations to provide resources and assistance to affected individuals.

Players can also take decisive action themselves and are free to opt out of all gambling notifications for a set length of time, usually between six months and five years. The gambling operators in that jurisdiction are required to observe this request by law or face million-dollar fines and reputational damage. With increased player participation, ensuring you have the right tools and services in place is critical, as highlighted by an AWS customer:

Betting and Gaming has become one of the fastest growing sectors in the last 20 years. Every day, millions of people use these platforms for their betting and entertainment needs. tombola recognize the importance of having a reliable and scalable outbound communication hub for engaging with and protecting our players. Reliable communication is essential for a safe, engaging and sustainable customer experience.

James Conway, Technology Director, tombola

How to build Player Engagement solutions in the Betting and Gaming industry with AWS

AWS offers robust solutions that can benefit the Betting and Gaming industry. For the challenges discussed in this blog, AWS provides the infrastructure and services necessary to address them effectively.

In a series of blogs, we will be sharing how AWS and Amazon Pinpoint can support industry-focused use cases. Other solutions in this series are:

  • How you can use AI/ML and generative AI to create player recommendations that improve player engagement
  • How you can use Amazon Pinpoint for improved onboarding and user authentication
  • How to identify player trends and next best actions using Amazon Pinpoint and AI/ML

The first in this four-part series is how we are supporting customers with responsible gaming, and how they can mitigate the risk of contacting players that have chosen not to receive any more communication.

Amazon Pinpoint is a key service in this solution. Pinpoint is a purpose-built customer engagement service with multi-channel capabilities for message delivery, management, and optimization. It allows AWS customers to send personalized messages, promotions, and notifications to users via various channels through a single console. Customers view Amazon Pinpoint as their communication hub because they can consolidate the services and tools used for their outbound channels into just one. It becomes the central service for managing and facilitating communications between different stakeholders, and it streamlines the work of front- and back-office teams so they can concentrate on the player’s experience.

Customers are looking for an easy, optimized, and consolidated solution to manage their digital communications. They often have a different service or tool for each communication channel, causing operational complexity that can lead to system issues, inefficiencies, and breaches of regulatory obligations. Operators face issues when different teams and stakeholders have to sync between their customer data repository and these services and tools, and maintaining the integrity of these systems can be cumbersome and increases risk in the event of a sync or systems error. By maintaining a single service for outbound communications and simplifying the customer data, you are able to innovate and deliver player-centric solutions. Below we have provided a selection of use cases focusing on player engagement.

Responsible Gaming – Mitigate the Risk of Targeting Self-Excluded Players on AWS

Under UK gaming regulations, operators are required to observe a self-exclusion list managed by GAMSTOP. Amazon Pinpoint queries GAMSTOP’s API in real time before sending a message to a customer. This is possible with Amazon Pinpoint’s Campaign Hooks and Journey Hooks. Campaign and Journey hooks can also be used for other use cases that require reaching out to a third-party service in real time, such as checking a player’s balance before sending a message. GAMSTOP is used in this example, but the pattern applies to any self-exclusion service or an internal exclusion list.
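As a rough illustration of the hook pattern, the sketch below shows a Lambda function that receives a campaign’s endpoints, checks each address against an exclusion service, and returns only the endpoints that are safe to message. The exclusion-check URL and response format are hypothetical placeholders, not GAMSTOP’s actual API contract.

import urllib3  # bundled with the AWS Lambda Python runtime

http = urllib3.PoolManager()

def is_self_excluded(address):
    """Hypothetical exclusion-list check; substitute your provider's real API."""
    resp = http.request(
        "GET", "https://exclusion.example.com/check", fields={"address": address}
    )
    return resp.status == 200 and b'"excluded": true' in resp.data

def lambda_handler(event, context):
    """Campaign/Journey hook: drop endpoints whose address is self-excluded."""
    endpoints = event.get("Endpoints", {})
    # Return only the endpoints that may be messaged; Pinpoint sends to what we return.
    return {
        endpoint_id: endpoint
        for endpoint_id, endpoint in endpoints.items()
        if not is_self_excluded(endpoint.get("Address", ""))
    }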

tombola, who operate the largest online bingo website in Europe and successfully adopted Pinpoint across all their territories, said:

‘At Tombola, we needed a communication tool that helped us deliver on one of our core values: responsible gambling. Players can opt out of marketing communications from online gambling operators by registering themselves on an independent self-exclusion scheme. We needed a system that helped deliver consistency with evolving requirements and ensure, in real-time, we have the controls in place not to contact the wrong people. Amazon Pinpoint allowed us to consolidate the various communication systems we had deployed across territories and allowed us to continually evaluate the content and the recipients of our communication. Only Pinpoint’s system could keep up with our shifting requirements, so we can easily connect with the players who want to hear from us, but never contact those who had self-excluded.’

About the Authors

Richard Perez

Richard Perez

Richard Perez is a Principal Go-to-Market Specialist for Communication Developer Services at AWS. He enjoys diving deep into customers’ issues and helping to deliver communication solutions. In his spare time, he enjoys watching and participating in sports, and reading historical novels.

Lower Your Risk of SMS Fraud with Country Level Blocking and Amazon Pinpoint

Post Syndicated from Brett Ezell original https://aws.amazon.com/blogs/messaging-and-targeting/lower-your-risk-of-sms-fraud-with-country-level-blocking-and-amazon-pinpoint/

As a developer, marketing professional, or someone in the communications space, you’re likely familiar with the importance of SMS messaging in engaging customers and driving valuable interactions. However, you may have also encountered the growing challenge of artificially inflated traffic (AIT), also known as SMS pumping. A new report co-authored by Enea revealed that AIT is so widespread within the SMS ecosystem that 19.8B-35.7B fraudulent messages were sent by bad actors in 2023, incurring substantial costs of over $1 billion. In this blog post, we’ll explore how you can use Protect configurations, a powerful set of capabilities within Amazon Pinpoint SMS that provides granular control over which destination countries your SMS, MMS, and voice messages can be sent to.

What is SMS Pumping, aka Artificially Inflated Traffic (AIT)?

AIT, or SMS pumping, is a type of fraud where bad actors use bots to generate large volumes of fake SMS traffic. These bots target businesses whose websites, apps, and other assets have forms or other mechanisms that trigger SMS being sent out. Common use cases for SMS, such as one-time passwords (OTPs), app download links, and promotion signups, are all targets for these bad actors to “pump” SMS and send out fraudulent messages. The goal is to artificially inflate the number of SMS messages a business sends, resulting in increased costs and a negative impact on the sender’s reputation. In SMS-based AIT, the prevalent method for bad actors to profit involves colluding with parties in the SMS delivery chain to receive a portion of the revenue generated from each fraudulent message sent.


AIT poses several challenges for businesses:

  1. Overspending: The fake SMS traffic generated by AIT bots results in businesses paying for messages that yield no actual results.

  2. Interrupted service: Large volumes of AIT can force businesses to temporarily halt SMS services, disrupting legitimate customer communications.

  3. Diverted focus: Dealing with AIT can shift businesses’ attention away from core operations and priorities.

  4. Reduced deliverability: Messages that are never interacted with, and large volumes of SMS sent in quick succession, lower deliverability rates.

How does Protect mitigate AIT?

Amazon Pinpoint’s Protect feature allows you to control which countries you can send messages to. This is beneficial if your customers are located in specific countries.

With Protect, you can create a list of country rules that either allow or block messages to each destination country. These country rules can be applied to SMS, MMS, and voice messages sent from your AWS account. The Protect configurations you create enable precise control over which destination countries your messages can be sent to. This helps mitigate the impact of AIT by allowing you to tailor where you do or do not send.

Protect offers flexibility in how the country rules are applied. You can apply them at the account level, the configuration set level, or the individual message level. This enables you to customize your AIT mitigation strategy to best fit your business needs and messaging patterns.

By leveraging Protect within Amazon Pinpoint, you can help ensure the integrity and cost-effectiveness of your SMS, MMS, and voice communications.

Account-level Protect Configuration

The simplest approach is to create a Protect configuration and associate it as the account default. This means the allow/block rules defined in that configuration will be applied across all messages sent from your account, unless overridden. This is an effective option if you only need one set of country rules applied universally.

Configuration set-specific Protect configuration

You can associate a Protect configuration with one or more of your Pinpoint SMS configuration sets (see the sketch after this list). This allows you to apply different country rules to distinct messaging flows or use cases within your application without changing your existing code if you are already using configuration sets. It also enables more detailed logging and monitoring of the Protect configuration’s impact, such as:

  1. Error Logs: Logging of any errors or issues encountered when messages are sent, providing insights into how the Protect configuration is affecting message delivery.
  2. Audit Logs: Records of all configuration changes, access attempts, and other relevant activities related to the Protect configuration, allowing for comprehensive auditing and monitoring.
  3. Usage Metrics: Tracking of usage statistics, such as the number of messages sent to different countries, the impact of the Protect configuration on message volumes, and other usage-related data.
  4. Compliance and Policy Enforcement Logs: Documentation of how the Protect configuration is enforcing compliance with messaging policies and regulations, including any instances where messages are blocked or allowed based on the configuration rules.
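As one hedged example, assuming a configuration set already exists, associating a Protect configuration with it through the SMS and Voice v2 API could look like the following boto3 sketch (both identifiers are placeholders):

import boto3

client = boto3.client("pinpoint-sms-voice-v2", region_name="us-east-1")

# Attach an existing Protect configuration to a configuration set so that
# messages sent through that set inherit its country rules.
client.associate_protect_configuration(
    ConfigurationSetName="my-config-set",            # placeholder name
    ProtectConfigurationId="protect-abc123example",  # placeholder ID
)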

Dynamic Protect configuration per message

If your needs are even more specific, you can create a Protect configuration without any association, and then specify its ID when sending messages via the Pinpoint APIs (e.g. SendMediaMessage, SendTextMessage, SendVoiceMessage). This gives you the ability to dynamically choose the Protect configuration to apply for each individual message, providing the ultimate flexibility.
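For instance, a per-message override with the V2 SendTextMessage API might look like the following boto3 sketch; the phone number and IDs below are placeholders:

import boto3

client = boto3.client("pinpoint-sms-voice-v2", region_name="us-east-1")

# Apply a specific Protect configuration to this one message only.
client.send_text_message(
    DestinationPhoneNumber="+447700900000",          # placeholder number
    OriginationIdentity="DEMOTP",                    # Sender ID, pool, or number
    MessageBody="Your one-time code is 123456",
    ProtectConfigurationId="protect-abc123example",  # placeholder ID
)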

Regardless of the approach, the core benefit of Protect configurations is the ability to precisely control which destination countries your messages may be sent to. Blocking countries where you don’t have a presence or where SMS pricing is high eliminates your exposure to fraudulent AIT traffic originating from those regions. This helps protect your messaging budget, maintain service continuity, and focus your efforts on legitimate customer interactions.

Who should use Protect configurations?

Protect configurations are designed to benefit a wide range of AWS customers, particularly those who:

  1. Send SMS messages to a limited number of countries: If your business primarily operates in a few specific countries, Protect configurations can help you easily block SMS messages to countries where you don’t have a presence, reducing the risk of AIT.
  2. Have experienced AIT issues in the past: If your business has been a target of SMS pumping, Protect configurations can help you regain control over your SMS communications and prevent future AIT attacks.
  3. Want to proactively protect their SMS messaging: Even if you haven’t encountered AIT issues yet, Protect configurations can help you stay ahead of the curve and maintain the integrity of your SMS communications.

How to create a country rules list with Protect configuration

To begin building a list of country rules that allow or block messages to specific destination countries, you start by creating a new Protect configuration. There are two ways to accomplish this, either by using the console, or the API.

Option 1 – Using the AWS Console

Console Scenario: My account is out of the sandbox and I only want to send to 1 country – United Kingdom (iso:GB) using the SenderID “DEMOTP”.

At a high level, we will follow the three steps outlined below for each method. In our examples, we used a Sender ID as our originator. However, the same process can be achieved using any originator you’d like, e.g., Sender ID, phone pool, phone number, 10DLC, short code, etc.

  1. Create SenderID (Optional if you already have one)
  2. Create Protect Configuration
  3. Send Test Message via console


1) Create SenderID for United Kingdom (GB)

  • Pinpoint SMS Console – Request Originator
    • Select United Kingdom (GB) and follow the prompts for a SenderID. DO NOT select Two-way SMS Messaging
    • Enter Sender ID – Example: DEMOTP
    • Confirm and Request

2) Create default Protect Configuration

    • From the Pinpoint SMS console, open Protect configurations and choose Create protect configuration

    • Search for Country=United Kingdom then deselect United Kingdom


    • Set as Account Default and select Create protect configuration


3) Send a test message with SMS simulator

Note: The Pinpoint SMS Simulator provides special phone numbers you can use to send test text messages and receive realistic event records, all within the confines of the Amazon Pinpoint service. These simulator phone numbers are designed to stay entirely within the Pinpoint SMS ecosystem, ensuring your test messages don’t get sent over the carrier network.

You can use these simulator phone numbers to send both SMS and MMS messages, allowing you to thoroughly validate your message content, workflow, and event handling. The responses you receive will mimic either success or failure, depending on which destination simulator number you send to.

  • From the Pinpoint SMS Console SMS Simulator page,
    • For Originator, Choose Sender ID, and select your Sender ID created from earlier.
    • Under Destination number, select Simulator numbers and choose United Kingdom (GB). Enter a test message in the Message body.
    • Finally, choose send test message. This should prompt a green “Success” banner at the top of your page.


    • Conversely, follow the previous test message steps, but instead attempt to send to anywhere other than the United Kingdom (GB) – in this example, Australia (AU).
    • This message is blocked because the account is configured to send only to GB.


Option 2 – Using the V2 API and CLI

V2 API Scenario: My account is out of the sandbox and I want to BLOCK only 1 country – Australia (AU) – while using the SenderID “DEMOTP”.

1) Create SenderID for GB

Note: before using the CLI, remember to configure your access and secret key using aws configure. Windows users should use PowerShell rather than cmd to test.

  • Use RequestSenderId to create the same Sender Id as previously made via the console.
aws pinpoint-sms-voice-v2 request-sender-id --iso-country-code "GB" --sender-id "DEMOTP"

Response:

{
   "DeletionProtectionEnabled": False,
   "IsoCountryCode": "GB",
   "MessageTypes": [ "TRANSACTIONAL" ],
   "MonthlyLeasingPrice": "0.00",
   "Registered": False,
   "SenderId": "DEMOTP",
   "SenderIdArn": "string"
}

2) Create default Protect Configuration

aws pinpoint-sms-voice-v2 create-protect-configuration --tags Key=Name,Value=CLITESTING

Response:

{
   "AccountDefault": False,
   "CreatedTimestamp": number,
   "DeletionProtectionEnabled": False,
   "ProtectConfigurationArn": "string",
   "ProtectConfigurationId":  "string",
   "Tags": [ 
      { 
         "Key": "Name",
         "Value": "CLITESTING"
      }
   ]
}

  • Add AU as BLOCKED on protect configuration.
aws pinpoint-sms-voice-v2 update-protect-configuration-country-rule-set --protect-configuration-id protect-string --number-capability 'SMS' --country-rule-set-updates '{\"AU\":{\"ProtectStatus\":\"BLOCK\"}}'

Response:

{
   "CountryRuleSet": { 
      "string" : { 
         "ProtectStatus": "ALLOW | BLOCK"
      }
   },
   "NumberCapability": "SMS",
   "ProtectConfigurationArn": "string",
   "ProtectConfigurationId": "string"
}

  • Set as account default configuration.
aws pinpoint-sms-voice-v2 set-account-default-protect-configuration --protect-configuration-id protect-string

Response:

{
   "DefaultProtectConfigurationArn": "string",
   "DefaultProtectConfigurationId": "string"
}

3) Send test message

  • Use SendTextMessage to test your Protect Configuration via the V2 API.
  • Test sending to GB Simulator Number should be successful.
aws pinpoint-sms-voice-v2 send-text-message --destination-phone-number "string" --message-body "string"

Response:

{
   "MessageId": "string"
}

  • Test sending to AU Simulator Number should be blocked.
aws pinpoint-sms-voice-v2 send-text-message --destination-phone-number "string" --message-body "string"

Response – (ConflictException):

An error occurred (ConflictException) when calling the SendTextMessage operation: Conflict Occurred - Reason="DESTINATION_COUNTRY_BLOCKED_BY_PROTECT_CONFIGURATION" ResourceType="protect-configuration" ResourceId="string"

Conclusion

As SMS messaging continues to play a crucial role in customer engagement and authentication, protecting your communications from AIT is more important than ever. Amazon Pinpoint Protect provides a powerful and user-friendly solution to help you mitigate the impact of SMS pumping, ensuring the integrity of your SMS channels and preserving your business’ reputation and resources. Whether you’re a small business or a large enterprise, Pinpoint Protect is a valuable tool to have in your arsenal as you navigate the evolving landscape of SMS messaging.

To get started with Pinpoint SMS Protect, visit the Amazon Pinpoint SMS documentation or reach out to your AWS account team. And don’t forget to let us know in the comments how Protect configurations have helped you combat AIT and strengthen your SMS communications.


About the Author

Brett Ezell

Brett Ezell is your friendly neighborhood Solutions Architect at AWS, where he specializes in helping customers optimize their SMS and email campaigns using Amazon Pinpoint and Amazon Simple Email Service. As a former US Navy veteran, Brett brings a unique perspective to his work, ensuring customers receive tailored solutions to meet their needs. In his free time, Brett enjoys live music, collecting vinyl, and the challenges of a good workout. And, as a self-proclaimed comic book aficionado, he can often be found combing through his local shop for new books to add to his collection.

How to enable one-click unsubscribe email with Amazon Pinpoint

Post Syndicated from Zip Zieper original https://aws.amazon.com/blogs/messaging-and-targeting/how-to-enable-one-click-unsubscribe-email-with-amazon-pinpoint/

Amazon Pinpoint customers who use campaigns, journeys, or the SendMessages API to send more than 5,000 marketing email messages per day are considered “bulk senders”. If your organization meets these criteria, you are now subject to new requirements that were recently established by Google, Yahoo and other large ISPs/ESPs. These providers have mandated these requirements to help protect their users’ inboxes. Detailed information about these requirements is provided in the Amazon Simple Email Service (SES) bulk sender updates blog post.

Per these new requirements, Pinpoint customers that send marketing email messages in bulk must meet all of these criteria:

  • Fully authenticate their email sending domains with SPF, DKIM and DMARC. See this blog.
  • Provide a clearly visible unsubscribe link in the body and/or footer of each message.
  • Enable the “List-Unsubscribe” and “List-Unsubscribe-Post” one-click unsubscribe headers (the subject of this blog post). You can learn more about these headers and how they are used in SES in this related blog post.
  • Honor all unsubscribe POST requests within 48 hours, after which time you shouldn’t be sending emails to the now unsubscribed end-user.
  • Actively monitor spam complaint rates, and take the steps needed to ensure these rates remain below acceptable levels as defined by the ESPs.

This blog post provides Pinpoint customers with the steps necessary to enable the one-click unsubscribe button via email headers for “List-Unsubscribe” and “List-Unsubscribe-Post” as defined by RFC 2369 and RFC 8058.

Unsubscribe Process Overview

Pinpoint now supports the inclusion of the “List-Unsubscribe” and “List-Unsubscribe-Post” email headers that enable compatible email client apps to render a one-click unsubscribe button when displaying emails from a subscription list. When you include these headers in the emails you send by Pinpoint, end-users who want to unsubscribe from your emails can do so by simply clicking the unsubscribe button in their email app. Once pressed, the unsubscribe button fires off a POST request to the URL you have defined in the “List-Unsubscribe” header.

You, the Pinpoint customer, are responsible for defining the “List-Unsubscribe” and “List-Unsubscribe-Post” headers, as well as supplying the system or process invoked by the “List-Unsubscribe” and “List-Unsubscribe-Post” email headers. Your system or process must, when activated by the unsubscribe action, update that end-user’s preferences accordingly so that within 48 hours, any end-user who unsubscribes will no longer receive unwanted emails.

If you only use Pinpoint’s campaigns and journeys, you may elect to use the Pinpoint endpoint’s OptOut attribute to store the user’s unsubscribe preferences. Possible values for OptOut are ALL (the user has opted out and doesn’t want to receive any messages) and NONE (the user hasn’t opted out and wants to receive all messages). It is important to note, however, that the SendMessages API ignores the Pinpoint endpoint’s OptOut attribute.
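As a minimal sketch of that bookkeeping, assuming your unsubscribe handler has already resolved the request to a Pinpoint project and endpoint ID, setting the endpoint’s OptOut attribute via the UpdateEndpoint API could look like this:

import boto3

pinpoint = boto3.client("pinpoint", region_name="us-east-1")

def mark_unsubscribed(project_id, endpoint_id):
    """Set OptOut=ALL so campaigns and journeys skip this endpoint."""
    pinpoint.update_endpoint(
        ApplicationId=project_id,
        EndpointId=endpoint_id,
        EndpointRequest={"OptOut": "ALL"},
    )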

If you do not currently offer your recipients the option to unsubscribe from unwanted emails, you will need to develop and deploy a system or process that receives end-user unsubscribe requests in order to comply with these new requirements. An example solution with sample code that processes email opt-out requests for Pinpoint can be found here. You can read more about this example in this blog post.

REQUIRED: Update the SES IAM role used by Pinpoint

Because Pinpoint uses SES resources for sending email messages, when using campaigns or journeys you must now create (or update) an IAM Orchestration sending role to grant Pinpoint service access to your SES resources. This allows Pinpoint to send emails via SES. To add or update the IAM role, follow the steps outlined in the Pinpoint documentation.

Note – If you are sending emails directly via the SendMessages API, you do not need an IAM Orchestration sending role, but you must have permissions for ses:SendEmail and ses:SendRawEmail.

Add easy unsubscribe email headers:

The steps you need to take to enable one-click unsubscribe in your Pinpoint emails depends on how you send emails, and whether or not you use templates, as shown below:

Decision tree for adding headers

Use SendMessages with the AWS SDK or CLI

Using the AWS CLI, add headers for “List-Unsubscribe” and “List-Unsubscribe-Post” as shown in the example below:

aws pinpoint send-messages \
--region us-east-1 \
--application-id ce796be37f32f178af652b26eexample \
--message-request '{
    "Addresses": {
        "[email protected]": {"ChannelType": "EMAIL"},
    },
    "MessageConfiguration": {
        "EmailMessage": {
            "SimpleEmail": {
                "Subject": {"Data":"URL with easy unsubscribe headers", "Charset":"UTF-8"},
                "TextPart": {"Data":"with headers list-unsubscribe and list-unsubscribe-post.\n\nUnsubscribe: <https://www.example.com/preferences>", "Charset":"UTF-8"},
                "HtmlPart": {"Data":"<html><body>with headers list-unsubscribe and list-unsubscribe-post<br><br><a ses:tags=\"unsubscribeLinkTag:optout\" href=\"https://example.com/?address=x&topic=x\">Unsubscribe</a></body></html>", "Charset":"UTF-8"},
                "Headers": [
                    {"Name":"List-Unsubscribe", "Value":"<https://example.com/?address=x&topic=x>, <mailto: [email protected]?subject=TopicUnsubscribe>"},
                    {"Name":"List-Unsubscribe-Post", "Value":"List-Unsubscribe=One-Click"}
                ]
            }
        }
    }
}'

Send an email message

Below is an example using the SendMessages API from the AWS SDK for Python (Boto3) that includes the List-Unsubscribe headers. This example assumes that you’ve already installed and updated the SDK for Python (Boto3) to the latest version available. For more information, see Quickstart in the AWS SDK for Python (Boto3) API Reference.

import logging  # Logging library to log messages
import boto3  # AWS SDK for Python
from botocore.exceptions import ClientError  # Exception handling for boto3
import hashlib  # Library to generate unique hashes

# Configure logger
logger = logging.getLogger(__name__)

# Define constants
CHARSET = "UTF-8"
REGION = 'us-east-1'

def send_email_message(
    pinpoint_client,
    project_id, 
    sender,
    to_addresses,
    subject,
    html_message,
    text_message,
):
    """
    Sends an email message with HTML and plain text versions.

    :param pinpoint_client: A Boto3 Pinpoint client.
    :param project_id: The Amazon Pinpoint project ID to use when you send this message.
    :param sender: The "From" address. This address must be verified in
                   Amazon Pinpoint in the AWS Region you're using to send email.
    :param to_addresses: The list of addresses on the "To" line. If your Amazon Pinpoint account
                         is in the sandbox, these addresses must be verified.
    :param subject: The subject line of the email.
    :param html_message: The HTML content of the email.
    :param text_message: The plain text content of the email.
    :return: A dict of to_addresses and their message IDs.
    """
    try:
        # Create a dictionary of addresses with unique unsubscribe URLs
        # The addresses are encoded using the SHA256 hashing algorithm from the hashlib library
        # to create a unique and obfuscated unsubscribe URL for each recipient. This ensures
        # that the unsubscribe link is specific to each individual recipient, preventing
        # potential abuse or unauthorized unsubscribes. The hashed value is appended to the
        # base unsubscribe URL, allowing the email service to identify the intended recipient
        # when the unsubscribe link is clicked, while also protecting the recipient's personal
        # email address from being directly exposed in the URL.
        addresses = {
            address: {
                "ChannelType": "EMAIL",
                "Substitutions": {
                    "unsubscribeURL": [f"https://example.com/unsub/{hashlib.sha256(address.encode()).hexdigest()}"],
                }
            }
            for address in to_addresses
        }
        
        # Send email using Amazon Pinpoint
        response = pinpoint_client.send_messages(
            ApplicationId=project_id,
            MessageRequest={
                "Addresses": addresses,
                "MessageConfiguration": {
                    "EmailMessage": {
                        "FromAddress": sender,
                        "SimpleEmail": {
                            "Subject": {"Charset": CHARSET, "Data": subject},
                            "HtmlPart": {"Charset": CHARSET, "Data": html_message},
                            "TextPart": {"Charset": CHARSET, "Data": text_message},
                            "Headers": [
                                {"Name": "List-Unsubscribe", "Value": "{{unsubscribeURL}}"},
                                {"Name": "List-Unsubscribe-Post", "Value": "List-Unsubscribe=One-Click"}
                            ],
                        },
                    }
                }
            }
        )
    except ClientError as e:
        # Log exception if sending email fails
        logger.exception("Couldn't send email: %s", e)
        raise
    else:
        # Return a dictionary of addresses and their respective message IDs
        return {
            address: message["MessageId"]
            for address, message in response["MessageResponse"]["Result"].items()
        }

def main():
    # Sample data for sending email
    project_id = "ce796be37f32f178af652b26eexample"  # Amazon Pinpoint project ID
    sender = "[email protected]"  # Verified sender email address
    to_addresses = ["[email protected]", "[email protected]", "[email protected]"]  # Recipient email addresses
    subject = "Amazon Pinpoint Unsubscribe Headers Test (SDK for Python (Boto3))"  # Email subject
    text_message = """Amazon Pinpoint Test (SDK for Python)
    -------------------------------------
    This email was sent with Amazon Pinpoint using the AWS SDK for Python (Boto3).
    For more information, see https://aws.amazon.com/sdk-for-python/
                """  # Plain text message
    html_message = """<html>
    <head></head>
    <body>
      <h1>Amazon Pinpoint Test (SDK for Python (Boto3))</h1>
      <p>This email was sent with
        <a href='https://aws.amazon.com/pinpoint/'>Amazon Pinpoint</a> using the
        <a href='https://aws.amazon.com/sdk-for-python/'>
          AWS SDK for Python (Boto3)</a>.</p>
    </body>
    </html>
                """  # HTML message

    # Create a Pinpoint client
    pinpoint_client = boto3.client("pinpoint", region_name=REGION)

    print("Sending email.")
    # Send email and print message IDs
    try:
        message_ids = send_email_message(
            pinpoint_client,
            project_id,
            sender,
            to_addresses,
            subject,
            html_message,
            text_message,
        )
        print(f"Message sent! Message IDs: {message_ids}")
    except ClientError as e:
        print(f"Failed to send messages: {e}")

# Entry point of the script
if __name__ == "__main__":
    logging.basicConfig(level=logging.INFO)  # Set logging level to INFO
    main()

Send an email message with an existing email template.

If you use message templates to send email messages via the AWS SDK for Python (Boto3), you can add the List-Unsubscribe and List-Unsubscribe-Post headers to the template, and then fill those variables with unique values per recipient, as shown in the code example below. First, you can create the template via the console and add the headers in the new Headers fields.

Or you can create the template, with headers, via the AWS CLI:

aws pinpoint create-email-template --template-name MyEmailTemplate \
--email-template-request '{
    "Subject": "Amazon Pinpoint Unsubscribe Headers Test using email template",
    "TextPart": "Hello, welcome to our service. We are glad to have you with us. If you wish to unsubscribe, click here: {{unsubscribeURL}}",
    "HtmlPart": "<html><body><h1>Hello, welcome to our service</h1><p>We are glad to have you with us.</p><p>If you wish to unsubscribe, click <a href=\"{{unsubscribeURL}}\">here</a>.</p></body></html>",
    "DefaultSubstitutions": "{\"unsubscribeURL\": \"https://example.com/unsubscribe\"}",
    "Headers": [
            {"Name": "List-Unsubscribe","Value": "{{unsubscribeURL}}"},
            {"Name": "List-Unsubscribe-Post","Value": "List-Unsubscribe=One-Click"}
        ]
  }'

In this next example, we include the use of a secret hash key. With this format, the unsubscribe URL includes the Pinpoint project ID and a hashed value of the email address combined with the secret key. This provides a more secure and customized unsubscribe experience for the recipients.

import logging  # Logging library to log messages
import boto3  # AWS SDK for Python
from botocore.exceptions import ClientError  # Exception handling for boto3
import hashlib  # Library to generate unique hashes

# Configure logger
logger = logging.getLogger(__name__)

# Define constants
REGION = 'us-east-1'
HASH_SECRET_KEY = "my_secret_key"  # Replace with your secret key

def send_templated_email_message(
    pinpoint_client, 
    project_id, 
    sender, 
    to_addresses, 
    template_name, 
    template_version
):
    """
    Sends an email message with HTML and plain text versions.

    :param pinpoint_client: A Boto3 Pinpoint client.
    :param project_id: The Amazon Pinpoint project ID to use when you send this message.
    :param sender: The "From" address. This address must be verified in
                   Amazon Pinpoint in the AWS Region you're using to send email.
    :param to_addresses: The list of addresses on the "To" line. If your Amazon Pinpoint account
                         is in the sandbox, these addresses must be verified.
    :param template_name: The name of the email template to use when sending the message.
    :param template_version: The version number of the message template.

    :return: A dict of to_addresses and their message IDs.
    """
    try:
        # Create a dictionary of addresses with unique unsubscribe URLs
        # The addresses are encoded using the SHA256 hashing algorithm from the hashlib library
        # to create a unique and obfuscated unsubscribe URL for each recipient. This ensures
        # that the unsubscribe link is specific to each individual recipient, preventing
        # potential abuse or unauthorized unsubscribes. The hashed value is appended to the
        # base unsubscribe URL, allowing the email service to identify the intended recipient
        # when the unsubscribe link is clicked, while also protecting the recipient's personal
        # email address from being directly exposed in the URL.
        addresses = {
            address: {
                "ChannelType": "EMAIL",
                "Substitutions": {
                    "unsubscribeURL": [
                        f"https://www.example.com/preferences/index.html?pid={project_id}&h={hashlib.sha256((address + HASH_SECRET_KEY).encode()).hexdigest()}"
                    ]
                }
            }
            for address in to_addresses
        }
        # Send templated email using Amazon Pinpoint
        response = pinpoint_client.send_messages(
            ApplicationId=project_id,
            MessageRequest={
                "Addresses": addresses,
                "MessageConfiguration": {"EmailMessage": {"FromAddress": sender}},
                "TemplateConfiguration": {
                    "EmailTemplate": {
                        "Name": template_name,
                        "Version": template_version,
                    },
                },
            },
        )
    except ClientError as e:
        # Log exception if sending email fails
        logger.exception("Couldn't send email: %s", e)
        raise
    else:
        # Return a dictionary of addresses and their respective message IDs
        return {
            address: message["MessageId"]
            for address, message in response["MessageResponse"]["Result"].items()
        }


def main():
    # Sample data for sending email
    project_id = "ce796be37f32f178af652b26eexample"  # Amazon Pinpoint project ID
    sender = "[email protected]"  # Verified sender email address
    to_addresses = ["[email protected]", "[email protected]", "[email protected]"]  # Recipient email addresses
    template_name = "MyEmailTemplate"
    template_version = "1"

    # Create a Pinpoint client
    pinpoint_client = boto3.client("pinpoint", region_name=REGION)
    print("Sending email.")
    # Send email and print message IDs
    try:
        message_ids = send_templated_email_message(
            pinpoint_client,
            project_id,
            sender,
            to_addresses,
            template_name,
            template_version,
        )
        print(f"Message sent! Message IDs: {message_ids}")
    except ClientError as e:
        print(f"Failed to send messages: {e}")
        
# Entry point of the script
if __name__ == "__main__":
    logging.basicConfig(level=logging.INFO)  # Set logging level to INFO
    main()

Pinpoint Campaigns via API (runtime).

If you send emails using Pinpoint campaigns via the API call (runtime), you can add the headers as described below:

"EmailMessage":{
   "Body": "string", 
   "Title": "string", 
   "HtmlBody": "string", 
    "FromAddress": "string",
   "Headers": [
        {
            "Name": "string", 
            "Value": "string"
        } 
   ]
}

Pinpoint Campaigns & Journeys via AWS Console.

The Pinpoint console enables you to create (or update) your email templates to add support for up to 15 different headers, including the “List-Unsubscribe” and “List-Unsubscribe-Post” headers. Simply open, or create, a template in the Pinpoint console, scroll to the bottom of the visual message editor, expand the Headers option, and insert the header names and values. Note that if you only use the console UI to send your Campaigns and Journeys, you can store the encoded List-Unsubscribe URL as a custom attribute on each endpoint, then use that attribute as the header value, as sketched below:
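A minimal sketch of that approach follows, with placeholder names: it stores a per-recipient unsubscribe URL as a custom endpoint attribute, which a template header value can then reference as {{Attributes.unsubscribeURL}}.

import hashlib
import boto3

pinpoint = boto3.client("pinpoint", region_name="us-east-1")

def store_unsubscribe_url(project_id, endpoint_id, address):
    """Persist a per-recipient unsubscribe URL as a custom endpoint attribute."""
    url = f"https://www.example.com/preferences/{hashlib.sha256(address.encode()).hexdigest()}"
    pinpoint.update_endpoint(
        ApplicationId=project_id,
        EndpointId=endpoint_id,
        EndpointRequest={"Attributes": {"unsubscribeURL": [url]}},
    )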

Conclusion.

In this blog, we provide Pinpoint customers with the information and guidance needed to enable a one-click unsubscribe link in their recipients’ compatible email apps via the “List-Unsubscribe” and “List-Unsubscribe-Post” email headers. Following this guidance, in conjunction with properly authenticating your email sending domains and keeping spam complaints below prescribed thresholds, will help ensure high rates of Pinpoint email deliverability.

We welcome your comments on this post below. For additional information, refer to these resources, or contact your AWS account team.

About the Authors

zip

Zip

Zip is an Amazon Pinpoint and Amazon Simple Email Service Sr. Specialist Solutions Architect at AWS. Outside of work he enjoys time with his family, cooking, mountain biking and plogging.

Darren Roback

Darren Roback

Darren is a Senior Solutions Architect with Amazon Web Services based in St. Louis, Missouri. He has a background in Security and Compliance, Serverless Event-Driven Architecture, and Enterprise Architecture. At AWS, Darren partners with customers to help them solve business challenges with AWS technology. Outside of work, Darren enjoys spending time in his shop working on woodworking projects.

Bruno Giorgini

Bruno Giorgini

Bruno Giorgini is a Senior Solutions Architect specializing in Pinpoint and SES. With over two decades of experience in the IT industry, Bruno has been dedicated to assisting customers of all sizes in achieving their objectives. When he is not crafting innovative solutions for clients, Bruno enjoys spending quality time with his wife and son, exploring the scenic hiking trails around the SF Bay Area.

Message delivery status tracking with Amazon Pinpoint

Post Syndicated from Brijesh Pati original https://aws.amazon.com/blogs/messaging-and-targeting/message-delivery-status-tracking-with-amazon-pinpoint/

In the vast landscape of digital communication, reaching your audience effectively is key to building successful customer relationships. Amazon Pinpoint, Amazon Web Services’ (AWS) flexible, user-focused messaging and targeting solution, goes beyond mere messaging; it allows businesses to engage customers through email, SMS, push notifications, and more.

What sets Amazon Pinpoint apart is its scalability and deliverability. Amazon Pinpoint supports a multitude of business use cases, from promotional campaigns and transactional messages to customer engagement journeys. It provides insights and analytics that help tailor and measure the effectiveness of communication strategies.

For businesses, the power of this platform extends into areas such as marketing automation, customer retention campaigns, and transactional messaging for updates like order confirmations and shipping alerts. The versatility of Amazon Pinpoint can be a significant asset in crafting personalized user experiences at scale.

Use Case & Solution overview – Tracking SMS & Email Delivery Status

In a business setting, understanding whether a time-sensitive email or SMS was received can greatly impact customer experience as well as operational efficiency. For instance, consider an e-commerce platform sending out shipping notifications. By quickly verifying that the message was delivered, businesses can preemptively address any potential issues, ensuring customer satisfaction.

Amazon Pinpoint tracks email and SMS delivery and engagement events, which can be streamed using Amazon Kinesis Data Firehose for storage or further processing. However, third-party applications don’t have a direct API to query and obtain the latest status of a message.

To address the above challenge, this blog presents a solution that leverages AWS services for data streaming, storage, and retrieval of Amazon Pinpoint events using a simple API call. At the core of the solution is Amazon Pinpoint event stream capability, which utilizes Amazon Kinesis services for data streaming.

The architecture for message delivery status tracking with Amazon Pinpoint comprises several AWS services that work in concert. To streamline the deployment of these components, they have been encapsulated into an AWS CloudFormation template. This template allows for automated provisioning and configuration of the necessary AWS resources, ensuring a repeatable and error-free deployment.

The key components of the solution are as follows:

  1. Event Generation: An event is generated within Amazon Pinpoint when a user interacts with an application, or when a message is sent from a campaign, journey, or as a transactional communication. The event name and metadata depend on the channel (SMS or email).
  2. Amazon Pinpoint Event Data Streaming: The generated event data is streamed to Amazon Kinesis Data Firehose. Kinesis Data Firehose is configured to collect the event information in near real-time, enabling the subsequent processing and analysis of the data.
  3. Pinpoint Event Data Processing: Amazon Kinesis Data Firehose is configured to invoke a specified AWS Lambda function to transform the incoming source data. This transformation step is set up during the creation of the Kinesis Data Firehose delivery stream, ensuring that the data is in the correct format before it is stored, enhancing its utility for immediate and downstream analysis. The Lambda function acts as a transformation mechanism for event data ingested through Kinesis Data Firehose: it decodes the base64-encoded event data, deserializes the JSON payload, and processes the data depending on the event type (email or SMS), parsing the raw data and extracting relevant attributes before ingesting it into Amazon DynamoDB. The function handles different event types, specifically email and SMS events, discerning their unique attributes and ensuring they are formatted correctly for DynamoDB’s schema (see the sketch after this list).
  4. Data Ingestion into DynamoDB: Once processed, the data is stored in Amazon DynamoDB. DynamoDB provides a fast and flexible NoSQL database service, which facilitates the efficient storage and retrieval of event data for analysis.
  5. Data Storage: Amazon DynamoDB stores the event data after it’s been processed by AWS Lambda. Amazon DynamoDB is a highly scalable NoSQL database that enables fast queries, which is essential for retrieving the status of messages quickly and efficiently, thereby facilitating timely decision-making based on customer interactions.
  6. Customer application/interface: Users or integrated systems engage with the messaging status through either a frontend customer application or directly via an API. This interface or API acts as the conduit through which message delivery statuses are queried, monitored, and managed, providing a versatile gateway for both user interaction and programmatic access.
  7. API Management: The customer application communicates with the backend systems through Amazon API Gateway. This service acts as a fully managed gateway, handling all the API calls, data transformation, and transfer between the frontend application and backend services.
  8. Event Status Retrieval API: When the API Gateway receives a delivery status request, it invokes another AWS Lambda function that is responsible for querying the DynamoDB table. It retrieves the latest status of the message delivery, which is then presented to the user via the API (a sketch of such a lookup follows the attribute tables below).
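To make step 3 concrete, here is a hedged sketch of a Kinesis Data Firehose transformation Lambda for Pinpoint events. The field names and filtering logic are indicative only; the function deployed by the CloudFormation template differs in its details.

import base64
import json

def lambda_handler(event, context):
    """Decode Firehose records, keep email/SMS events, drop everything else."""
    output = []
    for record in event["records"]:
        payload = json.loads(base64.b64decode(record["data"]))
        event_type = payload.get("event_type", "")
        if event_type.startswith("_email.") or event_type.upper().startswith("_SMS."):
            # Flatten the attributes of interest before storage. The exact
            # location of message_id varies by channel; see the Pinpoint
            # event schema for the authoritative structure.
            item = {
                "message_id": payload.get("attributes", {}).get("message_id"),
                "event_type": event_type,
                "event_timestamp": payload.get("event_timestamp"),
            }
            data = base64.b64encode((json.dumps(item) + "\n").encode()).decode()
            output.append({"recordId": record["recordId"], "result": "Ok", "data": data})
        else:
            output.append({"recordId": record["recordId"], "result": "Dropped", "data": record["data"]})
    return {"records": output}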

DynamoDB Table Design for Message Tracking:

The tables below outline the DynamoDB schema designed for the efficient storage and retrieval of message statuses, detailing distinct event statuses and attributes for each message type such as email and SMS:

Attributes for Email Events:

Attribute Data type Description
message_id String The unique message ID generated by Amazon Pinpoint.
event_type String The value would be ’email’.
aws_account_id String The AWS account ID used to send the email.
from_address String The sending identity used to send the email.
destination String The recipient’s email address.
client String The client ID if applicable
campaign_id String The campaign ID if part of a campaign
journey_id String The journey ID if part of a journey
send Timestamp The timestamp when Amazon Pinpoint accepted the message and attempted to deliver it to the recipient
delivered Timestamp The timestamp when the email was delivered, or ‘NA’ if not delivered.
rejected Timestamp The timestamp when the email was rejected (Amazon Pinpoint determined that the message contained malware and didn’t attempt to send it.)
hardbounce Timestamp The timestamp when a hard bounce occurred (A permanent issue prevented Amazon Pinpoint from delivering the message. Amazon Pinpoint won’t attempt to deliver the message again)
softbounce Timestamp The timestamp when a soft bounce occurred (A temporary issue prevented Amazon Pinpoint from delivering the message. Amazon Pinpoint will attempt to deliver the message again for a certain amount of time. If the message still can’t be delivered, no more retries will be attempted. The final state of the email will then be SOFTBOUNCE.)
complaint Timestamp The timestamp when a complaint was received (The recipient received the message, and then reported the message to their email provider as spam (for example, by using the “Report Spam” feature of their email client).
open Timestamp The timestamp when the email was opened (The recipient received the message and opened it.)
click Timestamp The timestamp when a link in the email was clicked. (The recipient received the message and clicked a link in it)
unsubscribe Timestamp The timestamp when the recipient clicked an unsubscribe link in the email (The recipient received the message and clicked an unsubscribe link in it.)
rendering_failure Timestamp The timestamp when a rendering failure occurred (The email was not sent due to a rendering failure. This can occur when template data is missing or when there is a mismatch between template parameters and data.)

Attributes for SMS Events:

Attribute Data type Description
message_id String The unique message ID generated by Amazon Pinpoint.
event_type String The value would be ‘sms’.
aws_account_id String The AWS account ID used to send the SMS.
origination_phone_number String The phone number from which the SMS was sent.
destination_phone_number String The phone number to which the SMS was sent.
record_status String Additional information about the status of the message. Possible values include:
– SUCCESSFUL/DELIVERED – Successfully delivered.
– PENDING – Not yet delivered.
– INVALID – Invalid destination phone number.
– UNREACHABLE – Recipient’s device unreachable.
– UNKNOWN – Error preventing delivery.
– BLOCKED – Device blocking SMS.
– CARRIER_UNREACHABLE – Carrier issue preventing delivery.
– SPAM – Message identified as spam.
– INVALID_MESSAGE – Invalid SMS message body.
– CARRIER_BLOCKED – Carrier blocked message.
– TTL_EXPIRED – Message not delivered in time.
– MAX_PRICE_EXCEEDED – Exceeded SMS spending quota.
– OPTED_OUT – Recipient opted out.
– NO_QUOTA_LEFT_ON_ACCOUNT – Insufficient spending quota.
– NO_ORIGINATION_IDENTITY_AVAILABLE_TO_SEND – No suitable origination identity.
– DESTINATION_COUNTRY_NOT_SUPPORTED – Destination country blocked.
– ACCOUNT_IN_SANDBOX – Account in sandbox mode.
– RATE_EXCEEDED – Message sending rate exceeded.
– INVALID_ORIGINATION_IDENTITY – Invalid origination identity.
– ORIGINATION_IDENTITY_DOES_NOT_EXIST – Non-existent origination identity.
– INVALID_DLT_PARAMETERS – Invalid DLT parameters.
– INVALID_PARAMETERS – Invalid parameters.
– ACCESS_DENIED – Account blocked from sending messages.
– INVALID_KEYWORD – Invalid keyword.
– INVALID_SENDER_ID – Invalid Sender ID.
– INVALID_POOL_ID – Invalid Pool ID.
– SENDER_ID_NOT_SUPPORTED_FOR_DESTINATION – Sender ID not supported.
– INVALID_PHONE_NUMBER – Invalid origination phone number.
iso_country_code String The ISO country code associated with the destination phone number.
message_type String The type of SMS message sent.
campaign_id String The campaign ID if part of a campaign, otherwise N/A.
journey_id String The journey ID if part of a journey, otherwise N/A.
success Timestamp The timestamp when the SMS was successfully accepted by the carrier/delivered to the recipient, or ‘NA’ if not applicable.
buffered Timestamp The timestamp when the SMS is still in the process of being delivered to the recipient, or ‘NA’ if not applicable.
failure Timestamp The timestamp when the SMS delivery failed, or ‘NA’ if not applicable.
complaint Timestamp The timestamp when a complaint was received for the message, or ‘NA’ if not applicable.
optout Timestamp The timestamp when the customer received the message and replied by sending the opt-out keyword (usually “STOP”), or ‘NA’ if not applicable.
price_in_millicents_usd Number The amount that was charged to send the message.
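A hedged sketch of the retrieval Lambda from step 8 follows, assuming the DynamoDB table is keyed on message_id; the table name is a placeholder.

import json
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("PinpointMessageStatus")  # placeholder table name

def lambda_handler(event, context):
    """API Gateway handler: return the stored status for a message_id."""
    params = event.get("queryStringParameters") or {}
    message_id = params.get("message_id")
    if not message_id:
        return {"statusCode": 400, "body": json.dumps({"error": "message_id is required"})}
    item = table.get_item(Key={"message_id": message_id}).get("Item")
    if not item:
        return {"statusCode": 404, "body": json.dumps({"error": "message_id not found"})}
    return {"statusCode": 200, "body": json.dumps(item, default=str)}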

Prerequisites

  • AWS Account Access (setup) with admin-level permission.
  • AWS CLI version 2 with named profile setup. If a locally configured IDE is not convenient, you can use the AWS CLI from the AWS CloudShell in your browser.
  • A Pinpoint project that has never been configured with an event stream (PinpointEventStream).
  • The Pinpoint ID from the project you want to monitor. This ID can be found in the AWS Pinpoint console on the project’s main page (it will look something like “79788ecad55555513b71752a4e3ea1111”). Copy this ID to a text file, as you will need it shortly.
    • Note, you must use the ID from a Pinpoint project that has never been configured with the PinpointEventStream option.

Solution Deployment & Testing

Deploying this solution is a straightforward process, thanks to the AWS CloudFormation template we’ve created. This template automates the creation and configuration of the necessary AWS resources into an AWS stack. The CloudFormation template ensures that components such as Kinesis Data Firehose, AWS Lambda, Amazon DynamoDB, and Amazon API Gateway are set up consistently and correctly.

Deployment Steps:

  • Download the CloudFormation Template from this GitHub sample repository. The CloudFormation template is authored in YAML and named PinpointAPIBlog.yaml.
  • Access the CloudFormation Console: Sign into the AWS Management Console and open the AWS CloudFormation console.
  • Create a New Stack:
    • Choose Create Stack and select With new resources (standard) to start the stack creation process.
    • Under Prerequisite – Prepare template, select Template is ready.
    • Under ‘Specify template’, choose Upload a template file, and then upload the CloudFormation template file you downloaded in Step 1.
  • Configure the Stack:
    • Provide a stack name, such as “pinpoint-yourprojectname-monitoring”, and paste the Pinpoint project (application) ID. Choose Next.
    • Review the stack settings, make any necessary changes based on your specific requirements, and choose Next.
  • Initiate the Stack Creation: Once you’ve configured all options, acknowledge that AWS CloudFormation might create IAM resources with custom names, and then choose Create stack.
    • AWS CloudFormation will now provision and configure the resources as defined in the template. This will take about 20 minutes to fully deploy. You can view the status in the AWS CloudFormation console.

Testing the Solution:

After deployment is complete, you can test (and use) the solution.

  • Send Test Messages: Use the Amazon Pinpoint console to send test email and SMS messages; step-by-step instructions are available in the Amazon Pinpoint User Guide.
  • Verify Lambda Execution:
    • Navigate to the AWS CloudWatch console.
    • Locate and review the logs for the Lambda functions specified in the solution (/aws/lambda/{functionName}) to confirm that the Kinesis Data Firehose records are being processed successfully. In the log events you should see messages including INIT_START, Raw Kinesis Data Firehose Record, etc.
  • Check Amazon DynamoDB Data:
    • Navigate to Amazon DynamoDB in the AWS Console.
    • Select the table created by the CloudFormation template and choose ‘Explore Table Items’.
    • Confirm the presence of the event data by checking if the message IDs appear in the table.
    • The table should have one or more message_id entries from the test message(s) you sent above.
    • Click on a message_id to review the data, and copy the message_id to a text editor on your computer. It will look like “0201123456gs3nroo-clv5s8pf-8cq2-he0a-ji96-59nr4tgva0g0-343434”.
  • API Gateway Testing:
    • In the API Gateway console, find the MessageIdAPI.
    • Navigate to Stages and copy the Invoke URL provided.

    • Open the text editor on your computer and paste the API Gateway invoke URL.
    • Append ?message_id=<your message_id> to the invoke URL. The full request should look like this: “https://txxxxxx0.execute-api.us-west-2.amazonaws.com/call?message_id=020100000xx3xxoo-clvxxxxf-8cq2-he0a-ji96-59nr4tgva0g0-000000”
    • Paste the full URL into your browser and press Enter.
    • The results should look like this (MacOS, Chrome):

By following these deployment and testing steps, you’ll have a functioning solution for tracking Pinpoint message delivery status using Amazon Pinpoint, Kinesis Data Firehose, DynamoDB and CloudWatch.
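
If you prefer to script the check instead of using a browser, here is a minimal Python sketch using the third-party requests library; the invoke URL and message_id are placeholders for your own values:

import requests

# Placeholders: substitute your API Gateway invoke URL and a message_id from the DynamoDB table.
invoke_url = "https://txxxxxx0.execute-api.us-west-2.amazonaws.com/call"
message_id = "020100000xx3xxoo-clvxxxxf-8cq2-he0a-ji96-59nr4tgva0g0-000000"

# Query the endpoint and print the delivery status payload.
response = requests.get(invoke_url, params={"message_id": message_id})
print(response.status_code)
print(response.json())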

Clean Up

To help prevent unwanted charges to your AWS account, you can delete the AWS resources that you used for this walkthrough.

To delete the stack, follow these instructions:

Open the AWS CloudFormation console.

  • In the AWS CloudFormation console dashboard, select the stack you created (pinpoint-yourprojectname-monitoring).
  • On the Actions menu, choose Delete Stack.
  • When you are prompted to confirm, choose Yes, Delete.
  • Wait for DELETE_COMPLETE to appear in the Status column for the stack.

Next steps

The solution in this blog provides an API endpoint to query message status. The next step is to store and analyze the raw data based on your business requirements. The Amazon Kinesis Data Firehose used in this blog can stream the Pinpoint events to an AWS database or object storage such as Amazon S3. Once the data is stored, you can catalog it using AWS Glue, query it via SQL using Amazon Athena, and create custom dashboards using Amazon QuickSight, a cloud-native, serverless business intelligence (BI) service with native machine learning (ML) integrations.

Conclusion

The integration of AWS services such as Kinesis, Lambda, DynamoDB, and API Gateway with Amazon Pinpoint transforms your ability to connect with customers through precise event data retrieval and analysis. This solution provides a stream of real-time data, versatile storage options, and a secure method for accessing detailed information, all of which are critical for optimizing your communication strategies.

By leveraging these insights, you can fine-tune your email and SMS campaigns for maximum impact, ensuring every message counts in the broader narrative of customer engagement and satisfaction. Harness the power of AWS and Amazon Pinpoint to not just reach out but truly connect with your audience, elevating your customer relationships to new heights.

Considerations/Troubleshooting

When implementing a solution involving AWS Lambda, Kinesis Data Streams, Kinesis Data Firehose, and DynamoDB, keep several key considerations in mind:

  • Scalability and Performance: Assess the scalability needs of your system. Lambda functions scale automatically, but it’s important to configure concurrency settings and memory allocation based on expected load. Similarly, for Kinesis Streams and Firehose, consider the volume of data and the throughput rate. For DynamoDB, ensure that the table’s read and write capacity settings align with your data processing requirements.
  • Error Handling and Retries: Implement robust error handling within the Lambda functions to manage processing failures. Kinesis Data Streams and Firehose have different retry behaviors and mechanisms; understand and configure these settings to handle failed data processing attempts effectively. In DynamoDB, consider using conditional writes to handle potential data inconsistencies (see the sketch after this list).
  • Security and IAM Permissions: Secure your AWS resources by adhering to the principle of least privilege. Define IAM roles and policies that grant the Lambda function only the necessary permissions to interact with Kinesis and DynamoDB. Ensure that data in transit and at rest is encrypted as required, using AWS KMS or other encryption mechanisms.
  • Monitoring and Logging: Utilize AWS CloudWatch for monitoring and logging the performance and execution of Lambda functions, as well as Kinesis and DynamoDB operations. Set up alerts for any anomalies or thresholds that indicate issues in data processing or performance bottlenecks.
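
As an illustration of the conditional-write pattern mentioned above, here is a minimal boto3 sketch that writes an event record only if its message_id has not been stored yet; the table and attribute names are hypothetical:

import boto3
from botocore.exceptions import ClientError

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("pinpoint-events")  # hypothetical table name

def put_event_once(item):
    # Reject the write if an item with this message_id already exists,
    # preventing duplicate records when a batch is retried.
    try:
        table.put_item(
            Item=item,
            ConditionExpression="attribute_not_exists(message_id)"
        )
    except ClientError as error:
        if error.response["Error"]["Code"] != "ConditionalCheckFailedException":
            raise  # a real failure; surface it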

About the Authors

Brijesh Pati

Brijesh Pati is an Enterprise Solutions Architect at AWS. His primary focus is helping enterprise customers adopt cloud technologies for their workloads. He has a background in application development and enterprise architecture and has worked with customers from various industries such as sports, finance, energy and professional services. His interests include serverless architectures and AI/ML.

Pavlos Ioannou Katidis

Pavlos Ioannou Katidis is an Amazon Pinpoint and Amazon Simple Email Service Senior Specialist Solutions Architect at AWS. He enjoys diving deep into customers’ technical issues and helping them design communication solutions. In his spare time, he enjoys playing tennis, watching crime TV series, playing FPS PC games, and coding personal projects.

Anshika Singh

Anshika Singh is an Associate Solutions Architect at AWS specializing in building generative AI applications. She helps customers get started on the cloud through code samples and starter projects.

Building a generative AI Marketing Portal on AWS

Post Syndicated from Tristan Nguyen original https://aws.amazon.com/blogs/messaging-and-targeting/building-a-generative-ai-marketing-portal-on-aws/

Introduction

In the preceding entries of this series, we examined the transformative impact of Generative AI on marketing strategies in “Building Generative AI into Marketing Strategies: A Primer” and delved into the intricacies of Prompt Engineering to enhance the creation of marketing content with services such as Amazon Bedrock in “From Prompt Engineering to Auto Prompt Optimisation”. We also explored the potential of Large Language Models (LLMs) to refine prompts for more effective customer engagement.

Continuing this exploration, we will articulate how Amazon Bedrock, Amazon Personalize, and Amazon Pinpoint can be leveraged to construct a marketer portal that not only facilitates AI-driven content generation but also personalizes and distributes this content effectively. The aim is to provide a clear blueprint for deploying a system that crafts, personalizes, and distributes marketing content efficiently. This blog will guide you through the deployment process, underlining the real-world utility of these services in optimizing marketing workflows. Through use cases and a code demonstration, we’ll see these technologies in action, offering a hands-on perspective on enhancing your marketing pipeline with AI-driven solutions.

The Challenge with Content Generation in Marketing

Many companies struggle to streamline their marketing operations effectively, facing hurdles at various stages of the marketing operations pipeline. Below, we list the challenges at three main stages of the pipeline: content generation, content personalization, and content distribution.

Content Generation

Creating high-quality, engaging content is often easier said than done. Companies need to invest in skilled copywriters or content creators who understand not just the product but also the target audience. Even with the right talent, the process can be time-consuming and costly. Moreover, generating content at scale while maintaining quality and compliance with industry regulations is the key blocker for many companies considering adopting generative AI technologies in production environments.

Content Personalization

Once the content is created, the next hurdle is personalization. In today’s digital age, generic content rarely captures attention. Customers expect content tailored to their needs, preferences, and behaviors. However, personalizing content is not straightforward. It requires a deep understanding of customer data, which often resides in siloed databases, making it difficult to create a 360-degree view of the customer.

Content Distribution

Finally, even the most captivating, personalized content is ineffective if it doesn’t reach the right audience at the right time. Companies often grapple with choosing the appropriate channels for content distribution, be it email, social media, or mobile notifications. Additionally, ensuring that the content complies with various regulations and doesn’t end up in spam folders adds another layer of complexity to the distribution phase. Sending at scale requires paying attention to deliverability, security, and reliability, which often poses significant challenges to marketers.

By addressing these challenges, companies can significantly improve their marketing operations and empower their marketers to be more effective. But how can this be achieved efficiently and at scale? The answer lies in leveraging the power of Amazon Bedrock, Amazon Personalize, and Amazon Pinpoint, as we will explore in the following solution.

The Solution In Action

Before we dive into the details of the implementation, let’s take a look at the end result through the linked demo video.

Use Case 1: Banking/Financial Services Industry

You are a relationship manager working in the Consumer Banking department of a fictitious company called AnyCompany Bank. You are assigned a group of customers and would like to send personalized, targeted communications to every member of this group on their channel of choice.

Behind the scenes, the marketer uses Amazon Pinpoint to create the segment of customers they would like to target. The customers’ information and the marketer’s prompt are then fed into Amazon Bedrock to generate the marketing content, which is sent to the customer via SMS and email using Amazon Pinpoint.
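
Conceptually, the back end stitches these services together along the following lines. This is a hedged sketch rather than the portal’s actual code; the model ID, prompt shape, addresses, and project ID are illustrative assumptions:

import json
import boto3

bedrock = boto3.client("bedrock-runtime")
pinpoint = boto3.client("pinpoint")

# Generate marketing copy with a Bedrock-hosted model (assumed here: Anthropic Claude v2).
prompt = "\n\nHuman: Write a short, friendly savings-account offer for Jane.\n\nAssistant:"
response = bedrock.invoke_model(
    modelId="anthropic.claude-v2",  # assumption: use whichever model your account has enabled
    body=json.dumps({"prompt": prompt, "max_tokens_to_sample": 300}),
)
marketing_copy = json.loads(response["body"].read())["completion"]

# Deliver the generated copy through Pinpoint on the customer's preferred channel.
pinpoint.send_messages(
    ApplicationId="YOUR-PINPOINT-PROJECT-ID",  # placeholder
    MessageRequest={
        "Addresses": {"jane@example.com": {"ChannelType": "EMAIL"}},  # placeholder address
        "MessageConfiguration": {
            "EmailMessage": {
                "FromAddress": "marketing@example.com",  # placeholder verified sender
                "SimpleEmail": {
                    "Subject": {"Data": "An offer picked for you"},
                    "HtmlPart": {"Data": marketing_copy},
                },
            }
        },
    },
)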

  • In the Prompt Iterator page, you can employ a process called “prompt engineering” to further optimize your prompt and maximize the effectiveness of your marketing campaigns. Please refer to this blog for the process behind engineering the prompt, as well as how to apply an additional LLM model for auto-prompting. To get started, simply copy the sample banking prompt on this page, which has already gone through the prompt engineering process.
  • Next, you can either import your customer group by uploading a .csv file (through “Importing a Segment”) or specify a customer group using pre-defined filter criteria based on your current customer database using Amazon Pinpoint.

UseCase1Segment

E.g.: The screenshot shows a sample filtered segment named ManagementOrRetired that includes only customers who are managers or retirees.

  • Once done, you can log into the marketer portal and choose the relevant segment that you’ve just created within the Amazon Pinpoint console.

PinpointSegment

  • You can then preview the customers and their information stored in your Amazon Pinpoint’s customer database. Once satisfied, we’re ready to start generating content for those customers!
  • Click on the 1:1 Content Generator tab, and content is automatically generated for your first customer. Here, you can cycle through your customers one by one; depending on each customer’s preferred language and channel, an email or SMS in the preferred language is automatically generated for them.
    • Generated SMS in English

PostiveSMS

    • A negative example showing proper prompt engineering at work to moderate content. This happens if we try to insert data that does not make sense for the marketing content generator. In this case, the generator refuses (justifiably) to output an advertisement for a secured instalment loan aimed at a 6-year-old.

NegativeSMS

  • Finally, we choose to send the generated content via Amazon Pinpoint by clicking on “Send with Amazon Pinpoint”. In the back end, Amazon Pinpoint will orchestrate the sending of the email/SMS through the appropriate channels.
    • Alternatively, if the auto-generated content still did not meet your needs and you want to generate another draft, you can Disagree and try again.

Use Case 2: Travel & Hospitality

You are a marketing executive working for an online air ticketing agency. You’ve been tasked with promoting a specific flight from Singapore to Hong Kong for AnyCompany airline. You’d first like to identify which customers would be prime candidates for this flight leg and then send a hyper-personalized message to each of them.

Behind the scenes, instead of using Amazon Pinpoint to manually define the segment, the marketer in this case leverages the AI/ML capabilities of Amazon Personalize to identify the best group of customers to recommend the specific flight leg to. Similar to the first use case, the customers’ information and the LLM prompt are fed into Amazon Bedrock, which generates the marketing content that is eventually sent out via Amazon Pinpoint.

  • Similar to the above use case, you’d need to go through a prompt engineering process to ensure that the content the LLM model generates will be relevant and safe for use. To get started quickly, go to the Prompt Iterator page, where you can use the sample airlines prompt and iterate from there.
  • Your company offers many different flight legs, aggregated from many different carriers. You first filter down to the flight leg that you want to promote using the Filters on the left. In this case, we are filtering for flights originating from Singapore (SRCCity) and going to Hong Kong (DSTCity), operated by AnyCompany Airlines.

PersonalizeInstructions

  • Now, choose the number of customers you’d like the segment to include. Once satisfied, start the batch segmentation job.
  • In the background, Amazon Personalize generates a group of customers who are most likely to be interested in this flight leg based on past interactions with similar flight itineraries (a sketch of the underlying API call follows this list).
  • Once the segmentation job is finished as shown, you can fetch the recommended group of customers and start generating content for them immediately, similar to the first use case.
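
For readers curious about the plumbing, this kind of batch segmentation can be started with the Amazon Personalize create_batch_segment_job API. The following is a minimal sketch; all ARNs, paths, names, and counts are placeholders:

import boto3

personalize = boto3.client("personalize")

# Placeholders throughout: substitute your own solution version, role, and buckets.
personalize.create_batch_segment_job(
    jobName="anycompany-sin-hkg-promo",
    solutionVersionArn="arn:aws:personalize:region:accountId:solution/flight-affinity/version-id",
    numResults=25,  # how many customers to include in the segment
    jobInput={"s3DataSource": {"path": "s3://your-bucket/batch-input/flight-leg.json"}},
    jobOutput={"s3DataDestination": {"path": "s3://your-bucket/batch-output/"}},
    roleArn="arn:aws:iam::accountId:role/PersonalizeBatchRole",
)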

Setup instructions

The setup instructions and deployment details can be found in the GitHub link.

Conclusion

In this blog, we’ve explored the transformative potential of integrating Amazon Bedrock, Amazon Personalize, and Amazon Pinpoint to address the common challenges in marketing operations. By automating the content generation with Amazon Bedrock, personalizing at scale with Amazon Personalize, and ensuring precise content distribution with Amazon Pinpoint, companies can not only streamline their marketing processes but also elevate the customer experience.

The benefits are clear: time-saving through automation, increased operational efficiency, and enhanced customer satisfaction through personalized engagement. This integrated solution empowers marketers to focus on strategy and creativity, leaving the heavy lifting to AWS’s robust AI and ML services.

For those ready to take the next step, we’ve provided a comprehensive guide and resources to implement this solution. By following the setup instructions and leveraging the provided prompts as a starting point, you can deploy this solution and begin customizing the marketer portal to your business’s needs.

Call to Action

Don’t let the challenges of content generation, personalization, and distribution hold back your marketing potential. Deploy the Generative AI Marketer Portal today, adapt it to your specific needs, and watch as your marketing operations transform. For a hands-on start and to see this solution in action, visit the GitHub repository for detailed setup instructions.

Have a question? Share your experiences or leave your questions in the comment section.

About the Authors

Tristan (Tri) Nguyen

Tristan (Tri) Nguyen is an Amazon Pinpoint and Amazon Simple Email Service Specialist Solutions Architect at AWS. At work, he specializes in technical implementation of communications services in enterprise systems and architecture/solutions design. In his spare time, he enjoys chess, rock climbing, hiking and triathlon.

Philipp Kaindl

Philipp Kaindl is a Senior Artificial Intelligence and Machine Learning Solutions Architect at AWS. With a background in data science and mechanical engineering, his focus is on empowering customers to create lasting business impact with the help of AI. Outside of work, Philipp enjoys tinkering with 3D printers, sailing and hiking.

Bruno Giorgini

Bruno Giorgini is a Senior Solutions Architect specializing in Pinpoint and SES. With over two decades of experience in the IT industry, Bruno has been dedicated to assisting customers of all sizes in achieving their objectives. When he is not crafting innovative solutions for clients, Bruno enjoys spending quality time with his wife and son, exploring the scenic hiking trails around the SF Bay Area.

Build Better Engagement Using the AWS Community Engagement Flywheel: Part 2 of 3

Post Syndicated from Tristan Nguyen original https://aws.amazon.com/blogs/messaging-and-targeting/build-better-engagement-using-the-aws-community-engagement-flywheel-part-2-of-3/

Introduction

Part 2 of 3: From Cohorts to Campaigns

Businesses are constantly looking for better ways to engage with customer communities, but it’s hard to do when profile data is limited to user-completed form input or messaging campaign interaction metrics. Neither of these data sources tell a business much about their customer’s interests or preferences when they’re engaging with that community.

To bridge this gap for their community of customers, AWS Game Tech created the Cohort Modeler: a deployable solution for developers to map out and classify player relationships and identify like behavior within a player base. Additionally, the Cohort Modeler allows customers to aggregate and categorize player metrics by leveraging behavioral science and customer data. In our first blog post, we talked about how to extend Cohort Modeler’s functionality.

In this post, you’ll learn how to:

  1. Use the extension we built to create the first part of the Community Engagement Flywheel.
  2. Process the user extract from the Cohort Modeler and import the data into Amazon Pinpoint as a messaging-ready Segment.
  3. Send email to the users in the Cohort via Pinpoint’s powerful and flexible Campaign functionality.

Use Case Examples for The Cohort Modeler

For this example, we’re going to retrieve a cohort of individuals from our Cohort Modeler who we’ve identified as at risk:

  • Maybe they’ve triggered internal alarms where they’ve shared potential PII with others over cleartext.
  • Maybe they’ve joined chat channels known to be frequented by some of the game’s less upstanding citizens.

Either way, we want to make sure they understand the risks of what they’re doing and who they’re dealing with.

Pinpoint provides various robust methods to import user contact and personalization data in specific formats, and once Pinpoint has ingested that data, you can use Campaigns or Journeys to send customized and personalized messaging to your cohort members – either via automation, or manually via the Pinpoint Console.
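
For context, the static-segment import automated later in this post produces a CSV in the shape Pinpoint expects, along these lines (the addresses shown are illustrative):

ChannelType,Address,User.UserId,User.UserAttributes.FirstName,User.UserAttributes.LastName
EMAIL,wayne@example.com,wayne96,Wayne,Johnson
EMAIL,tristan@example.com,xortiz,Tristan,Nguyen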

Architecture overview

In this architecture, you’ll create a simple Amazon DynamoDB table that mimics a game studio’s database of record for its customers. You’ll then create a trigger on an Amazon Simple Storage Service (Amazon S3) bucket that will ingest the Cohort Modeler extract (created in the prior blog post) and convert it into a CSV file that Pinpoint can ingest. Lastly, once the CSV is generated, the AWS Lambda function will prompt Pinpoint to automatically ingest it as a static segment.

Once the automation is complete, you’ll use Pinpoint’s console to quickly and easily create a Campaign, including an HTML mail template, to the imported segment of players you identified as at risk via the Cohort Modeler.

Prerequisites

At this point, you should have completed the steps in the prior blog post, Extending the Cohort Modeler. This is all you’ll need to proceed.

Walkthrough

Messaging your Cohort

Now that we’ve extended the Cohort Modeler and built a way to extract cohort data into an S3 bucket, we’ll transform that data into a Segment in Pinpoint, and use the Pinpoint Console to send a message to the members of the Cohort via a Pinpoint Campaign. In this walkthrough, you’ll:

  • Create a Pinpoint Project to import your Cohort Segments.
  • Create a Dynamo table to emulate your database of record for your players.
  • Create an S3 bucket to hold the cohort contact data CSV file.
  • Create a Lambda trigger to respond to Cohort Modeler export events and kick off Pinpoint import jobs.
  • Create and send a Pinpoint Campaign using the imported Segment.

Create the Pinpoint Project

You’ll need a Pinpoint Project (sometimes referred to as an “App”) to send messaging to your cohort members, so navigate to the Pinpoint console and click Create a Project.

  • Sign in to the AWS Management Console and open the Pinpoint Console.
  • If this is your first time using Amazon Pinpoint, you will see a page that introduces you to the features of the service. In the Get started section, you’ll need to enter the name you want to call your project. We used ‘CohortModelerPinpoint‘ but you can use whatever you’d like.
  • On the following screen, the Configure features page, you’ll want to choose Configure in the Email section.
    • Pinpoint will ask you for an email address you want to validate, so that when email goes out, it will use your email address as the FROM header in your email. Enter the email address you want to use as your sending address, and Choose Verify email address.
    • Check the inbox of the address that you entered and look for an email from [email protected]. Open the email and click the link in the email to complete the verification process for the email address.
    • Note: Once you have verified your email identity, you may receive an alert prompting you to update your email address’ policy. If so, highlight your email under All identities, and choose Update policy. To complete this update, Enter confirm where requested, and choose Update.

  • Later on, when you’re asked for your Pinpoint Project ID, it can be accessed by choosing All projects from the Pinpoint navigation pane. From there, next to your project name, you will see the associated Project ID.

Create the Dynamo Table

For this step, you’re emulating a game studio’s database of record for its players, and therefore the Lambda function that you’re creating, (to merge Cohort Modeler data with the database of record) is also an emulation.

In a real-world situation, you would use the same ingestion method as the S3TriggerCohortIngest.py example that will be created further below. However, instead of using placeholder data, you would use the ‘playerId’ information extracted from the Cohort Modeler. This would allow you to formulate a specific query against your main database, whether it requires an SQL statement, or some other type of database query.

Creating the Table

Navigate to the DynamoDB Console. You’re going to create a table with ‘playerId’ as the Primary key, and four additional attributes: email, favorite role, first name, and last name.

  • In the navigation pane, choose Tables. On the next page, in the Tables section, choose Create table.
  • In the Table details section, we entered userdata for our Table name. (In order to maintain simple compatibility with the scripts that follow, it is recommended that you do the same.)
  • For Partition key, enter playerId and leave the data type as String.
  • Intentionally leave the Sort key blank and the data type as String.
  • Below, in the Table settings section, leave everything at their Default settings value.
  • Scroll to the end of the page and choose Create table.
Adding Synthetic Data

You’ll need some synthetic data in the database, so that your Cohort Modeler-Pinpoint integration can query the database, retrieve contact information, and then import that contact information into Pinpoint as a Segment.

  • From the DynamoDB Tables section, choose your newly created Table by selecting its name. (The name preferably being userdata).
  • In the DynamoDB navigation pane, choose Explore items.
  • From the Items returned section, choose Create item.
    • Once on the Create item page, ensure that the Form view is highlighted and not the JSON view. You’re going to create new entries in the table. Cohort Modeler creates the same synthetic information each time it’s built, so all you need to do is create three entries.
    • For the first entry, enter wayne96 as the Value for playerId.
    • Select the Add new attribute dropdown, and choose String.
    • Enter email as the Attribute name, and the Value should be your own email address since you’ll be receiving this email. This should be the same email used to configure your Pinpoint project from earlier.
    • Again, select the Add new attribute dropdown, and choose String.
    • Enter favoriteRole as the Attribute name, and enter Tank as the attribute’s Value.
    • Again, select the Add new attribute dropdown, and choose String.
    • Enter firstName as the Attribute name, and enter Wayne as the attribute’s Value.
    • Finally, select the Add new attribute dropdown, and choose String.
    • Enter lastName as the Attribute name, and enter Johnson as the attribute’s Value.

  • Repeat the process for the following two users. You’ll be using the SES Mailbox Simulator on these player IDs – one will simulate a successful delivery (but no opens or clicks), and the other will simulate a bounce notification, which represents an unknown user response code.

 

playerId | email | favoriteRole | firstName | lastName
xortiz | [email protected] | Healer | Tristan | Nguyen
msmith | [email protected] | DPS | Brett | Ezell
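
If you’d rather script these last two rows than enter them in the console, here is a minimal boto3 sketch. The simulator addresses are assumptions based on the standard SES mailbox simulator, so substitute your own values if your setup differs:

import boto3

table = boto3.resource("dynamodb").Table("userdata")

# Assumed SES mailbox simulator addresses: one simulates a successful
# delivery, the other a hard bounce.
rows = [
    {"playerId": "xortiz", "email": "success@simulator.amazonses.com",
     "favoriteRole": "Healer", "firstName": "Tristan", "lastName": "Nguyen"},
    {"playerId": "msmith", "email": "bounce@simulator.amazonses.com",
     "favoriteRole": "DPS", "firstName": "Brett", "lastName": "Ezell"},
]
for row in rows:
    table.put_item(Item=row)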

Now that the table’s populated, you can build the integration between Cohort Modeler and your new “database of record,” allowing you to use the cohort data to send messages to your players.

Create the Pinpoint Import S3 Bucket

Pinpoint requires a CSV or JSON file stored on S3 to run an Import Segment job, so we’ll need a bucket (separate from our Cohort Modeler Export bucket) to facilitate this.

  • Navigate to the S3 Console, and inside the Buckets section, choose Create Bucket.
  • In the General configuration section, enter a bucket name, remembering that its name must be unique across all of AWS.
  • You can leave all other settings at their default values, so scroll down to the bottom of the page and choose Create Bucket. Remember the name – We’ll be referring to it as your “Pinpoint import bucket” from here on out.
Create a Pinpoint Role for the S3 Bucket

Before creating the Lambda function, we need to create a role that allows the Cohort Modeler data to be imported into Amazon Pinpoint in the form of a segment.

For more details on how to create an IAM role to allow Amazon Pinpoint to import endpoints from the S3 Bucket, refer to this documentation. Otherwise, you can follow the instructions below:

  • Navigate to the IAM Dashboard. In the navigation pane, under Access management, choose Roles, followed by Create role.
  • Once on the Select trusted entity page, highlight and select AWS service, under the Trusted entity type section.
  • In the Use case section dropdown, type or select S3. Once selected, ensure that S3 is highlighted, and not S3 Batch Operations. Choose Next.
  • From the Add permissions page, enter AmazonS3ReadOnlyAccess within Search area. Select the associated checkbox and choose Next.
  • Once on the Name, review, and create page, For Role name, enter PinpointSegmentImport. 
  • Scroll down and choose Create role.
  • From the navigation pane, and once again under Access management, choose Roles. Select the name of the role just created.
  • In the Trust relationships tab, choose Edit trust policy.
  • Paste the following JSON trust policy. Remember to replace accountId, region and application-id with your AWS account ID, the region you’re running Amazon Pinpoint from, and the Amazon Pinpoint project ID respectively.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": "sts:AssumeRole",
            "Effect": "Allow",
            "Principal": {
                "Service": "pinpoint.amazonaws.com"
            },
            "Condition": {
                "StringEquals": {
                    "aws:SourceAccount": "accountId"
                },
                "ArnLike": {
                    "aws:SourceArn": "arn:aws:mobiletargeting:region:accountId:apps/application-id"
                }
            }
        }
    ]
}

Build the Lambda

You’ll need to create a Lambda function for S3 to trigger when Cohort Modeler drops its export files into the export bucket, as well as the connection to the Cohort Modeler export bucket to set up the trigger. The steps below will take you through the process.

Create the Lambda

Head to the Lambda service menu, and from the Functions page, choose Create function. From there:

  • On the Create function page, select Author from scratch.
  • For Function Name, enter S3TriggerCohortIngest for consistency.
  • For Runtime, choose Python 3.8.
  • No other complex configuration options are needed, so leave the remaining options as default and click Create function.
  • In the Code tab, replace the sample code with the code below.
import json
import os
import uuid
import urllib

import boto3
from botocore.exceptions import ClientError

### S3TriggerCohortIngest

# We get activated once we're triggered by an S3 file getting Put.
# We then:
# - grab the file from S3 and ingest it.
# - negotiate with a DB of record (Dynamo in our test case) to pull the corresponding player data.
# - transform that record data into a format Pinpoint will interpret.
# - Save that CSV into a different S3 bucket, and
# - Instruct Pinpoint to ingest it as a Segment.


# save the CSV file to a random unique filename in S3
def save_s3_file(content):
    
    # generate a random uuid csv filename.
    fname = str(uuid.uuid4()) + ".csv"
    
    print("Saving data to file: " + fname)
    
    try:
        # grab the S3 bucket name
        s3_bucket_name = os.environ['S3BucketName']
        
        # Set up the S3 boto client
        s3 = boto3.resource('s3')
        
        # Lob the body into the object.
        object = s3.Object(s3_bucket_name, fname)
        object.put(Body=content)
        
        return fname
        
    # If we fail, say why and exit.
    except ClientError as error:
        print("Couldn't store file in S3: %s" % json.dumps(error.response))
        return {
            'statuscode': 500,
            'body': json.dumps('Failed access to storage.')
        }
        
# Given a list of users, query the user dynamo db for their account info.
def query_dynamo(userlist):
    
    # set up the dynamo client.
    ddb_client = boto3.resource('dynamodb')
    
    # grab the table name from the environment (set when configuring the Lambda)
    table_name = os.environ.get('DynamoUserTableName', 'userdata')

    # Set up the RequestItems object for our query.
    batch_keys = {
        table_name: {
            'Keys': [{'playerId': user} for user in userlist]
        }
    }

    # query for the keys. note: currently no explicit error-checking for <= 100 items.
    try:
        db_response = ddb_client.batch_get_item(RequestItems=batch_keys)
        return db_response
        
    # If we fail, say why and exit.
    except ClientError as error:
        print("Couldn't access data in DynamoDB: %s" % json.dumps(error.response))
        return {
            'statuscode': 500,
            'body': json.dumps('Failed access to db.')
        }
        
def ingest_pinpoint(filename):
    
    s3url = "s3://" + os.environ.get('S3BucketName') + "/" + filename
    
    
    try:
        pinClient = boto3.client('pinpoint')
        
        response = pinClient.create_import_job(
            ApplicationId=os.environ.get('PinpointApplicationID'),
            ImportJobRequest={
                'DefineSegment': True,
                'Format': 'CSV',
                'RegisterEndpoints': True,
                'RoleArn': os.environ.get('PinpointS3RoleARN'),  # role created earlier, passed in via environment variable
                'S3Url': s3url,
                'SegmentName': filename
            }
        )
        
        return {
            'ImportId': response['ImportJobResponse']['Id'],
            'SegmentId': response['ImportJobResponse']['Definition']['SegmentId'],
            'ExternalId': response['ImportJobResponse']['Definition']['ExternalId'],
        }
        
    # If we fail, say why and exit.
    except ClientError as error:
        print("Couldn't create Import job for Pinpoint: %s" % json.dumps(error.response))
        return {
            'statuscode': 500,
            'body': json.dumps('Failed segment import to Pinpoint.')
        }
        
# Lambda entry point GO
def lambda_handler(event, context):
    
    # Get the bucket + obj name from the incoming event
    incoming_bucket = event['Records'][0]['s3']['bucket']['name']
    filename = urllib.parse.unquote_plus(event['Records'][0]['s3']['object']['key'], encoding='utf-8')
    
    # light up the S3 client
    s3 = boto3.resource('s3')
    
    # grab the file that triggered us
    try:
        content_object = s3.Object(incoming_bucket, filename)
        file_content = content_object.get()['Body'].read().decode('utf-8')
        
        # and turn it into JSON.
        json_content = json.loads(file_content)
        
    except Exception as e:
        print(e)
        print('Error getting object {} from bucket {}. Make sure they exist and your bucket is in the same region as this function.'.format(filename, incoming_bucket))
        raise e

    # The file content has already been parsed into Python structures; use it directly.
    record_json = json_content
    
    # Initialize an empty list for names
    namelist = []
    
    # Iterate through the records in the list
    for record in record_json:
        # Check if "playerId" key exists in the record
        if "playerId" in record:
            # Append the first element of "playerId" list to namelist
            namelist.append(record["playerId"][0])

    # use the name list and grab the corresponding users from the dynamo table
    userdatalist = query_dynamo(namelist)
    
    # grab just what we need to create our import file
    userdata_responses = userdatalist["Responses"][os.environ.get('DynamoUserTableName', 'userdata')]
    
    csvlist = "ChannelType,Address,User.UserId,User.UserAttributes.FirstName,User.UserAttributes.LastName\n"
    
    for user in userdata_responses:
        newString = "EMAIL," + user["email"] + "," + user["playerId"] + "," + user["firstName"] + "," + user["lastName"] + "\n"
        csvlist += newString
        
    # Dump it to S3 with a unique filename. 
    csvFile = save_s3_file(csvlist)

    # and tell Pinpoint to import it as a Segment.
    pinResponse = ingest_pinpoint(csvFile)
    
    return {
        'statusCode': 200,
        'body': json.dumps(pinResponse)
    }

Configure the Lambda

Firstly, you’ll need to raise the function timeout, because sometimes it will take time to import large Pinpoint segments. To do so, navigate to the Configuration tab, then General configuration and change the Timeout value to the maximum of 15 minutes.

Next, select Environment variables beneath General configuration in the navigation pane. Choose Edit, followed by Add environment variable, for each Key and Value below.

  • Create a key – DynamoUserTableName – and give it the name of the DynamoDB table you built in the previous step. (If following our recommendations, it would be userdata.)
  • Create a key – PinpointApplicationID – and give it the Project ID (not the name), of the Pinpoint Project you created in the first step.
  • Create a key – S3BucketName – and give it the name of the Pinpoint Import S3 Bucket.
  • Finally, create a key – PinpointS3RoleARN – and paste the ARN of the PinpointSegmentImport role you created earlier.
  • Once all Environment Variables are entered, choose Save.

In a production build, you could store this information in AWS Systems Manager Parameter Store, in order to ensure portability and resilience.
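
As a sketch of that production approach, the Lambda could read the same values from Parameter Store instead of environment variables; the parameter names below are hypothetical:

import boto3

ssm = boto3.client("ssm")

def get_config(name):
    # Hypothetical parameter path; create these parameters in Systems Manager first.
    return ssm.get_parameter(Name="/cohort-modeler/" + name)["Parameter"]["Value"]

pinpoint_app_id = get_config("PinpointApplicationID")
import_bucket = get_config("S3BucketName")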

While still in the Configuration tab, from the navigation pane, choose the Permissions menu option.

  • Note that just beneath Execution role, AWS has created an IAM Role for the Lambda. Select the role’s name to view it in the IAM console.
  • On the Role’s page, in the Permissions tab and within the Permissions policies section, you should see one policy attached to the role: AWSLambdaBasicExecutionRole
  • You will need to give the Lambda access to your Pinpoint import bucket, so select the Add permissions dropdown and choose Create inline policy – we won’t be needing this policy anywhere else.
  • On the next screen, click the JSON tab.
    • Paste the following IAM Policy JSON:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:PutObject",
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::YOUR-PINPOINT-BUCKET-NAME-HERE/*",
                "arn:aws:s3:::YOUR-PINPOINT-BUCKET-NAME-HERE",
                "arn:aws:s3:::YOUR-CM-BUCKET-NAME-HERE/*",
                "arn:aws:s3:::YOUR-CM-BUCKET-NAME-HERE"
            ]
        },
        {
            "Effect": "Allow",
            "Action": "dynamodb:BatchGetItem",
            "Resource": "arn:aws:dynamodb:region:accountId:table/userdata"
        },
        {
            "Effect": "Allow",
            "Action": "mobiletargeting:CreateImportJob",
            "Resource": "arn:aws:mobiletargeting:region:accountId:apps/application-id"
        },
        {
            "Effect": "Allow",
            "Action": "iam:PassRole",
            "Resource": "arn:aws:iam::accountId:role/PinpointSegmentImport"
        }
    ]
}
    • Replace the placeholder YOUR-CM-BUCKET-NAME-HERE with the name of the S3 bucket you created in the previous blog post to store Cohort Modeler extracts, and YOUR-PINPOINT-BUCKET-NAME-HERE with the Pinpoint import bucket you created earlier in this blog.
    • Remember to replace accountId, region and application-id with your AWS account ID, the region you’re running Amazon Pinpoint from, and the Amazon Pinpoint project ID respectively.
    • Choose Review Policy.
    • Give the policy a name – we used S3TriggerCohortIngestPolicy.
    • Finally, choose Create Policy.
Trigger the Lambda via S3

The goal is for the Lambda to be triggered when Cohort Modeler drops the extract file into its designated S3 delivery bucket; a sample of the event payload the Lambda receives is shown after the steps below. Fortunately, setting this up is a simple process:

  • Navigate back to the Lambda Functions page. For this particular Lambda script S3TriggerCohortIngest, choose the + Add trigger from the Function overview section.
    • From the Trigger configuration dropdown, select S3 as the source.
    • Under Bucket, enter or select the bucket you’ve chosen for Cohort Modeler extract delivery. (Created in the previous blog.)
    • Leave Event type as “All object create events”.
    • Leave both Prefix and Suffix blank.
    • Check the box that acknowledges that using the same bucket for input and output is not recommended, as it can increase Lambda usage and thereby costs.
    • Finally, choose Add.
    • Lambda will add the appropriate permissions to invoke the trigger when an object is created in the S3 bucket.
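
For reference, the notification the Lambda receives from this trigger is the standard S3 event shape; the fields the handler reads look roughly like this (bucket and key values are illustrative):

{
    "Records": [
        {
            "s3": {
                "bucket": {"name": "your-cm-export-bucket"},
                "object": {"key": "export/ea_atrisk_2_2023-09-12_13-57-06.json"}
            }
        }
    ]
}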
Test the Lambda

The best way to test the end-to-end process is to simply connect to the endpoint you created in the first step of the process and send it a valid query. I personally use Postman, but you can use curl or any other HTTP tool to send the request.

Again, refer back to your prior work to determine the HTTP API endpoint for your Cohort Modeler’s cohort extract endpoint, and then send it the following query:

https://YOUR-ENDPOINT.execute-api.YOUR-REGION.amazonaws.com/Prod/data/cohort/ea_atrisk?threshold=2

You should receive back a response that looks something like this:

{'statusCode': 200, 'body': 'export/ea_atrisk_2_2023-09-12_13-57-06.json'}

The status code confirms that the request was successful, and the body provides the name of the export file that was created.

  • From the AWS console, navigate to the S3 Dashboard, and select the S3 Bucket you assigned to Cohort Modeler exports. You should see a JSON file corresponding to the response from your API call in that bucket.
  • Still in S3, navigate back and select the S3 bucket you assigned as your Pinpoint Import bucket. You should find a CSV file with the same file prefix in that bucket.
  • Finally, navigate to the Pinpoint dashboard and choose your Project.
  • From the navigation pane, select Segments. You should see a segment name which directly corresponds to the CSV file which you located in the Pinpoint Import bucket.

If these three steps are complete, then the outbound arm of the Community Engagement Flywheel is functional. All that’s left now is to test the Segment by using it in a Campaign.

Create an email template

In order to send your message recipients a message, you’ll need a message template. In this section, we’ll walk you through the process. The Pinpoint Template Editor is a simple HTML editor, but third-party services, such as visual designers, can integrate directly with Pinpoint to provide a seamless connection between the design tool and Pinpoint.

  • From the navigation pane of the Pinpoint console, choose Message templates, and then select Create template.
  • Leave the Channel set to Email, and under Template name, enter a unique and memorable name.
  • Under Subject, we entered ‘Happy Video Game Day!’, but use whatever you would like.
  • Locate and copy the contents of EmailTemplate.html, and paste the contents into the Message section of the form.
  • Finally, choose Create, and your Template will now be available for use.

Create & Send the Pinpoint Campaign

For the final step, you will create and send a campaign to the endpoints included in the Segment that the Community Engagement Flywheel created. Earlier, you mapped three email addresses to the identities that Cohort Modeler generated for your query: your email, and two test emails from the SES Email Simulator. As a result, you should receive one email to the email address you selected when you’ve completed this process, as well as events which indicate the status of all campaign activities.

  • In the navigation bar of the Pinpoint console, choose All projects, and select the project you’ve created for this exercise.
  • From the navigation pane, choose Campaigns, and then Create a campaign at the top of the page.
  • On the Create a campaign page, give your campaign a name, highlight Standard campaign, and choose Email for the Channel. To proceed, choose Next.
  • On the Choose a segment page, highlight Use an existing segment, and from the Segment dropdown, select the segment .csv that was created earlier. Once selected, choose Next.
  • On the Create your message page, you have two tasks:
    • You’re going to use the email template you created in the prior step, so in the Email template section, under Template name, select Choose a template, followed by the template you created, and finally Choose template.
    • In the Email settings section, ensure you’ve selected the sender email address you verified previously when you initially created the Pinpoint project.
    • Choose Next.
  • On the Choose when to send the campaign page, ensure Immediately is highlighted for when you want the campaign to be sent. Scroll down and choose Next.
  • Finally, on the Review and launch page, verify your selections as you scroll down the page, and finally Launch campaign.

Check your inbox! You will shortly receive the email, and this confirms the Campaign has been successfully sent.

Conclusion

So far you’ve extended the Cohort Modeler to report on the cohorts it’s built for you, you’ve operated on that extract and built an ETL machine to turn that cohort into relevant contact and personalization data, you’ve imported the contact data into Pinpoint as a static Segment, and you’ve created a Pinpoint Campaign with that Segment to send messaging to that Cohort.

In the next and final blog post, we’ll show how to respond to events that result from your cohort members interacting with the messaging they’ve been sent, and how to enrich the cohort data with those events so you can understand in deeper detail how your messaging works – or doesn’t work – with your cohort members.

Related Content

About the Authors

Tristan (Tri) Nguyen

Tristan (Tri) Nguyen is an Amazon Pinpoint and Amazon Simple Email Service Specialist Solutions Architect at AWS. At work, he specializes in technical implementation of communications services in enterprise systems and architecture/solutions design. In his spare time, he enjoys chess, rock climbing, hiking and triathlon.

Brett Ezell

Brett Ezell is an Amazon Pinpoint and Amazon Simple Email Service Specialist Solutions Architect at AWS. As a Navy veteran, he joined AWS in 2020 through an AWS technical military apprenticeship program. When he isn’t deep diving into solutions for customer challenges, Brett spends his time collecting vinyl, attending live music, and training at the gym. An admitted comic book nerd, he feeds his addiction every Wednesday by combing through his local shop for new books.

Build Better Engagement using the AWS Community Engagement Flywheel: Part 1 of 3

Post Syndicated from Tristan Nguyen original https://aws.amazon.com/blogs/messaging-and-targeting/build-better-engagement-using-the-aws-community-engagement-flywheel-part-1-of-3/

Introduction

Part 1 of 3: Extending the Cohort Modeler

Businesses are constantly looking for better ways to engage with customer communities, but it’s hard to do when profile data is limited to user-completed form input or messaging campaign interaction metrics. Neither of these data sources tell a business much about their customer’s interests or preferences when they’re engaging with that community.

To bridge this gap for their community of customers, AWS Game Tech created the Cohort Modeler: a deployable solution for developers to map out and classify player relationships and identify like behavior within a player base. Additionally, the Cohort Modeler allows customers to aggregate and categorize player metrics by leveraging behavioral science and customer data.

In this series of three blog posts, you’ll learn how to:

  1. Extend the Cohort Modeler’s functionality to provide reporting functionality.
  2. Use Amazon Pinpoint, the Digital User Engagement Events Database (DUE Events Database), and the Cohort Modeler together to group your customers into cohorts based on that data.
  3. Interact with them through automation to send meaningful messaging to them.
  4. Enrich their behavioral profiles via their interaction with your messaging.

In this blog post, we’ll show how to extend Cohort Modeler’s functionality to include and provide cohort reporting and extraction.

Use Case Examples for The Cohort Modeler

For this example, we’re going to retrieve a cohort of individuals from our Cohort Modeler who we’ve identified as at risk:

  • Maybe they’ve triggered internal alarms where they’ve shared potential PII with others over cleartext
  • Maybe they’ve joined chat channels known to be frequented by some of the game’s less upstanding citizens.

Either way, we want to make sure they understand the risks of what they’re doing and who they’re dealing with.

Because the Cohort Modeler’s API automatically translates the data it’s provided into the graph data format, the request we’re making is an easy one: we’re simply asking CM to retrieve all of the player IDs where the player’s ea_atrisk attribute value is greater than 2.

In our case, that either means:

  1. They’ve shared PII at least twice, or
  2. They’ve shared PII at least once and joined the #give-me-your-credit-card chat channel, which is frequented by real-life scammers.

These are currently the only two activities which generate at-risk data in our example model.

Architecture overview

In this example, you’ll extend Cohort Modeler’s functionality by creating a new API resource and method, and test that functional extension to verify it’s working. This supports our use case by providing a studio with a mechanism to identify the cohort of users who have engaged in activities that may put them at risk for fraud or malicious targeting.

CohortModelerExtensionArchitecture

Prerequisites

This blog post series integrates two tech stacks: the Cohort Modeler and the Digital User Engagement Events Database, both of which you’ll need to install. In addition to setting up your environment, you’ll need to clone the Community Engagement Flywheel repository, which contains the scripts you’ll need to use to integrate Cohort Modeler and Pinpoint.

You should have the following prerequisites:

Walkthrough

Extending the Cohort Modeler

In order to meet our functional requirements, we’ll need to extend the Cohort Modeler API. This first part will walk you through the mechanisms to do so. In this walkthrough, you’ll:

  • Create an Amazon Simple Storage Service (Amazon S3) bucket to accept exports from the Cohort Modeler
  • Create an AWS Lambda Layer to support Python operations for Cohort Modeler’s Gremlin interface to the Amazon Neptune database
  • Build a Lambda function to respond to API calls requesting cohort data, and
  • Integrate the Lambda with the Amazon API Gateway.

The S3 Export Bucket

Normally it’d be enough to just create the S3 Bucket, but because our Cohort Modeler operates inside an Amazon Virtual Private Cloud (VPC), we need to both create the bucket and create a gateway endpoint for it.

Create the Bucket

The size of a Cohort Modeler extract could be considerable depending on the size of a cohort, so it’s a best practice to deliver the extract to an S3 bucket. All you need to do in this step is create a new S3 bucket for Cohort Modeler exports.

  • Navigate to the S3 Console page, and inside the main pane, choose Create Bucket.
  • In the General configuration section, enter a bucket name, remembering that its name must be unique across all of AWS.
  • You can leave all other settings at their default values, so scroll down to the bottom of the page and choose Create Bucket. Remember the name – I’ll be referring to it as your “CM export bucket” from here on out.

Create S3 Gateway endpoint

When accessing “global” services, like S3 (as opposed to VPC services, like EC2) from inside a private VPC, you need to create an Endpoint for that service inside the VPC. For more information on how Gateway Endpoints for Amazon S3 work, refer to this documentation.

  • Open the Amazon VPC console.
  • In the navigation pane, under Virtual private cloud, choose Endpoints.
  • In the Endpoints pane, choose Create endpoint.
  • In the Endpoint settings section, under Service category, select AWS services.
  • In the Services section, under Find resources by attribute, filter by Type: Gateway and select com.amazonaws.region.s3.
  • For VPC, select the VPC in which to create the endpoint.
  • For Route tables, select the route tables to be used by the endpoint. A route that points traffic destined for the service to the endpoint is added automatically.
  • In the Policy section, select Full access to allow all operations by all principals on all resources over the VPC endpoint. Otherwise, select Custom to attach a VPC endpoint policy that controls the permissions that principals have to perform actions on resources over the VPC endpoint.
  • (Optional) To add a tag, choose Add new tag in the Tags section and enter the tag key and the tag value.
  • Choose Create endpoint.

Create the VPC Endpoint Security Group

As noted above, the endpoint needs to know which network interfaces to accept connections from – so we’ll need to create a Security Group to establish that trust.

  • Navigate to the Amazon VPC console and In the navigation pane, under Security, choose Security groups.
  • In the Security Groups pane choose Create security group.
  • Under the Basic details section, name your security group S3 Endpoint SG.
  • Under the Inbound Rules section, choose Add Rule.
    • Under Type, select All traffic.
    • Under Source, leave Custom selected.
    • For the Custom Source, open the dropdown and choose the S3 gateway endpoint (this should be named pl-63a5400a)
    • Repeat the process for Outbound rules.
    • When finished, choose Create security group

Creating a Lambda Layer

You can use the code as provided in a Lambda, but the gremlin libraries required for it to run are another story: gremlin_python doesn’t come as part of the default Lambda libraries. There are two ways to address this:

  • You can upload the libraries with the code in a .zip file; this will work, but it will mean the Lambda isn’t editable via the built-in editor, which isn’t a scalable technique (and makes debugging quick changes a real chore).
  • You can create a Lambda Layer, upload those libraries separately, and then associate them with the Lambda you’re creating.

The Layer is a best practice, so that’s what we’re going to do here.

Creating the zip file

In Python, you’ll need to upload a .zip file to the Layer, and all of your libraries need to be included in paths within the /python directory (inside the zip file) to be accessible. Use pip to install the libraries you need into a blank directory so you can zip up only what you need, and no more.

  • Create a new subdirectory in your user directory.
  • Create a /python subdirectory inside it.
  • Invoke pip with the --target option:
pip install --target=./python gremlinpython

Ensure that you’re zipping the python folder itself: the resulting file should be named python.zip, and it should extract to a python folder.
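
If you’d rather script this step, here’s a minimal sketch using Python’s standard library (it assumes you run it from the directory that contains the python subdirectory):

import shutil

# Creates python.zip in the current directory; the archive contains the
# python/ folder at its root, which is the layout Lambda Layers expect.
shutil.make_archive("python", "zip", root_dir=".", base_dir="python")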

Creating the Layer

Head to the Lambda console, and select the Layers menu option from the AWS Lambda navigation pane. From there:

  • Choose Create layer in the Layers section.
  • Give it a relevant name – like gremlinpython.
  • Select Upload a .zip file and upload the zip file you just created.
  • For Compatible architectures, select x86_64.
  • Select Python 3.8 as your runtime.
  • Choose Create.

Assuming all steps have been followed, you’ll receive a message that the layer has been successfully created.

Building the Lambda

You’ll be extending the Cohort Modeler with new functionality, and the way CM manages its functionality is via microservice-based Lambdas. You’ll be building a new API to query the CM and extract cohort information to S3.

Create the Lambda

Head back to the Lambda service menu and, in the Resources for (your region) section, choose Create Function. From there:

  • On the Create function page select Author from scratch.
  • For Function Name enter ApiCohortGet for consistency.
  • For Runtime choose Python 3.8.
  • For Architectures, select x86_64.
  • Under the Advanced Settings pane select Enable VPC – you’re going to need this Lambda to query Cohort Modeler’s Neptune database, which has VPC endpoints.
    • Under VPC select the VPC created by the Cohort Modeler installation process.
    • Select all subnets in the VPC.
    • Select the security group labeled as the Security Group for API Lambda functions (also installed by CM).
    • Additionally, select the security group S3 Endpoint SG that we created; this allows the Lambda function hosted inside the VPC to access the S3 bucket.
  • Choose Create Function.
  • In the Code tab, within the Code source window, delete all of the sample code and replace it with the code below. This Python script will allow you to query Cohort Modeler for cohort extracts.
import os
import json
import boto3
from datetime import datetime
from gremlin_python import statics
from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection
from gremlin_python.driver.protocol import GremlinServerError
from gremlin_python.driver import serializer
from gremlin_python.process.anonymous_traversal import traversal
from gremlin_python.process.graph_traversal import __
from gremlin_python.process.strategies import *
from gremlin_python.process.traversal import T, P
from aiohttp.client_exceptions import ClientConnectorError
import logging

logger = logging.getLogger()
logger.setLevel(logging.INFO)

s3 = boto3.client('s3')

def query(g, cohort, thresh):
    return (g.V().hasLabel('player')
            .has(cohort, P.gt(thresh))
            .valueMap("playerId", cohort)
            .toList())

def doQuery(g, cohort, thresh):
    return query(g, cohort, thresh)

# Lambda handler
def lambda_handler(event, context):
    
    # Connection instantiation
    conn = create_remote_connection()
    g = create_graph_traversal_source(conn)
    try:
        # Validate the cohort info here if needed.

        # Grab the event resource, method, and parameters.
        resource = event["resource"]
        method = event["httpMethod"]
        pathParameters = event["pathParameters"]

        # Grab query parameters. We should have two: cohort and threshold.
        # API Gateway sets queryStringParameters to null when none are passed,
        # so fall back to an empty dict before calling .get() on it.
        queryParameters = event.get("queryStringParameters") or {}

        cohort_val = pathParameters.get("cohort")
        thresh_val = int(queryParameters.get("threshold", 0))

        result = doQuery(g, cohort_val, thresh_val)

        
        # Convert result to JSON
        result_json = json.dumps(result)
        
        # Generate the current timestamp in the format YYYY-MM-DD_HH-MM-SS
        current_timestamp = datetime.now().strftime('%Y-%m-%d_%H-%M-%S')
        
        # Create the S3 key with the timestamp
        s3_key = f"export/{cohort_val}_{thresh_val}_{current_timestamp}.json"

        # Upload to S3
        s3_result = s3.put_object(
            Bucket=os.environ['S3ExportBucket'],
            Key=s3_key,
            Body=result_json,
            ContentType="application/json"
        )
        response = {
            'statusCode': 200,
            'body': s3_key
        }
        return response

    except Exception as e:
        logger.error(f"Error occurred: {e}")
        return {
            'statusCode': 500,
            'body': str(e)
        }

    finally:
        conn.close()

# Connection management
def create_graph_traversal_source(conn):
    return traversal().withRemote(conn)

def create_remote_connection():
    database_url = 'wss://{}:{}/gremlin'.format(os.environ['NeptuneEndpoint'], 8182)
    return DriverRemoteConnection(
        database_url,
        'g',
        pool_size=1,
        message_serializer=serializer.GraphSONSerializersV2d0()
    )
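
Before wiring up API Gateway, you can sanity-check the handler from the Lambda console using a test event shaped like an API Gateway proxy request. A minimal sketch (the cohort name and threshold are illustrative):

{
    "resource": "/data/cohort/{cohort}",
    "httpMethod": "GET",
    "pathParameters": { "cohort": "ea_atrisk" },
    "queryStringParameters": { "threshold": "2" }
}

Note that the function still needs the layer, VPC access, and the environment variables configured in the next section before this test will succeed.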

Configure the Lambda

Head back to the Lambda service page, and from the navigation pane, select Functions. In the Functions section, select ApiCohortGet from the list.

  • In the Function overview section, select the Layers icon beneath your Lambda name.
  • In the Layers section, choose Add a layer.
  • In the Choose a layer section, under Layer source, select Custom layers.
  • From the dropdown menu below, select the custom layer you just created, gremlinpython.
  • For Version, select the appropriate (probably the highest, or most recent) version.
  • Once finished, choose Add.

Now, underneath the Function overview, navigate to the Configuration tab and choose Environment variables from the navigation pane.

  • Choose Edit to create a new variable. For the key, enter NeptuneEndpoint, and give it the value of the Cohort Modeler’s Neptune database endpoint. This value is available from the Neptune console under Databases. Do not use the read-only cluster endpoint; select the ‘writer’ type. Once selected, the endpoint URL will be listed beneath the Connectivity & security tab.
  • Create an additional key titled S3ExportBucket, and for the value use the unique name of the S3 bucket you created earlier to receive extracts from Cohort Modeler. Once complete, choose Save.
  • In a production build, you would store this information in AWS Systems Manager Parameter Store to ensure portability and resilience.

While still in the Configuration tab, under the navigation pane choose Permissions.

  • Note that AWS has created an IAM role for the Lambda. Select the role name to view it in the IAM console.
  • Under the Permissions tab, in the Permissions policies section, there should be two policies attached to the role: AWSLambdaBasicExecutionRole and AWSLambdaVPCAccessExecutionRole.
  • You’ll need to give the Lambda access to your CM export bucket.
  • Also in the Permissions policies section, choose the Add permissions dropdown and select Create inline policy – we won’t be needing this policy anywhere else.
  • On the new page, choose the JSON tab.
    • Delete all of the sample code within the Policy editor, and paste the inline policy below into the text area.
    • {
          "Version": "2012-10-17",
          "Statement": [
              {
                  "Effect": "Allow",
                  "Action": "s3:*",
                  "Resource": [
                      "arn:aws:s3:::YOUR-S3-BUCKET-NAME-HERE",
                      "arn:aws:s3:::YOUR-S3-BUCKET-NAME-HERE /*"
                  ]
              }
          ]
      }
  • Replace the placeholder YOUR-S3-BUCKET-NAME-HERE with the name of your CM export bucket.
  • Click Review Policy.
  • Give the policy a name – I used ApiCohortGetS3Policy.
  • Click Create Policy.

Integrating with API Gateway

Now you’ll need to integrate the API Gateway that the Cohort Modeler created with the new Lambda function that you just created. If you’re on the old console user interface, we strongly recommend switching over to the new console UI, since the previous UI is deprecated as of the 30th of October 2023. Consequently, the following instructions apply to the new console UI.

  • Navigate to the main service page for API Gateway.
  • From the navigation pane, choose Use the new console.

APIGatewayNewConsole

Create the Resource

  • From the new console UI, select the name of the API Gateway from the APIs Section that corresponds to the name given when you launched the SAM template.
  • On the Resources navigation pane, choose /data, followed by selecting Create resource.
  • Under Resource name, enter cohort, followed by Create resource.

CreateNewResource

We’re not quite finished. We want to be able to ask the Cohort Modeler to give us a cohort based on a path parameter – so that when we go to /data/cohort/COHORT-NAME/ we receive information about the cohort name that we provided. To enable this, repeat the resource creation step: select the /data/cohort resource, choose Create resource, and enter {cohort} as the Resource name – the curly braces tell API Gateway to treat this path segment as a parameter.

Create the Method

CreateMethod

Now we’ll create the GET Method we’ll use to request cohort data from Cohort Modeler.

  • From the same menu, choose the /data/cohort/{cohort} resource, select GET from the Methods dropdown section, and choose Create Method.
  • On the Create method page, select GET under Method type, and select Lambda function under Integration type.
  • For Lambda proxy integration, turn the toggle switch on.
  • Under Lambda function, choose the function ApiCohortGet, created previously.
  • Finally, choose Create method.
  • API Gateway will prompt you for permission to access the Lambda – this is fine; choose OK.

Create the API Key

You’ll want to access your API securely, so at a minimum you should create an API Key and require it as part of your access model.

CreateAPIKey

  • Under the API Gateway navigation pane, choose APIs. From there, select API Keys, also under the navigation pane.
  • In the API keys section, choose Create API key.
  • On the Create API key page, enter your API Key name, while leaving the remaining fields at their default values. Choose Save to complete.
  • Returning to the API keys section, select and copy the API key that was generated.
  • Once again, select APIs from the navigation menu, and continue again by selecting the link to your CM API from the list.
  • From the navigation pane, choose API settings, nested under your API name (not the Settings option at the bottom of the pane).

  • In the API details section, choose Edit. Once on the Edit API settings page, ensure the Header option is selected under API key source.

Deploy the API

Now that you’ve made your changes, you’ll want to deploy the API with the new endpoint enabled.

  • Back in the navigation pane, under your CM API’s dropdown menu, choose Resources.
  • On the Resources page for your CM API, choose Deploy API.
  • Select the Prod stage (or create a new stage name for testing) and click Deploy.

Test the API

When the API has deployed, the system will display your API’s URL. You should now be able to test your new Cohort Modeler API:

  • Using your favorite tool (curl, Postman, etc.) create a new request to your API’s URL.
    • The URL should look like https://randchars.execute-api.us-east-1.amazonaws.com/Stagename. You can retrieve your API Gateway endpoint URL by selecting API settings in the navigation pane of your CM API’s dropdown menu.
    • From the API settings page, under Default endpoint, you will see your active API Gateway endpoint URL. Remember to add the stage name (for example, “Prod”) at the end of the URL.

    • Be sure you’re adding a header named X-API-Key to the request, and give it the value of the API key you created earlier.
    • Add the /data/cohort resource to the end of the URL to access the new endpoint.
    • Add /ea_atrisk after /data/cohort – you’re querying for the cohort of players who belong to the at-risk cohort.
    • Finally, add ?threshold=2 so that we’re only looking at players whose cohort value (in this case, the number of times they’ve shared personally identifiable information) is greater than 2. The final URL should look something like: https://randchars.execute-api.us-east-1.amazonaws.com/Stagename/data/cohort/ea_atrisk?threshold=2
  • Once you’ve submitted the query, your response should look like this:
{'statusCode': 200, 'body': 'export/ea_atrisk_2_2023-09-12_13-57-06.json'}

The status code indicates a successful query, and the body indicates the name of the JSON file in your extract S3 bucket which contains the cohort information. The name comprises the attribute, the threshold level, and the time the export was made. Go ahead and navigate to the S3 bucket, find the file, and download it to see what Cohort Modeler has found for you.
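
If you prefer to script the test, here’s a minimal Python sketch of the same request (the URL, stage name, and API key are placeholders):

import requests

url = "https://randchars.execute-api.us-east-1.amazonaws.com/Prod/data/cohort/ea_atrisk"  # placeholder
headers = {"X-API-Key": "your-api-key-here"}  # placeholder

# Pass the threshold as a query parameter, matching the console test above.
response = requests.get(url, params={"threshold": 2}, headers=headers)
print(response.status_code, response.text)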

Troubleshooting

Installing the Game Tech Cohort Modeler

  • Error: Could not find public.ecr.aws/sam/build-python3.8:latest-x86_64 image locally and failed to pull it from docker
    • Try: docker logout public.ecr.aws.
    • Attempt to pull the docker image locally first: docker pull public.ecr.aws/sam/build-python3.8:latest-x86_64
  • Error: RDS does not support creating a DB instance with the following combination:DBInstanceClass=db.r4.large, Engine=neptune, EngineVersion=1.2.0.2, LicenseModel=amazon-license.
    • The default option r4 family was offered when Neptune was launched in 2018, but now newer instance types offer much better price/performance. As of engine version 1.1.0.0, Neptune no longer supports r4 instance types.
    • Therefore, we recommend choosing another Neptune instance based on your needs, as detailed on this page.
      • For testing and development, you can consider the t3.medium and t4g.medium instances, which are eligible for the Neptune free-tier offer.
      • Remember to add the instance type that you want to use to the AllowedValues attribute of DBInstanceClass and rebuild using sam build --use-container.

Using the data gen script (for automated data generation)

  • The Cohort Modeler deployment does not deploy the CohortModelerGraphGenerator.ipynb notebook, which is required for dummy data generation, by default.
  • You will need to log in to your SageMaker instance, upload the CohortModelerGraphGenerator.ipynb file, and run through the cells to generate the dummy data into your S3 bucket.
  • Finally, you’ll need to follow the instructions in this page to load the dummy data from Amazon S3 into your Neptune instance.
    • For the IAM role for Amazon Neptune to load data from Amazon S3, the stack should have created a role with the name Cohort-neptune-iam-role-gametech-modeler.
    • You can run the requests script from your Jupyter notebook instance, since it already has access to the Amazon Neptune endpoint. The Python script should look like the following:
import requests
import json

url = 'https://<NeptuneEndpointURL>:8182/loader'

headers = {
    'Content-Type': 'application/json'
}

data = {
    "source": "<S3FileURI>",
    "format": "csv",
    "iamRoleArn": "NeptuneIAMRoleARN",
    "region": "us-east-1",
    "failOnError": "FALSE",
    "parallelism": "MEDIUM",
    "updateSingleCardinalityProperties": "FALSE",
    "queueRequest": "TRUE"
}

response = requests.post(url, headers=headers, data=json.dumps(data))

print(response.text)

    • Remember to replace the NeptuneEndpointURL, S3FileURI, and NeptuneIAMRoleARN.
    • Remember to load user_vertices.csv, campaign_vertices.csv, action_vertices.csv, interaction_edges.csv, engagement_edges.csv, campaign_edges.csv, and campaign_bidirectional_edges.csv in that order.

Conclusion

In this post, you’ve extended the Cohort Modeler to respond to requests for cohort data, by both querying the cohort database and providing an extract in an S3 bucket for future use. In the next post, we’ll demonstrate how creating this file triggers an automated process. This process will identify the players from the cohort in the studio’s database, extract their contact and other personalization data, compile that data into a CSV file, and import the file into Pinpoint for targeted messaging.

Related Content

About the Authors

Tristan (Tri) Nguyen

Tristan (Tri) Nguyen is an Amazon Pinpoint and Amazon Simple Email Service Specialist Solutions Architect at AWS. At work, he specializes in technical implementation of communications services in enterprise systems and architecture/solutions design. In his spare time, he enjoys chess, rock climbing, hiking and triathlon.

Brett Ezell

Brett Ezell is an Amazon Pinpoint and Amazon Simple Email Service Specialist Solutions Architect at AWS. As a Navy veteran, he joined AWS in 2020 through an AWS technical military apprenticeship program. When he isn’t deep diving into solutions for customer challenges, Brett spends his time collecting vinyl, attending live music, and training at the gym. An admitted comic book nerd, he feeds his addiction every Wednesday by combing through his local shop for new books.

Simplify your SMS setup with the new Amazon Pinpoint SMS console

Post Syndicated from hamzarau original https://aws.amazon.com/blogs/messaging-and-targeting/send-sms-using-the-new-amazon-pinpoint-sms-console/

Amazon Pinpoint is a multichannel communication service that helps application developers engage their customers through communication channels such as SMS or text messaging, email, mobile push, voice, and in-app messaging.

Amazon Pinpoint SMS provides the global scale, resiliency, and flexibility required to deliver SMS and voice messaging in web, mobile, or business applications. SMS messaging is used for use cases like one-time passcode validation, time-sensitive alerts, and two-way chat, thanks to its global reach and ubiquity. Today, Amazon Pinpoint SMS sends messages to over 240 countries and regions. In this post, we will review how to use the new Pinpoint SMS management console to get your SMS resources set up correctly the first time.

This blog walks through the setup and configuration steps for Pinpoint SMS using the management console. All setup and configuration steps can also be completed using the Pinpoint SMS APIs. For more information, visit the Pinpoint SMS documentation, or complete the Amazon Pinpoint SMS workshop.

The Pinpoint SMS management console provides control over the existing functionality of the Pinpoint SMS APIs to create and manage your SMS and voice resources. In addition, the Pinpoint SMS console has a Quick start – SMS setup guide and a Request originator flow to guide you through the setup process and through requesting and managing your SMS resources.

If you require additional background on how SMS works using Amazon Pinpoint SMS, refer to How to Manage Global Sending of SMS with Amazon Pinpoint. Below are some important SMS concepts we’ll highlight in this blog post.

Important SMS Concepts and Resources

  • Phone pool: The phone pool resource is a collection of phone numbers and sender IDs that all share the same settings and provide failover if a number becomes unavailable.
  • Originator: An originator refers to either a phone number or sender ID.
  • Phone number: Also called originator number, a phone number is a numeric string of numbers that identifies the sender. This can be a long code, short code, toll-free number (TFN), or 10-digit long code (10DLC). For more information see choosing a phone number or sender ID.
  • Verified destination phone number: When your account is in Sandbox you can only send SMS messages to phone numbers that have gone through the verification process. The phone number receives an SMS message with a verification code. The received code must be entered into the console to complete the process.
  • Simulator phone number: A simulator phone number behaves as any other origination and destination phone number without sending the SMS message to mobile carriers. Simulator phone numbers do not require registration and are used for testing scenarios.
  • Sender ID: Also called originator ID, a sender ID is an alphanumeric string that identifies the sender. For more information see choosing a phone number or sender ID.
  • Registered phone number: Some countries require you to register your company’s identity before you can purchase phone numbers or sender IDs. They also require a review of the messages that you send to recipients in their country. Registrations are processed by external third parties, so the amount of time to process a registration varies by phone number type and country. After all required registrations are complete, the status of your phone numbers changes to Active and is available for use. For more information about which countries require registration see, supported countries and regions (SMS channel).

Getting started

Sign-in to the AWS management console and search for Amazon Pinpoint. If you don’t have an existing AWS account, complete the following steps to create one.

In the Amazon Pinpoint console, you can choose between managing Pinpoint SMS and Pinpoint campaign orchestration. Pinpoint SMS is where application developers go to set up and configure their associated resources for SMS sending through any AWS service. Pinpoint campaign orchestration is for builders who want to manage their customer segments and send messages using campaigns or multi-step journeys. Campaign orchestration utilizes communication channels like Pinpoint SMS or Amazon SES (Simple Email Service) to deliver its messages. In this blog, we will discuss how to configure Pinpoint SMS using its management console.

Amazon Pinpoint SMS Console

Quick start – SMS setup guide

Once you’ve selected the Amazon Pinpoint SMS console, you will land on the Overview page. On this page, you get a summary of your SMS resources and the Quick start – SMS setup guide. This guide will walk you through creating the appropriate SMS resources to start sending SMS messages. The steps outlined in the Quick start guide are recommended but not required.

Step 1: Create a phone pool

A phone pool is a collection of phone numbers and sender IDs that all share the same settings and provide failover if a number becomes unavailable. Phone pools provide the benefit of managing for number resiliency, remove complexity from sending applications, and provide a logical grouping to manage phone numbers and sender IDs. For example, phone pools can be grouped by use case, such as having a phone pool for OTP (one-time password) messages.

In the navigation pane, under Overview, in the Quick start section, choose Create pool. Under the pool setup section, enter a name for your pool in Pool name. To create a pool, you will need to select an origination identity, either a phone number or sender ID to associate with the pool. Additional origination identities can be added once the pool is created on the Phone pools page. If you don’t have an active phone number or sender ID in your account, we recommend selecting a simulator number, which can be used for testing and does not require any registration. Once you’ve selected an origination identity, you can choose Create phone pool to complete step 1.

Setting up phone pools for sending SMS
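
If you are scripting your setup instead of using the console, the same step can be performed with the Pinpoint SMS and Voice v2 API. A minimal sketch using boto3 (the origination identity is a placeholder and must already exist in your account):

import boto3

sms = boto3.client("pinpoint-sms-voice-v2")

# OriginationIdentity accepts a phone number ID/ARN or sender ID/ARN (placeholder below).
response = sms.create_pool(
    OriginationIdentity="phone-1234567890abcdef0",  # placeholder
    IsoCountryCode="US",
    MessageType="TRANSACTIONAL",
)
print(response["PoolId"])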

Step 2: Create a configuration set

A configuration set is a set of rules that are applied when you send a message. For example, a configuration set can specify a destination for events related to a message. When SMS events occur (such as delivery or failure events), they are routed to the destination associated with the configuration set that you specified when you sent the message. You’re not required to use configuration sets when you send messages, but we recommend that you do. We support sending SMS and voice events to Amazon CloudWatch, Amazon Kinesis Data Firehose, and Amazon SNS.

In the navigation pane, under Overview, in the Quick start section, choose Create set. Under the Configuration set details section, enter a name in Configuration set name. For Event destination setup, choose either the quick start option, which creates an AWS CloudFormation stack to automatically create and configure CloudWatch, Kinesis Data Firehose, and SNS to log all events, or the advanced option to manually select which event destinations you would like to set up. Once you’ve made the selection, choose Create Configuration set to complete step 2.

How to create a configuration set for sending SMS

Step 3: Test SMS sending

Send a test message using the SMS simulator. Select an originator to send from, and a destination number to send to. To track the status of your message, add a configuration set to publish SMS events.

In the navigation pane, under Overview, in the Quick start section, choose Test SMS sending. Under the Originator section, select either a phone pool, phone number, or sender ID in your account to send test messages from. Next, under the Destination phone number section, select either a simulator number or active destination number to send test messages to. If your account is in Sandbox, you can only send messages to simulator numbers or verified destination numbers. Once your account is in Production you can send messages to simulator numbers or any active destination number. You can (optionally) select a configuration set to track your SMS events. Next, under the Message body section, enter a sample message and send the test message.

Note – If you are sending from a US simulator number (or using a phone pool that only contains a US simulator number) you can only send messages to US simulator destination numbers. A simulator phone number behaves like any other phone number without sending the SMS message to mobile carriers.

SMS simulator in the SMS console
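
The same test can also be scripted with the v2 API. A minimal sketch using boto3 (all identifiers are placeholders; while your account is in Sandbox, the destination must be a simulator or verified number):

import boto3

sms = boto3.client("pinpoint-sms-voice-v2")

response = sms.send_text_message(
    DestinationPhoneNumber="+14255550100",         # placeholder
    OriginationIdentity="pool-1234567890abcdef0",  # pool ID, phone number, or sender ID (placeholder)
    MessageBody="Test message from Pinpoint SMS",
    ConfigurationSetName="my-config-set",          # optional, for event tracking (placeholder)
)
print(response["MessageId"])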

Step 4: Request production Access

Finally, if your account is in Sandbox, there are limits to the amount you can spend, and you can only send to verified destination phone numbers. Request moving your account from Sandbox to Production to remove these limits. To move to Production, open a case with AWS Support Center.

Conclusion

After requesting Production access, you’ve completed the recommended steps to get your account configured. You have now tested and configured the following resources in your account:

  • Phone pool: A phone pool is a collection of phone numbers and sender IDs that all share the same settings and provide failover if a number becomes unavailable. Phone pools provide the benefit of managing for number resiliency, remove complexity from sending applications, and provide a logical grouping to manage phone numbers and sender IDs.
    • Originator: As part of the pool setup, you are required to associate at least one originator to the phone pool. An originator refers to either a phone number or sender ID. If you’ve selected a simulator number and would like to now request a new phone number or sender ID, you can do so following Request originator flow.
  • Configuration set: A configuration set allows you to organize, track, and configure logging of your SMS events, specifying where to publish them by adding event destinations.

Next steps

To request additional originators such as phone numbers or sender IDs, you can follow the Request Originator flow in the management console. If your originator requires registrations and is supported, you can self-service the phone number or sender ID registration in the management console.

An Overview of Bulk Sender Changes at Yahoo/Gmail

Post Syndicated from Dustin Taylor original https://aws.amazon.com/blogs/messaging-and-targeting/an-overview-of-bulk-sender-changes-at-yahoo-gmail/

In a move to safeguard user inboxes, Gmail and Yahoo Mail announced a new set of requirements for senders effective from February 2024. Let’s delve into the specifics and what Amazon Simple Email Service (Amazon SES) customers need to do to comply with these requirements.

What are the new email sender requirements?

The new requirements include long-standing best practices that all email senders should adhere to in order to achieve good deliverability with mailbox providers. What’s new is that Gmail, Yahoo Mail, and other mailbox providers will require alignment with these best practices for senders of bulk mail (more than 5,000 messages per day) or senders whose mail a significant number of recipients mark as spam.

The requirements can be distilled into three categories: 1) stricter adherence to domain authentication, 2) an easy way for recipients to unsubscribe from bulk mail, and 3) monitoring spam complaint rates and keeping them under a 0.3% threshold.

* This blog was originally published in November 2023, and updated on January 12, 2024 to clarify timelines, and to provide links to additional resources.

1. Domain authentication

Mailbox providers will require domain-aligned authentication with DKIM and SPF, and they will be enforcing DMARC policies for the domain used in the From header of messages. For example, gmail.com will be publishing a quarantine DMARC policy, which means that unauthorized messages claiming to be from Gmail will be sent to Junk folders.

Read Amazon SES: Email Authentication and Getting Value out of Your DMARC Policy to gain a deeper understanding of SPF and DKIM domain-alignment and maximize the value from your domain’s DMARC policy.

The following steps outline how Amazon SES customers can adhere to the domain authentication requirements:

Adopt domain identities: Amazon SES customers who currently rely primarily on email address identities will need to adopt verified domain identities to achieve better deliverability with mailbox providers. By using a verified domain identity with SES, your messages will have a domain-aligned DKIM signature.

Not sure what domain to use? Read Choosing the Right Domain for Optimal Deliverability with Amazon SES for additional best practice guidance regarding sending authenticated email. 

Configure a Custom MAIL FROM domain: To further align with best practices, SES customers should also configure a custom MAIL FROM domain so that SPF is domain-aligned.

The three scenarios below illustrate the authentication results based on the type of identity you use with Amazon SES:

  • Scenario 1 – [email protected] as a verified email address identity
    • DKIM authenticated identifier: amazonses.com
    • SPF authenticated identifier: email.amazonses.com
    • DMARC authentication results: Fail – DMARC analysis fails as the sending domain does not have a DKIM signature or SPF record that matches.
  • Scenario 2 – example.com as a verified domain identity
    • DKIM authenticated identifier: example.com
    • SPF authenticated identifier: email.amazonses.com
    • DMARC authentication results: Success – the DKIM signature aligns with the sending domain, which will cause DMARC checks to pass.
  • Scenario 3 – example.com as a verified domain identity, and bounce.example.com as a custom MAIL FROM domain
    • DKIM authenticated identifier: example.com
    • SPF authenticated identifier: bounce.example.com
    • DMARC authentication results: Success – DKIM and SPF are aligned with the sending domain.

Figure 1: Three scenarios based on the type of identity used with Amazon SES. Using a verified domain identity and configuring a custom MAIL FROM domain will result in both DKIM and SPF being aligned to the From header domain’s DMARC policy.

Be strategic with subdomains: Amazon SES customers should consider a strategic approach to the domains and subdomains used in the From header for different email sending use cases. For example, use the marketing.example.com verified domain identity for sending marketing mail, and use the receipts.example.com verified domain identity to send transactional mail.

Why? Marketing messages may have higher spam complaint rates and would need to adhere to the bulk sender requirements, but transactional mail, such as purchase receipts, would not necessarily have spam complaints high enough to be classified as bulk mail.

Publish DMARC policies: Publish a DMARC policy for your domain(s). The domain you use in the From header of messages needs to have a policy, set via the p= tag in the domain’s DMARC record in DNS. The policy can be set to “p=none” to adhere to the bulk sending requirements, and can later be changed to quarantine or reject once you have ensured all email using the domain is authenticated with DKIM or SPF domain-aligned authenticated identifiers.
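
For example, a minimal DMARC policy published as a DNS TXT record might look like the following (the domain and reporting address are placeholders):

_dmarc.example.com. IN TXT "v=DMARC1; p=none; rua=mailto:[email protected]"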

2. Set up an easy unsubscribe for email recipients

Bulk senders are expected to include a mechanism to unsubscribe by adding an easy to find link within the message. The February 2024 mailbox provider rules will require senders to additionally add one-click unsubscribe headers as defined by RFC 2369 and RFC 8058. These headers make it easier for recipients to unsubscribe, which reduces the rate at which recipients will complain by marking messages as spam.

There are many factors that could result in your messages being classified as bulk by any mailbox provider. Volume over 5,000 per day is one factor, but the primary factor that mailbox providers use is whether the recipient actually wants to receive the mail.

If you aren’t sure if your mail is considered bulk, monitor your spam complaint rates. If the complaint rates are high or growing, it is a sign that you should offer an easy way for recipients to unsubscribe.

How to adhere to the easy unsubscribe requirement

The following steps outline how Amazon SES customers can adhere to the easy unsubscribe requirement:

Add one-click unsubscribe headers to the messages you send: Amazon SES customers sending bulk or potentially unwanted messages will need to implement an easy way for recipients to unsubscribe, which they can do using the SES subscription management feature.

Mailbox providers are requiring that large senders give recipients the ability to unsubscribe from bulk email in one click using the one-click unsubscribe header, however it is acceptable for the unsubscribe link in the message to direct the recipient to a landing page for the recipient to confirm their opt-out preferences.

To set up one-click unsubscribe without using the SES subscription management feature, include both of these headers in outgoing messages:

  • List-Unsubscribe-Post: List-Unsubscribe=One-Click
  • List-Unsubscribe: <https://example.com/unsubscribe/example>
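
As an illustration, a minimal sketch that adds both headers to a message sent through the SES SendRawEmail API using boto3 (the addresses and unsubscribe URL are placeholders):

import boto3
from email.message import EmailMessage

ses = boto3.client("ses")

msg = EmailMessage()
msg["From"] = "newsletter@example.com"      # placeholder
msg["To"] = "recipient@example.com"         # placeholder
msg["Subject"] = "Your monthly newsletter"
# The two one-click unsubscribe headers described above.
msg["List-Unsubscribe"] = "<https://example.com/unsubscribe/example>"  # placeholder
msg["List-Unsubscribe-Post"] = "List-Unsubscribe=One-Click"
msg.set_content("Hello! This is the newsletter body.")

ses.send_raw_email(RawMessage={"Data": msg.as_bytes()})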

When a recipient unsubscribes using one-click, you receive this POST request:

POST /unsubscribe/example HTTP/1.1
Host: example.com
Content-Type: application/x-www-form-urlencoded
Content-Length: 26

List-Unsubscribe=One-Click

Gmail’s FAQ and Yahoo’s FAQ both clarify that the one-click unsubscribe requirement will not be enforced until June 2024 as long as the bulk sender has a functional unsubscribe link clearly visible in the footer of each message.

Honor unsubscribe requests within 2 days: Verify that your unsubscribe process immediately removes the recipient from receiving similar future messages. Mailbox providers are requiring that bulk senders give recipients the ability to unsubscribe from email in one click, and that the senders process unsubscribe requests within two days.

If you adopt the SES subscription management feature, make sure you integrate the recipient opt-out preferences with the source of your email sending lists. If you implement your own one-click unsubscribe (for example, using Amazon API Gateway and an AWS Lambda function), make sure it is designed to suppress sending to email addresses in your source email lists.

Review your email list building practices: Ensure responsible email practices by refraining from purchasing email lists, safeguarding opt-in forms from bot abuse, verifying recipients’ preferences through confirmation messages, and abstaining from automatically enrolling recipients in categories that were not requested.

Having good list opt-in hygiene is the best way to ensure that you don’t have high spam complaint rates before you adhere to the new required best practices. To learn more, read What is a Spam Trap, and Why You Should Care.

3. Monitor spam rates

Mailbox providers will require that all senders keep spam complaint rates below 0.3% to avoid having their email treated as spam by the mailbox provider. The following steps outline how Amazon SES customers can meet the spam complaint rate requirement:

Enroll with Google Postmaster Tools: Amazon SES customers should enroll with Google Postmaster Tools to monitor their spam complaint rates for Gmail recipients.

Gmail recommends spam complaint rates stay below 0.1%. If you send to a mix of Gmail recipients and recipients on other mailbox providers, the spam complaint rates reported by Gmail’s Postmaster Tools are a good indicator of your spam complaint rates at mailbox providers who don’t let you view metrics.

Enable Amazon SES Virtual Deliverability Manager: Enable Virtual Deliverability Manager (VDM) in your Amazon SES account. Customers can use VDM to monitor bounce and complaint rates for many mailbox providers. Amazon SES recommends that customers monitor reputation metrics and stay below a 0.1% complaint rate.

Segregate and secure your sending using configuration sets: In addition to segregating sending use cases by domain, Amazon SES customers should use configuration sets for each sending use case.

Using configuration sets will allow you to monitor your sending activity and implement restrictions with more granularity. You can even pause the sending of a configuration set automatically if spam complaint rates exceed your tolerance threshold.

Conclusion

These changes are planned for February 2024, but be aware that the exact timing and methods used by each mailbox provider may vary. If you experience any deliverability issues with any mailbox provider prior to February, it is in your best interest to adhere to these required best practices as a first step.

We hope that this blog clarifies any areas of confusion on this change and provides you with the information you need to be prepared for February 2024. Happy sending!

Helpful links:

How to prevent SMS Pumping when using Amazon Pinpoint or SNS

Post Syndicated from Akshada Umesh Lalaye original https://aws.amazon.com/blogs/messaging-and-targeting/how-to-prevent-sms-pumping-when-using-amazon-pinpoint-or-sns/

SMS fraud is, unfortunately, a common issue that all senders of SMS encounter as they adopt SMS as a communication channel. This post defines the most common types of fraud and provides concrete guidance on how to mitigate or eliminate each of them.

Introduction to SMS Pumping:

SMS Pumping, also known as an SMS Flood attack, or Artificially Inflated Traffic (AIT), occurs when fraudsters exploit a phone number input field to acquire a one-time passcode (OTP), an app download link, or any other content via SMS. In cases where these input forms lack sufficient security measures, attackers can artificially increase the volume of SMS traffic, thereby exploiting vulnerabilities in your application. The perpetrators dispatch SMS messages to a selection of numbers under the jurisdiction of a particular mobile network operator (MNO), ultimately receiving a portion of the resulting revenue. It is essential to understand how to detect these attacks and prevent them.

Common Evidence of SMS Pumping:

  • Dramatic Decrease in Conversion Rates: A common SMS use case is for identity verification through the use of One Time Passwords (OTP) but this could also be seen in other types of use cases where a clear and consistent conversion rate is seen. A drop in a normally stable conversion rate may be caused by an increase in volume that will never convert and can indicate an issue that requires investigation. Setting up an alert for anomalies in conversion rates is always a good practice.
  • SMS Requests or Deliveries from Unknown Countries: If your application normally sends SMS to a defined set of countries and you begin to receive requests for a different country, then this should be investigated.
  • Spike in Outgoing Messages: A significant and sudden increase in outgoing messages could indicate an issue that requires investigation.
  • Spike in Messages Sent to a Block of Adjacent Numbers: Fraudsters often deploy bots and programmatically loop through numbers in a sequence. You will probably notice an increase in messages to a group of nearby numbers, for example +11111111110 and +11111111111.

How to Identify and Prevent SMS Pumping Attacks:

Now that we understand the common signs of SMS pumping, let’s discuss how to use AWS services to identify and confirm the fraud, and how to put measures in place to prevent it in the first place.

Identify:

Delivery Statistics (UTC)

If you are using Amazon Pinpoint, you can use the Transactional messaging charts under the Analytics section to understand your SMS sending patterns.

Transactional Messaging Charts

  • Spikes in Messages Sent to a Block of Adjacent Numbers: If you are using SNS, you can use CloudWatch Logs to analyze the destination numbers.

You can run a CloudWatch Logs Insights query on the log groups below:

sns/<region>/<Accountnumber>/DirectPublishToPhoneNumber
sns/<region>/<Accountnumber>/DirectPublishToPhoneNumber/failure

The below query will print all the logs that have a destination number like +11111111111:
fields @timestamp, @message, @logStream, @log
| filter delivery.destination like '+11111111111'
| limit 20

If you are using Amazon Pinpoint, you can enable the event stream to analyze destination numbers.

If you have deployed the Digital User Engagement Events Database solution, you can use the below sample Amazon Athena queries, which display entries that have a destination number like +11111111111:

SELECT * FROM "due_eventdb"."sms_success" where destination_phone_number like '%11111111111%'
SELECT * FROM "due_eventdb"."sms_failure" where destination_phone_number like '%11111111111%'

How to Prevent SMS Pumping: 

  • Restrict Destination Countries with AWS WAF – Use an AWS WAF regex pattern set to allow only destination phone numbers from the countries you expect to serve.
      • Example: If you expect only users from India to sign up in your application, you can include rules such as “\+91[0-9]{10}”, which allows only Indian numbers as input.
      • Note: SNS and Pinpoint APIs are not natively integrated with WAF. However, you can connect your application to an Amazon API Gateway with which you can integrate with WAF.
      • How to Create a Regex Pattern Set with WAF – The below Regex Pattern set will allow sending messages to Australia (+61) and India (+91) destination phone numbers
          1. Sign in to the AWS Management Console and navigate to AWS WAF console
          2. In the navigation pane, choose Regex pattern sets and then Create regex pattern set.
          3. Enter a name and description for the regex pattern set. You’ll use these to identify it when you want to use the set. For example, Allowed_SMS_Countries
          4. Select the Region where you want to store the regex pattern set
          5. In the Regular expressions text box, enter one regex pattern per line
          6. Review the settings for the regex pattern set, and choose Create regex pattern set
Regex pattern set details

      • Create a Web ACL with above Regex Pattern Set
          1. Sign in to the AWS Management Console and navigate to AWS WAF console
          2. In the navigation pane, choose Web ACLs and then Create web ACL
          3. Enter a Name, Description and CloudWatch metric name for Web ACL details
          4. Select Resource type as Regional resources
          5. Click Next

            Web ACL details

          6. Click on Add Rules > Add my own rules and rule groups
          7. Enter Rule name and select Regular rule

            Web ACL Rule Builder

          8. Select Inspect > Body, Content type as JSON, JSON match scope as Values, Content to inspect as Full JSON content
          9. Select Match type as Matches pattern from regex pattern set and select the Regex pattern set as “Allowed_SMS_Countries” created above
          10. Select Action as Allow
          11. Click Add Rule  

            Web ACL Rule builder statement

          12. Select Block for Default web ACL action for requests that don’t match any rules

            Web ACL Rules

          13. Set rule priority and Click Next

            Web ACL Rule priority

          14. Configure metrics and Click Next

            Web ACL metrics

          15. Review and Click Create web ACL

For more information, please refer to WebACL

  • Rate Limit Requests
    • AWS WAF provides an option to rate limit per originating IP. You can define the maximum number of requests allowed in a five-minute period that satisfy the criteria you provide, before limiting the requests using the rule action setting
  • CAPTCHA
    • Implement CAPTCHA in your application request process to protect your application against common bot traffic
  • Turn off “Shared Routes”
  • Exponential Delay Verification Retries
    • Implement a delay between multiple messages to the same phone number. This doesn’t completely eliminate the attack, but it will help slow it down
  • Set CloudWatch Alarm
  • Validate Phone Numbers – You can use the Pinpoint phone number validate API to check the values of CountryCodeIso2, CountryCodeNumeric, and PhoneType prior to sending SMS, and then only send SMS to countries that match your criteria (see the sketch after the sample response below)
    Sample API Response:

{
    "NumberValidateResponse": {
        "Carrier": "ExampleCorp Mobile",
        "City": "Seattle",
        "CleansedPhoneNumberE164": "+12065550142",
        "CleansedPhoneNumberNational": "2065550142",
        "Country": "United States",
        "CountryCodeIso2": "US",
        "CountryCodeNumeric": "1",
        "OriginalPhoneNumber": "+12065550142",
        "PhoneType": "MOBILE",
        "PhoneTypeCode": 0,
        "Timezone": "America/Los_Angeles",
        "ZipCode": "98101"
    }
}
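
A minimal Python sketch of this gate using boto3 (the allowed-country list is illustrative):

import boto3

pinpoint = boto3.client("pinpoint")

ALLOWED_COUNTRIES = {"US", "IN"}  # illustrative allow-list

def is_allowed_destination(phone_number: str) -> bool:
    response = pinpoint.phone_number_validate(
        NumberValidateRequest={"PhoneNumber": phone_number}
    )
    result = response["NumberValidateResponse"]
    # Only send when the number resolves to an allowed country and a mobile line.
    return (result.get("CountryCodeIso2") in ALLOWED_COUNTRIES
            and result.get("PhoneType") == "MOBILE")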

Conclusion:

This post covered the basics of SMS pumping attacks, the different mechanisms that can be used to detect them, and some potential ways to solve or mitigate them using services and features like the Pinpoint phone number validate API and AWS WAF.

Further Reading:

  • Review the documentation of WAF with API Gateway here
  • Review the documentation of Phone number validate here
  • Review the Web Access Control lists here

Resources:

  • Amazon Pinpoint – https://aws.amazon.com/pinpoint/
  • Amazon API Gateway – https://aws.amazon.com/api-gateway/
  • Amazon Athena – https://aws.amazon.com/athena/

Automate marketing campaigns with real-time customer data using Amazon Pinpoint

Post Syndicated from Rushabh Lokhande original https://aws.amazon.com/blogs/messaging-and-targeting/automate-marketing-campaigns-with-real-time-customer-data-using-amazon-pinpoint/

Amazon Pinpoint offers marketers and developers one customizable tool to deliver customer communications across channels, segments, and campaigns at scale. Amazon Pinpoint makes it easy to run targeted campaigns and drive customer communications across different channels: email, SMS, push notifications, in-app messaging, or custom channels. Amazon Pinpoint campaigns enable you to define which users to target, determine which messages to send, schedule the best time to deliver the messages, and then track the results of your campaign.

In many cases, the customer data resides in a third-party system such as a CRM, customer data platform, point of sale system, database, or data warehouse. This customer data represents a valuable asset for your organization. Your marketing team needs to leverage every piece of this data to elevate the customer experience.

In this blog post, we will demonstrate how you can leverage users’ clickstream data stored in a database to build user segments and launch campaigns using Amazon Pinpoint. We will also showcase the full architecture of the data pipeline, including other AWS services such as Amazon RDS, AWS Database Migration Service, Amazon Kinesis, and AWS Lambda.

Let us understand our case study with an example: a customer has digital touchpoints such as a website and a mobile app that collect users’ clickstream and behavioral data, which is stored in a MySQL database. The marketing team wants to leverage the collected data to deliver a personalized experience using Amazon Pinpoint capabilities.

You can find below the detail of a specific use case covered by the proposed solution:

  • All the clickstream and customer data are stored in a MySQL database.
  • Your marketing team wants to create personalized Amazon Pinpoint campaigns based on user status and experience, for example:
    • Activating a campaign for customers interested in a specific offering, based on that interest
    • Communicating in the preferred language of the user

Please note that this use case is used to showcase the proposed solution’s capabilities. The solution is not limited to this specific scenario, since you can leverage any collected customer dimension or attribute to create a campaign that achieves a specific marketing goal.

In this post, we provide a guided journey on how marketers can collect, segment, and activate audience segments in real-time to increase their agility in managing campaigns.

Overview of solution

The use case covered in this post focuses on demonstrating the flexibility offered by Amazon Pinpoint in both the inbound (ingestion) and outbound (activation) streams of customer data. For the inbound stream, Amazon Pinpoint gives you a variety of ways to import your customer data, including:

  1. CSV/JSON import from the AWS console
  2. API operation to create a single or multiple endpoints
  3. Programmatically create and execute import jobs

We will focus specifically on building a real-time inbound stream of customer data available within an Amazon RDS MySQL database. It is important to mention that a similar approach can be implemented to ingest data from third-party systems, if any.

For the outbound stream, activating customer data using Amazon Pinpoint can be achieved using the following two methods:

  1. Campaign: a campaign is a messaging initiative that engages a specific audience segment.
  2. Journey: a journey is a customized, multi-step engagement experience.

The result of customer data activation cannot be completed without specifying the targeted channel. A channel represents the platform through which you engage your audience segment with messages. For example, Amazon Pinpoint customers can optimize how they target notifications to prospective customers through LINE messages and email, delivering notifications with relevant product information, such as sales and new products, to the appropriate audience.

Amazon Pinpoint supports the following channels:

  • Push notifications
  • Email
  • SMS
  • Voice
  • In-app messages

In addition to these channels, you can also extend the capabilities to meet your specific use case by creating custom channels. You can use custom channels to send messages to your customers through any service that has an API, including third-party services. For example, you can use custom channels to send messages through third-party services such as WhatsApp or Facebook Messenger. We will focus on developing an Amazon Pinpoint connector using a custom channel to target your customers on third-party services through an API.
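
To make the custom channel concrete, here is a minimal sketch of the Lambda handler that Amazon Pinpoint invokes for a custom channel campaign (the third-party API URL and payload shape are placeholders):

import json
import urllib.request

def lambda_handler(event, context):
    # Pinpoint passes the targeted segment as a map of endpoint IDs to endpoint definitions.
    for endpoint_id, endpoint in event.get("Endpoints", {}).items():
        address = endpoint.get("Address")
        payload = json.dumps({"to": address, "message": "Hello from Pinpoint"}).encode("utf-8")
        # Placeholder third-party messaging API.
        request = urllib.request.Request(
            "https://api.example-messaging-service.com/v1/messages",
            data=payload,
            headers={"Content-Type": "application/json"},
        )
        urllib.request.urlopen(request)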

Solution Architecture

The below diagram illustrates the proposed architecture to address the use case. Moving from left to right:

Fig 1: Architecture Diagram for the Solution

  1. Amazon RDS: This hosts the customer database, which can contain one or many tables of customer data.
  2. AWS Database Migration Service (DMS): This acts as the glue between Amazon RDS MySQL and the downstream services by replicating any transformation that happens at the record level in the configured customer tables.
  3. Amazon Kinesis Data Streams: This is the destination endpoint for AWS DMS. It carries all the transformed records to the next stage of the pipeline.
  4. AWS Lambda (inbound): The inbound AWS Lambda function is triggered by the Kinesis data stream; it processes the mutated records and ingests them into Amazon Pinpoint (see the sketch after this list).
  5. Amazon Pinpoint: This acts as the centralized place to define customer segments and launch campaigns.
  6. AWS Lambda (outbound): This acts as the custom channel destination for the campaigns activated from Amazon Pinpoint.
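
To make step 4 concrete, here is a minimal sketch of the inbound function (the project ID environment variable is a placeholder, and the record shape assumes the default DMS-to-Kinesis JSON envelope with data and metadata keys):

import base64
import json
import os
import boto3

pinpoint = boto3.client("pinpoint")
APPLICATION_ID = os.environ["PINPOINT_PROJECT_ID"]  # placeholder env var

def lambda_handler(event, context):
    # Each Kinesis record wraps a DMS change event for the customer table.
    for record in event["Records"]:
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
        row = payload.get("data", {})
        # Upsert a Pinpoint endpoint keyed on the customer's user ID.
        pinpoint.update_endpoint(
            ApplicationId=APPLICATION_ID,
            EndpointId=str(row["userid"]),
            EndpointRequest={
                "ChannelType": "EMAIL",
                "Address": row.get("email"),
                "Attributes": {
                    "language": [row.get("language") or ""],
                    "favourites": [row.get("favourites") or ""],
                },
            },
        )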

To illustrate how to set up this architecture, we’ll walk you through the following steps:

  1. Deploy an AWS CDK stack to provision the required AWS resources.
  2. Validate the deployment.
  3. Test the solution by ingesting sample records and validating the end-to-end data flow.
  4. Clean up your resources.

Prerequisites

Make sure that you complete the following steps as prerequisites:

The Solution

Launching your AWS CDK Stack

Step 1a: Open your device’s command line or Terminal.

Step 1b: Check out the Git repository to a local directory on your device:

git clone https://github.com/aws-samples/amazon-pinpoint-realtime-campaign-optimization-example.git

Step 2: Change directories to the new directory code location:

cd amazon-pinpoint-realtime-campaign-optimization-example

Step 3: Update your AWS account number and region:

  1. Edit config.py with your tool of choice or from the command line.
  2. Look for the “Account Setup” section and update your account number and Region.

    Fig 2: Configuring config.py for account-id and region

  3. Look for the “VPC Parameters” section and update your VPC and subnet info.

    Fig 3: Configuring config.py for VPC and subnet information

Step 4: Verify if you are in the directory where app.py file is located:

ls -ltr app.py

Step 5: Create a virtual environment:

macOS/Linux:

python3 -m venv .env

Windows:

python -m venv .env

Step 6: Activate the virtual environment after the init process completes and the virtual environment is created:

macOS/Linux:

source .env/bin/activate

Windows:

.env\Scripts\activate.bat

Step 7: Install the required dependencies:

pip3 install -r requirements.txt

Step 8: Bootstrap the cdk app using the following command:

cdk bootstrap aws://<AWS_ACCOUNTID>/<AWS_REGION>

Replace the placeholders AWS_ACCOUNTID and AWS_REGION with your AWS account ID and the Region to deploy to.
This step provisions the initial resources, including an Amazon S3 bucket for storing files and IAM roles that grant permissions needed to perform deployments.

Fig 4: Bootstrapping CDK environment

Please note, if you have already bootstrapped the same account previously, you cannot bootstrap it again; in that case, skip this step or use a new AWS account.

Step 9: Make sure that your AWS profile is setup along with the region that you want to deploy as mentioned in the prerequisite. Synthesize the templates. AWS CDK apps use code to define the infrastructure, and when run they produce or “synthesize” a CloudFormation template for each stack defined in the application:

cdk synthesize

Step 10: Deploy the solution. By default, some actions that could potentially make security changes require approval. In this deployment, you’re creating an IAM role. The following command overrides the approval prompts; if you would like to manually accept the prompts, omit the --require-approval never flag:

cdk deploy "*" --require-approval never

While the AWS CDK deploys the CloudFormation stacks, you can follow the deployment progress in your terminal.

Fig 5: AWS CDK Deployment progress in terminal

Once the deployment is successful, you’ll see the successful status as follows:

Fig 6: AWS CDK Deployment completion success

Step 11: Log in to the AWS Console, go to CloudFormation, and see the output of the ApplicationStack:

Fig 7: AWS CloudFormation stack output

Note the values of the PinpointProjectId, PinpointProjectName, and RDSSecretName variables. We’ll use them in the following steps.

Testing The Solution

In this section we will create a full data flow using the below steps:

  1. Ingest data into the customer_tb table within the Amazon RDS MySQL DB instance
  2. Validate that the task created by AWS Database Migration Service is replicating the changes to the Amazon Kinesis data stream
  3. Validate that endpoints are created within Amazon Pinpoint
  4. Create an Amazon Pinpoint segment and campaign, and activate data to the Webhook.site endpoint URL

Step 1: Connect to MySQL DB instance and create customer database

    1. Sign in to the AWS Management Console and open the AWS Cloud9 console at https://console.aws.amazon.com/cloud9 
    2. Click Create environment
      • Name: mysql-cloud9-01 (for example)
      • Click Next
      • Environment type: Create a new EC2 instance for environment (direct access)
      • Instance type: t2.micro
      • Timeout: 30 minutes
      • Platform: Amazon Linux 2
      • Network settings: under VPC settings, select the same VPC where the MySQL DB instance was created (the same VPC and subnet from step 3.3)
      • Click Next
      • Review and click Create environment
    3. Select the created AWS Cloud9 environment from the AWS Cloud9 console at https://console.aws.amazon.com/cloud9 and click Open in Cloud9. You will have access to the AWS Cloud9 Linux shell.
    4. From the Linux shell, update the operating system:
      sudo yum update -y
    5. From the Linux shell, install the MySQL client:
      sudo yum install -y mysql
    6. To connect to the created MySQL RDS DB instance, use the below command in the AWS Cloud9 Linux shell:
      mysql -h <<host>> -P 3308 --user=<<username>> --password=<<password>>
      • To get the values for host, username, and password (or fetch them programmatically, as sketched after this step):
        • Navigate to the AWS Secrets Manager service
        • Open the secret with the name created by the CDK application
        • Select ‘Reveal secret value’ and copy the respective values and replace in your command
      • After you enter the password for the user, you should see output similar to the following.
      • Welcome to the MariaDB monitor.  Commands end with ; or \g.
        Your MySQL connection id is 27
        Server version: 8.0.32 Source distribution
        Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
        Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
        MySQL [(none)]>
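As an alternative to copying the values from the console, the following boto3 sketch fetches the same secret programmatically. It assumes the secret value is a JSON object with host, username, and password keys; the key names may differ in your deployment.

import json
import boto3

SECRET_NAME = "<RDSSecretName from the stack outputs>"  # placeholder

# Fetch and decode the secret created by the CDK application.
sm = boto3.client("secretsmanager")
secret = json.loads(sm.get_secret_value(SecretId=SECRET_NAME)["SecretString"])

print("host:", secret.get("host"))
print("username:", secret.get("username"))
# Avoid printing passwords outside of a demo environment.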

Step 2: Ingest data in the customer_tb table within the Amazon RDS MySQL DB instance

Once the connection to the MySQL DB instance is established, execute the following commands using the same AWS Cloud9 Linux shell connected to the MySQL RDS DB.

  • Create database pinpoint-test-db:
    CREATE DATABASE `pinpoint-test-db`;
  • Create table customer_tb:
    Use `pinpoint-test-db`;
    CREATE TABLE `customer_tb` (`userid` int NOT NULL,
                                `email` varchar(150) DEFAULT NULL,
                                `language` varchar(45) DEFAULT NULL,
                                `favourites` varchar(250) DEFAULT NULL,
                                PRIMARY KEY (`userid`));
  • You can verify the schema using the below SQL command:
    DESCRIBE `pinpoint-test-db`.customer_tb;
    Fig 8: Verify schema for customer_tb table

    • Insert records into the customer_tb table:
    Use `pinpoint-test-db`;
    insert into customer_tb values (1,'[email protected]','english','football');
    insert into customer_tb values (2,'[email protected]','english','basketball');
    insert into customer_tb values (3,'[email protected]','french','football');
    insert into customer_tb values (4,'[email protected]','french','football');
    insert into customer_tb values (5,'[email protected]','french','basketball');
    insert into customer_tb values (6,'[email protected]','french','football');
    insert into customer_tb values (7,'[email protected]','french',null);
    insert into customer_tb values (8,'[email protected]','english','football');
    insert into customer_tb values (9,'[email protected]','english','football');
    insert into customer_tb values (10,'[email protected]','english',null);
    • Verify the records in the customer_tb table:
    select * from `pinpoint-test-db`.`customer_tb`;
    Fig 9: Verify data for customer_tb table

Step 3: Validate that the task created by AWS Database Migration Service is replicating the changes to the Amazon Kinesis data stream

      1. Sign in to the AWS Management Console and open the AWS DMS console at https://console.aws.amazon.com/dms/v2 
      2. From the navigation panel, choose Database migration tasks.
      3. Click on the task created by the CDK code, ‘dmsreplicationtask-*’
      4. Start the replication task (a boto3 alternative is sketched after this list)

        Fig 10: Starting AWS DMS Replication Task

      5. Make sure that Status is Replication ongoing

        Fig 11: AWS DMS Replication statistics

      6. Navigate to Table Statistics and make sure that the number of Inserts is equal to 10 and the Load state is Table completed

        Fig 12: AWS DMS Replication statistics
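If you prefer to start the task programmatically rather than from the console, a minimal boto3 sketch follows; the ARN placeholder stands for the ‘dmsreplicationtask-*’ task created by the CDK stack.

import boto3

REPLICATION_TASK_ARN = "<replication task ARN>"  # placeholder

# Start the DMS task; use "resume-processing" to resume a stopped task.
dms = boto3.client("dms")
dms.start_replication_task(
    ReplicationTaskArn=REPLICATION_TASK_ARN,
    StartReplicationTaskType="start-replication",
)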

Step 4: Validate that endpoints are created within Amazon Pinpoint

    1. Sign in to the AWS Management Console and open the Amazon Pinpoint console at https://console.aws.amazon.com/pinpoint/ 
    2. Click on the Amazon Pinpoint project created by the CDK stack, “dev-pinpoint-project”
    3. From the left menu, click on Analytics and validate that the Active targetable endpoints are equal to 10 as shown below:
    Fig 13: Amazon Pinpoint endpoint summary
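Beyond the console view, you can spot-check individual endpoints with boto3. The sketch below assumes the ingest Lambda maps the customer_tb userid column to the Pinpoint user ID, which may differ in your deployment.

import boto3

APPLICATION_ID = "<PinpointProjectId from the stack outputs>"  # placeholder

# List the endpoints stored for the user with userid 1.
pinpoint = boto3.client("pinpoint")
resp = pinpoint.get_user_endpoints(ApplicationId=APPLICATION_ID, UserId="1")
for endpoint in resp["EndpointsResponse"]["Item"]:
    print(endpoint["Id"], endpoint.get("ChannelType"), endpoint.get("Address"))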

Step 5: Create Amazon Pinpoint Segment and Campaign

Step 5.1: Create Amazon Pinpoint Segment

    • Sign in to the AWS Management Console and open the Amazon Pinpoint console at https://console.aws.amazon.com/pinpoint/
    • Click on the Amazon Pinpoint project created by the CDK stack, “dev-pinpoint-project”
    • From the left menu, click on Segments and click Create a segment
    • Create the segment using the below configuration:
      • Name: English Speakers
      • Under criteria:
        • Attribute: Language
        • Operator: Contains
        • Value: english
    Fig 14: Amazon Pinpoint segment summary

    • Click Create segment

Step 5.2: Create Amazon Pinpoint Campaign

    • From the left menu, click on Campaigns and click Create a campaign
    • Set the Campaign name to test campaign and select the Custom option for Channel as shown below:
    Fig 15: Amazon Pinpoint create campaign

    • Click Next
    • Select English Speakers from Segment as shown below and click Next:
    Fig 16: Amazon Pinpoint segment

    • Choose the Lambda function channel type and select the outbound Lambda function whose name matches the pattern ApplicationStack-lambdaoutboundfunction* from the dropdown, as shown below:
    Fig 17: Amazon Pinpoint message creation

    • Click Next
    • Choose the At a specific time option and Immediately to send the campaign, as shown below:
    Fig 18: Amazon Pinpoint campaign scheduling

    If you push more messages or records into Amazon RDS (as in Step 2), you will need to create a new campaign (as in Step 5.2) to process the new messages.

    • Click Next, review the configuration and click Launch campaign.
    • Navigate to dev-pinpoint-project and select the campaign created in the previous step. You should see the status ‘Complete’
    Fig 19: Amazon Pinpoint campaign processing status

    • Navigate to the dev-pinpoint-project dashboard and select your campaign in the ‘Campaign metrics’ dashboard; you will see the statistics for the processing.
    Fig 20: Amazon Pinpoint campaign metrics

Accomplishments

This is a quick summary of what we accomplished:

  1. Created an Amazon RDS MySQL DB instance and defined the customer_tb table schema
  2. Created an Amazon Kinesis data stream
  3. Replicated database changes from the Amazon RDS MySQL DB to the Amazon Kinesis data stream
  4. Created an AWS Lambda function, triggered by the Amazon Kinesis data stream, that ingests database records into Amazon Pinpoint as user endpoints using the AWS SDK
  5. Created an Amazon Pinpoint project, segment, and campaign
  6. Created an AWS Lambda function as a custom channel for the Amazon Pinpoint campaign
  7. Tested the end-to-end data flow from the Amazon RDS MySQL DB instance to a third-party endpoint

Next Steps

You have now gained a good understanding of Amazon Pinpoint’s channel-agnostic data flow, but there are still many areas left for exploration. What this post hasn’t covered is the operation of other communication channels such as email, SMS, push notifications, and outbound voice. You can enable the channels that are pertinent to your use case and send messages using campaigns or journeys.

Clean Up

Make sure that you clean up all of the AWS resources created by the AWS CDK stack deployment. You can delete these resources with the AWS CDK destroy command, as follows, or via the CloudFormation console.

To destroy the resources using AWS CDK, follow these steps:

  • Follow Steps 1-5 from the ‘Launching your CDK Stack’ section.
  • Destroy the app by executing the following command:

cdk destroy

Summary

In this post, you have gained a good understanding of Amazon Pinpoint’s flexible real-time data flow. By implementing the steps detailed in this blog post, you can achieve a seamless integration of your customer data from an Amazon RDS MySQL database into Amazon Pinpoint, where you can leverage segments and campaigns to activate data through custom channels to third-party services via API. The demonstrated use case focuses on an Amazon RDS MySQL database as a data source, but there are still many areas left for exploration. What this post hasn’t covered is integrating customer data from other types of data sources, such as MongoDB, Microsoft SQL Server, or Google Cloud. Also, other communication channels such as email, SMS, push notifications, and outbound voice can be used in the activation layer. You can enable the channels that are pertinent to your use case and send messages using campaigns or journeys, gaining a complete view of your customers across all touchpoints and enabling more relevant marketing campaigns.

About the Authors

Bret Pontillo is a Senior Data Architect with AWS Professional Services Analytics Practice. He helps customers implement big data and analytics solutions. Outside of work, he enjoys spending time with family, traveling, and trying new food.

Rushabh Lokhande is a Data & ML Engineer with AWS Professional Services Analytics Practice. He helps customers implement big data, machine learning, and analytics solutions. Outside of work, he enjoys spending time with family, reading, running, and golf.

Ghandi Nader is a Senior Partner Solution Architect focusing on the Adtech and Martech industry. He helps customers and partners innovate and align with the market trends related to the industry. Outside of work, he enjoys spending time cycling and watching Formula One.

How to implement multi-tenancy with Amazon Pinpoint

Post Syndicated from Tristan Nguyen original https://aws.amazon.com/blogs/messaging-and-targeting/how-to-implement-multi-tenancy-with-amazon-pinpoint/

Navigating Multi-Tenancy in Amazon Pinpoint

Businesses are constantly evolving, often managing multiple product lines, customer segments, or even geographical locations. Furthermore, many business-to-business (B2B) companies that are Independent Software Vendors (ISVs) will often need to manage their customer’s marketing automation environment. This complexity necessitates a robust customer engagement strategy that can adapt and scale efficiently. However, managing disparate systems for each tenant is not only cumbersome but also resource-intensive, leading to increased operational costs and potential data silos. A multi-tenancy setup in Amazon Pinpoint addresses these challenges head-on, allowing businesses to streamline their customer engagement efforts under a unified architecture.

The question is not just whether to adopt multi-tenancy, but how to implement it in a way that aligns with your unique business requirements. Amazon Pinpoint offers multiple approaches to achieve this. This blog explores three:

  • Single Pinpoint Project: Simple but demands careful permissions management.
  • Multiple Pinpoint Projects: Granular control but limited by soft project quotas.
  • Multiple Accounts & Multiple Pinpoint Projects: Highly scalable but needs comprehensive monitoring.

We’ll delve into the pros, cons, and best use-cases for each as well as how to choose the different multi-tenancy configuration depending on your communications channels needs, guiding you to make an informed architectural decision.

In this blog, we’ll cut through the complexity, helping you align your Amazon Pinpoint architecture with your business goals. Let’s get started.

Single Account / Single Project (SA/SP)

Overview

In a Single Pinpoint Project setup, all customer engagement activities reside within one project and multi-tenancy within this context will leverage customer endpoint attributes. This streamlined approach allows for easy management, especially for those new to Amazon Pinpoint. A configuration example for this case is shown below:

Single Account / Single Project (SA/SP)

When preparing one Pinpoint Project to manage information for multiple tenants, tenant information can be managed by using custom user attributes on endpoints. Also, campaign information can be managed for each tenant by using tags on campaigns. The elements required to implement this configuration are shown below.

  • S3 buckets that hold customer data:
    • Prepare an S3 bucket to store the customer information lists to be imported into Pinpoint. Amazon Pinpoint allows you to import CSV files in S3 as segments. In order to make settings for each tenant in Amazon Pinpoint, we will include tenant information as custom user attributes in the CSV file (a hedged import example follows this list).
  • 1 Amazon Pinpoint Project:
    • Create 1 Amazon Pinpoint Project.
    • Settings for each channel to be distributed are also required.
    • Campaign information can be assigned to tenant information by using the tag function.
  • Amazon Kinesis:
    • Stream Amazon Pinpoint engagement events to Amazon Kinesis so that event data can be captured for each tenant.
  • Athena and S3 buckets to analyze event data:
    • Store Amazon Pinpoint event data in S3 and analyze it via Athena. Refer to this solution.
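To make the CSV import concrete, here is a hedged boto3 sketch of creating a segment import job. The CSV header shown in the comment, as well as the bucket, key, segment name, and role, are illustrative placeholders rather than values from the original solution.

import boto3

# The CSV is assumed to carry the tenant as a custom user attribute, e.g.:
# ChannelType,Address,User.UserId,User.UserAttributes.Tenant
pinpoint = boto3.client("pinpoint")
pinpoint.create_import_job(
    ApplicationId="<pinpoint project id>",
    ImportJobRequest={
        "DefineSegment": True,
        "SegmentName": "tenant-a-customers",
        "Format": "CSV",
        "S3Url": "s3://<customer-data-bucket>/tenant-a/customers.csv",
        "RoleArn": "arn:aws:iam::<account-id>:role/<pinpoint-import-role>",
    },
)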

One thing to keep in mind when adopting this configuration is that all customer endpoint information exists in the same Pinpoint Project. It is possible to specify values that identify each tenant, such as custom attributes, and to control access with AWS Identity and Access Management (IAM) policies, but it is necessary to manage access rights and attributes on your own.

Also, to add an endpoint, you’ll need to specify its Channel and Address. Take note that one project cannot have the same channel and address for two different endpoints. In short, if endpoint channels and addresses do not overlap between tenants, and you are able to build your own access-permission controls, then this pattern is worth considering.

Since fewer components are required compared to the other patterns, this configuration is the easiest to start with. Customers that want to build on top of the Pinpoint API and simplify configuration on the Pinpoint side as much as possible can also choose this option. However, this approach can become complex to manage later on as you onboard more tenants. The issue presents itself when you want to create detailed per-tenant reporting in this configuration: you’ll have to maintain dedicated tags on each campaign and journey to operationalize granular reporting for your Amazon Pinpoint project.

Lastly, take note of service limits per Amazon Pinpoint project/AWS account to ensure your use case will be scalable should the need arise.

Single Account / Multiple Projects (SA/MP)

Overview

For this architecture, you are still using a single AWS account to host your Amazon Pinpoint environment; however, you will be creating multiple projects, one for each customer or tenant. A configuration example for this case is shown in the diagram below.

Single Account / Multiple Projects (SA/MP)

In this example, we will create multiple Amazon Pinpoint Projects. One major difference from the case of the Single Pinpoint Project is that it is possible to completely separate customer endpoint information. When importing customer data segments, it is possible to manage each tenant in a separate state simply by importing them from S3 into the target Pinpoint Project. This makes it easy to control permissions via IAM policies.

Also, with Amazon Pinpoint, email addresses, SMS numbers, message templates, and other sending resources obtained in the account can be shared across all projects, and event data for each project can be aggregated via Amazon Kinesis (see the sketch below). By adopting such a configuration, you gain the benefits of separating endpoint information per project while still centralizing basic settings management and operator workflows.
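As an illustration of that aggregation, the following boto3 sketch points each tenant project at the same Kinesis stream; the project IDs and ARNs are placeholders.

import boto3

PROJECT_IDS = ["<project-id-tenant-a>", "<project-id-tenant-b>"]  # placeholders

# Send every project's engagement events to one shared Kinesis stream.
pinpoint = boto3.client("pinpoint")
for project_id in PROJECT_IDS:
    pinpoint.put_event_stream(
        ApplicationId=project_id,
        WriteEventStream={
            "DestinationStreamArn": "arn:aws:kinesis:us-east-1:<account-id>:stream/<stream-name>",
            "RoleArn": "arn:aws:iam::<account-id>:role/<pinpoint-event-stream-role>",
        },
    )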

An example starter solution architecture to set up this configuration is shown below.

  • S3 buckets that hold customer data:
    • Similar to SA/SP, prepare an S3 bucket to store the lists of customer information to be imported into Pinpoint. A CSV to be imported must be prepared for each project.
  • Amazon DynamoDB Table:
    • Prepare a DynamoDB (or other key-value database) table to manage Pinpoint project information. Tenant information can also be stored as metadata in the DynamoDB table.
  • AWS Lambda:
    • Create a Pinpoint Project using Lambda. Amazon Pinpoint allows you to create and configure projects using the Amazon Pinpoint API, the AWS SDK, or the AWS Command Line Interface (AWS CLI). Thus, it is possible to automate the creation of the Pinpoint project and associated campaigns/journeys (see the sketch after this list). Tenant information is also registered in DynamoDB at the time of creation.
  • Multiple Amazon Pinpoint Projects:
    • This is a Project created by Lambda above. There will now be a Pinpoint Project for each tenant, and endpoint information will be completely separated. It is also easy to control access rights for each project by using the IAM function.
    • Message templates: templates can be created and shared across projects.
    • By using Amazon Pinpoint’s event stream settings, campaign/journeys/app/channels events can be streamed to Amazon Kinesis. Multiple Amazon Pinpoint projects can all stream to one Amazon Kinesis stream. When setup correctly, event data will be tagged with the relevant tenant information so that an analytics solution can decompose the stream later on.
  • Athena and S3 buckets to analyze event data:
    • Amazon Pinpoint event data is stored in Amazon S3 and analyzed via Amazon Athena. The analytics solution, Amazon Athena in this case, is responsible for filtering event data according to the tenant. Refer to this solution for more details.
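To make the automation concrete, here is a minimal Lambda sketch that creates a Pinpoint project for a tenant and records the mapping in DynamoDB. The event shape and the table name tenant-projects are assumptions, not part of the original solution.

import os
import boto3

pinpoint = boto3.client("pinpoint")
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table(os.environ.get("TENANT_TABLE", "tenant-projects"))  # hypothetical table

def handler(event, context):
    tenant_name = event["tenant_name"]  # assumed input field
    # Create one Pinpoint project per tenant.
    app = pinpoint.create_app(
        CreateApplicationRequest={"Name": f"{tenant_name}-project"}
    )["ApplicationResponse"]
    # Register the tenant-to-project mapping for later lookups.
    table.put_item(Item={"tenant_name": tenant_name, "project_id": app["Id"]})
    return {"project_id": app["Id"]}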

Note that Pinpoint projects have a soft limit of 100 projects per AWS account, which can be increased by raising a support ticket; other quotas also apply at the project and account level and should be taken into account.

From the above, note that there are per-account quota restrictions when using SA/MP, and that more initial configuration is required to automate project creation for individual tenants. Compared to the SA/SP architecture, however, this pattern offers much cleaner separation of customer endpoint data and simpler per-tenant access control.

Multiple Accounts & Multi Pinpoint Projects (MA/MP)

Overview

Before diving into the MA/MP approach, it’s crucial to understand the role of AWS Organizations in this configuration. AWS Organizations allows you to consolidate multiple AWS accounts into an organization to achieve centralized governance and billing. This feature is particularly useful in a MA/MP setup, as it enables streamlined management of multiple AWS accounts and Amazon Pinpoint projects from a single central management AWS account. For more information on AWS Organizations, you can visit the official AWS Organizations documentation.

In an MA/MP setup, we utilize separate AWS Accounts for each customer or tenant. A configuration example for this case is shown below.

In this example, we have created a Management account and prepared multiple AWS accounts under it. The Management account stores the AWS account IDs and Pinpoint project IDs, and has a configuration created with Lambda. Customer data and event stream data are managed through the Management account, where information on each project is aggregated. A major benefit of this configuration is the ability to segregate the actions of individual tenants, preventing issues such as the noisy-neighbor antipattern. It also frees individual tenants from quota restrictions that cannot be accommodated within a single AWS account. Additionally, Amazon Pinpoint has excellent CloudFormation coverage, so it is possible to deploy highly reproducible architectures automatically.

The elements required to set up this configuration are shown below.

  • AWS Organizations:
    • Set up Organizations to manage multiple accounts. See Best Practices for setting up multiple accounts.
  • Management account:
    • Create an account to manage information for the multiple accounts. Here we will set up the following elements, using IAM roles and Service Control Policies (SCPs) when manipulating resources across accounts; this allows cross-account access. The required elements are the same as for the SA/MP described above.
      • S3 buckets that hold customer data: With AWS, you can utilize S3 data across accounts. Set up cross-account settings and securely link customer data to each account.
      • DynamoDB table: Holds your AWS account IDs, Pinpoint project IDs, and the management information associated with them.
      • AWS Lambda: Create a Pinpoint project using Lambda.
      • Athena and S3 buckets to analyze event data: Event information from multiple accounts and Pinpoint projects is aggregated and analyzed.
  • AWS accounts and Pinpoint projects per tenant:
    • Depending on how tenants are separated, prepare an AWS account and Pinpoint Project. You can also consider automating account creation by using AWS CloudFormation.
    • There are cases where it is necessary to set the distribution channel email address, SMS number, etc. for each account. See the next section for details.
    • Amazon Kinesis is prepared for each account, but everything is stored in the same S3 bucket in the Management account for easier bird’s-eye view reporting.

One thing to keep in mind is that since accounts are separated, it becomes necessary to manage each one separately. For example, a newly created account will be placed in the sandbox state, and an application for production use via a support ticket is required for each account. Also, since sending reputation is tracked per account, it is necessary to monitor reputation for each account.

Navigating Channels in Amazon Pinpoint: Aligning Service Delivery with Architecture

Beyond choosing a Pinpoint architecture for multi-tenancy, it’s pivotal to decide which channels best deliver your services and how that decision is affected by your choice of multi-tenancy architecture. Below is a non-exhaustive list of capabilities in Amazon Pinpoint that will help with your multi-channel, multi-tenancy configuration, as well as potential blockers that you need to be aware of for each channel.

Email

Email is one of the most versatile channels, with integration with Amazon SES’s configuration sets and email suppression list capability, easily fitting into any of the three multi-tenancy models.

  • Configuration Sets: Using configuration sets, you are able to segregate your email sending activities using different IP pools, as well as different event destinations (a hedged sending example follows this section).
    • You can use configuration sets in both Amazon Pinpoint and Amazon SES. Configuration sets rules that you configure in Amazon SES are also applied to email messages that you send using Amazon Pinpoint.
    • SA/SP and SA/MP: Email templates and sending IP addresses need to be tagged using configuration sets for each tenant in the Pinpoint project.
    • MA/MP: Email templates and sending IP addresses can use the account default, or follow granular tagging using configuration sets.
  • Email Suppression List: Suppression list is managed automatically at the account level. Alternatively, you can specify whether a specific configuration can override the account-level suppression list.
    • SA/SP and SA/MP:
      • All tenants will follow the same account suppression list:
        • If any tenant sends to an email address that hard bounced or complained, all other tenants will also be unable to send emails to that address.
        • You will have to manually override the account-level suppression list for each email address.
    • MA/MP:
      • If one of your tenants sends an email to a hard-bounced or complained address, only the AWS account that the tenant belongs to will respect the suppression list, i.e., tenants in other AWS accounts can still send email to that address.
  • Noisy Neighbour Threat: Broadly, this occurs when one tenant’s performance is degraded because of the activities of another tenant. Applied to email, the anti-pattern needs to be addressed because you don’t want one bad actor tenant to affect the entire environment’s email sending activity.
    • SA/SP and SA/MP:
      • Because email bounce and complaint rates are tracked at the account level, it is possible for your entire account’s email sending domain to be blocked due to high bounce/complaint incidences from one bad tenant.
      • To mitigate this, it’s best practice to set up dedicated configuration sets and alarms that alert when any individual tenant exhibits a high bounce/complaint rate.
    • MA/MP:
      • Offers the most segregation and ensures email identities/domains are only usable by one tenant/account.
  • Email Sending Quota:
    • Email daily sending quota and email sending rate live at the account level.
    • SA/SP and SA/MP:
      • You would need to anticipate the total daily sending quota and sending rate for all tenants in your AWS account and raise the service limits accordingly. Therefore, more planning will be involved to estimate the correct service limit threshold.
    • MA/MP:
      • You can raise service limits per individual tenant’s needs since each tenant will be on a separate AWS account.
      • It is best practice to have a business process in place for individual tenants to notify you of their email sending quota requests in advance so that limits can be raised accordingly for their AWS accounts.
  • For further discussion into sending emails in a multi-tenancy environment, refer to this AWS blog on Multi-Tenancy in SES.
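As a concrete illustration of the configuration set guidance above, the sketch below tags each tenant’s email sends with a tenant-specific Amazon SES configuration set. The naming convention, sender, and recipient are assumptions.

import boto3

ses = boto3.client("sesv2")

def send_tenant_email(tenant: str, to_address: str) -> None:
    # The per-tenant configuration set isolates event destinations and IP pools.
    ses.send_email(
        FromEmailAddress="no-reply@example.com",
        Destination={"ToAddresses": [to_address]},
        ConfigurationSetName=f"tenant-{tenant}-config-set",  # assumed naming convention
        Content={
            "Simple": {
                "Subject": {"Data": "Hello"},
                "Body": {"Text": {"Data": "Hello from your tenant."}},
            }
        },
    )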

SMS

  • Origination Identity procurement: When opting for an MA/MP setup, remember that OIDs (phone numbers) are bound to AWS accounts. Since OIDs do not carry across accounts, you will need to repeat the procurement process for every new AWS account.
  • Number Pooling: This feature groups phone numbers or sender IDs. It’s particularly useful in a Single Project model to segment communications per tenant.
  • Configuration Sets: With the release of the V2 SMS and Voice API, you can now use configuration sets to manage your SMS opt-out lists, OIDs, and event streaming destinations for a multi-tenant environment (see the sketch after this list).
  • Noisy Neighbour Threat:
    • SA/SP and SA/MP:
      • Take note that if you do not specify an OID in your API call, Amazon Pinpoint will attempt to use the most suitable OID (in terms of throughput and deliverability) to send your SMS. This means tenants can end up sharing OIDs implicitly unless you assign them explicitly.
      • Similar to email, you can leverage number pooling and configuration sets to segregate SMS sending activity within a single account. This helps protect your SMS OID reputation, because it can be costly and time-consuming to request new OIDs.
    • MA/MP:
      • Offers the most segregation and ensures numbers are only usable by one tenant/account.
  • SMS Opt Outs: Similar to the email channel’s suppression list, opt-outs are managed per account and per configuration set. Therefore, in an MA/MP setup, a customer who has opted out of communications in one account can still receive communications from other accounts.
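To illustrate the points above, here is a hedged sketch using the V2 SMS and Voice API that sends through an explicit origination identity and a tenant-specific configuration set, so OIDs are never shared implicitly between tenants; all identifiers are placeholders.

import boto3

sms = boto3.client("pinpoint-sms-voice-v2")
sms.send_text_message(
    DestinationPhoneNumber="+12065550100",                   # placeholder recipient
    OriginationIdentity="<tenant pool id or phone number>",  # tenant-owned OID
    MessageBody="Your one-time code is 123456",
    ConfigurationSetName="tenant-a-sms-config-set",          # assumed naming convention
)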

Push Notifications

Amazon Pinpoint integrates with various push services like FCM, APNS, Baidu Cloud Push, and ADM.

  • Project-level Authentication: Authentication information is set at the Pinpoint Project level, requiring separate management.
    • Therefore, you will not be able to use the SA/SP architecture for multiple tenants using different applications.
  • For more information, refer to the Mobile Push Guide

In-app Messages

  • Pinpoint Project Specific: Similar to push notifications, each Pinpoint Project can only house one in-app message application.
    • If you have multiple applications requiring in-app messages, you will not be able to employ the SA/SP architecture.
  • For more information, refer to the In-app Channel Documentation.

Custom Channels

  • Custom channels in Amazon Pinpoint allow you to send messages through any service that has an API, including third-party services. You can interact with APIs by using a webhook, or by calling an AWS Lambda function. If you are using custom channels extensively from Amazon Pinpoint, you’ll need to be aware of service limits in AWS Lambda, especially if you’re considering the SA/SP or SA/MP architectures.

Conclusion

In this blog, we’ve untangled the intricacies of implementing multi-tenancy in Amazon Pinpoint. Our deep dive covered three architectural patterns:

  • Single Account/Single Project (SA/SP): A beginner-friendly approach offering simple management but requiring meticulous permissions handling to segregate sending activity between different tenants.
  • Single Account/Multiple Projects (SA/MP): Offers granular control over customer data with a slight increase in management complexity. However, this approach faces soft quotas and potential ‘Noisy Neighbor’ issues.
  • Multiple Accounts/Multiple Projects (MA/MP): Provides the most flexibility and isolation, albeit with increased management complexity.

Each approach comes with its own set of trade-offs related to ease of management/reporting, scalability, and control over customer data. Our discussion didn’t stop at architecture; we also examined how your multi-tenancy decisions will affect your channel configurations in Amazon Pinpoint. From email and SMS to push notifications, the architectural choices you make will have a direct impact on how efficiently you can manage these distribution channels. Armed with this information, you’re now better equipped to make informed decisions that align with your business objectives.

Call to Action

Your next step? Implement and architect your Amazon Pinpoint environment. Use the best practices and architectural guidelines outlined in this blog post as your north star. Going forward, the architectural blueprint you choose should be tailored to your specific needs—be it user count, company size, or distribution channels. Take into account not just the initial setup but also the long-term management aspects, including the respective service limits and quotas.

About the Authors

Tristan (Tri) Nguyen

Tristan (Tri) Nguyen is an Amazon Pinpoint and Amazon Simple Email Service Specialist Solutions Architect at AWS. At work, he specializes in technical implementation of communications services in enterprise systems and architecture/solutions design. In his spare time, he enjoys chess, rock climbing, hiking and triathlon.

Tatsuya Nakamura

Nakamura Tatsuya is a Solutions Architect in charge of enterprise companies at AWS. He is mainly in charge of the trading company industry and the distribution/retail industry, also supporting the implementation of Amazon Pinpoint for Japanese customers. His career so far includes ERP implementation support and multiple new web service launches.

Deploy Amazon QuickSight dashboard for Amazon Pinpoint engagement events.

Post Syndicated from Pavlos Ioannou Katidis original https://aws.amazon.com/blogs/messaging-and-targeting/deploy-amazon-quicksight-dashboard-for-amazon-pinpoint-engagement-events/

Abstract

Business intelligence (BI) dashboards provide a graphical representation of metrics and key performance indicators (KPIs) to monitor the health of your business. By leveraging BI dashboards to analyze the performance of your customer communications, you gain valuable insights into how they are engaging with your messages and can make data-driven decisions to improve your marketing and communication strategies.

In this blog post we introduce a solution that automates the deployment of an Amazon QuickSight dashboard that enables marketers to analyze their long-term Amazon Pinpoint customer engagement data. These dashboards can be customized further depending on the use case. This solution alleviates the need to create data pipelines for the storage and analysis of Amazon Pinpoint’s engagement data, while offering a greater variety of widgets and views across email, SMS, campaigns, journeys, and transactional messages when compared to Amazon Pinpoint’s native dashboards.

Amazon Pinpoint is a flexible, scalable marketing communications service that connects you with customers over email, SMS, push notifications, or voice. The service offers ready-to-use dashboards to view key performance indicators (KPIs) for the various messaging channels, and it provides 90 days of events for analysis. However, the raw events used to populate Amazon Pinpoint’s dashboards can be streamed using Amazon Kinesis Data Firehose to a destination of your choice. This blog will walk you through leveraging this feature to create a data lake to store and analyze data beyond the initial 90 days.

Amazon QuickSight is a cloud-scale business intelligence (BI) service that you can use to deliver easy-to-understand insights in an interactive visual environment.

The solution leverages the AWS Cloud Development Kit (CDK) to deploy the needed infrastructure and dashboards.

Use Case(s)

The Amazon QuickSight dashboards deployed through this solution are designed to serve several use cases. Here are just a few examples:

  • View email and SMS costs per Campaign and Journey.
  • Deep dive into engagement insights and performance (e.g., SMS events, email events, campaign events, journey events).
  • Schedule reports to various business stakeholders.
  • Track individual email & SMS statuses to specific endpoints.
  • Analyze open and click rates based on the message send time.

These are some of the use cases for these dashboards, and with all the data points available in Amazon QuickSight, you can create your own views and widgets based on your specific requirements.

Solution Overview

This solution builds upon the Digital User Engagement (DUE) Event Database AWS Solution to create a long-term Amazon Pinpoint event data lake, and it builds a QuickSight dashboard to visualize and analyze this data. It leverages several other AWS services to tie it all together: AWS Lambda for processing AWS CloudTrail data, Amazon Athena to build SQL views for Amazon QuickSight, AWS CloudTrail to record any new campaign, journey, and segment updates, and Amazon DynamoDB to store the campaign, journey, and segment metadata. The solution can be segmented into three logical portions: 1) Pinpoint campaign/journey/segment lookup tables, 2) Amazon Athena views, and 3) Amazon QuickSight resources.

The AWS Cloud Development Kit (CDK) is used to deploy this solution to your account. AWS CDK is an open-source software development framework for defining cloud infrastructure as code with modern programming languages and deploying it through AWS CloudFormation.

Pinpoint campaign/journey/segment lookup tables

Fig: Architecture of the Pinpoint campaign/journey/segment lookup tables

  1. A CloudFormation AWS Lambda-backed custom resource function adds current Pinpoint campaign, journey and segment meta data to Amazon DynamoDB lookup tables. An AWS CloudFormation custom resource is managed by a Lambda function that runs only upon the deployment, update and deletion of the AWS CloudFormation stack.
  2. AWS CloudTrail logs record API actions to an S3 bucket every 5 minutes.
  3. When an AWS CloudTrail log is written to the S3 bucket an AWS Lambda function is invoked and checks for Amazon Pinpoint campaigns/journeys/segments management events such as create, update and delete.
  4. For every Amazon Pinpoint action the AWS Lambda function finds, it queries Amazon Pinpoint to get the respective resource details.
  5. The AWS Lambda function will create or update records in the Amazon DynamoDB table to reflect the changes.
  6. This solution also deploys an Amazon Athena DynamoDB connector. Amazon Athena uses this to query the Amazon DynamoDB lookup tables to enrich the data in the Amazon Pinpoint event data lake.
  7. The Amazon Athena to Amazon DynamoDB connector requires an Amazon S3 spill bucket for any data that exceeds the AWS Lambda function limits.

Amazon Athena views

Amazon Athena views are crucial for querying and organizing the data. These views allow QuickSight to interact with the Pinpoint event data lake through standard SQL queries and views. Here’s how they’re set up:

The application creates several named queries (called saved queries in the Amazon Athena console). Each named query uses a SQL statement to create a database view containing a subset of the data from the Pinpoint event data lake (or joins data from a previous view with the Amazon DynamoDB tables created above). The views are also created using an AWS Lambda-backed custom resource.
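For illustration, the following boto3 sketch registers one such named query with Athena. The view name, database, and SQL are simplified examples rather than the solution’s actual definitions, and a saved query still has to be executed (for example with start_query_execution) before the view exists.

import boto3

athena = boto3.client("athena")
athena.create_named_query(
    Name="pinpoint_analytics_email_send_view",  # illustrative name
    Database="due_eventdb",
    WorkGroup="primary",
    QueryString=(
        "CREATE OR REPLACE VIEW email_send AS "
        "SELECT event_type, event_timestamp, client.client_id "
        "FROM due_eventdb.all_events "  # hypothetical source table
        "WHERE event_type = '_email.send'"
    ),
)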

Amazon QuickSight resources

Fig: Amazon QuickSight resources

  1. This solution creates several Amazon QuickSight resources to support the deployed dashboard. These include data sources, datasets, refresh schedules, and an analysis. The refresh schedule determines the frequency that Amazon QuickSight queries the Amazon Athena views to update the datasets.
  2. Amazon Athena retrieves live data from the DUE event database data lake and the Athena DynamoDB Connector whenever the Amazon QuickSight refresh schedule runs.

Prerequisites

  • Deploy the Digital User Engagement (DUE) Event Database solution before continuing
    • After you have deployed this solution, gather the following data from the stack’s Resources section.
      • DUES3DataLake: You will need the bucket name
      • PinpointProject: You will need the project Id
      • PinpointEventDatabase: This is the name of the Glue Database. You will only need this if you used something other than the default of due_eventdb

Note: If you are installing the DUE event database for the first time as part of these instructions, your dashboard will not have any data to show until new events start to come in from your Amazon Pinpoint project.

Once you have the DUE event database installed, you are ready to begin your deployment.

Implementation steps

Step 1 – Ensure that Amazon Athena is setup to store query results

Amazon Athena uses workgroups to separate users, teams, applications, or workloads, to set limits on the amount of data each query or the entire workgroup can process, and to track costs. There is a default workgroup called “primary”. However, before you can use this workgroup, it needs to be configured with an Amazon S3 bucket for storing the query results.

  1. If you do not have an existing S3 bucket you can use for the output, create a new Amazon S3 bucket.
  2. Navigate to the Amazon Athena console and from the menu select workgroups > primary > Edit > Query result configuration
    1. Select the Amazon S3 bucket and any specific directory for the Athena query result location

Note: If you choose to use a workgroup other than the default “primary” workgroup, please take note of the workgroup name, as it will be used later.

Step 2 – Enable Amazon QuickSight

Amazon QuickSight offers two types of data sets: Direct Query data sets, which provide real-time access to data sources, and SPICE (Super-fast, Parallel, In-memory Calculation Engine) data sets, which are pre-aggregated and cached for faster performance and scalability, and which can be refreshed on a schedule.

This solution uses SPICE datasets set to incrementally refresh on a cycle of your choice (Daily or Hourly). If you have already setup Amazon QuickSight, please navigate to Amazon QuickSight in the AWS Console and skip to step 3.

  1. Navigate to Amazon QuickSight on the AWS console
  2. Setup Amazon QuickSight account by clicking the “Sign up for QuickSight” button.
    1. You will need to setup an Enterprise account for this solution.
    2. To complete the process for the Amazon QuickSight account setup follow the instructions at this link
  3. Ensure you have the Admin Role
    1. Choose the profile icon in the top right corner, select Manage QuickSight and click on Manage Users
    2. Subscription details should display on the screen.
  4. Ensure you have enough SPICE capacity for the datasets
    1. Choose the profile icon, and then select Manage QuickSight
    2. Click on SPICE Capacity
  5. Make sure you have enough SPICE for all three datasets
    1. If you are still in the free tier, you should have enough for initial testing.
    2. You will need about 2 GB of capacity for every 1,000,000 Pinpoint events that will be ingested into SPICE.
    3. Note: If you do not have enough SPICE capacity, deployment will fail.
  6. Please note the Amazon QuickSight username. You can find this by clicking the profile icon. Example username: Admin/user-name

Step 3 – Collect the Amazon QuickSight Service Role name in IAM

For Amazon Athena, Amazon S3, and Athena Query Federation connections, Amazon QuickSight uses the following IAM “consumer” role by default: aws-quicksight-s3-consumers-role-v0

If the “consumer” role is not present, then QuickSight uses the following “service” role instead: aws-quicksight-service-role-v0.

The version number at the end of the role could be different in your account. Please validate your role name with the following steps.

  1. Navigate to the Identity and Access Management (IAM) console
  2. Go to Roles and search QuickSight
  3. If the consumer role exists, please note its full name
  4. If you only find the service role, please note its full name

Note: For more details on these service roles, please see the QuickSight User Guide
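If you would rather check for the role programmatically, a short boto3 sketch like the following mirrors the console steps above.

import boto3

iam = boto3.client("iam")
names = [
    role["RoleName"]
    for page in iam.get_paginator("list_roles").paginate()
    for role in page["Roles"]
    if "quicksight" in role["RoleName"].lower()
]
# Prefer the consumer role when present, otherwise fall back to the service role.
consumer = [n for n in names if "s3-consumers" in n]
service = [n for n in names if "service-role" in n]
print(consumer[0] if consumer else (service[0] if service else "no QuickSight role found"))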

Step 4 – Prepare the CDK Application

Deploying this solution requires no previous experience with the AWS CDK toolkit. If you would like to familiarize yourself with CDK, the AWS CDK Workshop is a great place to start.

  1. Setup your integrated development environment (IDE)
    1. Option 1 (recommended for first time CDK users): Use AWS Cloud9 – a cloud-based IDE that lets you write, run, and debug your code with just a browser
      1. Navigate to Cloud9 in the AWS console and click the Create Environment button
      2. Provide a descriptive name to your environment (e.g. PinpointAnalysis)
      3. Leave the rest of the values as their default values and click Create
      4. Open the Cloud9 IDE
        1. Node, TypeScript, and CDK should come pre-installed. Test this by running the following commands in your terminal.
          1. node --version
          2. tsc --version
          3. cdk --version
          4. If dependencies are not installed, follow the Step 1 instructions from this article
        2. Using AWS Cloud9 will incur a nominal charge if you are no longer Free Tier eligible. However, AWS Cloud9 simplifies setup if you do not already have a local environment with the AWS CDK and the AWS CLI installed
    2. Option 2: local IDE such as VS Code
      1. Setup CDK locally using this documentation
      2. Install Node, TypeScript and the AWS CLI
        1. Once the CLI is installed, configure your AWS credentials
          1. aws configure
  2. Clone the Pinpoint Dashboard Solution from your terminal by running the command below:
    1. git clone https://github.com/aws-samples/digital-user-engagement-events-dashboards.git
  3. Install the required npm packages from package.json by running the commands below:
    1. cd digital-user-engagement-events-dashboards
    2. npm install

Open the file at digital-user-engagement-events-dashboards/bin/pinpoint-bi-analysis.ts for editing in your IDE.

Edit the following code block in your solution with the information you gathered in the previous steps. Please reference Table 1 for a description of each editable field.

const resourcePrefix = "pinpoint_analytics_";

...

new MainApp(app, "PinpointAnalytics", {
  env: {
    region: "us-east-1",
  },
  
  //Attributes to change
  dueDbBucketName: "{bucket-name}",
  pinpointProjectId: "{pinpoint-project-id}",
  qsUserName: "{quicksight-username}",

  //Default settings
  athenaWorkGroupName: "primary",
  dataLakeDbName: "due_eventdb",
  dateRangeNumberOfMonths: 6,
  qsUserRegion: "us-east-1", 
  qsDefaultServiceRole: "aws-quicksight-service-role-v0", 
  spiceRefreshInterval: "HOURLY",

  //Constants
  athena_util: athena_util,
  qs_util: qs_util,
});
Table 1: Configuration attributes

  • resourcePrefix: The prefix for all created Athena and QuickSight resources. Example: pinpoint_analytics_
  • region: Where new resources will be deployed. This must be the same region where the DUE event database solution was deployed. Example: us-east-1
  • dueDbBucketName: The name of the DUE event database S3 bucket. Example: due-database-xxxxxxxxxxus-east-1
  • qsUserName: The name of your QuickSight user. Example: Admin/my-user
  • athenaWorkGroupName: The Athena workgroup that was previously configured. Example: primary
  • dataLakeDbName: The Glue database created by the DUE event database solution. By default the database name is “due_eventdb”. Example: due_eventdb
  • dateRangeNumberOfMonths: The number of months of data the Athena views will contain. QuickSight SPICE datasets will contain this many months of data initially and on full refresh; the QuickSight dataset will add new data incrementally without deleting historical data. Example: 6
  • qsUserRegion: The region where your QuickSight user exists. By default, new users are created in us-east-1. You can check your user’s location with the AWS CLI: aws quicksight list-users --aws-account-id {account-id} --namespace default and look for the region in the ARN. Example: us-east-1
  • qsDefaultServiceRole: The service role collected during Step 3. Example: aws-quicksight-service-role-v0
  • spiceRefreshInterval: Options include HOURLY and DAILY; this is how often the SPICE 7-day incremental window will be refreshed. Example: DAILY

Step 5 – Deploy

  1. CDK requires you to bootstrap in each region of an account. This creates an S3 bucket for deployment. You only need to bootstrap once per account/region
    1. cdk bootstrap
  2. Deploy the application
    1. cdk deploy

Step 6 – Explore

Once your solution deploys, look for the Outputs provided by the CDK CLI. You will find a link to your new Amazon QuickSight analysis, or dashboard, as well as a few other key resources. Also, explore the resources sections of the deployed stacks in AWS CloudFormation for a complete list of deployed resources. In AWS CloudFormation, you should see two stacks: the main stack, called PinpointAnalytics, and a nested stack.

Pricing

The total cost to run this solution will depend on several factors. To help explore what the costs might look like for you, please look at the following examples.

All costs outlined below will assume the following:

  • 1 Amazon QuickSight author
  • 100 Amazon QuickSight analysis reader sessions
  • 100k write API actions for all services in AWS account
  • A total of 1k Amazon Pinpoint campaigns, journeys, and segments resulting in 1k Amazon DynamoDB records
  • 5 million monthly Amazon Pinpoint events – email send, email delivered, etc.

Base Costs:

  • 1 Amazon QuickSight author – can edit all Amazon QuickSight resources
    • $24 – There is a 30-day trial for 4 authors in the free tier
  • 100 Amazon QuickSight analysis reader sessions OR 6 readers with unlimited access – max $5 per month per reader
    • $30
  • Total Monthly Costs: $54 / month

Variable Costs:

Even with the assumptions listed above, the costs will vary depending on the chosen data retention window as well as the refresh schedule.

  • SPICE data storage costs.
    • Total size of storage will depend on how many months you choose to display in the dashboard
    • For the above assumptions, the SPICE datasets will cost roughly $3.25 for each month stored in the datasets.
  • Amazon Athena data volume costs
    • With Athena you are charged for the total number of bytes scanned by a query. The solution implements incremental data refreshes in SPICE: Amazon QuickSight will only query and update the most recent 7 days of data during each refresh cycle. This can be adjusted as needed.

Scenario 1 – 6-month data analysis with daily refresh:

  • Fixed costs: $57
  • SPICE datasets: $19.50
  • Athena Scans: $1.25
  • Total Costs: $77.75 / Month

Scenario 2 – 12-month data analysis with daily refresh:

  • Fixed costs: $57
  • SPICE datasets: $39
  • Athena Scans: $1.25
  • Total Costs: $97.25 / Month

Scenario 3 – 12-month data analysis with hourly refresh:

  • Fixed costs: $57
  • SPICE datasets: $39
  • Athena Scans: $27.50
  • Total Costs: $123.5 / Month

Note: Several services were not mentioned in the above scenarios (e.g., DynamoDB, CloudTrail, Lambda). The limited usage of these services resulted in a combined cost of less than a few US dollars per month. Even at greater scale, the costs from these services will not increase in any significant way.

Clean up

  • Delete the CDK stack by running the following from your command line
    • cdk destroy
  • Delete QuickSight account
  • Delete Athena views
    • Go to Glue > Data Catalog > Databases > Your Database Name
    • Delete the views that are no longer needed; views created by this solution start with the resourcePrefix specified in the bin/pinpoint-bi-analysis.ts file
  • Delete S3 buckets
    • DynamoDB CloudWatch log bucket
    • Dynamo Athena Connector Spill bucket
    • Athena workgroup output bucket
  • Delete DynamoDB tables
    • This solution creates two DynamoDB lookup tables prefixed with the Stack name

Conclusion

In this blog, you have deployed a solution that visualizes Amazon Pinpoint’s email and SMS engagement data using Amazon QuickSight. This solution provides you with a functional Amazon QuickSight dashboard as well as a foundation to design and build new Amazon QuickSight dashboards that meet your bespoke requirements. Parts of the solution, such as the Amazon Athena views, can be ingested by other business intelligence tools that your business might already be using.

Next steps

This solution can be expanded to include Amazon Pinpoint engagement events from other channels such as push notifications, Amazon Connect outbound calls, in-app and custom events. This will require certain updates on the Amazon Athena views and consequently on the Amazon QuickSight dashboards. Furthermore, the Amazon DynamoDB tables store only campaign, journey and segment meta-data. You can extend this part of the solution to include message template meta-data, which will help to analyze performance per message template.

Considerations / Troubleshooting

  • An Amazon QuickSight Standard edition account can be upgraded to an Enterprise edition account. Enterprise accounts cannot be downgraded to Standard.
  • SPICE capacity is allocated separately for each AWS Region. Default SPICE capacity is automatically allocated to your home AWS Region. For each AWS account, SPICE capacity is shared by all the people using QuickSight in a single AWS Region. The other AWS Regions have no SPICE capacity unless you choose to purchase some.
  • The QuickSight analysis event rates are calculated at the Pinpoint message_id and endpoint_id grain; the click rate will be the same whether a user clicks an email link once or multiple times
  • All timestamps are in UTC. To display data in another timezone, edit the event_timestamp_timezone calculated field in every dataset
  • Data inside Amazon QuickSight will refresh depending on the schedule set during deployment. Current options include hourly and daily refreshes.
  • AWS CloudTrail has a limit of five trails per AWS account.

About the Authors

Spencer Harrison

Spencer was a 2023 WWPS Solution Architect intern at Amazon Web Services. He will graduate with his Masters of Information Systems Management from Brigham Young University in the spring of 2024. After graduation he is aspiring to find opportunities as a solution architect, cloud engineer, or DevOps engineer. Outside of work, Spencer loves going outdoors to wake surf, downhill ski, and play pickle ball.

Daniel Wells

With over 20 years of IT experience, Daniel has held many architecture and director positions supporting a wide variety of technologies. He currently works as an AWS Solutions Architect supporting Education Technology companies striving to make a difference for learners and educators worldwide. Daniel’s interests outside of work include music, family, health, education and anything that allows him to express himself creatively.

Pavlos Ioannou Katidis

Pavlos Ioannou Katidis is an Amazon Pinpoint and Amazon Simple Email Service Senior Specialist Solutions Architect at AWS. He enjoys diving deep into customers’ technical issues and helping them design communication solutions. In his spare time, he enjoys playing tennis, watching crime TV series, playing FPS PC games, and coding personal projects.

Prime Day 2023 Powered by AWS – All the Numbers

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/prime-day-2023-powered-by-aws-all-the-numbers/

As part of my annual tradition to tell you about how AWS makes Prime Day possible, I am happy to be able to share some chart-topping metrics (check out my 2016, 2017, 2019, 2020, 2021, and 2022 posts for a look back).

This year I bought all kinds of stuff for my hobbies including a small drill press, filament for my 3D printer, and irrigation tools. I also bought some very nice Alphablock books for my grandkids. According to our official release, the first day of Prime Day was the single largest sales day ever on Amazon and for independent sellers, with more than 375 million items purchased.

Prime Day by the Numbers
As always, Prime Day was powered by AWS. Here are some of the most interesting and/or mind-blowing metrics:

Amazon Elastic Block Store (Amazon EBS) – The Amazon Prime Day event resulted in an incremental 163 petabytes of EBS storage capacity allocated – generating a peak of 15.35 trillion requests and 764 petabytes of data transfer per day. Compared to the previous year, Amazon increased the peak usage on EBS by only 7% year-over-year yet delivered +35% more traffic per day due to efficiency efforts including workload optimization using Amazon Elastic Compute Cloud (Amazon EC2) AWS Graviton-based instances.

AWS CloudTrail – AWS CloudTrail processed over 830 billion events in support of Prime Day 2023.

Amazon DynamoDB – DynamoDB powers multiple high-traffic Amazon properties and systems including Alexa, the Amazon.com sites, and all Amazon fulfillment centers. Over the course of Prime Day, these sources made trillions of calls to the DynamoDB API. DynamoDB maintained high availability while delivering single-digit millisecond responses and peaking at 126 million requests per second.

Amazon Aurora – On Prime Day, 5,835 database instances running the PostgreSQL-compatible and MySQL-compatible editions of Amazon Aurora processed 318 billion transactions, stored 2,140 terabytes of data, and transferred 836 terabytes of data.

Amazon Simple Email Service (SES) – Amazon SES sent 56% more emails for Amazon.com during Prime Day 2023 vs. 2022, delivering 99.8% of those emails to customers.

Amazon CloudFront – Amazon CloudFront handled a peak load of over 500 million HTTP requests per minute, for a total of over 1 trillion HTTP requests during Prime Day.

Amazon SQS – During Prime Day, Amazon SQS set a new traffic record by processing 86 million messages per second at peak. This is a 22% increase from Prime Day 2022, when SQS supported 70.5M messages/sec.

Amazon Elastic Compute Cloud (EC2) – During Prime Day 2023, Amazon used tens of millions of normalized AWS Graviton-based Amazon EC2 instances, 2.7x more than in 2022, to power over 2,600 services. By using more Graviton-based instances, Amazon was able to get the compute capacity needed while using up to 60% less energy.

Amazon Pinpoint – Amazon Pinpoint sent tens of millions of SMS messages to customers during Prime Day 2023 with a delivery success rate of 98.3%.

Prepare to Scale
Every year I reiterate the same message: rigorous preparation is key to the success of Prime Day and our other large-scale events. If you are preparing for a similar chart-topping event of your own, I strongly recommend that you take advantage of AWS Infrastructure Event Management (IEM). As part of an IEM engagement, my colleagues will provide you with architectural and operational guidance that will help you to execute your event with confidence!

Jeff;

Building Generative AI into Marketing Strategies: A Primer

Post Syndicated from nnatri original https://aws.amazon.com/blogs/messaging-and-targeting/building-generative-ai-into-marketing-strategies-a-primer/

Introduction

Artificial Intelligence has undoubtedly shaped many industries and is poised to be one of the most transformative technologies in the 21st century. Among these is the field of marketing where the application of generative AI promises to transform the landscape. This blog post explores how generative AI can revolutionize marketing strategies, offering innovative solutions and opportunities.

According to Harvard Business Review, marketing’s core activities, such as understanding customer needs, matching them to products and services, and persuading people to buy, can be dramatically enhanced by AI. A 2018 McKinsey analysis of more than 400 advanced use cases showed that marketing was the domain where AI would contribute the greatest value. The ability to leverage AI can not only help automate and streamline processes but also deliver personalized, engaging content to customers. It enhances the ability of marketers to target the right audience, predict consumer behavior, and provide personalized customer experiences. AI allows marketers to process and interpret massive amounts of data, converting it into actionable insights and strategies, thereby redefining the way businesses interact with customers.

Generating content is just one part of the equation. AI-generated content, no matter how good, is useless if it does not arrive at the intended audience at the right point of time. Integrating the generated content into an automated marketing pipeline that not only understands the customer profile but also delivers a personalized experience at the right point of interaction is also crucial to getting the intended action from the customer.

Amazon Web Services (AWS) provides a robust platform for implementing generative AI in marketing strategies. AWS offers a range of AI and machine learning services that can be leveraged for various marketing use cases, from content creation to customer segmentation and personalized recommendations. Two services that are instrumental in delivering customer content, and that can be easily integrated with other generative AI services, are Amazon Pinpoint and Amazon Simple Email Service (SES). By integrating generative AI with Amazon Pinpoint and Amazon SES, marketers can automate the creation of personalized messages for their customers, enhancing the effectiveness of their campaigns. This combination allows for a seamless blend of AI-powered content generation and targeted, data-driven customer engagement.

As we delve deeper into this blog post, we’ll explore the mechanics of generative AI, its benefits and how AWS services can facilitate its integration into marketing communications.

What is Generative AI?

Generative AI is a subset of artificial intelligence that leverages machine learning techniques to generate new data instances that resemble your training data. It works by learning the underlying patterns and structures of the input data, and then uses this understanding to generate new, similar data. This is achieved through the use of models like Generative Adversarial Networks (GANs), Variational Autoencoders (VAEs), and Transformer models.

What do Generative AI buzzwords mean?

In the world of AI, buzzwords are abundant. Terms like “deep learning”, “neural networks”, “machine learning”, “generative AI”, and “large language models” are often used interchangeably, but they each have distinct meanings. Understanding these terms is crucial for appreciating the capabilities and limitations of different AI technologies.

Machine Learning (ML) is a subset of AI that involves the development of algorithms that allow computers to learn from and make decisions or predictions based on data. These algorithms can be ‘trained’ on a dataset and then used to predict or classify new data. Machine learning models can be broadly categorized into supervised learning, unsupervised learning, semi-supervised learning, and reinforcement learning.

Deep Learning is a subset of machine learning that uses neural networks with many layers (hence “deep”) to model and understand complex patterns. These layers of neurons process different features, and their outputs are combined to produce a final result. Deep learning models can handle large amounts of data and are particularly good at processing images, speech, and text.

Generative AI refers specifically to AI models that can generate new data that mimic the data they were trained on. This is achieved through the use of models like Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs). Generative AI can create anything from written content to visual designs, and even music, making it a versatile tool in the hands of marketers.

Large Language Models (LLMs) are a type of generative AI that are trained on a large corpus of text data and can generate human-like text. They predict the probability of a word given the previous words used in the text. They are particularly useful in applications like text completion, translation, summarization, and more. While they are a type of generative AI, they are specifically designed for handling text data.

Simply put, a Large Language Model is a subset of Generative AI, which is in turn a subset of Machine Learning, and all of these ultimately fall under the umbrella term of Artificial Intelligence.

What are the problems with generative AI and marketing?

While generative AI holds immense potential for transforming marketing strategies, it’s important to be aware of its limitations and potential pitfalls, especially when it comes to content generation and customer engagement. Here are some common challenges that marketers should be aware of:

Bias in Generative AI Generative AI models learn from the data they are trained on. If the training data is biased, the AI model will likely reproduce these biases in its output. For example, if a model is trained primarily on data from one demographic, it may not accurately represent other demographics, leading to marketing campaigns that are ineffective or offensive. Imagine you are generating an image for a campaign targeting women: a biased model might fail to depict women in professions such as doctor, lawyer, or judge, leaving your campaign biased and unrepresentative.

Insensitivity to Cultural Nuances Generative AI models may not fully understand cultural nuances or sensitive topics, which can lead to content that is insensitive or even harmful. For instance, a generative AI model used to create social media posts for a global brand may inadvertently generate content that is seen as disrespectful or offensive by certain cultures or communities.

Potential for Inappropriate or Offensive Content Generative AI models can sometimes generate content that is inappropriate or offensive. This is often because the models do not fully understand the context in which certain words or phrases should be used. It’s important to have safeguards in place to review and approve content before it’s published. A common problem with LLMs is hallucination, whereby the model states false information as if it were accurate. A marketing team might mistakenly publish auto-generated promotional content that advertises a 20% discount on an item when no such promotion was approved. This could have a disastrous effect if safeguards are not in place, and it erodes customers’ trust.

Intellectual Property and Legal Concerns Generative AI models can create new content, such as images, music, videos, and text, which raises questions of ownership and potential copyright infringement. Because the field is relatively new, legal discussions are still ongoing about the implications of using generative AI, such as who should own AI-generated content and when it constitutes copyright infringement.

Not a Replacement for Human Creativity Finally, while generative AI can automate certain aspects of marketing campaigns, it cannot replace the creativity or emotional connection that marketers bring to crafting compelling campaigns. The most successful marketing campaigns touch the hearts of customers, and while generative AI is very capable of replicating human content, it still falls short of that “human touch”.

In conclusion, while generative AI offers exciting possibilities for marketing, it’s important to approach its use with a clear understanding of its limitations and potential pitfalls. By doing so, marketers can leverage the benefits of generative AI while mitigating risks.

How can I use generative AI in marketing communications?

Amazon Web Services (AWS) provides a comprehensive suite of services that facilitate the use of generative AI in marketing. These services are designed to handle a variety of tasks, from data processing and storage to machine learning and analytics, making it easier for marketers to implement and benefit from generative AI technologies.

Overview of Relevant AWS Services

AWS offers several services that are particularly relevant for generative AI in marketing:

  • Amazon Bedrock: This service makes FMs accessible via an API. Bedrock offers the ability to access a range of powerful FMs for text and images, including Amazon’s Titan FMs. With Bedrock’s serverless experience, customers can easily find the right model for what they’re trying to get done, get started quickly, privately customize FMs with their own data, and easily integrate and deploy them into their applications using the AWS tools and capabilities they are familiar with.
  • Amazon Titan Models: These are two new large language models (LLMs) that AWS is announcing. The first is a generative LLM for tasks such as summarization, text generation, classification, open-ended Q&A, and information extraction. The second is an embeddings LLM that translates text inputs into numerical representations (known as embeddings) that contain the semantic meaning of the text. In response to the pitfalls mentioned above around Generative AI hallucinations and inaccurate information, AWS is actively working on improving accuracy and ensuring its Titan models produce high-quality responses, said Bratin Saha, an AWS vice president.
  • Amazon SageMaker: This fully managed service enables data scientists and developers to build, train, and deploy machine learning models quickly. SageMaker includes modules that can be used for generative AI, such as Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs).
  • Amazon Pinpoint: This flexible and scalable outbound and inbound marketing communications service enables businesses to engage with customers across multiple messaging channels. Amazon Pinpoint is designed to scale with your business, allowing you to send messages to a large number of users in a short amount of time. It integrates with AWS’s generative AI services to enable personalized, AI-driven marketing campaigns.
  • Amazon Simple Email Service (SES): This cost-effective, flexible, and scalable email service enables marketers to send transactional emails, marketing messages, and other types of high-quality content to their customers. SES integrates with other AWS services, making it easy to send emails from applications being hosted on services such as Amazon EC2. SES also works seamlessly with Amazon Pinpoint, allowing for the creation of customer engagement communications that drive user activity and engagement.

How to build Generative AI into marketing communications

Dynamic Audience Targeting and Segmentation: Generative AI can help marketers to dynamically target and segment their audience. It can analyze customer data and behavior to identify patterns and trends, which can then be used to create more targeted marketing campaigns. Using Amazon SageMaker or the soon-to-be-available Amazon Bedrock and Amazon Titan Models, generative AI can suggest labels for customers based on unstructured data. According to McKinsey, generative AI can analyze data and identify consumer behavior patterns to help marketers create appealing content that resonates with their audience.

Personalized Marketing: Generative AI can be used to automate the creation of marketing content. This includes generating text for blogs, social media posts, and emails, as well as creating images and videos. This can save marketers a significant amount of time and effort, allowing them to focus on other aspects of their marketing strategy. Where it really shines is the ability to productionize marketing content creation, reducing the need for marketers to create multiple copies for different customer segments. Previously, marketers would need to write a separate copy for each fine-grained customer segment (e.g. attriting customers who are between the ages of 25 and 34 and love food). Generative AI can automate this process, providing the opportunity to create this content dynamically and programmatically, and automatically send it out to the most relevant segments via Amazon Pinpoint or Amazon SES.
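To make this concrete, here is a minimal sketch in Python that drafts segment-specific copy with the Amazon Bedrock runtime API and hands it to Amazon SES for delivery. It assumes your account has access to a Bedrock text model; the model ID, prompt, and email addresses are placeholders rather than recommendations.

import json
import boto3

bedrock = boto3.client("bedrock-runtime")  # assumes Bedrock model access in this Region
ses = boto3.client("sesv2")

def draft_and_send(segment_description, recipient):
    # Ask a text model for copy tailored to one segment (prompt is illustrative).
    prompt = ("Write a short, friendly promotional email for this customer segment: "
              + segment_description + ". Include one call to action.")
    response = bedrock.invoke_model(
        modelId="amazon.titan-text-express-v1",  # placeholder model ID
        body=json.dumps({"inputText": prompt}),
        contentType="application/json",
        accept="application/json",
    )
    copy = json.loads(response["body"].read())["results"][0]["outputText"]

    # Deliver the generated copy through Amazon SES.
    ses.send_email(
        FromEmailAddress="marketing@example.com",  # must be a verified identity
        Destination={"ToAddresses": [recipient]},
        Content={"Simple": {
            "Subject": {"Data": "An offer picked for you"},
            "Body": {"Text": {"Data": copy}},
        }},
    )

draft_and_send("attriting customers aged 25-34 who love food", "customer@example.com")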

Marketing Automation: Generative AI can automate various aspects of marketing, such as email marketing, social media marketing, and search engine marketing. This includes automating the creation and distribution of marketing content, as well as analyzing the performance of marketing campaigns. Amazon Pinpoint currently automates customer communications using journeys, which are customized, multi-step engagement experiences. Generative AI could create a Pinpoint journey based on customer engagement data, engagement parameters, and a prompt. This enables generative AI to not only personalize the content but also create a personalized omnichannel experience that can extend over a period of time. It then becomes possible for journeys to be created dynamically by generative AI and A/B tested on the fly to achieve an optimal pre-defined Key Performance Indicator (KPI).

A Sample Generative AI Use Case in Marketing Communications

AWS services are designed to work together, making it easy to implement generative AI in your marketing strategies. For instance, you can use Amazon SageMaker to build and train your generative AI models which assist with automating marketing content creation, and Amazon Pinpoint or Amazon SES to deliver the content to your customers.

Companies using AWS can theoretically supplement their existing workloads with generative AI capabilities without the need for migration. The following reference architecture outlines a sample use case and showcases how generative AI can be integrated into your customer journeys built on the AWS cloud. An e-commerce company can potentially receive many complaint emails a day. Companies spend a lot of money to acquire customers, so it’s important to think about how to turn that negative experience into a positive one.

Fig.: Generative AI marketing solution architecture

When an email is received via Amazon SES (1), its content can be passed through to generative AI models using GANs to help with sentiment analysis (2). An article published by Amazon Science utilizes GANs for sentiment analysis in cases where a lack of data is a problem. Alternatively, you can also use Amazon Comprehend at this step and run A/B tests between the two models. The limitation of Amazon Comprehend is the restricted customization you can apply to the model to fit your business needs.

Once the email’s sentiment is determined, the sentiment event is logged into Pinpoint (3), which then triggers an automatic winback journey (4).

Generative AI (e.g. Hugging Face’s BLOOM text generation models) can again be used here to dynamically create the content without needing to wait for the marketer’s input (5). Whereas marketers would previously need to write a separate copy for each fine-grained customer segment (e.g. attriting customers who are between the ages of 25 and 34 and love food), generative AI provides the opportunity to create this content dynamically, on the fly, given the above inputs.

Once the campaign content has been generated, the model feeds the template back into Amazon Pinpoint (6), which then sends the personalized copy to the customer (7).
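Step 6 can be as simple as writing the generated copy back into the Pinpoint message template that the winback journey references. A minimal sketch, assuming a hypothetical template named winback-template:

import boto3

pinpoint = boto3.client("pinpoint")

def publish_winback_copy(generated_subject, generated_html):
    # Overwrite the template the journey sends from; sends configured to use
    # the template's latest version pick up the new copy automatically.
    pinpoint.update_email_template(
        TemplateName="winback-template",  # hypothetical template name
        EmailTemplateRequest={
            "Subject": generated_subject,
            "HtmlPart": generated_html,
            "TextPart": "Please view this email in an HTML-capable client.",
        },
    )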

Result: Another customer is saved from attrition!

Conclusion

The landscape of generative AI is vast and ever-evolving, offering a plethora of opportunities for marketers to enhance their strategies and deliver more personalized, engaging content. AWS plays a pivotal role in this landscape, providing a comprehensive suite of services that facilitate the implementation of generative AI in marketing. From building and training AI models with Amazon SageMaker to delivering personalized messages with Amazon Pinpoint and Amazon SES, AWS provides the tools and infrastructure needed to harness the power of generative AI.

The potential of generative AI in relation to the marketer is immense. It offers the ability to automate content creation, personalize customer interactions, and derive valuable insights from data, among other benefits. However, it’s important to remember that while generative AI can automate certain aspects of marketing, it is not a replacement for human creativity and intuition. Instead, it should be viewed as a tool that can augment human capabilities and free up time for marketers to focus on strategy and creative direction.

Get started with Generative AI in marketing communications

As we conclude this exploration of generative AI and its applications in marketing, we encourage you to:

  • Brainstorm potential Generative AI use cases for your business. Consider how you can leverage generative AI to enhance your marketing strategies. This could involve automating content creation, personalizing customer interactions, or deriving insights from data.
  • Start leveraging generative AI in your marketing strategies with AWS today. AWS provides a comprehensive suite of services that make it easy to implement generative AI in your marketing strategies. By integrating these services into your workflows, you can enhance personalization, improve customer engagement, and drive better results from your campaigns.
  • Watch out for the next part in the series of integrating Generative AI into Amazon Pinpoint and SES. We will delve deeper into how you can leverage Amazon Pinpoint and SES together with generative AI to enhance your marketing campaigns. Stay tuned!

The journey into the world of generative AI is just beginning. As technology continues to evolve, so too will the opportunities for marketers to leverage AI to enhance their strategies and deliver more personalized, engaging content. We look forward to exploring this exciting frontier with you.

About the Author

Tristan (Tri) Nguyen

Tristan (Tri) Nguyen

Tristan (Tri) Nguyen is an Amazon Pinpoint and Amazon Simple Email Service Specialist Solutions Architect at AWS. At work, he specializes in technical implementation of communications services in enterprise systems and architecture/solutions design. In his spare time, he enjoys chess, rock climbing, hiking and triathlon.

How to Manage Global Sending of SMS with Amazon Pinpoint

Post Syndicated from Tyler Holmes original https://aws.amazon.com/blogs/messaging-and-targeting/how-to-manage-global-sending-of-sms-with-amazon-pinpoint/

Amazon Pinpoint has global SMS reach of 240 countries and regions around the world, enabling companies of all sizes to send SMS globally. Unlike the process of sending a personal message from your phone to someone in another country, sending Application-to-Person (A2P) messages, also known as bulk SMS, involves many more regulations and requirements that vary from country to country. In this post we will review best practices for sending global SMS and share a selection of AWS resources to help you send SMS globally.

The first thing to understand about delivering SMS around the world is that it takes a vast network of components working seamlessly together around the globe to deliver an SMS globally. The image below gives a simple example of delivering an SMS in the United States. Mobile devices are at the center of this, connecting to mobile carriers or operators, who operate the infrastructure necessary for SMS transmission. Once you hit that send button from AWS, your message travels to an Aggregator, who has connections to Operators, Partners, and/or other Aggregators. The reason for this is that there is no one vendor who delivers globally. AWS uses many Aggregators that both enable us to send globally as well as improve resiliency and deliverability of your messages. The last stop on the journey is the Short Message Service Center (SMSC), a central hub that receives, stores, and forwards text messages. The SMSC acts as a gateway, routing your message to the recipient’s carrier or operator through a series of interconnected networks, thanks to agreements between different carriers known as interconnection agreements. The entire process is facilitated by the Signaling System 7 (SS7), a set of protocols that enables the exchange of information between telecommunication networks, ensuring messages reach their intended recipients.
Diagram showing how SMS is delivered using aggregators
Every country has its own regulations and processes that you need to comply with in order to successfully deliver SMS to handsets that are registered to a particular country. There are some countries with little regulation and others that will block all SMS traffic unless it has been registered with the proper authorities.

Each country’s requirements include the origination identities (OIDs) that their networks support; these include long codes (standard phone numbers that typically have 10 or more digits), short codes (phone numbers that contain between four and seven digits), and Sender IDs (names that contain 6–11 alphanumeric characters). Each of these types of origination identities has unique benefits and drawbacks, and you will need one for each use case and country you plan on supporting. Here is a list of the countries that AWS currently sends to and the OIDs that are supported.

Pre-Planning and Country Selection
The first step to planning a global roll out of SMS is to know what countries you want to send to and what each of your use cases are. Put together a spreadsheet for each unique use case you have and the countries you plan on sending to with the below key details:

  • The volumes you expect to send to each country
  • The throughput (Also referred to as Messages per Second, MPS, Transactions per Second, or TPS) at which you expect to deliver these messages
  • Whether your use case is one-way or two-way
    • Not all countries support 2-way communication, which is the ability for the recipient to send a message back to the OID. Sender IDs do not support 2-way communication, so if you plan on using a Sender ID you will need to account for how to opt recipients out of future communications.
  • Leave a column for the Origination Identity you will use for each country
  • Leave a column for whether this country requires advanced registration
  • Leave a column for any country specific limitations or requirements such as language limitations
  • Leave a column for the estimated time it takes to register
    • This chart has estimates for common countries, but others also have lead times for procuring an OID, so please open a support case for review

Selecting an Origination Identity

Now that you have these details all in one place consult this table to determine what OIDs each country supports, and, if your use case requires it, which countries support two-way.

In countries where there are multiple options for OIDs there are several guidelines to consider when you’re deciding what type of origination identity to use:

  • Sender IDs are a great option for one-way use cases. However, they’re not available in all countries, and if you need to let your customers opt out you will have to provide a way for them to do so, since Sender IDs are one-way only.
    • In some countries (such as India and Saudi Arabia), long codes can be used to receive incoming messages, but can’t be used to send outgoing messages. You can use these inbound-only long codes to provide your recipients with a way to opt out of messages that you send using a Sender ID.
  • Short codes are a great option for two-way use cases and have the highest throughput of all OIDs.
    • While short codes have a higher throughput they also come at a much higher cost than other OIDs so weigh your cost against your use case requirements.
  • In some countries, we maintain a pool of shared origination identities. If you send messages to recipients in a particular country, but you don’t have a dedicated origination identity in that country, we make an effort to deliver your message using one of these shared identities.
    • Shared identities are unavailable in some countries, including the United States and China.
    • Shared identities cannot be 2-way so make sure you have a way of opting customers out of communication

With these in mind, consult this guide to help you decide which OID to use for each country and use case, and update your sheet as you review each country. Many of our customers opt for a phased roll-out: they enable SMS for the countries that do not require registration and can be put into production swiftly, while working through the registration process for the countries that require it and bringing those to production as they are approved. A phased approach is also preferred because it allows customers to monitor for any deliverability problems with a smaller volume than their full production workload.

Procurement and Registration of Origination Identities

In countries where registration is onerous, it is important to have a few things about your process all in one place. Some registrations are very similar in the information that they ask for, while others have special processes that you need to follow.

Once you have decided on your OIDs for each of your countries, you can begin the process of procuring them. Depending on where you plan on sending, you may need to open a case to procure them. For short codes you also need to open a case, but the process is slightly different, so review the documentation here. If you are having trouble making a decision on OIDs, you may have the option of engaging AWS Support or your Account Manager, depending on the support level you have opted for on your account.

Testing SMS Sending

Once you have procured OIDs and are ready to begin testing, it is essential that you set up a way of monitoring the events that Pinpoint generates. Pay attention to the Delivery Receipts (DLRs) that are returned into the event stream; these provide details on the success or failure of your sends. Pinpoint delivers all events via Amazon Kinesis, which needs to be enabled within each project you are using. A common solution among our customers enables the stream, delivers it to a user-specified S3 bucket, and sets up tables and views within Amazon Athena, our serverless SQL query engine. Kinesis can stream to many different destinations, including Redshift and HTTP endpoints, among many others, which gives you flexibility in how you deliver the events to their required locations. Monitoring SMS events is an important part of sending globally; these are the SMS events that are possible to receive in your stream.
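If you prefer to wire this up programmatically rather than through the console, the event stream can be enabled per project with a single call. In this sketch the project ID, Kinesis stream ARN, and IAM role ARN are placeholders for resources you have already created:

import boto3

pinpoint = boto3.client("pinpoint")

# Stream all Pinpoint events for one project to an existing Kinesis data stream.
pinpoint.put_event_stream(
    ApplicationId="<project-id>",  # placeholder Pinpoint project ID
    WriteEventStream={
        "DestinationStreamArn": "arn:aws:kinesis:us-east-1:111122223333:stream/pinpoint-events",
        # Role that grants Pinpoint permission to write to the stream.
        "RoleArn": "arn:aws:iam::111122223333:role/pinpoint-event-stream-role",
    },
)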

TPS limits can vary depending on the countries you’re sending to and the OIDs you’re using. If there’s a risk of exceeding these limits and triggering rate limiting errors, it’s crucial to devise a strategy for queuing your messages. Keep in mind, Amazon Pinpoint doesn’t offer queueing capabilities. Therefore, message queueing must be incorporated at your application level or by leveraging AWS services. For instance, you could deploy this commonly used architecture that’s adjustable according to your specific use case.

Once you have your monitoring solution in place, you are ready to begin testing sends to real destination phone numbers. Keep in mind that at this point you are likely still in the sandbox for SMS. This means you have much lower quotas for sending and can only send to verified phone numbers or the SMS simulator numbers. Pinpoint includes an SMS simulator, which you can use to send text messages and receive realistic event records for 51 commonly targeted countries. Messages sent to these destination phone numbers are not sent over the carrier network but do incur the standard outbound SMS messaging rate for the country that the simulated phone number is based in.

Best Practices for Sending
Before beginning, note that there are two common ways of sending SMS via Pinpoint. The first option is the Pinpoint API using the SendMessages action, with which you can send a direct message to as many as 100 recipients at a time. The second option is to use the SMS and Voice v2 API and the SendTextMessage action, which has more options available to configure your sends and sends to a single recipient with each call. The V2 API is the preferred way of sending, as it allows for more fine-grained control over your messages and is the API upon which new functionality will be built. Keep in mind that sending via the API does not attribute any metrics back to an endpoint unless you specify an endpoint ID in your call, so if you are using other features of Pinpoint, such as campaigns or journeys, or sending via other channels, such as email, you will need to consider your strategy for measuring success and how you will tie all of your communication efforts together.
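As a minimal sketch of a V2 send with boto3 (phone numbers are placeholders; the origination identity must be a number, Sender ID, or pool you own):

import boto3

sms_voice = boto3.client("pinpoint-sms-voice-v2")

response = sms_voice.send_text_message(
    DestinationPhoneNumber="+12065550142",  # placeholder recipient in E.164 format
    OriginationIdentity="+18445550123",     # a phone number, Sender ID, or pool ID you own
    MessageBody="Your verification code is 123456",
    MessageType="TRANSACTIONAL",            # or "PROMOTIONAL"; affects routing in some countries
)
print(response["MessageId"])  # correlate this ID with events in your stream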

When sending SMS, Pinpoint includes logic for selecting the best OID to send from based on the country code. If there are multiple OIDs available for a particular country, Pinpoint will default to the highest-throughput OID available in your account and Region. If there are no OIDs specific to the destination country, Pinpoint will default to a Sender ID or to a shared OID owned by Pinpoint, in that order, if the country allows these OIDs to be used. Given this functionality, the best practice for sending SMS is not to specify the OID for a specific country and to allow Pinpoint to select. You can restrict Pinpoint to send only to those countries that you have OIDs for by using Pools and turning off shared routes; more on this below.

If you have multiple use cases and need to specify the correct OID for each, this is where the V2 API is useful. OIDs can be attached to Pools, which can be configured to serve a particular use case, and the pool can be specified in your SendTextMessage call. Sending using a PoolID and allowing Pinpoint to select the right OID from that pool for the destination phone number simplifies your sending process. This blogpost details the process for creating Pools and using them to send SMS.

As mentioned above, Pools also serve an additional use case, which is to limit message sending to specific countries. Some countries allow messages without an OID; if you don’t modify your settings to disable this feature, Pinpoint will attempt to deliver messages to these countries even if you don’t have an explicit OID for them. Restricting SMS sends to only those countries that you have OIDs for can be accomplished by using Pools and setting “SharedRoutesEnabled” to false with the UpdatePool action. Once configured, you will receive an error back if you attempt to send to a destination phone number that you do not have an OID for in the Pool. This configuration gives you the ability to control your costs while simplifying your process.
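Turning off shared routes is a single call against the V2 API; the pool ID below is a placeholder:

import boto3

sms_voice = boto3.client("pinpoint-sms-voice-v2")

# With shared routes disabled, a send to a country with no OID in the pool
# returns an error instead of attempting delivery over shared identities.
sms_voice.update_pool(
    PoolId="pool-12345678901234567890123456789012",  # placeholder pool ID
    SharedRoutesEnabled=False,
)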

Managing Opt-Outs

As we have seen, managing SMS in an environment of increasing global regulation is challenging. An area of importance that needs to be configured is how you plan on managing the ability for recipients to opt out of your communications. Pinpoint can automatically opt your customers out of SMS communications using predefined keywords such as “stop” or “unsubscribe.” However, this makes for an account-wide opt-out, which is not ideal for customers that have multiple use cases, such as OTP and marketing communications. This blogpost details the process of managing opt-outs for multiple use cases. The configuration is enabled through the V2 API and is another reason to standardize your process on this API.
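With the V2 API, self-managed opt-outs are modeled as opt-out lists that you associate with your numbers or pools, keeping one list per use case. A brief sketch, with hypothetical list names:

import boto3

sms_voice = boto3.client("pinpoint-sms-voice-v2")

# One list per use case, e.g. marketing vs. OTP (names are hypothetical).
sms_voice.create_opt_out_list(OptOutListName="marketing-opt-outs")

# When a recipient replies STOP to a marketing message, record their number;
# numbers on a list associated with an OID are skipped on future sends.
sms_voice.put_opted_out_number(
    OptOutListName="marketing-opt-outs",
    OptedOutNumber="+12065550142",  # placeholder
)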

Monitoring Sending

The last step in ensuring success for SMS sending is having a solid platform for monitoring your sending. SMS is not a guaranteed-delivery channel. You will always receive an event for a successful send in the event stream, but there is no guarantee of a return status event if a DLR from a carrier is not sent. A list of SMS events and possible statuses can be found here.

The first event you should see returned when watching the event stream for an SMS send is the “PENDING” event. This means we’ve sent the message to the carrier, where it’s buffered, and we’re waiting for the carrier to return a status message. There are no status messages between the “PENDING” state and whatever state comes next, so if the carrier is retrying, the message simply stays in PENDING and no additional events are created. If a message is successfully delivered and a DLR is sent back from the carrier, a new event is generated with a status of “SUCCESSFUL/DELIVERED.”

Make sure to review all of the possible values for the record_status attribute so that you are aware of the issues with your sending that can arise. For example, statuses such as “BLOCKED,” “SPAM,” and “CARRIER_BLOCKED” can indicate systemic issues that should be investigated.

Updates sent from a carrier via a DLR can be delayed for up to 72 hours or never sent at all, depending on the carrier and the country being sent to. Should you require a higher level of reliability, you need to establish business logic around monitoring your SMS messages. If messages remain in a PENDING status longer than your business requirements permit, you must decide how to handle them: consider whether missed or duplicated messages are acceptable, or whether it’s preferable to retry messages that are stuck in pending. The following is an example architecture for failed-SMS retries that you can adjust to your needs.
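As a starting point for that business logic, the sketch below is a Lambda handler that consumes the Pinpoint event stream from Kinesis and flags records whose status suggests a systemic problem. Field names follow the SMS event schema linked above; the alerting side is left as a stub:

import base64
import json

# Statuses that usually warrant investigation (see the record_status docs).
PROBLEM_STATUSES = {"BLOCKED", "SPAM", "CARRIER_BLOCKED", "INVALID", "UNREACHABLE"}

def lambda_handler(event, context):
    for record in event["Records"]:
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
        if not payload.get("event_type", "").startswith("_SMS."):
            continue  # ignore non-SMS events in a shared stream
        attributes = payload.get("attributes", {})
        status = attributes.get("record_status", "").upper()
        if status in PROBLEM_STATUSES:
            # Stub: push to an alerting queue, write to a retry table, etc.
            print("Investigate message {}: status {}".format(
                attributes.get("message_id", "unknown"), status))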

Conclusion

This post covers the general process for getting started with global SMS, but as you have learned, each country presents a different challenge and the regulatory environment is constantly evolving. It’s important to make sure that you are receiving messages from AWS that detail new regulations, new feature launches, and other major announcements, so you can continually improve your process and make sure your SMS messages are delivered at the highest rate possible.

Take the time to plan out your approach, follow the steps outlined in this blog, and take advantage of any resources available to you within your support tier.

  • Decide what origination IDs you will need here
  • Review the documentation for the V2 SMS and Voice API here
  • Review the Pinpoint API and SendMessages here
  • Check out the support tiers comparison here

Resources:
https://docs.aws.amazon.com/pinpoint/latest/userguide/channels-sms-countries.html
https://aws.amazon.com/blogs/messaging-and-targeting/how-to-utilise-amazon-pinpoint-to-retry-unsuccessful-sms-delivery/
https://datatracker.ietf.org/doc/html/draft-wilde-sms-uri-20#section-4
https://docs.aws.amazon.com/pinpoint/latest/developerguide/event-streams-data-sms.html
https://docs.aws.amazon.com/pinpoint/latest/userguide/channels-sms-limitations-opt-out.html
https://docs.aws.amazon.com/pinpoint/latest/userguide/channels-sms-simulator.html

How to send geofenced marketing messages using Amazon Pinpoint

Post Syndicated from Zach Elliott original https://aws.amazon.com/blogs/messaging-and-targeting/send-geofenced-marketing-messages-using-amazon-pinpoint/

Introduction

Geofencing, which creates a virtual geographical boundary that triggers a marketing action to a mobile device when a user enters or exits that boundary, can be used in marketing messages to drive more traffic and increase conversions. Amazon Pinpoint, AWS’ multichannel communication tool, can be used to create mobile notifications using geofencing technology, so customers receive notifications about a business when they’re close by that physical location.

Ways retailers can use geofencing:

There are a number of different use cases that retail or location-based businesses can use geofencing to drive customer conversions:

  1. Target the customer with real-time offers and promotions when the customer is near the store: Detecting and establishing an interaction with the customer while in the store improves the customer experience. Using geofencing, retailers can detect a customer’s presence and send coupon or promotional notifications.
  2. Improve product search in the store: As the consumer enters the geofenced store, activate the product search for that store to help the consumer search and navigate easily within it.
  3. Get more information about the customer in the store: Retailers can collect more accurate data about consumer behavior inside the store by recording the interaction between consumers and product search, and by using geofencing and position to calculate dwell time inside the store or how long the consumer waits in the queue.

In this blog we will talk about how you can use Amazon Location Service to trigger a notification using Amazon Pinpoint when a consumer enters a geofenced store.

Architecture Overview

Fig. 1: Geofencing and Pinpoint – Sample Architecture

Figure 1 depicts the solution architecture and resources deployed by the AWS CloudFormation Template, described in more detail in later sections. In the solution workflow:

  1.  Store Management defines a Geofence around store locations they wish to enroll using Amazon Location Service Geofencing and circular geofences.
  2. A customer who has opted into location tracking using the app will update an Amazon Location Service Tracker Resource. This tracker will be evaluated against the store geofences.
  3. If a geofence ENTER event is triggered, a message is sent to Amazon EventBridge.
  4. EventBridge will trigger an AWS Lambda function.
  5. The Lambda function looks up the Store Information in an Amazon DynamoDB table that matches the geofence ID in order to enrich the email.
  6. Event is sent to a Pinpoint Journey with information from the Geofence event as well as store info.
  7. A personalized email is sent to the customer via Pinpoint. A sketch of the enrichment function appears below.
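Steps 5 and 6 are a natural fit for a small Lambda handler. The sketch below shows what the enrichment function might look like; the table name, key schema, and the assumption that device IDs double as Pinpoint endpoint IDs are all illustrative, and the function deployed by the CloudFormation template may differ:

import boto3

dynamodb = boto3.resource("dynamodb")
pinpoint = boto3.client("pinpoint")

STORE_TABLE = "StoreInformation"      # hypothetical table name
PROJECT_ID = "<pinpoint-project-id>"  # placeholder

def lambda_handler(event, context):
    # EventBridge delivers the Amazon Location geofence event in 'detail'.
    detail = event["detail"]

    # Step 5: enrich with store details keyed by geofence ID.
    store = dynamodb.Table(STORE_TABLE).get_item(
        Key={"geofenceId": detail["GeofenceId"]}
    )["Item"]

    # Step 6: report a custom event against the endpoint so the journey's
    # "geofence enter" entry condition fires.
    pinpoint.put_events(
        ApplicationId=PROJECT_ID,
        EventsRequest={"BatchItem": {
            detail["DeviceId"]: {  # assumes device ID doubles as endpoint ID
                "Endpoint": {},
                "Events": {"geofence-event": {
                    "EventType": "geofence enter",
                    "Timestamp": detail["SampleTime"],
                    "Attributes": {
                        "StoreName": store["storeName"],
                        "StoreAddress": store["storeAddress"],
                    },
                }},
            },
        }},
    )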

Configuring AWS Cloudformation

To deploy the Amazon Location Service resources as well as EventBridge, DynamoDB, and Lambda, we have created an AWS Cloudformation Template.

Use this link to launch the CloudFormation stack in the us-west-2 Region. Select the checkbox next to “I acknowledge that AWS CloudFormation might create IAM resources,” and then click Create stack.

Fig. 2: AWS CloudFormation Console showing stack options.

Once the stack is complete, we can begin configuring Pinpoint.

Configuring Pinpoint

Our project was created for us via the CloudFormation template, but we still need to configure some items in Pinpoint. First, we’ll set up our email identity to send and receive messages from; for the purposes of this blog, you’ll use the same email address for sending and receiving the email, but in a production environment, your sending identity could either be a specific email address you’ve verified for messaging, or an entire email domain you’ve verified via DNS.

Configuring email channel

Adding an email

  1. On the left-side Pinpoint menu, expand the Email option and choose Email identities
  2. Select Verify email identity
  3. Enter an email address you have access to for the confirmation step
  4. Select Verify email address
Fig. 3: AWS Console showing email verification

Fig. 4: Email verification options

Now, check your inbox for a verification email. It should look something like this:

Fig. 5: Email Verification message from Amazon Pinpoint

Click the link to verify your email address. Now we can begin sending and receiving messages at this address.

Now that we have a verified email, we can configure the email channel.

Configuring the email channel

  1. On the left-side Pinpoint menu, navigate to All projects and select CoffeeShop
  2. Navigate to Settings and select Email
  3. Select Edit next to Identity details
  4. Select the checkbox for Enable the email channel for this project
  5. Select Use an existing email address and select the address you verified in the previous step.
  6. Select Save
Fig. 6: Configuring the email channel

Configuring email template

Next, we need to define what our email looks like that is sent to our customers when they enter a geofence. We’ve provided HTML code for a basic Coffee Shop template here

Configure email template

  1. On the left-side Pinpoint menu, navigate to Message templates, select Create template
  2. Name the template CoffeeShopGeoTarget and set the subject to “We haven’t seen you in a while”
  3. Paste the contents of the HTML template into the Message field.
  4. Select Create
Fig. 7: Configuring the email template

You can see multiple attributes are used in the template. These attributes come from our segment in the case of FirstName, and DynamoDB in the case of the store name and address.

Configuring email segment

Now we need to define who we are going to send an email to. For this, we need to set up our segment within Pinpoint. We’ve provided a sample segment file here. Download this file and open it in a text editor.

Fig. 8: Configuring the email segment

Replace all the values with your own information. The email needs to be the same email we verified in an earlier step. Create a UserID for the user that can be used to uniquely identify them. Leave ChannelType as “EMAIL” to indicate we are using the email channel in Pinpoint, and leave OptOut as “NONE,” which indicates the user would like to receive all communications and has not opted out of receiving notifications. Once the information is edited, save the file.
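If the sample file is unavailable, an import file for this walkthrough could look like the two lines below. The header names follow Pinpoint’s CSV import format, and the values are placeholders to replace with your own:

ChannelType,Address,Id,OptOut,User.UserId,User.UserAttributes.FirstName
EMAIL,you@example.com,endpoint-1,NONE,user-1,Alex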

Importing the segment

  1. On the left-side Pinpoint menu, navigate to All projects, and select your CoffeeShop Project
  2. Navigate to Segments and select Import a segment
  3. Drag the downloaded csv file into the Drop files here box.
  4. Select Create Segment
Fig. 9: Importing a segment

Configuring Journey

In this post, we will be setting up a very simple journey that sends an email anytime a user enters a geofence. If we wanted to go a step further, we could add additional activities later in the journey, such as determining whether the customer purchased something after receiving the email, and sending them targeted emails based on the drink they ordered.
Now that we’ve added the email channel, we can set up our journey.

Configuring journey entry

  1. On the left-side Pinpoint menu, navigate to the CoffeeShop Project and select Journeys
  2. Select Create journey
  3. Name the journey “CoffeeShopGeoTarget”
  4. Set the entry condition to “geofence enter”
  5. Select Save
Fig. 10: Journey event configuration

Configuring journey activity

  1. Select the Add activity icon
  2. Select Send an email from the dropdown
  3. Choose the email template we created earlier
  4. Enter the verified email we configured earlier
  5. Select Save
Fig. 11: Journey email destination configuration

Reviewing Journey

  1. Select Review
  2. Select Mark as reviewed
  3. Select Publish
Fig. 12: Reviewing the Journey

Once we publish our journey, a 5-minute timer will start, which gives us time to set up our tracking environment.

Configuring Amazon Location Resources

Now that we’ve configured Pinpoint to send geotargeted emails, we need to set up our geofences and emulate a person passing near our coffee shops. To do that, we will use the AWS CLI and AWS CloudShell.

To open AWS CloudShell, select it in the upper right near the region selection.

Fig. 13: Location of AWS CloudShell in the AWS Console

AWS CloudShell will now open in the bottom half of the AWS Console; note it may take up to a minute on first launch. First, we’ll create our geofences. For this, we will use circular geofences around a point location. In this case, we will create two geofences: one for a coffee shop at Amazon’s Doppler office, and one for a shop at Amazon’s Nitro North office. These correlate with the DynamoDB store information table.

aws location put-geofence --collection-name StoreCollection --geofence-id store_1508 --geometry 'Circle={Center=[-122.33826293063228, 47.61530011310656], Radius=100}'

Successful Geofence creation will create output similar to the below:

{
    "CreateTime": "2023-04-21T19:31:57.807000+00:00",
    "GeofenceId": "store_1508",
    "UpdateTime": "2023-04-21T19:31:57.807000+00:00"
}

Next we create our second geofence:

aws location put-geofence --collection-name StoreCollection --geofence-id store_1509 --geometry 'Circle={Center=[-122.34051934099395, 47.61751544952795], Radius=100}'

Successful Geofence creation will create output similar to the below:

{
    "CreateTime": "2023-04-21T19:32:41.980000+00:00",
    "GeofenceId": "store_1509",
    "UpdateTime": "2023-04-21T19:32:41.980000+00:00"
}

Now that our geofences are created, we can emulate a person walking by and triggering a geofence. We will do this using Amazon Location Service Trackers. In CloudShell, enter the following command:

aws location batch-update-device-position --tracker-name CustomerDevices --updates Accuracy={Horizontal=0},DeviceId=111,Position=-122.33811005706218,47.61541094771129,SampleTime=$(date +%s)

When this command is issued, a geofence is then evaluated which will trigger an event sent to Amazon EventBridge. This event then triggers a Lambda, which creates an event with Pinpoint. This triggers the Journey, which sends an email.
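The same position update can also be scripted with boto3, which is handy for emulating a device moving through several points; the coordinates and device ID mirror the CLI example above:

import time
import boto3

location = boto3.client("location")

# A short walk past the store_1508 geofence (illustrative coordinates).
path = [
    (-122.33900, 47.61500),
    (-122.33811, 47.61541),  # inside the 100 m geofence
    (-122.33700, 47.61580),
]

for lon, lat in path:
    location.batch_update_device_position(
        TrackerName="CustomerDevices",
        Updates=[{
            "DeviceId": "111",
            "Position": [lon, lat],
            "SampleTime": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        }],
    )
    time.sleep(5)  # space samples out so ENTER/EXIT evaluation is distinct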

Now check your email; you should see a customized email with your name and the store you were close to. Note that because we are not using domain verification, you may receive a warning on the email message. See our documentation on how to use domain verification.

Fig. 14: Email received from Amazon Pinpoint

Next Steps

For this blog, we used the default Journey configuration. However, we can further optimize our Journey by following Tips and best practices for journeys. You can also set up push notifications or in-app notifications to further optimize the customer experience to catch them in the moment they walk by, instead of when they may check their email next. You can read more about push notifications here.

Clean up

Deleting CloudFormation template

  1. In the AWS Console, navigate to the AWS CloudFormation console. Select the PinpointGeotarget stack
  2. Select Delete Stack

Deleting Pinpoint resources

  1. In the AWS Console, navigate to the Pinpoint Console
  2. Select Message templates
  3. Select the CoffeeShop template
  4. Select Delete then confirm you wish to delete it

Removing email identity

  1. In the AWS Console, navigate to the Pinpoint Console
  2. Navigate to Email, and select Email identities
  3. Select the radio button next to the verified email you configured
  4. Select Remove email identity
  5. Type Delete to confirm the removal

Conclusion

In this post, we explored how you can detect the presence of a customer whenever they pass near a geofenced physical store using Amazon Location Service: Amazon EventBridge receives the event and triggers an AWS Lambda function, which in turn triggers a journey in Amazon Pinpoint to send a notification to the customer with a coupon.

Furthermore, integrating this solution with your customer data platform and with Amazon Personalize will help you personalize the promotions and vouchers to fit the tastes and tendencies of your customers.

Zach Elliott works as a Solutions Architect focusing on Amazon Location Service at AWS. He is passionate about helping customers build geospatial solutions on AWS. He is also part of the IoT Subject Matter Expert community at AWS and loves helping customers develop unique IoT-based solutions.

Anshul Srivastava Headshot

With an illustrious track record as a technology thought-leader, Anshul joined AWS in 2016 and is the EMEA technology leader for retail. He is responsible for defining and executing the company’s retail technology strategy, which includes building retail-focused solutions with services like Amazon Forecast and Amazon Personalize, as well as experiences like Frictionless Shopping with AI/ML and IoT services from AWS. Anshul also works very closely with AWS global retail customers to help transform their businesses with cutting-edge AWS technologies.

How to create a WhatsApp custom channel with Amazon Pinpoint

Post Syndicated from Sparsh Wadhwa original https://aws.amazon.com/blogs/messaging-and-targeting/whatsapp-with-amazon-pinpoint/

How to add WhatsApp as an Amazon Pinpoint Custom Channel

WhatsApp now reports over 2 billion users in 180 countries, making it a prime place for businesses to communicate with their customers. In addition to native channels like SMS, push notifications, and email, Amazon Pinpoint’s custom channels enable you to extend the capabilities of Amazon Pinpoint and send messages to customers through any API-enabled service, like WhatsApp. With these new channels, you have full control over the message delivery to the endpoints associated with each custom channel campaign.

In this post, we provide a quick overview of the features and capabilities of using a custom channel as part of campaigns. We also provide a blueprint that you can use to build your first sandbox integration with WhatsApp as a custom channel.

Note: WhatsApp is a third-party service subject to additional terms and charges. Amazon Web Services isn’t responsible for any third-party service that you use to send messages with custom channels. 

How to add WhatsApp as a custom channel:

Prerequisites

Before creating your new custom channel, you must have the integration ready and an AWS Identity and Access Management (IAM) user created with the necessary permissions. First set up the following:

  1. Create an IAM administrator. For more information, see Creating your first IAM admin user and group in the IAM User Guide. Specify the credentials of this IAM User when you set up the AWS Command Line Interface (CLI).
  2. Configure the AWS CLI. For more information about setting up the AWS CLI, see Configuring the AWS CLI.
  3. Follow the steps at the Meta documentation – https://developers.facebook.com/docs/whatsapp/cloud-api/get-started – to register as a Meta Developer and get started with the WhatsApp Business Cloud API provided directly by Meta. By completing steps 1 and 2 of that documentation, you should be able to:
    1. Register as a Meta Developer,
    2. Claim a test phone number for sending messages on WhatsApp,
    3. Verify recipient phone numbers (since you’re currently in the Sandbox, you can send WhatsApp messages only to verified phone numbers; you can verify up to 5 phone numbers),
    4. and finally send a test message on WhatsApp using a provided sample POST request. Remember to review the terms of use for WhatsApp.
Screenshot of WhatsApp API in Meta console
  4. The test message sent above used temporary Access Token credentials, which expire in 23 hours. In order to get a permanent Access Token, generate a ‘System User Access Token’ by following the steps mentioned here – https://developers.facebook.com/docs/whatsapp/business-management-api/get-started/

Screenshot of WhatsApp test message sent from Meta Console.

Procedure:

Step 1: Create an Amazon Pinpoint project.

In this section, you create and configure a project in Amazon Pinpoint. Later, you use this data to create segments and campaigns.

To set up the Amazon Pinpoint project

  1. Sign in to the Amazon Pinpoint console at http://console.aws.amazon.com/pinpoint/.
  2. On the All projects page, choose Create a project. Enter a name for the project, and then choose Create.
  3. On the Configure features page, under SMS and Voice, choose Configure.
  4. Under General settings, select Enable the SMS channel for this project, and then choose Save changes.
  5. In the navigation pane, under Settings, choose General settings. In the Project details section, copy the value under Project ID. You need this value for later.

Step 2: Create an endpoint.

In Amazon Pinpoint, an endpoint represents a specific method of contacting a customer. This could be their email address (for email messages) or their phone number (for SMS messages) or a custom endpoint type. Endpoints can also contain custom attributes, and you can associate multiple endpoints with a single user. In this step, we create an SMS endpoint that is used to send a WhatsApp message.

To create an endpoint using AWS CLI, at the command line, enter the following command:

aws pinpoint update-endpoint --application-id <project-id> \
--endpoint-id 12456 --endpoint-request "Address='<mobile-number>', \
ChannelType='SMS',Attributes={username=['testUser'],integrations=['WhatsApp']}"

In the preceding example, replace <project-id> with the Amazon Pinpoint Project ID that you copied in step 1.

Replace <mobile-number> with your phone number with country code (for example, 12065550142). For the WhatsApp integration to work, you must use a mobile number that is registered on WhatsApp and already verified on the Meta Developer Portal (since your Meta account is currently in sandbox).

Note: The WhatsApp Business Cloud message API doesn’t require the ‘+’ symbol in front of the phone number. So if you plan to use this segment for both SMS and the custom channel, you may configure the phone number in E.164 format (for example, +12065550142) and remove the ‘+’ symbol in the AWS Lambda function code that we create in step 4.

Step 3: Storing WHATSAPP_AUTH_TOKEN, and WHATSAPP_FROM_NUMBER_ID in AWS Secrets Manager.

We can securely store the WhatsApp Auth Token and WhatsApp From Number Id which we have received in the previous steps in AWS Secrets Manager.

  1. Open the AWS Secrets Manager console at https://us-east-1.console.aws.amazon.com/secretsmanager/listsecrets?region=us-east-1 (in the required AWS region), and then click on “Store a new Secret”.
  2. Under “Secret Type”, choose Other type of secret.
  3. Under Key/value Pair, add the following Key-Value pairs:
    1. WHATSAPP_AUTH_TOKEN: <Pass the Auth Token generated previously>
    2. WHATSAPP_FROM_NUMBER_ID: <Pass the From Number Id>.
AWS Secrets Manager console screenshot storing the WHATSAPP_AUTH_TOKEN and WHATSAPP_FROM_NUMBER_ID secrets.
  4. Click Next
  5. Provide the Secret name “MetaWhatsappCreds” and provide a suitable description.
  6. Click Next twice and finally click “Store” button.

Step 4: Create an AWS Lambda.

You must create an AWS Lambda function containing the code that calls the Meta WhatsApp Business Cloud API and sends a message to the endpoint.

  1. Open the AWS Lambda console at http://console.aws.amazon.com/AWSLambda, and then click on Create Function.
  2. Choose Author from scratch.
  3. For Function Name, enter ‘WhatsAppTest’.
  4. For Runtime, select Python 3.9.
  5. Click Create Function.
  6. For the function code, copy the following and paste into the code editor in your AWS Lambda function:
import base64
import json
import os
import urllib
from urllib import request, parse
import boto3
from botocore.exceptions import ClientError

WhatsApp_messageAPI_URL = "https://graph.facebook.com/v15.0/" 

def get_secret():

    secret_name = "MetaWhatsappCreds"
    region_name = "us-east-1"
    # Pass the required AWS Region in which Secret is stored

    # Create a Secrets Manager client
    session = boto3.session.Session()
    client = session.client(
        service_name='secretsmanager',
        region_name=region_name
    )

    try:
        get_secret_value_response = client.get_secret_value(
            SecretId=secret_name
        )
    except ClientError as e:
        # For a list of exceptions thrown, see
        # https://docs.aws.amazon.com/secretsmanager/latest/apireference/API_GetSecretValue.html
        raise e

    # Decrypts secret using the associated KMS key.
    secret = get_secret_value_response['SecretString']
    return secret
   
def lambda_handler(event, context):
    credentials = get_secret()
    WhatsApp_AUTH_TOKEN = json.loads(credentials)["WHATSAPP_AUTH_TOKEN"]
    WhatsApp_FROM_NUMBER_ID = json.loads(credentials)["WHATSAPP_FROM_NUMBER_ID"]
    if not WhatsApp_AUTH_TOKEN:
        return "Unable to access WhatsApp Auth Token."
    elif not WhatsApp_FROM_NUMBER_ID:
        return "Unable to access WhatsApp From Number Id."
    # Let's print out the event for our logs
    print("Received event: {}".format(event))

    populated_url = WhatsApp_messageAPI_URL + WhatsApp_FROM_NUMBER_ID + "/messages"

    for key in event['Endpoints'].keys(): 
        to_number = event['Endpoints'][key]['Address']
        # Example body and using an attribute from the endpoint
        username = event['Endpoints'][key]['Attributes']['username'][0]
        body = "Hello {}, here is your weekly 10% discount coupon: SAVE10".format(username)
        post_params = {"messaging_product": "whatsapp", "to": to_number, "recipient_type": "individual", "type": "text", "text": {"preview_url": False, "body": body}}
        print(post_params)
        # the WhatsApp Business Cloud API expects a JSON request body
        data = json.dumps(post_params).encode('utf-8')
        req = request.Request(populated_url)
        # the stored token must include the "Bearer " prefix the API expects
        req.add_header("Authorization", WhatsApp_AUTH_TOKEN)
        req.add_header("Content-Type", "application/json")
        try:
            # perform HTTP POST request
            with request.urlopen(req, data) as f:
                print("WhatsApp returned {}".format(str(f.read().decode('utf-8')))) 
        except Exception as e:
            # something went wrong!
            print(e)

    return "WhatsApp messages sent successfully"
  7. Add permissions to your AWS Lambda function to allow Amazon Pinpoint to invoke it using the AWS CLI:

aws lambda add-permission \
--function-name WhatsAppTest \
--statement-id sid \
--action lambda:InvokeFunction \
--principal pinpoint.us-east-1.amazonaws.com \
--source-arn arn:aws:mobiletargeting:us-east-1:<account-id>:apps/<Pinpoint ProjectID>/*

Step 5: Create a segment and campaign in Amazon Pinpoint.

Now that we have an endpoint, we must add it to a segment so that we can use it within a campaign. By sending a campaign, we can verify that our Amazon Pinpoint project is configured correctly, and that we created the endpoint correctly.

To create the segment and campaign:

    1. Open the Amazon Pinpoint console at http://console.aws.amazon.com/pinpoint, and then choose the project that you created in step 1.
    2. In the navigation pane, choose Segments, and then choose Create a segment.
    3. Name the segment “WhatsAppTest.” Under Segment group 1, include all audiences in the Base Segment and add the following Criteria:
    4. For Choose an endpoint attribute, choose integrations, then for values, choose WhatsApp.
    5. Confirm that the Segment estimate section shows that there is one eligible endpoint, and then choose Create segment.
    6. In the navigation pane, choose Campaigns, and then choose Create a campaign.
    7. Name the campaign “WhatsAppTest.” Under Choose a channel for this campaign, choose Custom, and then choose Next.
    8. On the Choose a segment page, choose the “WhatsAppTest” segment that you just created, and then choose Next.
    9. In Create your message, choose the AWS Lambda function we just created, ‘WhatsAppTest.’ Select SMS in the Endpoint Options. On the Choose when to send the campaign page, keep all of the default values, and then choose Next. On the Review and launch page, choose Launch campaign.


Within a few seconds, you should receive a WhatsApp message at the phone number that you specified when you created the endpoint and verified on the Meta Developer portal.

Your custom channel solution for WhatsApp is now ready to use. Before you go further, review and upgrade your WhatsApp sandbox. This post is simply a walkthrough to show you how quickly you can prototype and start sending WhatsApp messages with Pinpoint and Meta. For production usage, make sure you review all of the additional terms and charges. Start here to understand more: https://developers.facebook.com/docs/whatsapp/cloud-api/get-started

As a next step, you can claim a phone number for sending WhatsApp messages in production. You can also configure a webhook to receive WhatsApp message delivery status and other WhatsApp-supported events.

There are several ways you can make this solution your own.

  • Customize your messaging: This post used an example message sent to your endpoints from within the AWS Lambda function. You can customize that message to fit your needs. See the various ways in which you can send WhatsApp messages here.
  • Expand endpoints in your application: This post used only one endpoint for the integration. You can use your WhatsApp integration with new endpoints by importing a segment that can be used with a new campaign (see the sketch after this list). Learn how to import a segment here: https://docs.aws.amazon.com/pinpoint/latest/userguide/segments-importing.html
  • Use new integrations: This post focused on integrating your custom channel with WhatsApp, but there are many other integrations possible using AWS Lambda.
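For instance, here is a hedged boto3 sketch of importing endpoints from a CSV file in Amazon S3 as a new segment (the bucket, file name, role ARN, and segment name are placeholders):

import boto3

pinpoint = boto3.client('pinpoint')
pinpoint.create_import_job(
    ApplicationId='<project-id>',
    ImportJobRequest={
        'DefineSegment': True,      # create a segment from the imported endpoints
        'Format': 'CSV',
        'RegisterEndpoints': True,  # register the endpoints in the project
        'RoleArn': 'arn:aws:iam::<account-id>:role/PinpointSegmentImport',
        'S3Url': 's3://<your-bucket>/whatsapp-endpoints.csv',
        'SegmentName': 'WhatsAppImported',
    },
)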

Amazon Pinpoint is a flexible and scalable outbound and inbound marketing communications service. Learn more here: https://aws.amazon.com/pinpoint/

Send WhatsApp messages via Amazon Pinpoint

Post Syndicated from Pavlos Ioannou Katidis original https://aws.amazon.com/blogs/messaging-and-targeting/send-whatsapp-messages-via-amazon-pinpoint/

In this blog you will deploy a solution that integrates Amazon Pinpoint with WhatsApp for outbound and inbound messages.

Amazon Pinpoint is a multichannel customer engagement platform allowing you to engage with your customers across 6 different channels (push notifications, email, SMS, voice, in-app messages and custom channel). Using Amazon Pinpoint’s custom channel you can extend its capabilities via a webhook or AWS Lambda function. Among many other possibilities, you can use custom channels to send messages to your customers through any API-enabled service, for example WhatsApp.

According to Statista, WhatsApp is one of the most used apps in the world and the most popular messaging app in over 100 countries. It reached 2.3 billion active users in 2022, and in January 2022 it was the most downloaded chat and messaging app worldwide, amassing approximately 40.6 million downloads across the Apple App Store and the Google Play Store.

Note: WhatsApp is a third-party service subject to additional terms and charges. Amazon Web Services isn’t responsible for any third-party service that you use to send messages with custom channels.

Solution & Architecture

An integration between Amazon Pinpoint and WhatsApp can be achieved for both outbound and inbound messages. The next sections dive deeper into the architecture for each direction. The solution uses the Amazon Pinpoint custom channel, AWS Lambda, Amazon API Gateway, AWS CloudFormation and AWS Secrets Manager.

Outbound messages

For outbound messages, Amazon Pinpoint integrates with WhatsApp via its custom channel, allowing you to send WhatsApp messages using Pinpoint campaigns and journeys. Specifically, Pinpoint invokes an AWS Lambda function that performs an API call to WhatsApp. The API call contains the WhatsApp access token, the customer's mobile number, and the WhatsApp message template name.


  1. An Amazon Pinpoint campaign or journey using endpoint type CUSTOM invokes an AWS Lambda function. The payload, along with the endpoint data, should contain the WhatsApp message template name in the Custom Data field (see the sketch after this list).
  2. The AWS Lambda function obtains the WhatsApp access token from AWS Secrets Manager and performs a POST API call to the WhatsApp API.
  3. The WhatsApp message gets delivered to the customer.
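For reference, the following is a simplified sketch of the payload shape the AWS Lambda function receives from the custom channel (the values are illustrative, and the actual event contains additional fields):

event = {
    'Data': 'hello_world',   # the Custom Data field: here, the template name
    'Endpoints': {
        '<endpoint-id>': {
            'ChannelType': 'CUSTOM',
            'Address': '12065550142',
            'Attributes': {'username': ['testUser']},
        }
    },
}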

Inbound messages

For inbound messages, WhatsApp requires a callback URL. This solution uses Amazon API Gateway to create the callback URL and AWS Lambda to authorize and process inbound messages.


  1. Customer sends a message to your WhatsApp number.
  2. WhatsApp makes a GET API call to the Amazon API Gateway endpoint for verification purposes. All subsequent calls containing the customers' messages are POST.
  3. If the API call method is GET, the AWS Lambda function checks whether the verify token matches the one stored as an AWS Lambda environment variable. If it matches, it returns the hub challenge code that WhatsApp expects in order to verify the connection (see the sketch below). For POST API calls, the AWS Lambda function loops through the customer messages and retrieves the customer's phone number, timestamp, message_id and message_body. For each message processed, it performs an API call to WhatsApp to mark the message as read.
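The following is a minimal sketch of that verification logic, assuming an Amazon API Gateway proxy integration and an environment variable named VERIFY_TOKEN (the variable name is illustrative):

import os

def lambda_handler(event, context):
    params = event.get('queryStringParameters') or {}
    if event['httpMethod'] == 'GET':
        # WhatsApp sends hub.verify_token and hub.challenge on verification calls
        if params.get('hub.verify_token') == os.environ['VERIFY_TOKEN']:
            return {'statusCode': 200, 'body': params.get('hub.challenge', '')}
        return {'statusCode': 403, 'body': 'Verification token mismatch'}
    # POST calls carry the inbound messages (processing omitted in this sketch)
    return {'statusCode': 200, 'body': 'OK'}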

Considerations

  • Message delivery/engagement events aren’t being recorded.
  • Messages sent aren't personalized, and they currently use message templates hosted by WhatsApp.
  • It is recommended to use endpoint type CUSTOM and not SMS for the following reasons:
    • WhatsApp's phone number format doesn't contain the +, unlike the Pinpoint SMS address format. If you decide to use the endpoint type SMS, you will need to process the endpoint Address by removing the +.
    • Using the endpoint type SMS forces you to send WhatsApp messages at the same throughput (messages per second) as your Pinpoint SMS channel.

Prerequisites

  1. AWS account.
  2. An Amazon Pinpoint project – How to create an Amazon Pinpoint project.
  3. An Amazon Pinpoint CUSTOM endpoint whose address is a mobile number associated with a WhatsApp account. See an example CUSTOM endpoint in a CSV here (and the boto3 sketch after this list).
  4. A Meta (Facebook) developer account, for more details please go to the Meta for Developers console.
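If you prefer to create the CUSTOM endpoint programmatically instead of importing a CSV, a hedged boto3 sketch (the IDs and address are placeholders; note the address has no leading '+'):

import boto3

pinpoint = boto3.client('pinpoint')
pinpoint.update_endpoint(
    ApplicationId='<project-id>',
    EndpointId='whatsapp-endpoint-1',
    EndpointRequest={
        'ChannelType': 'CUSTOM',
        'Address': '12065550142',  # WhatsApp format: country code, no '+'
    },
)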

Implementation

Meta for Developers console

  1. Navigate to and log in to the Meta for Developers console, click My Apps and select Create App (or use an existing app of type Business).
  2. Select Business as the app type, which supports WhatsApp, and click Next.
  3. Provide a display name and contact email, choose whether to attach a Business Account (optional), and select Create App.
  4. Navigate to the Dashboard and select Set Up in the WhatsApp service in the Add product to your app section.
  5. Create or select an existing Meta Business Account and select Continue.
  6. Navigate to WhatsApp/Getting Started and take note of the Phone number ID, which will be needed in the AWS CloudFormation template later on.
  7. On the WhatsApp/Getting Started page, add the customer phone number you are going to use for testing in the Select a recipient phone number dropdown. Follow the instructions to add and verify your phone number. Note: You must have WhatsApp registered with the number and the WhatsApp client installed on your mobile device. The verification message may appear in the Archived list in your WhatsApp client rather than in the main list of messages.

Create a new user to access WhatsApp via API

  1. Open Meta's Business Manager and select the business you created or associated your app with earlier.
  2. Below Users, select System Users and choose Add to create a new system user.
  3. Give the system user a name, set their role as Admin, and click Create System User.
  4. Use the Add Assets button to associate the new user with your WhatsApp app. From the Select asset type list, select Apps, then in the Select assets, select your WhatsApp app’s name. Enable the Test app Partial access for the user, select Save Changes and Done.
  5. Click on the Generate new token button, select the WhatsApp app created earlier and choose Permanent as Token expiration.
  6. Select whatsapp_business_messaging and whatsapp_business_management from the list of Available Permissions and click Generate token at the bottom.
  7. Copy and save your access token. This will be needed in the AWS CloudFormation template later on. Make sure you copy the token before clicking OK.

For more details on creating the access token, you can navigate to WhatsApp/Configuration and click on Learn how to create a permanent token.

Solution deployment

  1. Download the AWS CloudFormation template and navigate to the AWS CloudFormation console in the AWS Region where you want to deploy the solution.
  2. Select Create stack and With new resources. Choose Template is ready as Prerequisite – Prepare template and Upload a template file as Specify template. Upload the template downloaded in step 1.
  3. Fill in the AWS CloudFormation parameters as shown below:
    1. ApiGatewayName: This is the name of the Amazon API Gateway resource.
    2. PhoneNumberId: This is the WhatsApp phone number Id you obtained from the Meta for Developers console under WhatsApp/Getting Started.
    3. PinpointProjectId: Paste your Amazon Pinpoint project Id. This allows Amazon Pinpoint to invoke the AWS Lambda function, which sends WhatsApp messages as part of a campaign or journey.
    4. VerifyToken: The verify token is an alphanumeric token that you provide to WhatsApp when setting up the Webhook Callback URL for inbound messages and notifications. You can decide the value of this token e.g. 123abc.
    5. WhatsAppAccessToken: The access token should start with Bearer EEAEAE… You obtained it in the Create a new user to access WhatsApp via API section of this blog.
  4. Once the AWS CloudFormation stack is deployed, copy the Amazon API Gateway endpoint from the AWS CloudFormation Outputs tab. Navigate to the Meta for Developers App dashboard, choose Webhooks, select WhatsApp Business Account and subscribe to messages.
  5. Paste the Amazon API Gateway endpoint as the Callback URL. For the Verify token, provide the same value as the AWS CloudFormation template parameter VerifyToken and select Verify and save.

Testing

  • Sending messages: To test sending a message to WhatsApp using Amazon Pinpoint:
    • Navigate to the Amazon Pinpoint Campaigns page
    • Create a new Campaign with WhatsAppCampaign as the Campaign name, select Standard campaign as the Campaign type, choose Custom as Channel and select Next.
    • Select a segment that includes the CUSTOM endpoint that you will send the message to
    • Choose the AWS Lambda Function containing the name WhatsAppSendMessageLambda. Under Custom data type hello_world, for Endpoint Options choose Custom and select Next. Note that the hello_world is the WhatsApp default message template.
    • In Step 4 leave everything with the default values, scroll to the bottom of the page and select Next.
    • Choose Launch campaign.
  • Receiving messages: Text or reply to the WhatsApp number. The inbound messages are printed in the Amazon CloudWatch logs of the AWS Lambda function containing the name WhatsAppWebHookLambda.

Next steps

There are several ways to extend this solution’s functionality, see some of them below:

  • Instead of specifying the WhatsApp message template name, provide the text you want to send directly, using the Custom data field of Pinpoint's custom channel. To do this, update the AWS Lambda function code responsible for sending messages with the code below:
    import os
    import json
    import boto3
    from urllib import request
    from botocore.exceptions import ClientError
    phone_number_id = os.environ['PHONE_NUMBER_ID']
    secret_name = os.environ['SECRET_NAME']
    def handler(event, context):
        print("Received event: {}".format(event))
        session = boto3.session.Session()
        client = session.client(service_name='secretsmanager')
        try:
            get_secret_value_response = client.get_secret_value(SecretId=secret_name)
        except ClientError as e:
            raise e
        else:
            secret = get_secret_value_response['SecretString']
            url = 'https://graph.facebook.com/v15.0/'+ phone_number_id + '/messages'
            message = event['Data'] # Obtaining the message from the Custom Data field
            for key in event['Endpoints'].keys():
                to_number = str(event['Endpoints'][key]['Address'])
                send_message(secret, to_number, url, message)
    def send_message(secret, to_number, url, message):
        headers = {
            'content-type': 'application/json',
            'Authorization': secret
        }
        # Building the request body as JSON; instead of type = template, it is replaced with type = text
        data = json.dumps({
            'messaging_product': 'whatsapp',
            'to': to_number,
            'type': 'text',
            'text': {
                'body': message
            }
        }).encode()
        req = request.Request(url, data=data, headers=headers)
        resp = request.urlopen(req)
        print("WhatsApp returned {}".format(resp.read().decode('utf-8')))
  • Use WhatsApp's message template components to populate variables dynamically. This requires adding a numbered placeholder such as {{1}} to the respective WhatsApp message template and updating the API request body sent to WhatsApp's API.


The API request body should look like the following (shown here in the WhatsApp Business Cloud API template message format). Note that the value for each variable should be obtained from the Pinpoint endpoint or user attributes.

{
  "messaging_product": "whatsapp",
  "to": to_number,
  "type": "template",
  "template": {
    "name": "first_pinpoint_message",
    "language": { "code": "en" },
    "components": [
      {
        "type": "body",
        "parameters": [
          { "type": "text", "text": "Pavlos" }
        ]
      }
    ]
  }
}
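A short Python sketch of building that components array from a Pinpoint endpoint (the attribute name FirstName and the fallback value are illustrative):

def build_components(endpoint):
    # Pinpoint endpoint attributes are lists of strings
    first_name = endpoint.get('Attributes', {}).get('FirstName', ['there'])[0]
    return [{
        'type': 'body',
        'parameters': [{'type': 'text', 'text': first_name}],
    }]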

Clean-up

To delete the solution, navigate to the AWS CloudFormation console and delete the deployed stack.

About the Authors

Pavlos Ioannou Katidis


Pavlos Ioannou Katidis is an Amazon Pinpoint and Amazon Simple Email Service Senior Specialist Solutions Architect at AWS. He enjoys diving deep into customers' technical issues and helping them design communication solutions. In his spare time, he enjoys playing tennis, watching crime TV series, playing FPS PC games, and coding personal projects.

Customize marketing messages and promotions for personalized outreach

Post Syndicated from binpazho original https://aws.amazon.com/blogs/messaging-and-targeting/customize-marketing-messages-and-promotions-for-personalized-outreach/

Introduction

Amazon Pinpoint is widely used by many customers for their various user engagement use cases like marketing campaigns, scheduled communications (newsletters, reminders, etc.), and transactional messaging. By using the message template feature in Amazon Pinpoint, customers can design messages personalized to specific end users by using variable attributes. While Amazon Pinpoint enables customers to include up to 250 attributes for each user, there is often a need to pick and choose from a wider range of attributes about a user, which can require more than the allowed number of attributes.

The CampaignHook feature of Amazon Pinpoint can come to the rescue in a situation like this. Using the CampaignHook feature, we can filter out attributes that are not applicable to a specific user, while adding new attributes, right before the message is sent. In this blog, I will walk you through how I have implemented the CampaignHook feature for a similar use case.

Sample Use-Cases

When setting up your Pinpoint campaign, the following are use cases where a CampaignHook can be enabled:

  • Retrieve data and perform custom compute logic in real time from third-party data stores.
  • Filter endpoints out of the send: This is useful if you need some type of custom logic that you can't do in segmentation (custom opt-out, quiet time, campaign prioritization, etc.)
  • Avoid costly and time-consuming Extract, Transform & Load (ETL) processes by accessing the data sources directly and applying custom compute logic in real time.

Solution overview

CampaignHook Demo Architecture

The diagram above shows the solution that we will set up in this blog. The campaign event triggers the Amazon Pinpoint campaign. The event can be sent from the web or mobile app that your end users access, and can be set up to fire when a user performs a certain action. You can read more about setting up an Amazon Pinpoint campaign in the user guide. With the CampaignHook enabled on your Amazon Pinpoint campaign, the Lambda function configured with the CampaignHook is triggered. This function has access to the endpoint attributes passed by the campaign event, and performs additional logic to derive new attributes for the user. Once all the new fields are derived, the function updates the user endpoint. Amazon Pinpoint then performs the next steps in the campaign, substituting the variables in the message template before the personalized message is sent to the end user. A minimal sketch of such a hook function follows.
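The sketch below assumes the hook runs in FILTER mode: Amazon Pinpoint passes the campaign's endpoints to the function, and the function returns the (possibly modified) endpoint map. The derived attribute name and value are illustrative:

def lambda_handler(event, context):
    endpoints = event.get('Endpoints', {})
    for endpoint_id, endpoint in endpoints.items():
        attributes = endpoint.setdefault('Attributes', {})
        # Derive a new attribute in real time, e.g. from a lookup in another data store
        attributes['Company'] = ['AnyCompany']
    return endpoints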

Prerequisites

  • AWS Account with Console and Programmatic access
  • Access to AWS CloudShell
  • Email channel enabled in Amazon Pinpoint

Building the demo

Build the Amazon Pinpoint Project

From the AWS Management Console, go to Amazon Pinpoint and create a new project called "PinpointCampaignHookDemo", and choose the option to enable the email channel. For more information about creating a project, see the user guide, and follow the instructions here to set up your email channel.

If your account is in the sandbox, you will need to verify the email address before you can send email. You can follow the steps here to upgrade your account to production status when you are ready to deploy this solution to production.

Create the segment.

A segment is a group of your users that share certain attributes. For example, a segment might contain all of your users who use version 2.0 of your app on an Android device, or all users who live in the city of Los Angeles. You can send multiple campaigns to a single segment, and you can send a single campaign to multiple segments.

For this demo, let’s create a Dynamic Segment. Let’s call it ‘CampaignHookDemoSegment’.  Follow the steps here to create your Dynamic Segment.


Setup message template

Let’s create our first template and call it “CampaignHookDemoTemplate”. You can read more about Amazon Pinpoint templates in the user guide.

For this demo, I have used an HTML template with 3 endpoint attribute variables: 2 that are passed from the campaign event trigger, and a third (Company) that will be generated by the CampaignHook Lambda function. For the subject of the email, I used "Campaign Hook Demo Campaign".


The email template can be found in this GitHub repository.

Create Campaign

Next, create your campaign and use the Segment and email Template that you created in the previous steps by following the instructions here.

Select the 'when an event occurs' option to trigger the campaign when a specific event occurs. You may also schedule your campaign to run on a recurring basis, as available in the setup screen. I used 'CampaignHookTrigger' as my event name.


Set your campaign start date, time, and end date. I have left all the other settings at their defaults and saved the campaign. Now that you have successfully created your first campaign, you are ready for the next steps.


Create the Lambda function

This is the function that we will use to trigger the Amazon Pinpoint campaign event. From the Lambda console page, create a new function by clicking on the 'Create function' button. You can then pick the following options and create the function.

Name: Campaign_event_trigger_function

Runtime: Python 3.9 or higher.

Replace the default script with the code from the GitHub repository, and then deploy your code by clicking on the “Deploy” button.

Assign permissions

In order for the Lambda function to trigger the Pinpoint campaign, you need to add an inline policy to the IAM role attached to your Lambda function, selecting Pinpoint as the service and PutEvents from the Write options. Set the resource to the ARN of your Amazon Pinpoint project, which is the resource that the PutEvents action targets.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "mobiletargeting:PutEvents"
            ],
            "Resource": "arn:aws:mobiletargeting:[region]:[account-id]:apps/[your-project-id]*"
        }
    ]
}

Create the CampaignHook Lambda function

This is the function that will be triggered by the CampaignHook. From your Lambda console, click on "Create function" and enter the basic information shown below to create your function.

Name: CampaignHookFunction

Runtime: Python 3.9 or higher.

Next, replace the default code with the sample GitHub code, and then deploy your code by clicking on the "Deploy" button.

Assign permissions

Next, add permissions for Amazon Pinpoint to invoke the Lambda function by running the command below from your command shell. Replace the Lambda function name and account number with yours.

aws lambda add-permission \
--function-name [YourCampaignHookLambdaFunctionName] \
--statement-id my-hook-id1 \
--action lambda:InvokeFunction \
--principal pinpoint.us-east-1.amazonaws.com \
--source-arn 'arn:aws:mobiletargeting:us-east-1:[YourAccountNumber]:apps/*'

You can also do this from the Lambda console by clicking on "Configuration", scrolling down to "Resource-based policy", and clicking on "Add permissions".

Update Campaign settings to add the Campaign Hook

Now that you have created the Lambda function that acts as the hook and granted the Amazon Pinpoint service permission to invoke it, run the command below to update the campaign settings to add the CampaignHook. You can also set a default CampaignHook for all campaigns in the project by setting the CampaignHook property in the Project Settings via this API (a boto3 sketch of the project-level variant follows the verification command below).

Replace the application-id (project id), the campaign-id, and the ARN of the CampaignHook Lambda function, and run the command below. (You can find the Project ID by clicking on All Projects at the top-left of the Pinpoint console. The Campaign ID can be found by opening your Pinpoint project and then clicking Campaigns in the Pinpoint console.)

aws pinpoint update-campaign \
--application-id [your-application-id-goes-here] \
--campaign-id [your-campaign-id-goes-here] \
--cli-input-json '{"ApplicationId": "", "CampaignId": "", "WriteCampaignRequest": {"Hook": {"LambdaFunctionName": "your-CampaignHook-Function-goes-here", "Mode": "FILTER", "WebUrl": ""}}}'

You can optionally run the command below to make sure that the campaign settings have been updated:

aws pinpoint get-campaign --application-id [your-application-id-goes-here] --campaign-id [your-campaign-id-goes-here]
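As mentioned above, you can instead set the hook at the project level so that it applies to all campaigns. A hedged boto3 sketch (the function ARN is a placeholder):

import boto3

pinpoint = boto3.client('pinpoint')
pinpoint.update_application_settings(
    ApplicationId='[your-application-id-goes-here]',
    WriteApplicationSettingsRequest={
        'CampaignHook': {
            'LambdaFunctionName': 'arn:aws:lambda:us-east-1:[YourAccountNumber]:function:CampaignHookFunction',
            'Mode': 'FILTER',
        }
    },
)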

Test your Campaign.

Go back to the Lambda function that you created to trigger the campaign in the "Create the Lambda function" step above. I have used the test event shown below. Update the application_id to reflect your project Id, change the email address to the one you verified earlier, and click on the "Test" button. (A sketch of the PutEvents call that the trigger function makes follows the test event.)

{
    "application_id": "your application id",
    "endpoint_id": "223",
    "event_type": "CampaignHookEvent",
    "nextTestDate": "12/15/2025",
    "FirstName": "Jack",
    "email": "[email protected]",
    "userid": "Jack123"
}
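For reference, the following is a hedged boto3 sketch of the kind of PutEvents call the trigger function makes with this payload (the full trigger code is in the GitHub repository; field values are illustrative):

import datetime
import boto3

pinpoint = boto3.client('pinpoint')
pinpoint.put_events(
    ApplicationId='your application id',
    EventsRequest={
        'BatchItem': {
            '223': {  # keyed by endpoint id
                'Endpoint': {
                    'ChannelType': 'EMAIL',
                    'Address': '[email protected]',
                    'Attributes': {'FirstName': ['Jack']},
                },
                'Events': {
                    'campaign-hook-event': {
                        # must match the event name configured on the campaign
                        # (e.g. CampaignHookTrigger)
                        'EventType': 'CampaignHookTrigger',
                        'Timestamp': datetime.datetime.now(datetime.timezone.utc).isoformat(),
                    }
                },
            }
        }
    },
)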

You should now receive an email with the variables replaced with the values that were passed in your JSON payload. Further, you can see that the Company name was added to the endpoint by the CampaignHook Lambda function and passed to the email template. If you have not received the email, please check the following:

  • The Lambda function ran without any errors
  • The CampaignHook Lambda function has the proper rights assigned so it can be invoked by Pinpoint
  • The From and To email addresses that you used are verified in SES.


Clean up resources

Once you are satisfied with your setup and testing, you can now clean up the resources by following the steps below:

  • Delete your Amazon Pinpoint project, campaign, and segment.
    • aws pinpoint delete-campaign --application-id [your app id] --campaign-id [your campaign id]
    • aws pinpoint delete-segment --application-id [your app id] --segment-id [your segment id]
    • aws pinpoint delete-app --application-id [your app id]
  • Delete your Lambda functions
    • aws lambda delete-function --function-name CampaignHookFunction
    • aws lambda delete-function --function-name Campaign_event_trigger_function

Conclusion

By dynamically generating attributes in real time, customers can now add greater levels of personalization within a single user message template. By invoking a Lambda function, you can perform custom compute logic, calculate new attribute values, and access external data stores to modify the campaign's segment right before Amazon Pinpoint sends the message. The CampaignHook feature makes this possible, as explained in this blog, by running a few basic CLI commands to enable the feature on your Amazon Pinpoint campaign. You can read more about Amazon Pinpoint campaigns in the user guide documentation.