Tag Archives: AWS

Replace traditional email mailbox polling with real-time reads using Amazon SES and Lambda

Post Syndicated from agardezi original https://aws.amazon.com/blogs/messaging-and-targeting/replace-traditional-email-mailbox-polling-with-real-time-reads-using-amazon-ses-and-lambda/

Integrating email into an automated processing workflow can be challenging. Traditionally, applications have had to use the POP protocol to connect to a mail server, poll for new messages to arrive in a mailbox, and then process each message inline and act on it. This mechanism is inefficient and prone to errors that cause the workflow to miss messages, and because it relies on polling it is poorly suited to real-time processing. Amazon Simple Email Service (Amazon SES) is a cost-effective, scalable, and flexible email service that supports a range of workflows, including the ability to perform spam checks and virus scans. In this blog you will see how to use Amazon SES with AWS Lambda and Amazon S3 to automate the processing of incoming email in real time and integrate it with an application, without the need for polling.

The use case explored in this blog focuses on automation for CRM or order processing platforms and the processing of email related to customer contact or direct email requests. An example of this use case is copying a client engagement email to Salesforce (or any other database), where it is recorded and can later be categorized or attached to the appropriate client account or opportunity. When designing an application that needs to read email from a mailbox, a developer would traditionally use a mail library (like JavaMail if using Java) to connect to the mailbox, authenticate, and pull messages into an application object. That means polling the mailbox every 10 to 15 minutes to check for new messages, handling errors when the mailbox is unavailable, and maintaining a fully functioning mailbox. The solution in this post implements automated processing of emails arriving in a mailbox without the need to poll the mailbox, and it is entirely serverless.

Solution

This blog post shows how to use SES to perform automated processing of email in an application workflow. I will use the option in SES to save received emails to S3 and trigger a Lambda function to process the message, without having to poll a mailbox. The sample application uses email to receive simple orders, which are processed automatically and stored in DynamoDB. The following diagram shows the high-level architecture:

Step 1: Create an S3 Bucket for Email Storage

Start by creating an S3 bucket where received emails will be stored so that the full message can be processed by the Lambda function. The bucket must have a policy attached so SES can put objects in the bucket on your behalf:

{
  "Version":"2012-10-17",
  "Statement":[
    {
      "Sid":"AllowSESPuts",
      "Effect":"Allow",
      "Principal":{
        "Service":"ses.amazonaws.com"
      },
      "Action":"s3:PutObject",
      "Resource":"arn:aws:s3:::myBucket/*",
      "Condition":{
        "StringEquals":{
          "aws:Referer":"111122223333"
        }
      }
    }
  ]
}

Make the following changes to the preceding policy example:

  1. Replace myBucket with the name of the Amazon S3 bucket that you want to write to.
  2. Replace 111122223333 with your AWS account ID.

You can find out more about the policy here.
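If you prefer to script this step, the following is a minimal boto3 sketch that applies the policy shown above; the bucket name myBucket and account ID 111122223333 are the same placeholders used in the example policy:

import json

import boto3

# Hypothetical sketch: attach the SES-put policy to the email bucket.
# Replace the bucket name and account ID with your own values.
s3 = boto3.client("s3")

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowSESPuts",
            "Effect": "Allow",
            "Principal": {"Service": "ses.amazonaws.com"},
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::myBucket/*",
            "Condition": {"StringEquals": {"aws:Referer": "111122223333"}},
        }
    ],
}

s3.put_bucket_policy(Bucket="myBucket", Policy=json.dumps(policy))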

Step 2: Create DynamoDB Table to Simulate Application

Next, add a DynamoDB table. The DynamoDB table will store the incoming order information. For this sample we will keep it simple and use a table with email as the partition key. Here is the data model:

{   
    "email_order_received": {
        "email": "string",
        "itemname": "string",
        "quantity": "number"
    }   
}
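If you want to create the table from code rather than the console, here is a minimal boto3 sketch for that data model; the table name matches the one used by the Lambda function below, and on-demand billing is an assumption made to keep the demo simple:

import boto3

# Hypothetical sketch: create the orders table with email as the partition key.
dynamodb = boto3.client("dynamodb")

dynamodb.create_table(
    TableName="email_order_received",
    AttributeDefinitions=[{"AttributeName": "email", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "email", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",  # assumption: on-demand capacity for the demo
)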

Step 3: Create Lambda Function triggered by SES to Process Email

Now that the DynamoDB table is ready, create the Lambda function that processes the email and writes the data to the DynamoDB table. The Lambda function needs an execution role with permissions to access the S3 bucket and the DynamoDB table and to create the CloudWatch Logs log group. It also needs a resource-based policy so SES can invoke the Lambda function. In the final step, when we configure SES to call the Lambda function, SES automatically adds the necessary permissions to the function as detailed here. This is a sample policy statement:

{
  "Version": "2012-10-17",
  "Id": "default",
  "Statement": [
    {
      "Sid": "allowSesInvoke",
      "Effect": "Allow",
      "Principal": {
        "Service": "ses.amazonaws.com"
      },
      "Action": "lambda:InvokeFunction",
      "Resource": "arn:aws:lambda:eu-west-1:111122223333:function:email-event-ses",
      "Condition": {
        "StringEquals": {
          "AWS:SourceAccount": "111122223333"
        }
      }
    }
  ]
}
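If you create the function before configuring the receipt rule and want to attach an equivalent permission yourself, a minimal boto3 sketch looks like the following; the function name, region, and account ID are the placeholders from the sample policy above:

import boto3

# Hypothetical sketch: grant SES permission to invoke the function.
# When you attach the Lambda action to a receipt rule in the console,
# SES adds an equivalent statement for you automatically.
lambda_client = boto3.client("lambda", region_name="eu-west-1")

lambda_client.add_permission(
    FunctionName="email-event-ses",
    StatementId="allowSesInvoke",
    Action="lambda:InvokeFunction",
    Principal="ses.amazonaws.com",
    SourceAccount="111122223333",
)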

Sample Lambda code in Python:

import boto3
import email


def lambda_handler(event, context):
    s3 = boto3.client("s3")
    dynamodb = boto3.resource("dynamodb")
    table = dynamodb.Table('email_order_received')
    
    print("Spam filter")
    # Check the SES spam and virus filter settings
    if (
        event["Records"][0]["ses"]["receipt"]["spfVerdict"]["status"] == "FAIL" or
        event["Records"][0]["ses"]["receipt"]["dkimVerdict"]["status"] == "FAIL" or
        event["Records"][0]["ses"]["receipt"]["spamVerdict"]["status"] == "FAIL" or
        event["Records"][0]["ses"]["receipt"]["virusVerdict"]["status"] == "FAIL"
       ):
        print("Dropping Spam")
    else:
        print("Not Spam")
        email_bucket = "email-handling-test"
        bucketkey = "monitor/" + event["Records"][0]["ses"]["mail"]["messageId"]
    
        fileObj = s3.get_object(Bucket = email_bucket, Key=bucketkey)
    
        msg = email.message_from_bytes(fileObj['Body'].read())
        From = msg['From']
        itemname = msg['Subject']
        body = ""
        if msg.is_multipart():
            for part in msg.walk():
                content_type = part.get_content_type()
                disp = str(part.get('Content-Disposition'))
                # look for plain text parts, but skip attachments
                if content_type == 'text/plain' and 'attachment' not in disp:
                    # fall back to utf-8 if the part does not declare a charset
                    charset = part.get_content_charset() or 'utf-8'
                    # decode the transfer-encoded payload into plain text
                    body = part.get_payload(decode=True).decode(encoding=charset, errors="ignore")
                    # we've found the text/plain part, stop looping through the parts
                    break
        else:
            # not multipart - i.e. plain text, no attachments
            charset = msg.get_content_charset() or 'utf-8'
            body = msg.get_payload(decode=True).decode(encoding=charset, errors="ignore")
            
        table.put_item(
            Item={
                'email': From,
                'itemname': itemname,
                'quantity': body
            }
        )
        print("inserted data into dynamodb")

When you add a Lambda action to a receipt rule, Amazon SES sends an event record to Lambda every time it receives an incoming message. This event contains information about the email headers of the incoming message, as well as the results of the tests (spam filtering and virus scanning) that Amazon SES performs on incoming messages; however, it omits the body of the email. This is why the Lambda function has to read the body from the copy of the email stored in S3. You can see details of the event here. In this demo app we assume the item name is in the subject, the body of the email contains the quantity of the items, and this data is written to the DynamoDB table.

Step 4: Configure SES to Send Emails to S3 and Trigger Lambda Function

The final step is to configure Amazon SES. Start by verifying a domain so SES can use it to send and receive emails. Domain verification helps ensure that you own the domain and are thus authorised to manage the sending and receiving of emails from addresses in that domain. To verify your domain:

  1. In the SES console, in the navigation pane under Identity Management, choose Domains.
  2. Choose Verify a New Domain.
  3. In the Verify a New Domain dialog box, enter your domain name.
  4. Choose Verify This Domain.
  5. In the dialog box you will see a Domain Verification Record Set. You need to add this record to your domain's DNS server. You will also have to add the email receiving record (MX record) to your domain's DNS server.
  6. If your DNS is hosted in Route 53 and the domain is registered under the same account, SES also gives you the option to update your DNS records from within the SES console.

Once the domain is verified, its status changes from "pending verification" to "verified", and it can then be used to send and receive emails.

Next, create a recipient rule set. The rule set lets you specify what SES does with emails it receives for domains you own. You can create rules for individual addresses or for any address under the domain. To create the rule set:

  1. In the left navigation pane, under Email Receiving, choose Rule Sets.
  2. Choose Create Rule.
  3. Enter the recipient email address you want to configure the rule for. You can add up to 100 recipient addresses, or set the rule up for any address in the domain by using just the domain name as a wildcard.
  4. Once the addresses have been added, add the actions for the rule. Add two actions:
    1. The first is of type S3, to save a copy of the email to the S3 bucket created in step 1. Select the bucket created in step 1 from the drop-down list. You can also add a prefix to the filename to categorise the output of different rules.
    2. The second is of type Lambda, to trigger the Lambda function that processes the email. Select the function created in step 3 from the drop-down list.

Once the SES rule is configured, the full workflow is in place. Now any email sent to the [email protected] address will be processed by the Lambda function. In this way you can make email processing part of your application workflow without having to poll a mailbox.
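The same receipt rule can also be created programmatically. The following is a hedged boto3 sketch of a rule with the two actions described above; the rule set name, recipient address, bucket name, and object key prefix are placeholders you would replace with your own values:

import boto3

# Hypothetical sketch: a receipt rule that stores the message in S3 and then
# invokes the processing Lambda function.
ses = boto3.client("ses", region_name="eu-west-1")

ses.create_receipt_rule(
    RuleSetName="default-rule-set",            # placeholder rule set name
    Rule={
        "Name": "order-emails",
        "Enabled": True,
        "ScanEnabled": True,                   # enable spam and virus scanning
        "Recipients": ["orders@example.com"],  # or just the domain as a wildcard
        "Actions": [
            {
                "S3Action": {
                    "BucketName": "myBucket",
                    "ObjectKeyPrefix": "monitor/",
                }
            },
            {
                "LambdaAction": {
                    "FunctionArn": "arn:aws:lambda:eu-west-1:111122223333:function:email-event-ses",
                    "InvocationType": "Event",
                }
            },
        ],
    },
)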

Clean-up

To clean up the resources used in your account:

  1. Navigate to Amazon S3 and delete the contents of the bucket you created where your emails are stored.
  2. Once the bucket is empty, delete the bucket.
  3. Navigate to the DynamoDB console and delete the table you created above. Make sure you select the option to "Delete all CloudWatch alarms for this table".
  4. Remove the domain from Amazon SES. To do this, navigate to the Amazon SES console and choose Domains from the left navigation. Select the domain you want to remove and choose the Remove button to remove it from Amazon SES.
  5. From the Amazon SES console, navigate to Rule Sets in the left navigation. In the Active Rule Set section, choose the View Active Rule Set button and delete all the rules you have created by selecting each rule and choosing Actions, Delete.
  6. On the Rule Sets page, choose the Disable Active Rule Set button to disable listening for incoming email messages.
  7. On the Rule Sets page, in the Inactive Rule Sets section, delete the remaining rule set by selecting it and choosing Actions, Delete.
  8. Navigate to the Lambda console and delete the Lambda you created earlier. Select the Lambda and choose Delete from the Actions menu.
  9. Navigate to CloudWatch console and from the left navigation choose Logs, Log groups. Find the log group that belongs to the resources and delete it by selecting it and choosing Actions, Delete log group(s).

Conclusion

In this post, we have shown you how to integrate email processing into an application workflow without having to resort to polling a mailbox.

By using SES to receive emails, you can create a modular serverless architecture that allows emails to be processed and checked for spam and viruses, and the output can then be sent to any downstream system or stored in a database for application use.


About the Author

Syed Ali Abbas Gardezi is a Sr. Solution Architect for AWS based in London, United Kingdom. He works with AWS GSI Partners architecting, designing and implementing various large-scale IT solutions. Before joining AWS he worked in several architecture roles in a tier 1 financial organisation in London.

How to use domain with Amazon SES in multiple accounts or regions

Post Syndicated from Leonardo Azize original https://aws.amazon.com/blogs/messaging-and-targeting/how-to-use-domain-with-amazon-ses-in-multiple-accounts-or-regions/

Sometimes customers want to use their email domain with Amazon Simple Email Service (Amazon SES) across multiple accounts, or in the same account but across multiple regions.

For example, AnyCompany is an insurance company with marketing and operations business units. The operations department sends transactional emails every time customers perform insurance simulations. The marketing department sends email advertisements to existing and prospective customers. Since they are different organizations inside AnyCompany, they want to have their own Amazon SES billing. At the same time, they still want to use the same AnyCompany domain.

Other use cases include customers who want to set up multi-region redundancy, need to satisfy data residency requirements, or need to send emails on behalf of several different clients. In all of these cases, customers can use different regions, in the same account or across different accounts.

This post shows how to verify and configure your domain on Amazon SES across multiple accounts or multiple regions.

Overview of solution

You can use the same domain with Amazon SES across multiple accounts or regions. Your options are: different accounts but the same region, different accounts and different regions, and the same account but different regions.

In all of these scenarios you will have two SES instances running, each sending email for the example.com domain; let's call them SES1 and SES2. Every time you configure a domain in Amazon SES, it generates a set of DNS records, unique to your domain, that you have to add to your domain's authoritative DNS server. Those records are different for each SES instance.

You will need to modify your DNS to add one TXT record, with multiple values, for domain verification. If you decide to use DomainKeys Identified Mail (DKIM), you will modify your DNS to add six CNAME records, three records from each SES instance.

When you configure a domain on Amazon SES, you can also configure a MAIL FROM domain. If you decide to do so, you will need to modify your DNS to add one TXT record for Sender Policy Framework (SPF) and one MX record for bounce and complaint notifications that email providers send you.

Furthermore, your domain can be configured to support DMARC for email spoofing detection. It relies on the SPF or DKIM configuration described above. Below we walk you through these steps.

  • Verify domain
    You will take the TXT values from both the SES1 and SES2 instances and add them to DNS, so SES can validate that you own the domain
  • Complying with DMARC
    You will add a TXT value with the DMARC policy that applies to your domain. This is not tied to any specific SES instance
  • Custom MAIL FROM Domain and SPF
    You will take the TXT and MX records for your MAIL FROM domain from both the SES1 and SES2 instances and add them to DNS, so SES can comply with DMARC

Here is a sample matrix of the various configurations:

The matrix covers three configurations: two accounts in the same region, two accounts in different regions, and one account in two regions.

TXT records for domain verification*
  Same for all three configurations: 1 record with multiple values

  _amazonses.example.com = "VALUE FROM SES1"
                           "VALUE FROM SES2"

CNAMEs for DKIM verification
  Same for all three configurations: 6 records, 3 from each SES instance

  record1-SES1._domainkey.example.com = VALUE FROM SES1
  record2-SES1._domainkey.example.com = VALUE FROM SES1
  record3-SES1._domainkey.example.com = VALUE FROM SES1
  record1-SES2._domainkey.example.com = VALUE FROM SES2
  record2-SES2._domainkey.example.com = VALUE FROM SES2
  record3-SES2._domainkey.example.com = VALUE FROM SES2

TXT record for DMARC
  Same for all three configurations: 1 record, not tied to any SES instance or region

  _dmarc.example.com = DMARC VALUE

MAIL FROM MX record to define the message sender for SES
  Both instances in the same region: 1 record for the entire region

  mail.example.com = 10 feedback-smtp.us-east-1.amazonses.com

  Instances in different regions: 2 records, one for each region

  mail1.example.com = 10 feedback-smtp.us-east-1.amazonses.com
  mail2.example.com = 10 feedback-smtp.eu-west-1.amazonses.com

MAIL FROM TXT record for SPF
  Both instances in the same region: 1 record for the entire region

  mail.example.com = "v=spf1 include:amazonses.com ~all"

  Instances in different regions: 2 records, one for each region

  mail1.example.com = "v=spf1 include:amazonses.com ~all"
  mail2.example.com = "v=spf1 include:amazonses.com ~all"

* Considering your DNS supports multiple values for a TXT record

Setup SES1 and SES2

In this blog, we call SES1 your primary or existing SES instance. We assume that you have already set up SES, but if not, you can still follow the instructions and set up both instances at the same time. The settings on SES2 will differ slightly, and therefore you will need to add new DNS entries to support the two-instance setup.

In this document we will use the configurations from the "Verification," "DKIM," and "Mail FROM Domain" sections of the SES Domains screen to configure SES2 and set up DNS correctly for the two-instance configuration.

Verify domain

Amazon SES requires that you verify your domain in DNS to confirm that you own it and to prevent others from using it. When you verify an entire domain, you are verifying all email addresses from that domain, so you don't need to verify email addresses from that domain individually.

You can instruct multiple SES instances, across multiple accounts or regions, to verify your domain. The process to verify your domain requires you to add some records to your DNS provider. In this post I am assuming Amazon Route 53 is the authoritative DNS server for the example.com domain.

Verifying a domain for SES purposes involves initiating the verification in the SES console and adding DNS records and values to confirm that you own the domain. SES automatically checks DNS to complete the verification process. We assume you have done this step for the SES1 instance and already have an _amazonses.example.com TXT record with one value in your DNS. In this section you will add a second value, from SES2, to that TXT record. If you do not have SES1 set up in DNS, complete these steps twice, once for SES1 and again for SES2. This will prove to both SES instances that you own the domain and are entitled to send email from it.

Initiate Verification in SES Console

Just as you did on SES1, initiate a verification process in the second SES instance (SES2) for the same domain; in our case, example.com.

  1. Sign in to the AWS Management Console and open the Amazon SES console.
  2. In the navigation pane, under Identity Management, choose Domains.
  3. Choose Verify a New Domain.
  4. In the Verify a New Domain dialog box, enter the domain name (i.e. example.com).
  5. If you want to set up DKIM signing for this domain, choose Generate DKIM Settings.
  6. Click on Verify This Domain.
  7. In the Verify a New Domain dialog box, you will see a Domain Verification Record Set containing a Name, a Type, and a Value. Copy Name and Value and store them for the step below, where you will add this value to DNS.
    (This information is also available by choosing the domain name after you close the dialog box.)

To complete domain verification, add a TXT record with the displayed Name and Value to your domain’s DNS server. For information about Amazon SES TXT records and general guidance about how to add a TXT record to a DNS server, see Amazon SES domain verification TXT records.

Add DNS Values for SES2

To complete domain verification for your second account, edit the current _amazonses TXT record and add the value from SES2 to it. If you do not have an _amazonses TXT record, create it and add the domain verification values from both SES1 and SES2 to it. We are showing how to add the record in Route 53 DNS, but the steps should be similar in any DNS management service you use.

  1. Sign in to the AWS Management Console and open the Amazon Route 53 console.
  2. In the navigation pane, choose Hosted zones.
  3. Choose the domain name you are verifying.
  4. Choose the _amazonses TXT record you created when you verified your domain for SES1.
  5. Under Record details, choose Edit record.
  6. In the Value box, go to the end of the existing attribute value, and then press Enter.
  7. Add the attribute value for the additional account or region.
  8. Choose Save.
  9. To validate, run the following command:
    dig TXT _amazonses.example.com +short
  10. You should see the two values returned:
    "4AjLMzUu4nSjrz4QVqDD8rXq8X2AHr+JhGSl4foiMmU="
    "abcde12345Sjrz4QVqDD8rXq8X2AHr+JhGSl4foiMmU="

Please note:

  1. if your DNS provider does not allow underscores in record names, you can omit _amazonses from the Name.
  2. to help you easily identify this record within your domain’s DNS settings, you can optionally prefix the Value with “amazonses:”.
  3. some DNS providers automatically append the domain name to DNS record names. To avoid duplication of the domain name, you can add a period to the end of the domain name in the DNS record. This indicates that the record name is fully qualified and the DNS provider need not append an additional domain name.
  4. if your DNS server does not support two values for a TXT record, you can have one record named _amazonses.example.com and another one called example.com.

Finally, after some time SES will complete its validation of the domain name and you should see the status change from "pending verification" to "verified".
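If your zone is hosted in Route 53, you can make the same change programmatically. This is a minimal boto3 sketch of the upsert; the hosted zone ID and the two token values are placeholders for your own values:

import boto3

# Hypothetical sketch: publish the _amazonses TXT record with both
# verification values (SES1 and SES2) in a single record.
route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z0123456789EXAMPLE",  # placeholder hosted zone ID
    ChangeBatch={
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "_amazonses.example.com",
                    "Type": "TXT",
                    "TTL": 300,
                    "ResourceRecords": [
                        {"Value": '"VALUE FROM SES1"'},  # each value stays quoted
                        {"Value": '"VALUE FROM SES2"'},
                    ],
                },
            }
        ]
    },
)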

Verify DKIM

DomainKeys Identified Mail (DKIM) is a standard that allows senders to sign their email messages with a cryptographic key. Email providers then use these signatures to verify that the messages weren’t modified by a third party while in transit.

An email message that is sent using DKIM includes a DKIM-Signature header field that contains a cryptographically signed representation of the message. A provider that receives the message can use a public key, which is published in the sender’s DNS record, to decode the signature. Email providers then use this information to determine whether messages are authentic.

When you enable DKIM, SES generates CNAME records that you need to add to your DNS. Because it generates different values for each SES instance, you can use DKIM with multiple accounts and regions.

To complete the DKIM verification, copy the three (3) DKIM Names and Values from SES1 and three (3) from SES2 and add them to your DNS authoritative server as CNAME records.

You will know you are successful when, after some time, SES completes the DKIM verification and the status changes from "pending verification" to "verified".

Configuring for DMARC compliance

Domain-based Message Authentication, Reporting and Conformance (DMARC) is an email authentication protocol that uses Sender Policy Framework (SPF) and/or DomainKeys Identified Mail (DKIM) to detect email spoofing. In order to comply with DMARC, you need to set up a "_dmarc" DNS record and either SPF or DKIM, or both. The DNS record for compliance with DMARC is set up once per domain, but SPF and DKIM require DNS records for each SES instance.

  1. Set up the "_dmarc" record in DNS for your domain; this is done once per domain. See the instructions here
  2. To validate it, run the following command:
    dig TXT _dmarc.example.com +short
    "v=DMARC1;p=quarantine;pct=25;rua=mailto:[email protected]"
  3. For DKIM and SPF follow the instructions below

Custom MAIL FROM Domain and SPF

Sender Policy Framework (SPF) is an email validation standard that’s designed to prevent email spoofing. Domain owners use SPF to tell email providers which servers are allowed to send email from their domains. SPF is defined in RFC 7208.

To comply with Sender Policy Framework (SPF) you will need to use a custom MAIL FROM domain. When you enable a MAIL FROM domain in the SES console, the service generates two records that you need to configure in your DNS to document who is authorized to send messages for your domain. One record is MX and the other is TXT; see the screenshot for mail.example.com. Save these records and enter them in your DNS authoritative server for example.com.

Configure MAIL FROM Domain for SES2

  1. Open the Amazon SES console at https://console.aws.amazon.com/ses/.
  2. In the navigation pane, under Identity Management, choose Domains.
  3. In the list of domains, choose the domain and proceed to the next step.
  4. Under MAIL FROM Domain, choose Set MAIL FROM Domain.
  5. On the Set MAIL FROM Domain window, do the following:
    • For MAIL FROM domain, enter the subdomain that you want to use as the MAIL FROM domain. In our case mail.example.com.
    • For Behavior if MX record not found, choose one of the following options:
      • Use amazonses.com as MAIL FROM – If the custom MAIL FROM domain’s MX record is not set up correctly, Amazon SES will use a subdomain of amazonses.com. The subdomain varies based on the AWS Region in which you use Amazon SES.
      • Reject message – If the custom MAIL FROM domain’s MX record is not set up correctly, Amazon SES will return a MailFromDomainNotVerified error. Emails that you attempt to send from this domain will be automatically rejected.
    • Click Set MAIL FROM Domain.

You will need to complete this step on SES1, as well as SES2. The MAIL FROM records are regional and you will need to add them both to your DNS authoritative server.

Set MAIL FROM records in DNS

From both SES1 and SES2, take the MX and TXT records provided by the MAIL FROM configuration and add them to the DNS authoritative server. If SES1 and SES2 are in the same region (us-east-1 in our example), you will publish exactly one MX record (mail.example.com in our example) into DNS, pointing to the endpoint for that region. If SES1 and SES2 are in different regions, you will create two different records (mail1.example.com and mail2.example.com) in DNS, each pointing to the endpoint for its region.

Verify MX record

Example of MX record where SES1 and SES2 are in the same region

dig MX mail.example.com +short
10 feedback-smtp.us-east-1.amazonses.com.

Example of MX records where SES1 and SES2 are in different regions

dig MX mail1.example.com +short
10 feedback-smtp.us-east-1.amazonses.com.

dig MX mail2.example.com +short
10 feedback-smtp.eu-west-1.amazonses.com.

Verify if it works

On both SES instances (SES1 and SES2), check that validations are complete. In the SES Console:

  • In Verification section, Status should be “verified” (in green color)
  • In DKIM section, DKIM Verification Status should be “verified” (in green color)
  • In MAIL FROM Domain section, MAIL FROM domain status should be “verified” (in green color)

If you have it all verified on both accounts or regions, it is correctly configured and ready to use.

Conclusion

In this post, we explained how to verify and use the same domain with Amazon SES in multiple accounts and regions while maintaining DMARC, DKIM, and SPF compliance and the security features related to email exchange.

While each customer has different needs, Amazon SES is flexible enough to let customers decide, organize, and be in control of how they want to use Amazon SES to send email.

Author bio

Leonardo Azize Martins is a Cloud Infrastructure Architect at Professional Services for Public Sector.

His background is in development and infrastructure for web applications, working at large enterprises.

When not working, Leonardo enjoys spending time with his family, reading technical content, watching movies and series, and playing with his daughter.

Contributor

Daniel Tet is a senior solutions architect at AWS specializing in Low-Code and No-Code solutions. For over twenty years, he has worked on projects for Franklin Templeton, Blackrock, Stanford Children’s Hospital, Napster, and Twitter. He has a Bachelor of Science in Computer Science and an MBA. He is passionate about making technology easy for common people; he enjoys camping and adventures in nature.

 

Amazon Pinpoint achieves HITRUST certification

Post Syndicated from Srini Sekaran original https://aws.amazon.com/blogs/messaging-and-targeting/amazon-pinpoint-achieves-hitrust-certification/

Securing the storage and flow of data is increasingly critical to the healthcare industry. More stringent security and compliance needs, along with mandates like HIPAA, help the industry mitigate risk but navigating multiple frameworks makes the process complex.

The Health Information Trust Alliance Common Security Framework (HITRUST CSF) reduces complexity for organizations by providing a single framework to create a comprehensive set of baseline security and privacy controls. HITRUST CSF leverages nationally and internationally accepted standards, including GDPR, ISO, NIST, PCI, and HIPAA. As a result, many healthcare networks and hospitals view HITRUST as a widely accepted framework to reduce risk.

Today, we’re announcing that Amazon Pinpoint has achieved HITRUST CSF v9.4 certification, helping our customers in the healthcare industry, such as Care Connectors, continue to engage with their constituents at scale—securely.

Using HITRUST certified services means you can take a consistent approach to managing compliance as well as assessing and reporting against multiple sets of requirements. This can also help in getting your own HITRUST certification.

For more information on HITRUST and what it means for your organization, visit https://aws.amazon.com/compliance/hitrust/.

For more on how you can engage with your constituents reliably, securely, and at scale, visit https://aws.amazon.com/pinpoint/.

Amazon SES configuration for an external SMTP provider with Auth0

Post Syndicated from Raghavarao Sodabathina original https://aws.amazon.com/blogs/messaging-and-targeting/amazon-ses-configuration-for-an-external-smtp-provider-with-auth0/

Many organizations are using an external identity provider to manage user identities. With an identity provider (IdP), customers can manage their user identities outside of AWS and give these external user identities permissions to use AWS resources in customer AWS accounts. The most common requirement when setting up an external identity provider is sending outgoing emails, such as verification e-mails using a link or code, welcome e-mails, MFA enrollment, password changes and blocked account e-mails. That said, most external identity providers’ existing e-mail infrastructure is limited to testing e-mails only, and customers need to set up an external SMTP provider for outgoing e-mails.

Managing and running e-mail servers on premises, or deploying an EC2 instance dedicated to running an SMTP server, is costly and complex. Customers have to manage operational issues such as hardware, software installation, configuration, patching, and backups.

In this blog post, we will provide step-by-step guidance showing how you can set up Amazon SES as an external SMTP provider with Auth0 to take advantage of Amazon SES capabilities like sending email securely, globally, and at scale.

Amazon Simple Email Service (SES) is a cost-effective, flexible, and scalable email service that enables developers to send email from within any application. You can configure Amazon SES quickly to support several email use cases, including transactional, marketing, or mass email communications.

Auth0 is an identity provider that provides flexible, drop-in solution to add authentication and authorization services (Identity as a Service, or IDaaS) to customer applications. Auth0’s built-in email infrastructure should be used for testing emails only. Auth0 allows you to configure your own SMTP email provider so you can more completely manage, monitor, and troubleshoot your email communications.

Overview of solution

In this blog post, we’ll show you how to perform the steps below to complete the integration between Amazon SES and Auth0:

  • Amazon SES setup for sending emails with SMTP credentials and API credentials
  • Auth0 setup to configure Amazon SES as an external SMTP provider
  • Testing the Configuration

The following diagram shows the architecture of the solution.

Prerequisites

Amazon SES Setup

As a first step, you must configure a "Sandbox" account within Amazon SES and verify a sender email address for initial testing. Once all the setup steps are successful, you can move this account to production and SES will then accept all emails; for more details on this topic, please see the Amazon SES documentation.

1. Log in to the Amazon SES console and choose the Verify a New Email Address button.

2. Once the verification is complete, the Verification Status will change to verified (shown in green).

3. You need to create SMTP credentials, which will be used by Auth0 for sending emails. To create the credentials, click on SMTP settings in the left menu and press the Create My SMTP Credentials button.

Please note down the Server Name as it will be required during Auth0 setup.

4. Enter a meaningful username, like autho-ses-user, and click on the Create button in the bottom right of the page.

5. You can see the SMTP username and password on the screen, and you can also download the SMTP credentials into a csv file as shown below.

Please note the SMTP User name and SMTP Password as it will be required during Auth0 setup.

6. You need the Access key ID and Secret access key of the SES IAM user autho-ses-user, created in step 3, for configuring Amazon SES with API credentials in Auth0.

  • Navigate to the AWS IAM console and click on Users in left menu
  • Double click on autho-ses-user IAM user and then, click on Security credentials

  • Choose the Create access key button to create a new Access key ID and Secret access key. You can see the Access key ID and Secret access key on the screen, and you can also download them into a csv file as shown below.

Please note down the Access key ID and Secret access key as they will be required during the Auth0 setup.
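If you want to script parts of this setup, the following boto3 sketch covers the sender address verification and the API credentials; it does not generate the SMTP password, which is easiest to create from the SES console as shown above. The email address is a placeholder, and the IAM user name is the one created earlier:

import boto3

# Hypothetical sketch: verify the sender address and create API credentials
# for the IAM user that Auth0 will use.
ses = boto3.client("ses", region_name="us-west-1")
iam = boto3.client("iam")

# Sends the verification email to the sender address (step 1 above).
ses.verify_email_identity(EmailAddress="sender@example.com")  # placeholder address

# Creates the Access key ID / Secret access key pair (step 6 above).
keys = iam.create_access_key(UserName="autho-ses-user")
print(keys["AccessKey"]["AccessKeyId"], keys["AccessKey"]["SecretAccessKey"])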

Auth0 Setup

To ensure that emails can be sent from Auth0 through your Amazon SES SMTP endpoint, you need to configure the Amazon SES details in Auth0. There are two ways you can use Amazon SES credentials with Auth0: one with SMTP credentials and the other with API credentials.

1. Navigate to the Auth0 Dashboard, select Branding and then Email Provider from the left menu. Enable the Use my own email provider button as shown below.

2. Let us start with Auth0 configuration with Amazon SES SMTP credentials.

  • Click on SMTP Provider option as shown below

  • Provide the SMTP Provider settings as shown below and then click on the Save button to complete the setup.
    • From: Your from email address.
    • Host: Your Amazon SES Server name as created in step 2 of Amazon SES setup. For this example, it is email-smtp.us-west-1.amazonaws.com
    • Port: 465
    • User Name: Your Amazon SES SMTP user name as created in step 4 of Amazon SES setup.
    • Password: Your Amazon SES SMTP password as created in step 4 of Amazon SES setup.

  • Choose the Send test email button to test the Auth0 configuration with Amazon SES SMTP credentials.
  • You can look at the Auth0 logs to validate your test as shown below.

  • If you have configured it successfully, you should receive an email from auth0 as shown below.

3. Now, complete Auth0 configuration with Amazon SES API credentials.

  • Click on Amazon SES as shown below

  • Provide the Amazon SES settings as shown below and then click on the Save button to complete the setup.
    • From: Your from email address.
    • Key Id: Your autho-ses-user IAM user’s Access key ID as created in step 5 of Amazon SES setup.
    • Secret access key: Your autho-ses-user IAM user’s Secret access key as created in step 5 of Amazon SES setup.
    • Region: For this example, choose us-west-1.

  • Click on the Send test email button to test Auth0 configuration with Amazon SES API credentials.
  • You can look at the Auth0 logs and, if you have configured it successfully, you should receive an email from Auth0 as illustrated in the Auth0 configuration with Amazon SES SMTP credentials section.

Conclusion

In this blog post, we have demonstrated how to set up Amazon SES as an external SMTP email provider with Auth0, since Auth0’s built-in email infrastructure is limited to testing emails. We have also demonstrated how quickly and easily you can set up Amazon SES with SMTP credentials and with API credentials. With this solution you can use your own Amazon SES instance with Auth0 as an email provider. You can also get a jump start by checking the Amazon SES Developer Guide, which provides guidance on using Amazon SES to send and receive email from your own email addresses and domains in an easy, cost-effective way.

About the authors

Raghavarao Sodabathina

Raghavarao Sodabathina

Raghavarao Sodabathina is an Enterprise Solutions Architect at AWS. His areas of focus are Data Analytics, AI/ML, and the Serverless Platform. He engages with customers to create innovative solutions that address customer business problems and accelerate the adoption of AWS services. In his spare time, Raghavarao enjoys spending time with his family, reading books, and watching movies.

 

Pawan Matta

Pawan Matta is a Boston-based Gametech Solutions Architect for AWS. He enjoys working closely with customers and supporting their digital native business. His core areas of focus are management and governance and cost optimization. In his free time, Pawan loves watching cricket and playing video games with friends.

Apple Mail’s iOS15 Privacy Protection Impact to Senders

Post Syndicated from Matt Strzelecki original https://aws.amazon.com/blogs/messaging-and-targeting/apple-mails-ios15-privacy-protection-impact-to-senders-2/

On June 7th at Apple’s Worldwide Developer’s Conference (WWDC 2021), Apple announced that Apple Mail users can now choose to use Apple Mail Privacy Protection. Apple Mail Privacy Protection allows iOS to privately load remote message content, which hides recipients’ mail activity information such as IP address and user agent, including geolocation and the device(s) used to engage with the message. It eliminates the open as a reliable metric for evaluating user engagement on the sender’s side, because all tracking pixels and images are cached and fired as the message hits Apple Mail. Apple is doing this to protect user information and increase privacy, while also helping to facilitate a richer user experience: Apple Mail users can confidently open, read, and engage with messages without all of their email interactions being tracked through remote images and tracking pixels. The result is that all messages with Apple Mail Privacy Protection enabled register an open regardless of whether the recipient has actually read the email. The end user will also have more confidence in the security of the message, including its links.

When a user starts Apple Mail on their iOS device, emails to that user are initiated for download to their device but are first cached by Apple, including all images and pixels, on a proxy server that does not expose individual recipient IP addresses but rather a generic IP of the Apple cache. This happens regardless of whether the user actually opens the mail at that time. If the user opens the email, it pulls the message from the Apple cache rather than from the original sending source, typically an email service provider (ESP). As a result, senders will not have open tracking insight, as all tracking images and pixels fire when the messages are downloaded to the Apple cache.

Apple Mail Privacy Protection applies to email opened in the Apple Mail app. If a user engages with messages through another mail application, such as the Gmail app, Apple Mail Privacy Protection is not applied. Apple Mail Privacy Protection is not enabled by default, but when you first launch the Apple Mail app on iOS 15 you are prompted to enable privacy protection, which most users are expected to turn on.

Impact to Marketers

There will be a major impact on marketers who rely heavily on open rates as a conversion metric for user engagement, as open data will be skewed: tracking pixels will fire regardless of whether a recipient actually engages with the message. However, other data points and user activity will still be available, such as click-through rates, onsite activity, and conversion history. These metrics will need to be relied upon to supplement open tracking data. Additionally, email deliverability best practices will be more important than ever to help maintain healthy lists and a responsive user base. Best practices such as confirmed opt-in list building, list maintenance and hygiene, consistent sending patterns and cadence, and honoring opt-outs and complaints will be even more important for marketers to adhere to as they adjust to the new Mail Privacy Protection feature.

While Mail Privacy Protection reduces visibility of open rates, there are benefits to the user experience as user trust in messages received through Apple Mail increases. For example, users who previously chose to receive text-only messages to protect their privacy will now receive the richer content of the full message, providing a better experience while engaging with it. The full load of images and content will be delivered to recipients, who will have a much higher sense of security in reading, ingesting, and acting on the email and its content. Prior to Apple Mail Privacy Protection there could be skepticism about URLs and links within messages, leading to more deletes or false positives, and potentially also resulting in more complaints and/or unsubscribes.

There are other benefits of Apple’s Mail Privacy Protection to marketers, such as validation of email addresses. Since emails are cached when messages are initiated for download to a device, and the tracking image or pixel fires as the message is downloaded to the Apple cache, this validates the existence of the email address. This does not mean you should use the feature as a validation tool: mailbox providers such as Gmail will still evaluate senders in part on list hygiene, and high invalid request rates will still lead to negative sender reputation with those providers. Confirmed opt-in practices are going to be even more crucial for managing healthy, long-term lists than they were prior to Apple Mail Privacy Protection. If a marketer is unsure about opt-in status, consider creating a re-confirmation campaign and only adding back recipients that re-confirm the opt-in by clicking a confirmation link in the message.

Conclusion

Email is still the most used tool to communicate, whether business-to-business, business-to-consumer, or peer-to-peer, especially when it comes to marketing. Marketers need to continue to evolve and be creative when sending messages to their recipients, because email, as it relates to privacy and security, will continue to evolve and leave behind marketers who don’t keep pace. While Apple’s Mail Privacy Protection reduces open rate visibility, it provides its user base with more security and confidence in the messages delivered to their devices. That confidence can allow marketers to focus on developing richer content for a better user experience and drive conversions rather than just opens.

Developing and managing a list with proper confirmed opt-in methods are crucial to developing long-term email lists and the trust of your recipients. The implementation of Apple Mail Privacy Protection reinforces this principle.

Lastly, email privacy and security will continue to advance, and marketers along with email service providers should not try to “get around” these privacy features; rather, they need to understand that these features are intended to help the end user and your customers. Work within the ideology of providing customers what they want to receive, nothing more and nothing less, and you can help your emails thrive. Stay tuned for more updates as they become available.

Orchestrating and Monitoring Multichannel Messaging with Amazon Pinpoint

Post Syndicated from Hamilton Oliveira original https://aws.amazon.com/blogs/messaging-and-targeting/orchestrating-and-monitoring-multichannel-messaging-with-amazon-pinpoint/

The union of marketing and technology (MarTech) has helped make communications and customer interactions more dynamic and personalized. In a multichannel environment with increasingly connected customers, it is essential for a MarTech system to orchestrate a digital marketing strategy using customers’ preferred channels, in addition to monitoring their effectiveness during these engagements.

Companies in a variety of industries, from financial and retail to manufacturing, seek to communicate with customers in the most efficient way, at the right time and on the right channels. One way to facilitate this communication is to engage the customer in a personalized multi-step experience, or journey. Amazon Pinpoint is a tool that gives marketers the flexibility to create multichannel campaigns and monitor end user interactions such as email opens and clicks.

In this blog post we’ll go deeper into how Amazon Pinpoint can be configured for customer interactions and orchestration. We’ll also learn how to monitor and observe the results of these interactions through other AWS services that complement the MarTech stack.

Enabling Multi-Channel on Amazon Pinpoint

Sign in to the Amazon Pinpoint console and choose a region where the service is available. To organize the settings, campaigns, segments, and data, marketers can create a project on Amazon Pinpoint. To do this, simply specify a name for the project in the Get started box and select Create a Project.

After creating the project, a number of options related to the newly created project will appear on the menu on the left.

The first step to getting a project running is to activate the desired channels. A channel represents the platform through which you engage your audience segment with messages.  Currently Amazon Pinpoint supports push notifications, email, SMS, voice and the creation of custom channels such as WhatsApp, Facebook Messenger or any other service that allows API integrations. In this blog post we will use the native Amazon Pinpoint channels: email, push notifications and SMS.

Let’s start by configuring the email channel. From the menu related to the newly created project, navigate to Settings → Email and follow step 5 of Creating an Amazon Pinpoint project with email support.

After configuring the email channel, configure the SMS channel by navigating to Settings → SMS and Voice. Follow the walkthrough available in Setting up the Amazon Pinpoint SMS channel from step 5. Then activate a phone number for the SMS service by following the steps in Requesting a number.

Note that Amazon Pinpoint supports more types of phone numbers in the United States than in other countries. Please review the available numbers within the United States and other countries. For testing in the United States a Toll Free Number (TFN) can be provisioned to the account immediately.

Remember that the usage of AWS services may incur costs; for detailed information about the costs of each service, by region, please refer to that service's pricing page.

(Optional) Activate the push notification channel by going to Settings → Push notifications and following from step 5 of the guide Setting up Amazon Pinpoint mobile push channels.

At the end of the setup, when accessing the Settings menu of the created project, you will see a screen similar to the following image.

We’ve now finished the channel configuration and are ready to move onto building Amazon Pinpoint Journeys.
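If you prefer to script the channel setup rather than use the console, the following boto3 sketch enables the email and SMS channels; the project ID, sender address, and SES identity ARN are placeholders you would replace with your own values:

import boto3

# Hypothetical sketch: enable the email and SMS channels for the project.
pinpoint = boto3.client("pinpoint")
application_id = "your-pinpoint-project-id"  # placeholder project ID

pinpoint.update_email_channel(
    ApplicationId=application_id,
    EmailChannelRequest={
        "Enabled": True,
        "FromAddress": "sender@example.com",  # placeholder sender address
        "Identity": "arn:aws:ses:us-east-1:111122223333:identity/example.com",
    },
)

pinpoint.update_sms_channel(
    ApplicationId=application_id,
    SMSChannelRequest={"Enabled": True},
)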

Configuring Multi-Channel Experiences on Amazon Pinpoint Journeys

Now, let’s create a multichannel journey based on an external event. A journey is a personalized engagement experience made up of multiple steps across multiple channels. Consider, for example, a financial institution that wants to notify a customer, over the customer’s preferred channel, to activate a travel notice.

To simulate this use case, we will insert some endpoints. An endpoint represents a destination that you can send messages to, and a user can have one or more endpoints.

The example below is a JSON document with 4 endpoints for 3 users, since one of the users has two endpoints on two different channels. You should change the addresses to your own test email addresses, phone numbers, and push tokens before using the example.

Note that if your account is still in the sandbox these will need to be verified email addresses.

If you only have access to a single email address you can use labels by adding a plus sign (+) followed by a string of text after the local part of the address and before the at (@) sign.  For example: [email protected] and [email protected]

Then, the following steps:

  1. Create a json file based on the example below.
  2. Update the Address fields with your test email addresses and phone numbers.
  3. Use the AWS CLI to import the JSON file created in step 1 (a boto3 alternative is sketched after the JSON below).
{
    "Item": [
        {
            "ChannelType": "EMAIL",
            "Address": "[email protected]",
            "Attributes": {
                "PreferredChannel": ["N"]
            },
            "Id": "example_endpoint_1",
            "User": {
                "UserId": "example_user_1",
                "UserAttributes": {
                    "FirstName": ["Richard"],
                    "LastName": ["Roe"]
                }
            }
        },
        {
            "ChannelType": "SMS",
            "Address": "+16145550100",
            "Attributes": {
                "PreferredChannel": ["Y"]
            },
            "Id": "example_endpoint_1b",
            "User": {
                "UserId": "example_user_1",
                "UserAttributes": {
                    "FirstName": ["Richard"],
                    "LastName": ["Roe"]
                }
            }
        },
        {
            "ChannelType": "SMS",
            "Address": "+16145550102",
            "Attributes": {
                "PreferredChannel": ["Y"]
            },
            "Id": "example_endpoint_2",
            "User": {
                "UserId": "example_user_2",
                "UserAttributes": {
                    "FirstName": ["Mary"],
                    "LastName": ["Major"]
                }
            }
        },
        {
            "ChannelType": "APNS",
            "Address": "1a2b3c4d5e6f7g8h9i0j1k2l3m4n5o6p7q8r9s0t1u2v3w4x5y6z7a8b9c0d1e2f",
            "Attributes": {
                "PreferredChannel": ["Y"]
            },
            "Id": "example_endpoint_3",
            "User": {
                "UserId": "example_user_3",
                "UserAttributes": {
                    "FirstName": ["Wang"],
                    "LastName": ["Xiulan"]
                }
            }
        }
    ]
}
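The import in step 3 can be done with the AWS CLI update-endpoints-batch command; the following boto3 sketch is an equivalent alternative, assuming the document above is saved as endpoints.json and that you replace the placeholder project ID with your own:

import json

import boto3

# Hypothetical sketch: import the endpoints defined in the JSON file above.
pinpoint = boto3.client("pinpoint")

with open("endpoints.json") as f:
    batch = json.load(f)  # the {"Item": [...]} document shown above

pinpoint.update_endpoints_batch(
    ApplicationId="your-pinpoint-project-id",  # placeholder project ID
    EndpointBatchRequest=batch,
)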

Once the endpoints are inserted, let’s create 3 segments to represent each preferred channel — Email, Push Notifications, and SMS:

  1. Navigate to your project in the Amazon Pinpoint Console, choose Segments and then Create a segment.
  2. Select Build a segment.
  3. Provide a name for your segment, for example, SMS Preferred.
  4. Configure Segment Group 1 following the steps below to filter the endpoints where the preferred channel is SMS.
    1. Under Base segments, select Include any audiences
    2. Choose Add criteria and choose Channel Types → SMS.
    3. Choose Add filter, select Custom Endpoint Attributes → PreferredChannel, Operator Is, and in the dropdown choose Y.

Follow the same steps above for the Push and Email channels, choosing each of these channels in step 4.2. When you finish the configuration, you will have a result similar to the one presented below.
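The same segments can be created from code. The following is a hedged boto3 sketch of the SMS Preferred segment; the other two segments differ only in the channel type value, and the project ID is a placeholder:

import boto3

# Hypothetical sketch: a segment of endpoints whose channel is SMS and whose
# custom attribute PreferredChannel is "Y".
pinpoint = boto3.client("pinpoint")

pinpoint.create_segment(
    ApplicationId="your-pinpoint-project-id",  # placeholder project ID
    WriteSegmentRequest={
        "Name": "SMS Preferred",
        "SegmentGroups": {
            "Include": "ALL",
            "Groups": [
                {
                    "Type": "ANY",
                    "SourceType": "ANY",
                    "Dimensions": [
                        {
                            "Demographic": {
                                "Channel": {"DimensionType": "INCLUSIVE", "Values": ["SMS"]}
                            },
                            "Attributes": {
                                "PreferredChannel": {
                                    "AttributeType": "INCLUSIVE",
                                    "Values": ["Y"],
                                }
                            },
                        }
                    ],
                }
            ],
        },
    },
)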

Next, let’s create the message templates for each of the channels. Follow the step-by-step in the User Guide for each of the following channels:

You should see the following:

Next, let’s create the journey to notify users when a travel notice event occurs.

  1. Under your project Amazon Pinpoint Console, navigate to Journeys and choose Create journey.
    1. If this is your first time creating a Journey, click through the help messages
  2. Name your journey Travel Notice.
  3. Choose Set entry condition
    1. In Choose how to start the journey, select: Add participants when they perform an activity.
    2. In the field Events enter TravelNoticeAlert
    3. Choose Save.
  4. Click Add activity under the Journey Entry box and select Multivariate split
    1. Add 2 new branches by selecting Add Another Branch
    2. For Branch A, under Choose a condition type, select Segment and for Segments choose Email Preferred
    3. For Branch B, under Choose a condition type, select Segment and for Segments choose SMS Preferred
    4. For Branch C, under Choose a condition type, select Segment and for Segments choose Push Preferred
    5. Leave everything else as the default values and select Save
  5. Finally, add a message sending activity for each segment.
    1. Under Branch A, select Add Activity, choose Send an email, then Choose an email template and select the template you created before for email channel.
    2. Choose Save.
    3. Under Branch B, select Add Activity, choose Send an SMS message, then Choose an SMS template and select the template you created before for SMS channel.
    4. Under Origination phone number, select the phone you configured when creating the SMS Channel
    5. Choose Save.
    6. Under Branch C, select Add Activity, choose Send a push notification activity, then Choose a push notification template and select the template you created before for push channel.
    7. Choose Save.
    8. When you complete these steps your journey will have a similar structure to the one presented below.
  6. Choose Review
    1. Under Review your journey choose Next, Mark as reviewed and finally Publish.
    2. Wait for the Journey to begin before continuing.

Installing Event Monitoring Components on Amazon Pinpoint

We can monitor and analyze the events generated by Amazon Pinpoint in real time by installing the Digital User Engagement Events Database solution, which is a reference implementation that installs the necessary services to track and query Amazon Pinpoint events.

To install this solution, follow the walkthrough available at Digital User Engagement Events Database Automated Deployment making sure to select the same region you used to configure Pinpoint earlier.

In Step 1. Launch the stack, for the Amazon Pinpoint Project ID field enter the Project ID that you created earlier, and leave the other fields as default. Wait for the end of the solution deployment. It will create a bucket in Amazon S3, a delivery stream in Amazon Kinesis Firehose, and a database and views in Amazon Athena, plus an AWS Lambda function responsible for partitioning the data.

Remember that the usage of AWS services may incur costs and for detailed information about the costs regarding the Digital User Engagement Events Database, please refer to the solution cost page.

Validating Your Multi-Channel Journey

Finally, we will use the commands below to validate the event that triggers the journey and the monitoring.

Note that we are using an Endpoint ID and not User ID.  Amazon Pinpoint will see that the endpoint is associated with a user and as such use the appropriate Preferred Channel for that user.

For the following commands you can use AWS CLI.

aws pinpoint put-events \
--application-id application-id \
--events-request '{"BatchItem": { "example_endpoint_1": { "Endpoint": {}, "Events": { "TravelNoticeAlert": {"EventType": "TravelNoticeAlert", "Timestamp": "2021-03-09T08:00:00Z"}}}}}'
aws pinpoint put-events \
--application-id application-id \
--events-request '{"BatchItem": { "example_endpoint_2": { "Endpoint": {}, "Events": { "TravelNoticeAlert": {"EventType": "TravelNoticeAlert", "Timestamp": "2021-03-09T08:00:00Z"}}}}}'
aws pinpoint put-events \
--application-id application-id \
--events-request '{"BatchItem": { "example_endpoint_3": { "Endpoint": {}, "Events": { "TravelNoticeAlert": {"EventType": "TravelNoticeAlert", "Timestamp": "2021-03-09T08:00:00Z"}}}}}'

application-id is your Amazon Pinpoint project ID. It can be accessed within AWS Pinpoint Console.

The value for the EventType parameter is the same you defined during the configuration of the Event field within the journey. In our example the value is TravelNoticeAlert.

Monitoring the Events of Your Multi-Channel Journey

Amazon Pinpoint natively offers a set of dashboards that can be accessed through the Analytics menu. However, with the architecture proposed in this blog post it is possible to perform more detailed analysis. Navigate to the Amazon Athena console.

  1. Choose the Database due_eventdb that was configured by the solution above.
  2. Under the New query tab, copy and paste the statement below and choose Run query. The statement below creates a view that returns all endpoints to which SMS messages have been sent, with the delivery status at the telephone carrier. For more information about views, see Working with Views in the Amazon Athena User Guide. Note that you may need to configure an S3 bucket to store Athena query results.
    CREATE OR REPLACE VIEW sms_carrier_delivery AS
    SELECT event_type,
            client.client_id,
            from_unixtime(event_timestamp/1000) event_date,
            attributes['journey_activity_id'] journey_activity_id,
            attributes['destination_phone_number'] destination_phone_number, 
            attributes['record_status'] record_status
    FROM "due_eventdb"."all_events"
    WHERE event_type = '_SMS.SUCCESS'
    ORDER BY event_timestamp
  3. Open a new tab, copy and paste the following query, and select Run query. The command below creates a view that returns all endpoints to which SMS were sent, the message type (transactional or promotional), and the cost of sending.
    CREATE OR REPLACE VIEW sms_pricing AS
    SELECT event_type,
            client.client_id,
            from_unixtime(event_timestamp/1000) event_date,
            attributes['destination_phone_number'] destination_phone_number, 
            attributes['message_type'] message_type,
            metrics.price_in_millicents_usd/100000 sms_message_price
    FROM "due_eventdb"."all_events"
    WHERE event_type = '_SMS.SUCCESS'
    ORDER BY event_timestamp

To see all of the events available please refer to the Events Database Data Dictionary.
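You can also run these views programmatically. The following is a minimal boto3 sketch that queries the sms_pricing view with Amazon Athena; the S3 output location is a placeholder for a bucket you own.

import time
import boto3

athena = boto3.client("athena")

# Placeholder: an S3 location you own for Athena query results.
OUTPUT_LOCATION = "s3://your-athena-results-bucket/pinpoint/"

query = athena.start_query_execution(
    QueryString="SELECT * FROM sms_pricing LIMIT 50",
    QueryExecutionContext={"Database": "due_eventdb"},
    ResultConfiguration={"OutputLocation": OUTPUT_LOCATION},
)
query_id = query["QueryExecutionId"]

# Poll until the query finishes, then print the rows.
while True:
    state = athena.get_query_execution(QueryExecutionId=query_id)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    for row in athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]:
        print([col.get("VarCharValue") for col in row["Data"]])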

Finally, let’s further explore other monitoring options by creating dashboards in Amazon QuickSight.

From the AWS console, go to Amazon QuickSight and, if necessary, sign up.

  1. Select the top left menu where your username is and then Manage QuickSight.
    1. Select Security & permissions
    2. On QuickSight access to AWS services, select Add or remove.
    3. Select the Amazon Athena option, choose Next, and then, under S3 Buckets Linked To QuickSight Account:
      1. If the check box is clear, enable the check box next to Amazon S3.
      2. If the check box is already enabled, choose Details, and then choose Select S3 buckets.
    4. Check the S3 bucket created by the Digital User Engagement Events Database solution. If you have questions about the bucket name, check the Outputs tab for the value for the Dues3DataLakeName key of the CloudFormation stack you created.
    5. Select Finish and Update.
  2. Go back to the Amazon QuickSight home screen and select Datasets and then New dataset.
  3. Choose Athena.
  4. In the Data source name field, enter Pinpoint Dataset.
  5. Choose Validate connection, and Create data source.
    1. In the Choose your table window, under Database: contain sets of tables, select due_eventdb and then the sms_carrier_delivery table.
    2. Select Edit/Preview data.
    3. On the dataset definition screen, choose Save.
  6. Create the remaining datasets.
    1. Choose New dataset.
    2. Scroll down to FROM EXISTING DATA SOURCES and select Pinpoint Dataset.
    3. Select Create dataset.
    4. In the Choose your table window, under Database: contain sets of tables, select due_eventdb and then the sms_pricing table.
    5. Select Edit/Preview data.
    6. On the dataset definition screen, choose Save.
    7. Repeat these steps once more, this time selecting the journey_send table.
  7. Create an analysis.
    1. Choose New analysis.
    2. For Your Datasets, choose journey_send and then Create analysis. This view was created by the Digital User Engagement Events Database solution.
    3. Under Field lists, choose journey_send_status. Amazon QuickSight will draw a chart showing journey events by status.
    4. Select the pencil icon next to Dataset and choose Add dataset.
    5. Choose sms_carrier_delivery and then Select.
    6. Choose the field record_status.
    7. Under Visual types, choose Pie chart. This chart displays the message delivery status at the carrier.
    8. Select the pencil icon next to Dataset and choose Add dataset.
    9. Choose sms_pricing and then Select.
    10. Choose sms_message_price and message_type.
    11. Under Visual types, select Donut chart. This chart displays costs by transactional or promotional message type.

The final result will be something close to the one shown in the image below:

Conclusion

In this blog post we walked through how to set up Amazon Pinpoint for an end-to-end scenario. We defined the basic components of a multi-channel journey and its monitoring, and introduced AWS services as a MarTech solution that allows companies to send notifications to their customers’ preferred channels and monitor their engagement data using Amazon Pinpoint events.

Clean up

  1. Navigate to the AWS CloudFormation console.
    1. Select the stack deployed earlier, choose Delete, and then Delete stack.
  2. Navigate to the Amazon Pinpoint console.
    1. Go to Settings > SMS and voice, select the number created during the execution of this blog post, and choose Remove phone number.
    2. Under All projects, open the created project, and then in the menu on the left select Settings > General settings. Choose Delete project, confirm the deletion by entering “delete” in the indicated field, and select Delete.
  3. Navigate to Amazon QuickSight.
    1. Delete your user.

How to Log Amazon SES details using Amazon CloudWatch

Post Syndicated from Rajat Kashyap original https://aws.amazon.com/blogs/messaging-and-targeting/how-to-log-amazon-ses-details-using-amazon-cloudwatch/

One of the use cases Amazon Simple Email Service (Amazon SES) users try to implement is to centralize the various SES notifications from different domains or email addresses to get insights about how many of those emails were delivered, bounced, or received complaints. These details can be valuable in helping you make business decisions that further strengthen your mail delivery process. Currently, SES provides various metrics such as the number of sends and rejects, but there is no direct way to log information such as the From email identity, recipient email identity, subject, timestamp, source IP address, and messageId when sending emails.

In this blog post, you will learn how to capture detailed notifications about your bounces, complaints, and deliveries and log them in Amazon CloudWatch. With a centralized logging solution, customers can keep track of the domains or email addresses from which they received complaints, identify email issues, stay informed, and even build custom dashboard capabilities. Logging the notifications in CloudWatch helps you store them for the long term, and also allows you to set up a process to back up this data and set up life-cycle policies for data retention.

Let’s quickly understand how Amazon SES categorizes whether an email was delivered, bounced, or received a complaint.

Types of notifications

A bounce occurs when a message cannot be delivered to the intended recipient. There are two types of bounces: hard bounces and soft bounces.

  • Hard bounces occur when email cannot be delivered because of a persistent issue, such as when a recipient’s email address or domain does not exist. Amazon SES will not attempt to deliver the message again.
  • Soft bounces occur when there is a temporary issue preventing the email from being delivered, such as when the recipient’s mailbox is full, when the connection to the receiving email server times out, or when there are too many simultaneous connections to the receiving mail server. When a soft bounce occurs, Amazon SES attempts to redeliver the message.

A complaint occurs when a recipient:

  • Reports that they don’t want to receive an email.
  • Clicks the “Report spam” button in their email client and complains to their email provider that such emails belong in the Spam category.

A delivery occurs when:

  • An email is delivered successfully to the recipient’s mail server.

Prerequisites

For this post, you should be familiar with the following:

Architecture Overview

The AWS CloudFormation template given in this post automatically sets up the different architecture components to capture detailed notifications about your bounces, complaints, and deliveries and log them in Amazon CloudWatch. You still have to perform some manual tasks to configure and validate components. For details, follow the steps below in sequence.


Getting Started with Solution Deployment

Prerequisite tasks to be completed before deploying the logging solution:

  1. Verify your domain and email address.
  2. Create an Amazon Simple Notification Service (Amazon SNS) topic to capture detailed notifications for bounces, complaints, or deliveries. For details, please refer to Create SNS Topic.
  3. Set up notifications at the domain or From email address level for the notifications you want to log to CloudWatch (a scripted alternative is shown in the sketch after this list).
    1. Click on the Verified identity domain name for which you want to set up bounce notifications.
    2. On the Notifications tab, in the Feedback notifications section, click Edit to navigate to the next page.
    3. From the drop-down, select the SNS topic ARN that you created in the prior step as the Bounce notifications topic, and click Save changes.
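If you prefer to script these prerequisites, a minimal boto3 sketch could look like the following; the identity name is a placeholder and is assumed to already be verified in Amazon SES.

import boto3

sns = boto3.client("sns")
ses = boto3.client("ses")

# Placeholder: a domain or email address that is already verified in Amazon SES.
IDENTITY = "example.com"

# Create the SNS topic that will receive the detailed bounce notifications.
topic_arn = sns.create_topic(Name="ses-bounce-notifications")["TopicArn"]

# Point the identity's bounce feedback notifications at the topic.
ses.set_identity_notification_topic(
    Identity=IDENTITY,
    NotificationType="Bounce",
    SnsTopic=topic_arn,
)
print(f"Bounce notifications for {IDENTITY} will be published to {topic_arn}")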

Note: For this blog post, you will learn how to log bounce notifications. To capture complaint or delivery notifications, you can configure additional SNS topics and redeploy the CloudFormation template multiple times, choosing the Complaint and Delivery event types to capture all notifications.

Once the prerequisite tasks are completed, the logging solution is ready to be deployed.

As a first step, download the ses_bounce_logging_blog.yml CloudFormation template from the link below. Once you have saved it on your local machine, follow the next steps to install the solution.

https://github.com/aws-samples/digital-user-engagement-reference-architectures/blob/master/cloudformation/ses_bounce_logging_blog.yml

Steps to run the CloudFormation template:

  1. Go to CloudFormation Console and Click Create Stack.
  2. Select Upload template file radio button and Click Choose file to upload ses_bounce_logging_blog.yml file you downloaded earlier.
  3. Click Next on Create Stack screen.
  4. Specify Stack Name, for example ses-bounce-logging.
  5. Change default value of CloudWatchGroupName if needed.
  6. Select the Event Type (“Bounce”, “Complaint”, or “Delivery”) you wish to track.
  7. In the SNSTopicName parameter field, enter the Amazon Resource Name (ARN) of the Amazon SNS topic created in the prior step to capture bounce notifications, and click Next.
  8. Click Next on Configure stack options screen.
  9. Select “I acknowledge that AWS CloudFormation might create IAM resources” and click Create Stack.

Wait for the CloudFormation template to complete, and then verify that the resources in the CloudFormation stack have been created. Click on individual resources and verify that:

  • The IAM role was created.
  • The Lambda function that captures and logs bounce notifications was created.
  • The Lambda function subscription to the SNS topic was created and confirmed.

  • You can also verify the SNS and Lambda integration in the Lambda console (a sketch of such a handler follows).
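To make the flow concrete, the following is a minimal Python sketch of a handler that writes an SNS-delivered bounce notification to a CloudWatch log group. It illustrates the pattern only and is not the exact function deployed by the CloudFormation template; the log group name is assumed to match the one used in this post and to already exist.

import time
import boto3

logs = boto3.client("logs")

# Assumption: matches the CloudWatchGroupName parameter used when deploying the stack.
LOG_GROUP = "/aws/ses/bounce_logs"


def lambda_handler(event, context):
    stream_name = context.aws_request_id
    try:
        logs.create_log_stream(logGroupName=LOG_GROUP, logStreamName=stream_name)
    except logs.exceptions.ResourceAlreadyExistsException:
        pass

    # Each SNS record carries the full SES notification as a JSON string.
    log_events = [
        {
            "timestamp": int(time.time() * 1000),
            "message": record["Sns"]["Message"],
        }
        for record in event["Records"]
    ]

    logs.put_log_events(
        logGroupName=LOG_GROUP,
        logStreamName=stream_name,
        logEvents=log_events,
    )
    return {"statusCode": 200}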

How to test the solution?

You can test the bounce scenario using the Amazon SES mailbox simulator. When you send an email to the selected mailbox simulator address, a simulated detailed notification is sent back to the Amazon SNS topic configured in the notification step of the prerequisites section. You can send the test message using the AWS CLI, an AWS SDK, or the Amazon SES console for the particular domain that you have configured to receive notifications.

bounce@simulator.amazonses.com (in scope of this blog post; used to test bounce notifications)

Use the addresses below if you want to test other scenarios:

complaint@simulator.amazonses.com

success@simulator.amazonses.com
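You can also trigger a simulated bounce with an SDK call. The following is a minimal boto3 sketch; the From address is a placeholder that must belong to a verified identity in your account.

import boto3

ses = boto3.client("ses")

# Placeholder: a From address on the verified identity you configured above.
SENDER = "notifications@example.com"

ses.send_email(
    Source=SENDER,
    Destination={"ToAddresses": ["bounce@simulator.amazonses.com"]},
    Message={
        "Subject": {"Data": "Bounce logging test"},
        "Body": {"Text": {"Data": "This message triggers a simulated hard bounce."}},
    },
)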

Screenshots showing how to send bounce email using AWS Console

  1. Go to Amazon SES -> Select Verified Domain Identity Checkbox.
  2. Click on Send a Test Email Button.
  3. Fill in the required information, as shown in the snapshot below, such as the From address and the test Scenario (Bounce), and click Send test email.
  4. As soon as the bounce notification is received by the SNS topic, it is sent to the Lambda function and finally logged in the /aws/ses/bounce_logs CloudWatch log group.

Clean up

When you’re done with this exercise, complete the following steps to delete your resources and stop incurring costs:

  1. Delete the SNS topic that you created.
  2. On the CloudFormation console, select your stack and choose Delete.

This cleans up all the resources created by the stack.

Conclusion

In this blog post, we have shown you how to capture and build a solution for bounce notifications. We explained how to combine Amazon Simple Notification Service, AWS Lambda, and Amazon CloudWatch to create the logging solution. To enhance the visualization, you can use metric filters in Amazon CloudWatch, allowing you to graph the logged data and make it searchable. Because the notifications are stored in Amazon CloudWatch, you can export the logs to Amazon S3 for long-term retention. You can modify the CloudFormation template in this blog and deploy it multiple times to capture complaint or delivery notifications for your business use cases.
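As an illustration of that idea, the following is a minimal boto3 sketch that turns logged hard bounces into a custom CloudWatch metric; the log group name, filter pattern, and metric names are assumptions based on the SES notification format rather than part of the deployed template.

import boto3

logs = boto3.client("logs")

# Assumptions: the log group from this post and JSON-formatted SES bounce notifications.
logs.put_metric_filter(
    logGroupName="/aws/ses/bounce_logs",
    filterName="HardBounceCount",
    filterPattern='{ $.bounce.bounceType = "Permanent" }',
    metricTransformations=[{
        "metricName": "HardBounces",
        "metricNamespace": "SES/BounceLogging",
        "metricValue": "1",
    }],
)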

 


About the Authors

Rajat Kashyap is a Solutions Architect at AWS. He is a Containers, DevOps, Big Data, Analytics, and AI/ML enthusiast and loves helping customers design secure, reliable, scalable, and cost-effective solutions on AWS. As a trusted customer advisor, he helps organizations understand best practices for advanced AWS cloud-based solutions and helps them migrate and modernize their workloads using Well-Architected design principles and best practices.

 

 

Ashish Mehra is a Solutions Architecture Manager at AWS. He is a Middleware, Serverless, IoT and Containers enthusiast and loves helping customers design secure, reliable and cost-effective solutions on AWS.

Blue/Green deployment with AWS Developer tools on Amazon EC2 using Amazon EFS to host application source code

Post Syndicated from Rakesh Singh original https://aws.amazon.com/blogs/devops/blue-green-deployment-with-aws-developer-tools-on-amazon-ec2-using-amazon-efs-to-host-application-source-code/

Many organizations building modern applications require a shared and persistent storage layer for hosting and deploying data-intensive enterprise applications, such as content management systems, media and entertainment, distributed applications like machine learning training, etc. These applications demand a centralized file share that scales to petabytes without disrupting running applications and remains concurrently accessible from potentially thousands of Amazon EC2 instances.

Simultaneously, customers want to automate the end-to-end deployment workflow and leverage continuous methodologies utilizing AWS developer tools services for performing a blue/green deployment with zero downtime. A blue/green deployment is a deployment strategy wherein you create two separate, but identical environments. One environment (blue) is running the current application version, and one environment (green) is running the new application version. The blue/green deployment strategy increases application availability by generally isolating the two application environments and ensuring that spinning up a parallel green environment won’t affect the blue environment resources. This isolation reduces deployment risk by simplifying the rollback process if a deployment fails.

Amazon Elastic File System (Amazon EFS) provides a simple, scalable, and fully-managed elastic NFS file system for use with AWS Cloud services and on-premises resources. It scales on demand, thereby eliminating the need to provision and manage capacity in order to accommodate growth. Utilize Amazon EFS to create a shared directory that stores and serves code and content for numerous applications. Your application can treat a mounted Amazon EFS volume like local storage. This means you don’t have to deploy your application code every time the environment scales up to multiple instances to distribute load.

In this blog post, I will guide you through an automated process to deploy a sample web application on Amazon EC2 instances utilizing Amazon EFS mount to host application source code, and utilizing a blue/green deployment with AWS code suite services in order to deploy the application source code with no downtime.

How this solution works

This blog post includes a CloudFormation template to provision all of the resources needed for this solution. The CloudFormation stack deploys a Hello World application on Amazon Linux 2 EC2 Instances running behind an Application Load Balancer and utilizes Amazon EFS mount point to store the application content. The AWS CodePipeline project utilizes AWS CodeCommit as the version control, AWS CodeBuild for installing dependencies and creating artifacts,  and AWS CodeDeploy to conduct deployment on EC2 instances running in an Amazon EC2 Auto Scaling group.

Figure 1 below illustrates our solution architecture.

Sample solution architecture

Figure 1: Sample solution architecture

The event flow in Figure 1 is as follows:

  1. A developer commits code changes from their local repo to the CodeCommit repository. The commit triggers CodePipeline execution.
  2. CodeBuild execution begins to compile source code, install dependencies, run custom commands, and create deployment artifact as per the instructions in the Build specification reference file.
  3. During the build phase, CodeBuild copies the source-code artifact to Amazon EFS file system and maintains two different directories for current (green) and new (blue) deployments.
  4. After successfully completing the build step, CodeDeploy deployment kicks in to conduct a Blue/Green deployment to a new Auto Scaling Group.
  5. During the deployment phase, CodeDeploy mounts the EFS file system on new EC2 instances as per the CodeDeploy AppSpec file reference and conducts other deployment activities.
  6. After successful deployment, a Lambda function triggers in order to store a deployment environment parameter in Systems Manager parameter store. The parameter stores the current EFS mount name that the application utilizes.
  7. The AWS Lambda function updates the parameter value during every successful deployment with the current EFS location.

Prerequisites

For this walkthrough, the following are required:

Deploy the solution

Once you’ve assembled the prerequisites, download or clone the GitHub repo and store the files on your local machine. Utilize the commands below to clone the repo:

mkdir -p ~/blue-green-sample/
cd ~/blue-green-sample/
git clone https://github.com/aws-samples/blue-green-deployment-pipeline-for-efs

Once completed, utilize the following steps to deploy the solution in your AWS account:

  1. Create a private Amazon Simple Storage Service (Amazon S3) bucket by using this documentation
    AWS S3 console view when creating a bucket

    Figure 2: AWS S3 console view when creating a bucket

     

  2. Upload the cloned or downloaded GitHub repo files to the root of the S3 bucket. The S3 bucket object structure should look similar to Figure 3:
    AWS S3 bucket object structure after you upload the Github repo content

    Figure 3: AWS S3 bucket object structure

     

  3. Go to the S3 bucket and select the template name solution-stack-template.yml, and then copy the object URL.
  4. Open the CloudFormation console. Choose the appropriate AWS Region, and then choose Create Stack. Select With new resources.
  5. Select Amazon S3 URL as the template source, paste the object URL that you copied in Step 3, and then choose Next.
  6. On the Specify stack details page, enter a name for the stack and provide the following input parameter. Modify the default values for other parameters in order to customize the solution for your environment. You can leave everything as default for this walkthrough.
  • ArtifactBucket– The name of the S3 bucket that you created in the first step of the solution deployment. This is a mandatory parameter with no default value.
Defining the stack name and input parameters for the CloudFormation stack

Figure 4: Defining the stack name and input parameters for the CloudFormation stack

  7. Choose Next.
  8. On the Options page, keep the default values and then choose Next.
  9. On the Review page, confirm the details, acknowledge that CloudFormation might create IAM resources with custom names, and then choose Create Stack.
  10. Once the stack creation is marked as CREATE_COMPLETE, the following resources are created:
  • A virtual private cloud (VPC) configured with two public and two private subnets.
  • NAT Gateway, an EIP address, and an Internet Gateway.
  • Route tables for private and public subnets.
  • Auto Scaling Group with a single EC2 Instance.
  • Application Load Balancer and a Target Group.
  • Three security groups—one each for ALB, web servers, and EFS file system.
  • Amazon EFS file system with a mount target for each Availability Zone.
  • CodePipeline project with CodeCommit repository, CodeBuild, and CodeDeploy resources.
  • SSM parameter to store the environment current deployment status.
  • Lambda function to update the SSM parameter for every successful pipeline execution.
  • Required IAM Roles and policies.

      Note: It may take anywhere from 10-20 minutes to complete the stack creation.

Test the solution

Now that the solution stack is deployed, follow the steps below to test the solution:

  1. Validate CodePipeline execution status

After successfully creating the CloudFormation stack, a CodePipeline execution automatically triggers to deploy the default application code version from the CodeCommit repository.

  • In the AWS console, choose Services and then CloudFormation. Select your stack name. On the stack Outputs tab, look for the CodePipelineURL key and click on the URL.
  • Validate that all steps have successfully completed. For a successful CodePipeline execution, you should see something like Figure 5. Wait for the execution to complete in case it is still in progress.
CodePipeline console showing execution status of all stages

Figure 5: CodePipeline console showing execution status of all stages

 

  2. Validate the Website URL

After completing the pipeline execution, hit the website URL on a browser to check if it’s working.

  • On the stack Outputs tab, look for the WebsiteURL key and click on the URL.
  • For a successful deployment, it should open a default page similar to Figure 6.
Sample “Hello World” application (Green deployment)

Figure 6: Sample “Hello World” application (Green deployment)

 

  3. Validate the EFS share

After the website deployed successfully, we will get into the application server and validate the EFS mount point and the application source code directory.

  • Open the Amazon EC2 console, and then choose Instances in the left navigation pane.
  • Select the instance named bg-sample and choose Connect.
  • For Connection method, choose Session Manager, and then choose Connect.

After the connection is made, run the following bash commands to validate the EFS mount and the deployed content. Figure 7 shows a sample output from running the bash commands.

sudo df -h | grep efs
ls -la /efs/green
ls -la /var/www/
Sample output from the bash command (Green deployment)

Figure 7: Sample output from the bash command (Green deployment)

 

  4. Deploy a new revision of the application code

After verifying the application status and the deployed code on the EFS share, commit some changes to the CodeCommit repository in order to trigger a new deployment.

  • On the stack Outputs tab, look for the CodeCommitURL key and click on the corresponding URL.
  • Click on the application’s HTML file.
  • Click on Edit.
  • Uncomment line 9 and comment line 10, so that the new lines look like those below after the changes:
background-color: #0188cc; 
#background-color: #90ee90;
  • Add Author name, Email address, and then choose Commit changes.

After you commit the code, the CodePipeline triggers and executes Source, Build, Deploy, and Lambda stages. Once the execution completes, hit the Website URL and you should see a new page like Figure 8.

New Application version (Blue deployment)

Figure 8: New Application version (Blue deployment)

 

On the EFS side, the application directory on the new EC2 instance now points to /efs/blue as shown in Figure 9.

Sample output from the bash command (Blue deployment)

Figure 9: Sample output from the bash command (Blue deployment)

Solution review

Let’s review the pipeline stages details and what happens during the Blue/Green deployment:

1) Build stage

For this sample application, the CodeBuild project is configured to mount the EFS file system and utilize the buildspec.yml file present in the source code root directory to run the build. Following is the sample build spec utilized in this solution:

version: 0.2
phases:
  install:
    runtime-versions:
      php: latest   
  build:
    commands:
      - current_deployment=$(aws ssm get-parameter --name $SSM_PARAMETER --query "Parameter.Value" --region $REGION --output text)
      - echo $current_deployment
      - echo $SSM_PARAMETER
      - echo $EFS_ID $REGION
      - if [[ "$current_deployment" == "null" ]]; then echo "this is the first GREEN deployment for this project" ; dir='/efs/green' ; fi
      - if [[ "$current_deployment" == "green" ]]; then dir='/efs/blue' ; else dir='/efs/green' ; fi
      - if [ ! -d $dir ]; then  mkdir $dir >/dev/null 2>&1 ; fi
      - echo $dir
      - rsync -ar $CODEBUILD_SRC_DIR/ $dir/
artifacts:
  files:
      - '**/*'

During the build job, the following activities occur:

  • Installs the latest PHP runtime version.
  • Reads the SSM parameter value in order to know the current deployment and decide which directory to utilize. The SSM parameter value flips between green and blue for every successful deployment.
  • Synchronizes the latest source code to the EFS mount point.
  • Creates artifacts to be utilized in subsequent stages.

Note: Utilize the default buildspec.yml as a reference and customize it further as per your requirement. See this link for more examples.

2) Deploy Stage

The solution is utilizing CodeDeploy blue/green deployment type for EC2/On-premises. The deployment environment is configured to provision a new EC2 Auto Scaling group for every new deployment in order to deploy the new application revision. CodeDeploy creates the new Auto Scaling group by copying the current one. See this link for more details on blue/green deployment configuration with CodeDeploy. During each deployment event, CodeDeploy utilizes the appspec.yml file to run the deployment steps as per the defined life cycle hooks. Following is the sample AppSpec file utilized in this solution.

version: 0.0
os: linux
hooks:
  BeforeInstall:
    - location: scripts/install_dependencies
      timeout: 180
      runas: root
  AfterInstall:
    - location: scripts/app_deployment
      timeout: 180
      runas: root
  BeforeAllowTraffic :
     - location: scripts/check_app_status
       timeout: 180
       runas: root  

Note: The scripts mentioned in the AppSpec file are available in the scripts directory of the CodeCommit repository. Utilize these sample scripts as a reference and modify as per your requirement.

For this sample, the following steps are conducted during a deployment:

  • BeforeInstall:
    • Installs required packages on the EC2 instance.
    • Mounts the EFS file system.
    • Creates a symbolic link to point the apache home directory /var/www/html to the appropriate EFS mount point. It also ensures that the new application version deploys to a different EFS directory without affecting the current running application.
  • AfterInstall:
    • Stops apache web server.
    • Fetches current EFS directory name from Systems Manager.
    • Runs some clean up commands.
    • Restarts apache web server.
  • BeforeAllowTraffic:
    • Checks whether the application is running correctly.
    • Exits the deployment with an error if the app returns a non-200 HTTP status code (see the sketch after this list).
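As an illustration only, a minimal Python version of that kind of health check could look like the following; the URL is a placeholder for wherever the instance serves the application locally, and the actual check_app_status script in the sample repository may differ.

import sys
import urllib.error
import urllib.request

# Placeholder: the application as served locally on the instance being deployed.
APP_URL = "http://localhost/"

try:
    with urllib.request.urlopen(APP_URL, timeout=10) as response:
        status = response.getcode()
except urllib.error.URLError as err:
    print(f"Application check failed: {err}")
    sys.exit(1)

# Any non-200 status code fails the BeforeAllowTraffic hook and stops the deployment.
if status != 200:
    print(f"Unexpected HTTP status: {status}")
    sys.exit(1)

print("Application is healthy")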

3) Lambda Stage

After completing the deploy stage, CodePipeline triggers a Lambda function in order to update the SSM parameter value with the updated EFS directory name. This parameter value alternates between “blue” and “green” to help CodePipeline identify the right EFS file system path during the next deployment.
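As a sketch of this stage, a Lambda function that flips the parameter and reports the result back to CodePipeline could look like the following; the parameter name is a placeholder for the one created by the CloudFormation stack, and the deployed function may differ in its details.

import boto3

ssm = boto3.client("ssm")
codepipeline = boto3.client("codepipeline")

# Placeholder: the SSM parameter created by the stack to track the active EFS directory.
PARAMETER_NAME = "/blue-green-sample/current-deployment"


def lambda_handler(event, context):
    job_id = event["CodePipeline.job"]["id"]
    try:
        current = ssm.get_parameter(Name=PARAMETER_NAME)["Parameter"]["Value"]
        # Alternate the value so the next build deploys to the other EFS directory.
        new_value = "blue" if current == "green" else "green"
        ssm.put_parameter(Name=PARAMETER_NAME, Value=new_value, Type="String", Overwrite=True)
        codepipeline.put_job_success_result(jobId=job_id)
    except Exception as err:
        codepipeline.put_job_failure_result(
            jobId=job_id,
            failureDetails={"type": "JobFailed", "message": str(err)},
        )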

CodeDeploy Blue/Green deployment

Let’s review the sequence of events flow during the CodeDeploy deployment:

  1. CodeDeploy creates a new Auto Scaling group by copying the original one.
  2. Provisions a replacement EC2 instance in the new Auto Scaling Group.
  3. Conducts the deployment on the new instance as per the instructions in the appspec.yml file.
  4. Sets up health checks and redirects traffic to the new instance.
  5. Terminates the original instance along with the Auto Scaling Group.
  6. After completing the deployment, it should appear as shown in Figure 10.
AWS CodeDeploy console view of a Blue/Green CodeDeploy deployment on EC2

Figure 10: AWS console view of a Blue/Green CodeDeploy deployment on EC2

Troubleshooting

To troubleshoot any service-related issues, see the following links:

More information

Now that you have tested the solution, here are some additional points worth noting:

  • The sample template and code utilized in this blog can work in any AWS region and are mainly intended for demonstration purposes. Utilize the sample as a reference and modify it further as per your requirement.
  • This solution works with single account, Region, and VPC combination.
  • For this sample, we have utilized AWS CodeCommit as version control, but you can also utilize any other source supported by AWS CodePipeline, such as Bitbucket, GitHub, or GitHub Enterprise Server.

Clean up

Follow these steps to delete the components and avoid any future incurring charges:

  1. Open the AWS CloudFormation console.
  2. On the Stacks page in the CloudFormation console, select the stack that you created for this blog post. The stack must be currently running.
  3. In the stack details pane, choose Delete.
  4. Select Delete stack when prompted.
  5. Empty and delete the S3 bucket created during deployment step 1.

Conclusion

In this blog post, you learned how to set up a complete CI/CD pipeline for conducting a blue/green deployment on EC2 instances utilizing Amazon EFS file share as mount point to host application source code. The EFS share will be the central location hosting your application content, and it will help reduce your overall deployment time by eliminating the need for deploying a new revision on every EC2 instance local storage. It also helps to preserve any dynamically generated content when the life of an EC2 instance ends.

Author bio

Rakesh Singh

Rakesh is a Senior Technical Account Manager at Amazon. He loves automation and enjoys working directly with customers to solve complex technical issues and provide architectural guidance. Outside of work, he enjoys playing soccer, singing karaoke, and watching thriller movies.

Choosing a Well-Architected CI/CD approach: Open Source on AWS

Post Syndicated from Mikhail Vasilyev original https://aws.amazon.com/blogs/devops/choosing-a-well-architected-ci-cd-approach-open-source-on-aws/

Introduction

When building a CI/CD platform, it is important to make an informed decision regarding every underlying tool. This post explores evaluating the criteria for selecting each tool focusing on a balance between meeting functional and non-functional requirements, and maximizing value.

Your first decision: Source Code Management

Source code is potentially your most valuable asset, and so we start by choosing a source code management tool. These tools normally have high non-functional requirements in order to protect your assets and to ensure they are available to the organization when needed. The requirements usually include demand for high durability, high availability (HA), consistently high throughput, and strong security with role-based access controls.

At the same time, source code management tools normally have many specific functional requirements as well. For example, the ability to provide collaborative code review in the UI, flexible and tunable merge policies including both automated and manual gates (code checks), and out-of-box UI-level integrations with numerous other tools. These kinds of integrations can include enabling monitoring, CI, chats, and agile project management.

Many teams also treat source code management tools as their portal into other CI/CD tools. They make them shareable between teams, and might prefer to stay within one single context and user interface throughout the entire DevOps cycle. Many source code management tools are actually a stack of services that support multiple steps of your CI/CD workflows from within a single UI. This makes them an excellent starting point for building your CI/CD platforms.

The first decision you need to make is whether to go with an open source solution for managing code or with AWS-managed solutions, such as AWS CodeCommit. Open source solutions include (but are not limited to) the following: Gerrit, Gitlab, Gogs, and Phabricator.

Your decision will be influenced by the amount of benefit your team can gain from the flexibility provided through open source, and by how well your team can support deploying and managing these solutions. You will also need to consider the infrastructure and management overhead cost.

Engineering teams that have the capacity to develop their own plugins for their CI/CD platforms, or who even contribute directly to open source projects, will often prefer open source solutions for the flexibility they provide. This will be especially true if they are fluent in designing and supporting their own cloud infrastructure. If the team gets more value by trading the flexibility of open source for not having to worry about managing infrastructure (especially if high availability, scalability, durability, and security are more critical), an AWS-managed solution would be a better choice.

Source Code Management Solution

When the choice is made in favor of an open-source code management solution (such as Gitlab), the next decision will be how to architect the deployment. Will the team deploy to a single instance, or design for high availability, durability, and scalability? Teams that want to design Gitlab for HA can use the following guide to proceed: Installing GitLab on Amazon Web Services (AWS)

By adopting AWS services (such as Amazon RDS, Amazon ElastiCache for Redis, and Autoscaling Groups), you can lower the management burden of supporting the underlying infrastructure in this self-managed HA scenario.

High level overview of self-managed HA Gitlab deployment

Your second decision: Continuous Integration engine

When selecting your CI engine, you might be able to benefit from additional features of previously selected solutions. Gitlab provides both source control services and built-in CI tools, called Gitlab CI. Gitlab Runners are responsible for running CI jobs, and the actual jobs are described as YML files stored in Gitlab’s git repository along with product code. For security and performance reasons, GitLab Runners should be on resources separate from your GitLab instance.

You could manage those resources or you could use one of the AWS services that can support deploying and managing Runners. The use of an on-demand service removes the expense of implementing and managing a capability that is undifferentiated heavy lifting for you. This provides cost optimization and enables operational excellence. You pay for what you use and the service team manages the underlying service.

Continuous Integration engine Solution

In an architecture example (below), Gitlab Runners are deployed in containers running on Amazon EKS. The team has less infrastructure to manage, can start focusing on development faster by not having to implement the capability, and can provision resources in an optimal way for their on-demand needs.

To further optimize costs, you can use EC2 Spot Instances for your EKS nodes. CI jobs are normally compute intensive and limited in run time. The runner jobs can easily be restarted on a different resource with little impact. This makes them tolerant of failure and the use of EC2 Spot instances very appealing. Amazon EKS and Spot Instances are supported out-of-box in Gitlab. As a result there is no integration to develop, only configuration is required.

To support infrastructure as code best practices, Runners are deployed with Helm and are stored and versioned as Helm charts. All of the infrastructure as code information used to implement the CI/CD platform itself is stored in templates such as Terraform.

High level overview of Infrastructure as Code on Gitlab and Gitlab CI

High level overview of Infrastructure as Code on Gitlab and Gitlab CI

Your third decision: Container Registry

You will be unable to deploy Runners if the container images are not available. As a result, the primary non-functional requirements for your production container registry are likely to include high availability, durability, transparent scalability, and security. At the same time, your functional requirements for a container registry might be lower. It might be sufficient to have a simple UI, and simple APIs supporting basic flows. Customers looking for a managed solution can use Amazon ECR, which is OCI compliant and supports Helm Charts.

Container Registry Solution

For this set of requirements, the flexibility and feature velocity of open source tools does not provide an advantage. Self-supporting high availability and strengthened security could be costly in implementation time and long-term management. Based on [Blog post 1 Diagram 1], an AWS-managed solution provides cost advantages and has no management overhead. In this case, an AWS-managed solution is a better choice for your container registry than an open-source solution hosted on AWS. In this example, Amazon ECR is selected. Customers who prefer to go with open-source container registries might consider solutions like Harbor.

High level overview of Gitlab CI with Amazon ECR

High level overview of Gitlab CI with Amazon ECR

Additional Considerations

Now that the main services for the CI/CD platform are selected, we will take a high level look at additional important considerations. You need to make sure you have observability into both infrastructure and applications, that backup tools and policies are in place, and that security needs are addressed.

There are many mechanisms to strengthen security including the use of security groups. Use IAM for granular permission control. Robust policies can limit the exposure of your resources and control the flow of traffic. Implement policies to prevent your assets leaving your CI environment inappropriately. To protect sensitive data, such as worker secrets, encrypt these assets while in transit and at rest. Select a key management solution to reduce your operational burden and to support these activities such as AWS Key Management Service (AWS KMS). To deliver secure and compliant application changes rapidly while running operations consistently with automation, implement DevSecOps.

Amazon S3 is durable, secure, and highly available by design, making it the preferred choice of many customers for storing EBS-level backups. Amazon S3 satisfies the non-functional requirements for a backup store. It also supports versioning and tiered storage classes, making it cost-effective as well.

Your observability requirements may emphasize versatility and flexibility for application-level monitoring. Using Amazon CloudWatch to monitor your infrastructure and then extending your capabilities through an open-source solution such as Prometheus may be advantageous. You can get many of the benefits of both open-source Prometheus and AWS services with Amazon Managed Service for Prometheus (AMP). For interactive visualization of metrics, many customers choose solutions such as open-source Grafana, available as an AWS service, Amazon Managed Service for Grafana (AMG).

CI/CD Platform with Gitlab and AWS

CI/CD Platform with Gitlab and AWS

Conclusion

We have covered how making informed decisions can maximize value and synergy between open-source solutions on AWS, such as Gitlab, and AWS-managed services, such as Amazon EKS and Amazon ECR. You can find the right balance of open-source tools and AWS services that will meet your functional and non-functional requirements, and help maximizing the value you get from those resources.

Pete Goldberg, Director of Partnerships at GitLab: “When aligning your development process to AWS Well Architected Framework, GitLab allows customers to build and automate processes to achieve Operational Excellence. As a single tool designed to facilitate collaboration across the organization, GitLab simplifies the process to follow the Fully Separated Operating Model where Engineering and Operations come together via automated processes that remove the historical barriers between the groups. This gives organizations the ability to efficiently and rapidly deploy new features and applications that drive the business while providing the risk mitigation and compliance they require. By allowing operations teams to define infrastructure as code in the same tool that the engineering teams are storing application code, and allowing your automation bring those together for your CI/CD workflows companies can move faster while having compliance and controls built-in, providing the entire organization greater transparency. With GitLab’s integrations with different AWS compute options (EC2, Lambda, Fargate, ECS or EKS), customers can choose the best type of compute for the job without sacrificing the controls required to maintain Operational Excellence.”

 

Author bio

Mikhail is a Solutions Architect for RUS-CIS. Mikhail supports customers on their cloud journeys with Well-architected best practices and adoption of DevOps techniques on AWS. Mikhail is a fan of ChatOps, Open Source on AWS and Operational Excellence design principles.

Obtaining a short code for sending text messages to US recipients – Part 2

Post Syndicated from Brent Meyer original https://aws.amazon.com/blogs/messaging-and-targeting/obtaining-a-short-code-for-sending-text-messages-to-us-recipients-part-2/

In my last post, I gave an overview of the benefits of short codes. I also covered several important pieces of information that you should have in place before you apply for a short code. In this post, I’ll look at the application process itself. I’ll share tips and information that will help you complete the application process. My goal is to help you obtain your short code as quickly as possible. I recommend that you read part 1 of this series before you proceed. The steps in this post build upon the materials that you should have in place after reading part 1.

As in part 1, the information in this post applies specifically to US short codes. US short codes can only be used to send messages to recipients with US phone numbers. Generally, you can use the guidance in these posts when applying for short codes in other countries. However, some parts of the application process might be different from what you see in this post.

Step 1: Create a case in the AWS Support Center

To start the process of requesting a short code, you must create a short code request case in the AWS Support Center. You can find the steps for opening a case in the Amazon Pinpoint User Guide. Some of the fields on the form that you complete in the AWS Support Center are marked “optional.” However, you should provide thoughtful answers in all of the fields. Be sure to provide a detailed description of your use case and opt-in policies, and provide examples of your message templates.

You have to provide some of this same information again when you complete the application in the next step. Make sure that you provide consistent information in both places. Keep in mind that short codes aren’t a resource that can be handed out at will. In fact, AWS doesn’t hand out short codes at all. To obtain a short code, we have to convince all of the US mobile carriers that you have a use case that complies with their requirements. Providing detailed information in your AWS Support case shows that you’re prepared to meet these requirements.

If all of the required information is present in your request, a member of the AWS Support team will respond to your message within 24 business hours. Their response will discuss the charges associated with the short code, and will ask you to confirm that you approve of those charges.

When you reply to the case stating that you approve the charges, the Support team will send you an application form to complete. In the next section, we take an in-depth look at this application form.

Step 2: Complete the short code application

The short code application form contains all of the information that we send to the carriers to let them know about your use case. For this reason, the form must be filled in completely, and the responses must all be compliant with the requirements of the carriers.

Note: The application form is occasionally revised to clarify or add to existing information. By the time you read this post, the form that you receive might differ slightly from the screenshots that I show in this post.

The first page of the form contains basic information about your company. Most of the fields on this page are straightforward, although there are a couple that customers often ask about:

  • Support webpage: The URL of a page where your customers can go to find information about contacting your Customer Support team.
  • Support email address: The carriers require you to provide an email address that customers can contact if they have questions about your short code messaging program. This address should be a shared mailbox (such as [email protected]) rather than an individual person’s email address.
  • Support phone number: Like the support email address, customers should be able to call this phone number to get support for your short code messaging program. The phone number doesn’t have to be a toll-free number, but it does have to be a US-based phone number.
  • Terms and Conditions webpage: This is the URL where your SMS-specific Terms and Conditions document resides, or where it will reside. You can also include a link to your standard Terms and Conditions page, as long as it includes a section dedicated to SMS messaging. The page that you link to must contain all of the terms and conditions that I listed in the first post of this series. If those terms and conditions aren’t live yet, you must include a copy of the Terms and Conditions that you plan to implement along with your completed application.

The second page of the application contains general questions about the use case that you will use the short code with. Let’s review all of the fields on this page:

  • AWS Region: The AWS Region that you use Amazon Pinpoint in. If you’re not sure, check with the person or team within your organization that is responsible for managing your AWS accounts.
  • Target country: This question is intended to make sure that you’re requesting a short code for the correct countries. Short codes are specific to a single country; US short codes can’t be used to send messages to recipients in Canada, and vice versa.
  • Name of service: A name or phrase that identifies your messages as being from you. Service names typically take the form [Company or brand name] [description of program]. For example, if Example Corp. wants a short code for sending account-related notifications, they could use a service name like “Example Corp. Account Alerts” or “Example Corp. Account Updates.” The carriers require you to put this service name at the beginning of each message.
  • How do you plan to use your short code: Use this space to describe your use case. A 1–3 word description—such as “account alerts,” “one-time passwords,” or “promotional messages”—is sufficient.
  • One-time or subscription: Indicate what type of messaging program you plan to send. If you plan to send messages that go out on a regular basis (such as “deal of the day” messages or weather alerts), then indicate that you have a recurring use case. If you will send messages that are sent based on a request or event (such as one-time passwords, account notifications, or purchase confirmations) then indicate that you have a one-time use case.
  • How can a user sign up to receive messages from your short code? Choose the option that applies to your use case. It’s fine to choose more than one option. However, you must provide mockups of the opt-in workflows for all of the options that you select.
  • Per-user message frequency: State how often recipients will receive messages from you. For recurring promotional messages, you might say something like “One message per day.” For account notifications or informational alerts, you could say “Message frequency varies.” For one-time password and multi-factor authentication use cases, you might say “one message per login attempt.”
  • Will you use your short code for any of the following: The carriers are sensitive to messages related to sweepstakes, and to use cases that involve affiliate marketing or sharing of short codes. If you plan to use the short code for any of these use cases, you must indicate it in this section.

The next page of the application is where you document your opt-in workflow. It contains the following fields:

  • How does a user learn to sign up for this program? In this field, document the steps that your customers take to opt in. It’s important to include the “call-to-action” text that your customers see when they opt in. A call-to-action is the part of the workflow that encourages users to sign up for your service. The carriers want to ensure that your calls-to-action include all of the required disclosures. They also want to ensure that the opt-in workflow is compliant with their requirements. It’s important that you don’t force your customers to accept text messages from you if they don’t want to. Also, you have to be transparent about the types of messages that you plan to send and how often you’ll send them. You have to make sure that customers can easily find the terms and conditions that govern your messaging program. And finally, you have to make sure that customers realize that they might be responsible for paying messaging fees when they receive your messages. This last point varies depending on the recipient’s mobile subscription, but as a sender, you don’t know anything about the recipient’s subscription.
  • What messages do you send to a user to confirm sign up? If you have a recurring messaging use case, you must send your customers an opt-in confirmation message. This message must include the following:
    • The service name that you specified earlier in the application
    • The phrase “message and data rates may apply”
    • Information about how often recipients will receive messages from you (such as “up to 30 messages per month” or “message frequency varies”)
    • Information about getting help (typically, something similar to “Text HELP for more info”)
    • Information about opting out (typically, something similar to “Text STOP to opt out”).

If you have a single-message use case, you don’t have to include a confirmation message.

This page also asks about the messages that you’ll send in response to the keywords HELP and STOP. The carriers require you to include templates for both messages, even for single-message programs. Additionally, there are some specific items that should be present in these messages:

  • For the HELP response, include the service name, and a method of contacting your support organization. Email addresses, websites, and phone numbers are all acceptable methods of communication. I recommend that you include two contact methods in your response (such as a phone number and a website).
  • For the STOP response, include the service name. Also include a confirmation that the customer is unsubscribed, and that they won’t receive any additional messages.

On the next page, you provide your message templates. It’s fine to include variables for content that will be substituted in the actual messages that you send to customers. Include examples of all of the messages that you plan to send.

After you provide your message templates, you’re done! Save the completed application and proceed to the next section.

Step 3: Submit the application and supporting documents

In step 1 of this post, you created a case in the AWS Support Center. Now that you’ve completed the short code application, open that case again. In the case, attach your completed application. Also include your opt-in mockup images and a copy of your SMS Terms & Conditions document.

Step 4: Resolve follow-up issues

After you submit your completed application and supporting material, we send that information out for approval. It’s important to note that AWS doesn’t control this part of the process. Short codes aren’t a type of universal infrastructure like IP addresses, for example. Rather, each of the mobile carriers has to configure their networks to allow your messages to be sent from your short code. In the United States, this means coordinating with the major carriers (AT&T, Verizon, and T-Mobile/Sprint), plus dozens of smaller regional carriers. The result of this effort will be a common short code that you can use to send messages to all of your customers, regardless of which carrier they use.

At the same time, each carrier has slightly different rules around what is an acceptable short code use case. The carriers don’t implement these rules in a standardized way. Some carriers might review your application and identify parts of the application that don’t comply with their requirements. Others might ask you to clarify certain aspects of your use case.

For this reason, this step is the most critical part of the entire application process. If there are any questions or concerns about your use case, the AWS Support team will present them in your support case. It’s critical that you respond to these questions or concerns. Not responding will delay your request. I recommend that you check your short code request case a few times a week during this process.

The provisioning process

After all of the concerns, questions, and issues related to your short code application have been resolved, the mobile carriers begin setting up (“provisioning”) your short code on their networks. Some carriers complete this provisioning process quickly, while others take longer. Generally, this stage requires around 10 weeks to complete on all of the US carrier networks. However, the carriers don’t make any commitments to meeting this timeline.

Once the provisioning process is complete, the AWS Support team updates your short code request case. At this point, the short code is live in your account and ready to use.

Wrapping up

Short codes can send SMS messages at a high frequency, and with high deliverability rates. At the same time, the mobile carriers have an obligation to protect their customers from spam and abuse. For these reasons, the bar for obtaining a short code is purposely high. The carriers work hard to ensure that only approved use cases are given short codes on their networks.

Many of the customers that I work with aren’t aware of these strict carrier requirements, and assume that short codes can be easily handed out like some other types of resources. As you’ve seen in this post and the post before it, that statement isn’t accurate. My intention in writing these blog posts was to help you understand what makes a short code use case compliant with the expectations of the carriers. Following all of the recommendations in these posts should help you get a short code in the shortest amount of time possible.

Obtaining a short code for sending text messages to US recipients – Part 1

Post Syndicated from Brent Meyer original https://aws.amazon.com/blogs/messaging-and-targeting/obtaining-a-short-code-for-sending-text-messages-to-us-recipients-part-1/

Many of the customers that I work with use short codes for their SMS use cases. This is especially true in the United States, where short codes are a common way to send messages to recipients. Short codes offer high throughput and high deliverability rates. They’re also easier for your customers to remember and identify, because they only contain five or six digits.

This post is the first in a two-part series. In this post, I’ll discuss the things that you must plan for before you request a short code. In the second post, I’ll provide guidance and advice for completing the short code application process itself.

Note: Short codes are available in many countries other than the US. However, the process of obtaining a short code in the US can be more difficult than the process of obtaining one in many other countries. For this reason, I’m only focusing on US short codes in this post.

Do I need a short code?

One of the most common questions I hear from customers is “am I required to use a short code?” Short codes aren’t a strict requirement for sending messages to recipients in the US, but they are useful for meeting specific needs. For example, if you plan to send several messages per second, you probably need to use a short code. Short codes in the US support 100 messages per second by default, and can scale to the tens of thousands of messages per second for an additional monthly fee.

Additionally, short codes offer high deliverability rates for SMS messages. The mobile carriers are far less likely to filter messages sent from short codes than they are to filter messages from other types of phone numbers.

Finally, short codes also have the benefit of being easily recognizable phone numbers. Toll-free and 10DLC numbers consist of 10 digits, whereas short codes are five or six digits. You can even get a specific short code (known as a vanity code) for an additional monthly fee.

Even if your use case doesn’t require all of these capabilities, you can still use a short code. However, you could also save time and money using another solution, such as a toll-free number or 10DLC number.

There are a few drawbacks to consider when thinking about whether to use a short code. First, short codes can only be used to deliver SMS messages. Other number types, such as 10DLC phone numbers and toll-free numbers, can be used to send both SMS and voice messages. Second, carriers consider short codes to be a premium product. For these reasons, some prepaid plans (such as the prepaid plans offered by T-Mobile in the US) don’t allow their users to receive messages from short codes.

If you’ve decided that your use case requires a short code, you have to do some planning before you request one. The next few sections guide you through some of the requirements that must be in place in order to obtain and use a short code.

Understanding consent requirements

The US mobile carriers have strict policies for gathering consent during the opt-in process. The CTIA, a trade organization that represents the US wireless industry, provides additional guidance about the requirements for obtaining a short code. You can find more information about the requirements for several types of short code messaging programs in the CTIA's Short Code Monitoring Handbook. However, keep in mind that the CTIA guidelines are recommendations. Carriers impose their own requirements beyond the minimum requirements of US law, and beyond the minimum requirements of the CTIA.

The carriers and the CTIA require several specific pieces of information to be in place and presented to the customer. This section discusses these requirements. If these requirements aren’t met, the carriers won’t accept your short code application. It’s important to plan carefully and design your opt-in workflows around these requirements.

Note: These requirements are defined by the mobile carriers and by the CTIA. These requirements are not defined by AWS, and we can’t grant exceptions to any of these carrier requirements.

As far as the carriers are concerned, there is no such thing as blanket or global consent, regardless of your use case. You’re required to collect consent for each type of message that you send—even one-time password and multi-factor authentication messages. Nor is there a concept of implied consent. Consent must be detailed and explicit. When you collect consent, you must show your customers several things so that they can make an informed decision about whether they want to opt in. Specifically, all of the following must be present:

  • A description of the types of messages that you will send through your short code.
  • The phrase “Message and data rates may apply.”
  • An indication of how often recipients will get messages from you. For example, a recurring messaging program might say “one message per week.” A one-time password or multi-factor authentication use case might say “message frequency varies” or “one message per login attempt.”
  • Links to your Terms & Conditions and Privacy Policy documents. Later in this post, we’ll talk about the specific Terms & Conditions that are required.

There are a few additional things to keep in mind about the consent gathering process:

  • You can’t send a single message to the recipient until you’ve explicitly collected their consent to do so.
  • Using a short code requires you to adopt a use-case-specific consent model. When a customer provides consent to receive one type of message from you, they aren’t giving you consent to send them other types of text messages. For example, if your customer opts to receive multi-factor authentication messages from you, you don’t have their consent to send promotional SMS messages.
  • You can’t make receiving text messages a requirement for signing up for or using your service. If your use case requires that you verify your customer’s phone number, provide them an alternative to receiving text messages. For example, provide the option to receive a voice call or an email.
  • The consent you gather only applies to your company or brand. You can’t transfer consent to another company. Never sell your list of opted-in customers, and never use purchased or rented lists.

Design your opt-in workflows

With these considerations in mind, you can begin to design your opt-in process or modify your existing opt-in process. The carriers require you to provide high-fidelity mockups of your entire opt-in experience. In this case, “high fidelity” means that the mockups closely resemble the opt-in experience that your customers will complete. Your mockups must include all of the required disclosures listed earlier in this section. You’ll use these mockups later in the application process.

The following image shows an example of an opt-in mockup that doesn’t comply with the carriers’ standards. The carriers will reject this mockup—along with the rest of the short code application—adding time to the short code request process. See if you can identify the issues with this example.

There are several problems with the preceding example. First, the image isn’t a faithful representation of what customers would actually see during the opt-in process. It contains placeholder text, and it obviously doesn’t reflect a production use case. Second, it appears that a message will be sent to the recipient, but no consent is explicitly gathered before doing so. Third, it appears that receiving a text message is required to sign up. The form doesn’t provide any alternatives to receiving a text message. And finally, none of the required disclosures (listed earlier in this section) are presented to the recipient at all.

Compare the preceding example to the following example, which complies with the carriers’ requirements for a multi-factor authentication use case.

Even though it might not be a pixel-perfect representation of the final design, this example is a compliant mockup. It contains finalized text and images, and it shows the entire opt-in flow, complete with annotations. In the opt-in flow, the customer has to take distinct, intentional actions to provide their consent to receive text messages. And finally, the call-to-action contains all of the required disclosures.

One important thing to note: if there are multiple methods for opting in to your messaging program, include mockups of all of them. For example, if customers can opt in to your messaging program by sending a keyword to your short code, describe how customers learn about that keyword. If you send them an email that mentions this method of opting in, include a mockup of the email. Note that all of the methods of capturing customer opt-ins must include the disclosures that I mentioned earlier.

Other use cases may require slightly different workflows. For example, if you send recurring promotional messages (such as daily deal alerts), you should abide by the same guidelines shown in the preceding example. However, your call-to-action should also state the number of messages the recipient will receive when they subscribe (such as “Up to 30 messages per month” or “Two messages per day”). For this use case, you should also use a double opt-in process. In a double opt-in, you ask the recipient for their phone number, then send them a message asking them to reply with a keyword (such as “YES”) to confirm their subscription. If the recipient doesn’t reply, then don’t send any further messages.

Create an SMS-specific Terms & Conditions page

The mobile carriers also require that you make a specific set of SMS Terms and Conditions available to your customers. The following terms and conditions comply with the carriers’ requirements. You can copy these terms and modify them to fit your use case:

  1. When you opt in to the service, we will send you {description of the messages that you plan to send}.
  2. You can cancel the SMS service at any time by texting “STOP” to {short code}. When you send the SMS message “STOP” to us, we reply with an SMS message that confirms that you have been unsubscribed. After this, you won’t receive any additional SMS messages from us. If you want to join again, sign up as you did the first time and we will start sending SMS messages to you again.
  3. You can get more information at any time by texting “HELP” to {short code}. When you send the SMS message “HELP” to us, we respond with instructions on how to use our service and how to unsubscribe.
  4. We are able to deliver messages to the following mobile phone carriers: Major carriers: AT&T, Verizon Wireless, Sprint, T-Mobile, MetroPCS, US Cellular, Alltel, Boost Mobile, Nextel, and Virgin Mobile. Minor carriers: Alaska Communications Systems (ACS), Appalachian Wireless (EKN), Bluegrass Cellular, Cellular One of East Central IL (ECIT), Cellular One of Northeast Pennsylvania, Cincinnati Bell Wireless, Cricket, Coral Wireless (Mobi PCS), COX, Cross, Element Mobile (Flat Wireless), Epic Touch (Elkhart Telephone), GCI, Golden State, Hawkeye (Chat Mobility), Hawkeye (NW Missouri), Illinois Valley Cellular, Inland Cellular, iWireless (Iowa Wireless), Keystone Wireless (Immix Wireless/PC Man), Mosaic (Consolidated or CTC Telecom), Nex-Tech Wireless, NTelos, Panhandle Communications, Pioneer, Plateau (Texas RSA 3 Ltd), Revol, RINA, Simmetry (TMP Corporation), Thumb Cellular, Union Wireless, United Wireless, Viaero Wireless, and West Central (WCC or 5 Star Wireless). Carriers are not liable for delayed or undelivered messages.
  5. Message and data rates may apply for any messages sent to you from us and to us from you. You will receive {message frequency} messages per {time period}. Contact your wireless provider for more information about your text plan or data plan. If you have questions about the services provided by this short code, email us at {support email address}.
  6. If you have any questions regarding privacy, read our privacy policy at {link to privacy policy}

If you copy the preceding text, be sure to replace all of the items in {curly braces} with the appropriate values for your use case. Your Legal department might also want to review these Terms before you publish them, so plan accordingly.

Important: If you don’t provide your customers with a copy of these terms, the carriers won’t approve your short code application.

Once these terms have been reviewed, plan to host them in a publicly accessible location. A URL that links to these terms is a required part of every short code application. If this URL isn’t live when you submit your short code request, determine what the URL will be, and include a copy of the Terms & Conditions in a file that you include with your request.

Create your message templates

Your short code application must include all of the message templates that you plan to use. If you have multiple templates, include all of them. If your messages will include variables, it’s fine to use either placeholder values or variables. For example, both of the following are acceptable: “Hello John. Your one-time password is 654321” and “Hello <first name>. Your one-time password is <OTP code>.”

It’s OK to make minor edits (such as correcting typos or clarifying text) to these message templates after you receive your short code. However, if you make substantial changes to these templates after you receive the short code, you should submit your updated message templates to the carriers. Short codes are periodically audited, and deviating from the use case in your application could lead to your short code being suspended. Substantial changes could include the following:

  • Changes to the brand name that appears on your messages (for example, if your company rebrands under a new name, or is acquired by another company).
  • Changes to the use case (for example, if your application specified a one-time password use case, but you start sending account notifications through the same short code). This type of change might require you to re-collect consent from your customers before you start sending the new type of messages.

In these situations, you should open a case with AWS Support. We will work with the carriers to have your short code registration information updated.

What happens if I don’t complete these steps?

Customers sometimes ask me what would happen if they didn’t implement all of the requirements that I mentioned in the preceding sections. If your application for a new short code doesn’t meet these requirements, the answer is simple: the carriers will reject your request for a short code. These carrier-imposed requirements are not optional.

If you submit an application that meets all of the carrier requirements, but your real-world production use case doesn’t meet those requirements, there could also be consequences. The carriers periodically perform audits of short codes to ensure that they are being used in a compliant manner. If they find that your opt-in process differs greatly from what you showed in your short code application, they could pause your short code’s ability to send messages on their networks. When this happens, the carriers typically provide some time to remedy the issue. The CTIA Short Code Monitoring Handbook describes the components that are reviewed during these audits, and lists the consequences for violations that are uncovered during the audit process.

Wrapping up

In this post, we looked at the items that the US mobile carriers require you to have in place before you request a short code. These requirements were implemented by the carriers to protect their customers. As a result, these requirements are strict.

If your use case requires you to use a short code, I recommend that you start thinking about these requirements as soon as possible. These requirements might mean that you have to change your planned designs and workflows. Meeting these requirements can shorten the amount of time that’s required in order to obtain a short code.

In part 2 of this series, we’ll look at the process of actually requesting the short code. That post will look at the application process, and the process of working with AWS Support to track the status of your short code request.

AWS’s Egregious Egress

Post Syndicated from Matthew Prince original https://blog.cloudflare.com/aws-egregious-egress/

When web hosting services first emerged in the mid-1990s, you paid for everything on a separate meter: bandwidth, storage, CPU, and memory. Over time, customers grew to hate the nickel-and-dime nature of these fees. The market evolved to a fixed-fee model. Then came Amazon Web Services.

AWS was a huge step forward in terms of flexibility and scalability, but a massive step backward in terms of pricing. Nowhere is that more apparent than with their data transfer (bandwidth) pricing. If you look at the (ironically named) AWS Simple Monthly Calculator you can calculate the price they charge for bandwidth for their typical customer. The price varies by region, which shouldn’t surprise you because the cost of transit is dramatically different in different parts of the world.

Charging for Stocks, Paying for Flows

AWS charges customers based on the amount of data delivered — 1 terabyte (TB) per month, for example. To visualize that, imagine data is water. AWS fills a bucket full of water and then charges you based on how much water is in the bucket. This is known as charging based on “stocks.”

On the other hand, AWS pays for bandwidth based on the capacity of their network. The base unit of wholesale bandwidth is priced as one Megabit per second per month (1 Mbps). Typically, a provider like AWS will pay a monthly fee for bandwidth based on the number of Mbps that their network uses at its peak capacity. So, extending the analogy, AWS doesn’t pay for the amount of water that ends up in their customers’ buckets, but rather the capacity based on the diameter of the “hose” that is used to fill them. This is known as paying for “flows.”

Translating Flows to Stocks

You can translate between flow and stock pricing by knowing that a 1 Mbps connection (think of it as the "hose") can transfer 0.3285 TB (328GB) if utilized to its fullest capacity over the course of a month (think of it as running the "hose" at full capacity to fill the "bucket" for a month).1 AWS obviously has more than 1 Mbps of capacity — they can certainly transfer more than 0.3285 TB per month — but you can use this as the base unit of their bandwidth costs, and compare it against what they charge a customer to deliver 1 Terabyte (1TB), in order to figure out the AWS bandwidth markup.

One more subtlety, to be as accurate as possible: wholesale bandwidth is also billed at the 95th percentile, which effectively cuts off the peak hour or so of use every day. That means a 1 Mbps connection running at 100% can likely transfer closer to 0.3458 TB (346GB) per month.
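
To make that conversion concrete, here is a small Python sketch of the flow-to-stock math. It simply restates the footnote's calculation, and approximates the 95th-percentile adjustment by ignoring the top ~5% of samples:

SECONDS_PER_MONTH = 730 * 3600              # 730 hours in an average month

def mbps_to_tb_per_month(mbps, utilization=1.0):
    """Terabytes delivered in one month by an `mbps` link at the given utilization."""
    bits = mbps * 1_000_000 * SECONDS_PER_MONTH * utilization
    return bits / 8 / 10**12                # bits -> bytes -> terabytes

full_month = mbps_to_tb_per_month(1)        # ~0.3285 TB at 100% utilization
with_95th = full_month / 0.95               # ~0.3458 TB once 95th-percentile billing drops the top ~5% of samples
print(round(full_month, 4), round(with_95th, 4))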

Two more factors are important: utilization and regional costs. AWS can’t run all their connections at 100% utilization 24×7 for a month. Instead, they’ll have some average utilization per transit connection in any month. It’s reasonable to estimate that they likely run at between 20% and 40% average utilization. That would be a typical average utilization range for the industry. The higher their utilization, the more efficient they are, the lower their costs, and the higher their effective customer markup will be.

To be conservative, we’ve assumed that AWS’s average utilization is the bottom of that range (20%), but you can download the raw data and adjust the assumptions however you think makes sense.
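
Here is a minimal sketch of how such an estimate can be computed. The wholesale price, AWS list price, and utilization used in the example call are illustrative placeholders rather than figures from this post; substitute your own assumptions:

def egress_markup(aws_price_per_gb, wholesale_per_mbps_month, utilization=0.20):
    """Rough estimate of the effective egress markup for one region.

    aws_price_per_gb:         published AWS egress price ($/GB) for the region
    wholesale_per_mbps_month: assumed wholesale transit price ($/Mbps/month)
    utilization:              assumed average utilization of the transit links
    """
    tb_delivered_per_mbps = (0.3285 / 0.95) * utilization      # TB actually moved per billed Mbps
    wholesale_cost_per_gb = wholesale_per_mbps_month / (tb_delivered_per_mbps * 1000)
    return aws_price_per_gb / wholesale_cost_per_gb - 1

# Placeholder inputs: $0.09/GB list price, $0.50/Mbps/month wholesale transit, 20% utilization
print(f"Estimated markup: {egress_markup(0.09, 0.50):.0%}")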

We have a good sense of the wholesale prices of bandwidth in different regions around the world based on what Cloudflare sees in the market when we buy bandwidth ourselves. We’d imagine AWS gets at least as good of pricing as we do. We’ve included a rough estimate of these prices in the calculation, rounding up on the wholesale price wherever there was a question (which makes AWS look better).

Massive Markups

Based on these assumptions, here’s our best estimate of AWS’s effective markup for egress bandwidth on a per-region basis.

[Chart: estimated AWS effective egress markup by region]

Don’t rest easy, South Korea, with your merely 357% markup. The general rule of thumb appears to be that the older a market is, the more Amazon wrings from its customers in egregious egress markups — and the Seoul region is only a bit over four years old. Winter, unfortunately, inevitably seems to come to AWS customers.

AWS Stands Alone In Not Passing On Savings to Customers

Remember, this is for the transit bandwidth that AWS is paying for. For the bandwidth that they exchange with a network like Cloudflare, where they are directly connected (settlement-free peered) over a private network interface (PNI), there are no meaningful incremental costs and their effective margins are nearly infinite. Add in the effect of rebates Amazon collects from colocation providers who charge cross connect fees to customers, and the effective markup is likely even higher.

Some other cloud providers take into account that their costs are lower when traffic passes over peering connections. Both Microsoft Azure and Google Cloud will substantially discount egress charges for their mutual Cloudflare customers. Members of the Bandwidth Alliance — Alibaba, Automattic, Backblaze, Cherry Servers, Dataspace, DNS Networks, DreamHost, HEFICED, Kingsoft Cloud, Liquid Web, Scaleway, Tencent, Vapor, Vultr, Wasabi, and Zenlayer — waive bandwidth charges for mutual Cloudflare customers.

At this point, the majority of hosting providers in the industry either substantially discount or entirely waive egress fees when sending traffic from their network to a peer like Cloudflare. AWS is the notable exception in the industry. It’s worth noting that we invited AWS to be a part of the Bandwidth Alliance, and they politely declined.

It seems like a no-brainer that if we’re not paying for the bandwidth costs, and the hosting provider isn’t paying for the bandwidth costs, customers shouldn’t be charged for the bandwidth costs at the same rate as if the traffic was being sent over the public Internet. Unfortunately, Amazon’s supposed obsession over doing the right thing for customers doesn’t extend to egress charges.

Artificially Held High

Amazon’s mission statement is: “We strive to offer our customers the lowest possible prices, the best available selection, and the utmost convenience.” And yet, when it comes to egress, their prices are far from the lowest possible.

During the last ten years, industry wholesale transit prices have fallen an average of 23% annually. Compounded over that time, wholesale bandwidth is 93% less expensive than 10 years ago. However, AWS’s egress fees over that same period have fallen by only 25%.
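
A quick way to sanity-check the compounding behind those figures, using the 23% and 25% numbers quoted above:

wholesale_price_remaining = (1 - 0.23) ** 10     # ~0.073 of the original price after ten years of 23% annual declines
aws_price_remaining = 1 - 0.25                   # AWS egress list prices: only ~25% lower over the same period

print(f"Wholesale transit is ~{1 - wholesale_price_remaining:.0%} cheaper than ten years ago")
print(f"AWS egress fees are only ~{1 - aws_price_remaining:.0%} cheaper")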

And, since 2018, the egress fees AWS charges in North America and Europe have not dropped a penny even as wholesale prices in those markets over the same time period have fallen by more than half.

AWS’s Hotel California Pricing

Another oddity of AWS’s pricing is that they charge for data transferred out of their network but not for data transferred into their network. If the only time you’ve paid for bandwidth is with your residential Internet connection, then this may make some sense. Because of some technical limitations of the cable network, download bandwidth is typically higher than upload bandwidth on cable modem connections. But that’s not how wholesale bandwidth is bought or sold.

Wholesale bandwidth isn’t like your home cable connection. Instead, it’s symmetrical. That means that if you purchase a 1 Mbps (1 Megabit per second) connection, then you have the capacity to send 1 Megabit out and receive another 1 Megabit in every second. If you receive 1 Mbps in and simultaneously 1 Mbps out, you pay the same price as if you receive 1 Mbps in and 0 Mbps out or 0 Mbps in and 1 Mbps out. In other words, ingress (data sent to AWS) doesn’t cost them any more or less than egress (data sent from AWS). And yet, they charge customers more to take data out than put it in. It’s a head scratcher.

We’ve tried to be charitable in understanding why AWS would charge this way. Disappointingly, there just doesn’t seem to be an innocent explanation. As we dug in, even things like writes versus reads and the wear they put on storage media, as well as the challenges of capacity planning for storage, suggest that AWS should charge less for egress than ingress.

But they don’t.

The only rationale we can reasonably come up with for AWS’s egress pricing: locking customers into their cloud, and making it prohibitively expensive to get customer data back out. So much for being customer-first.

But… But… But…

AWS may object that this doesn’t take into account the cost of things like metro dark fiber between data centers, amortized optical and other networking equipment, and cross connects. In our experience, those costs amount to a rounding error of less than one cent per Mbps when operating at AWS-like scale. And these prices have been falling at a similar rate to the decline in the price of bandwidth over the past 10 years. Yet AWS’s egress prices have barely budged.

All the data above is derived from what’s published on AWS’s simple pricing calculator. There’s no doubt that some large customers are able to negotiate lower prices. But these are the prices charged to small businesses and startups by default. And, when we’ve reviewed pricing even with large AWS customers, the egress fees remain egregious.

It’s Not Too Late!

We have a lot of mutual customers who use Cloudflare and AWS. They’re a great service, and we want to support our mutual customers and provide services in a way that meets their needs and is always as secure, fast, reliable, and efficient as possible. We remain hopeful that AWS will do the right thing, lower their egress fees, join the Bandwidth Alliance — following the lead of the majority of the rest of the hosting industry — and pass along savings from peering with Cloudflare and other networks to all their customers.

…….
1Here’s the calculation to convert a 1 Mbps flow into TB stocks: 1 Mbps @ 100% for 1 month = (1 million bits per second) * (60 seconds / minute) * (60 minutes / hour) * (730 hours on average/month) divided by (eight bits / byte) divided by 10^12 (to convert bytes to Terabytes) = 0.3285 TB/month.

Introducing the Amazon Pinpoint SMS sandbox

Post Syndicated from Brent Meyer original https://aws.amazon.com/blogs/messaging-and-targeting/introducing-the-amazon-pinpoint-sms-sandbox/

Amazon Pinpoint now includes a new feature called the SMS sandbox. If you’ve sent email through Amazon Pinpoint (or if you’ve used Amazon SES), the sandbox might be a familiar concept. This new feature helps protect your Amazon Pinpoint account against unauthorized use, accidental sends, and unexpected charges. In this post, I’ll describe the SMS sandbox feature. You’ll learn what the sandbox is and what the benefits are for you. I’ll also talk about how to use your Amazon Pinpoint account when it’s in the sandbox, and how to have your account removed from the sandbox.

About the sandbox

The SMS sandbox is enabled by default on all new Amazon Pinpoint accounts. Also, if you have an existing account that has never had a spending limit increase (that is, if your account has a monthly spending limit of $1), your account is probably still in the SMS sandbox.

The sandbox applies to both Amazon Pinpoint and Amazon SNS. If your account is removed from the sandbox in Amazon SNS, it’s also out of the sandbox in Amazon Pinpoint, and vice-versa.

While your account is in the sandbox, you can still use all of the features of Amazon Pinpoint. However, there are a few important things to keep in mind. First, your monthly spending limit is fixed at $1. You can increase this amount when you remove your account from the sandbox. Second, you can only send messages to destination phone numbers that you’ve verified. Third, country-specific rules may apply to the registration process. For example, in the United States, you’re required to have a dedicated phone number for sending SMS messages.

SMS sandbox benefits

The SMS sandbox is a valuable tool for ensuring the security of your accounts. It protects against messages accidentally being sent to unintended recipients during your development and testing processes. The sandbox also helps protect the SMS ecosystem by preventing bad actors from sending unsolicited messages to arbitrary phone numbers.

Verifying destination phone numbers

One of the biggest changes with this release is the concept of verified phone numbers. When your account is in the sandbox, you can only send SMS messages to verified phone numbers. A verified phone number is a number that you own, or that is owned by somebody who provided permission to receive messages from you.

Note: You only have to verify recipients’ phone numbers when your account is in the sandbox. When your account is out of the sandbox, you can send messages to any phone number, even if that phone number hasn’t been verified.

While your account is in the sandbox, you can have up to 10 verified phone numbers in each AWS Region. After you verify a phone number, you have to wait 24 hours before you can delete it.

The verification process involves two steps. First, you enter the number that you want to verify in the Amazon Pinpoint console. This step is shown in the following image.

When you do this, Amazon Pinpoint sends a verification code to the phone number that you specified.

Next, you enter the verification code in the same section of the Amazon Pinpoint console where you started the verification process. This step is shown in the following image. It’s important to keep in mind that these verification codes are only valid for 15 minutes.

If the code that you enter matches the code that was sent to the phone number, then the phone number becomes verified. The following image shows an example of a phone number that has been successfully verified.

Step-by-step procedures for verifying phone numbers are available in the Amazon Pinpoint User Guide. For this verification process, we waive the standard messaging fees associated with sending the verification code up to five times per phone number.

Using the sandbox

When you complete the verification process for a phone number, you can use that phone number as a destination for your messages.

If you plan to use campaigns or journeys to send SMS messages, you can create them without restrictions. When you launch your campaigns or journeys, Amazon Pinpoint only sends messages to verified recipients. If you try to send a test message during the process of creating a campaign or journey, Amazon Pinpoint asks you to specify an origination phone number, and to select a verified phone number as the recipient. After your campaigns or journeys are sent, the analytics dashboards indicate the number of recipients that you targeted, although messages are only delivered to the verified recipients among them (if there were any).

If you use the SendMessages API to send your messages, pass the verified phone number as a key in the Addresses object. You can see a basic example in the following Python code example:

import boto3
from botocore.exceptions import ClientError
 
client = boto3.client('pinpoint',region_name='us-east-1')
try:
    response = client.send_messages(
        ApplicationId='7353f53e6885409fa32d07cedexample',
        MessageRequest={
            'Addresses': {
                '+14255550142': {           # the verified phone number
                    'ChannelType': 'SMS'
                }
            },
            'MessageConfiguration': {
                'SMSMessage': {
                    'Body': 'This is a test',
                    'MessageType': 'TRANSACTIONAL',
                    'OriginationNumber': '+18445550123'
                }
            }
        }
    )
except ClientError as e:
    print(e.response['Error']['Message'])
else:
    print("Message sent! Message ID: "
            + response['MessageResponse']['Result']['+14255550142']['MessageId'])

Moving out of the sandbox

When you move your account out of the SMS sandbox, you can send messages to any phone number, even if you haven’t verified it. When your account is out of the sandbox, you also gain the ability to increase your spending limit to a value higher than $1 per month.

You can determine whether your account is still in the SMS sandbox on the SMS settings page of the Amazon Pinpoint console. The top of the page (shown in the following image) shows the steps that you must take before you can start sending SMS messages. Step 2 of this section tells you whether your account is still in the SMS sandbox.

To create a request to get out of the sandbox, create a Service Limit Increase case in the AWS Support Center. In your case, you have to provide certain details about your use case and about your consent-gathering practices. You can find complete instructions for creating these requests in the Amazon Pinpoint User Guide.

I highly recommend that you fill in all of the fields in the request form, including those that state that they’re optional. Having this information will help the AWS Support team better understand your use case. Incomplete information could result in your request being delayed or denied.

After you submit your request, the AWS Support team responds to your ticket within 24 business hours. However, the response might include additional questions. Be sure to return to the AWS Support Center periodically after you submit your request so that you can answer questions if they arise.

Wrapping up

We designed the SMS sandbox to be flexible enough to enable low-volume development and testing use cases with minimal disruptions. At the same time, the sandbox provides additional security against unintended sending, and it deters malicious senders.

The sandbox is enabled by default for all new AWS accounts. There are no additional costs for using the sandbox, or for having your account removed from it.

Ready to start sending text messages with Amazon Pinpoint? Go to console.aws.amazon.com/pinpoint today to get started!

Getting Started with Push Notifications using AWS Amplify

Post Syndicated from Pauline Kelly original https://aws.amazon.com/blogs/messaging-and-targeting/getting-started-with-push-notifications-using-aws-amplify/

This article was written by Pauline Kelly and Rob Costello, Public Sector Solutions Architects, AWS.

The code in this blog will run on OS X 10.11.6 (El Capitan) and higher, and Xcode 8.1 and higher.

Push notifications are a key capability provided by mobile apps to engage with users, providing real-time updates or new information. However, they require integration of several components provided by different vendors. This can be difficult for new mobile developers, or developers who are new to Amazon Web Services (AWS).

As a mobile app developer, building high-quality apps that people want to use requires focus on the front-end components. Still, backend services also need to be considered to help ensure the app provides the level of scalability, reliability, and security that is expected of modern mobile applications. In this post, we will provide an example of how to implement Push Notifications in your mobile apps using AWS Amplify and Amazon Pinpoint, using React Native code to help get you started.

Usage Patterns

There are several use cases where developers benefit from adding Push Notifications into their mobile app, including:

  • Asynchronous actions, such as when an order is placed through an app, and confirmation or status notifications are sent to the user.
  • Scheduled reminders, to provide time-bound notifications to users to prompt engagement, such as “It’s been 7 days since you last checked in, would you like to check in now?”.
  • Instant messaging, where your app provides two-way communication (i.e., chat). This ensures your users can engage with each other in near real-time.

Component Definitions

The components necessary to add push notifications to your app are:

Platform – These are the services provided by mobile device builders (e.g. Apple, Google, Amazon) that transport the messages to the end devices. The Platforms offer device registration, app token creation and management, and the delivery channel for notifications to be delivered to devices. There are several different services that you can use to send push notifications to the users of your applications. The platform you use largely depends on which app store your customers use to obtain your app. The most common platforms are Apple Push Notification service (APNs), Firebase Cloud Messaging (FCM), Baidu Cloud Push, and Amazon Device Messaging (ADM).

Provider – The Provider allows app registration and messaging to be coordinated by your mobile app's backend system, enabling event- or schedule-based interactions between your backend and your mobile app. In AWS, both Amazon Simple Notification Service (SNS) and Amazon Pinpoint can fill the role of the Provider.

Client – The client is your mobile application, running on a physical device. On initial launch of the application, two persistent and secure channels are established. The first is between the Platform and the Client, which results in the creation of a globally-unique, app-specific device token. The second is between the Provider and the Platform, which uses the device token to set up push notifications. The device token is used to identify the destination for the notifications.

client, platform and provider components with no connections

AWS Options

AWS provides two services that can be used to provide Push Notification capabilities for your mobile apps.

Amazon Pinpoint – Pinpoint enables marketers and developers to deliver customer-centric engagement experiences. It provides a collection of capabilities that enable collection of data from audiences, real-time and historic analytics, and execution of campaigns for direct customer engagement over multiple channels, such as email, SMS, or push notifications. Pinpoint makes it easy to manage device registration, integration with notification channel systems, and integration with other AWS services to create a fully-featured mobile application backend. AWS Amplify includes support for deploying and configuring Pinpoint projects to use in your mobile application.

Amazon Simple Notification Service (SNS) – SNS is a publish-subscribe service that accepts incoming messages and delivers them to a variety of destinations, such as push notifications to mobile devices. Messages are sent by a Publisher (your mobile app backend) to a Topic, which delivers them to its Subscriptions. Subscriptions are linked to one or more Platform Applications, which represent the Platform you want to connect to (such as APNs or FCM). Each Platform Application has individual Endpoints, which are created using the SNS SDK on the device by passing in the unique device token. To explore fanout patterns with SNS, refer to this blog post.
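
To illustrate the SNS path (which the rest of this post doesn't use), registering a device token and pushing a notification to it from a backend might look roughly like the following Python sketch; the platform application ARN and device token are placeholders:

import json
import boto3

sns = boto3.client('sns', region_name='us-east-1')

# Register a device token (obtained from APNs on the device) as an SNS platform endpoint.
# The PlatformApplicationArn below is a placeholder for your own APNs platform application.
endpoint = sns.create_platform_endpoint(
    PlatformApplicationArn='arn:aws:sns:us-east-1:111122223333:app/APNS/MyApp',
    Token='example-apns-device-token',
)

# Publish a push notification to that device endpoint. With MessageStructure='json',
# SNS expects a JSON document containing a 'default' message plus per-platform payloads.
payload = {
    'default': 'Hello from SNS',
    'APNS': json.dumps({'aps': {'alert': 'Hello from SNS'}}),
}
sns.publish(
    TargetArn=endpoint['EndpointArn'],
    Message=json.dumps(payload),
    MessageStructure='json',
)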

While both SNS and Pinpoint have Push Notifications capability, Amazon Pinpoint provides a simpler development experience when leveraging AWS Amplify, and a more robust management and operational capability for app owners and developers. The rest of this post will focus on the use of Pinpoint and Amplify.

Registration and Data-flows

In this post we will use the APNs (Apple Push Notification Service) platform for push notifications, but a similar pattern is used for other platforms. The following events take place when a push notification is triggered:

  1. A Device establishes a TLS connection with APNs based on a pre-existing device-specific certificate, and requests an app-specific token to use for push notifications.
  2. The Device passes the token to the AWS provider to register for notifications.
  3. Based on some form of event or trigger, such as a marketer launching a new campaign, a notification can be generated programmatically and sent to the AWS provider (Pinpoint or SNS).
  4. The AWS provider delivers the notification to APNs, along with the associated device token(s) to deliver it.
  5. APNs pushes the notification to the device(s).

client, platform and provider components with connections

For details of how the APNs platform works, please consult the relevant services documentation at https://developer.apple.com/library/archive/documentation/NetworkingInternet/Conceptual/RemoteNotificationsPG/APNSOverview.html.

Setting up APNs prerequisites

Note: You will need an account with the Apple Developer Program, as an individual or as part of an organization, and you must have agent or admin privileges in that account. Also, Push Notifications in the Apple ecosystem require a physical device; the iOS simulator is not capable of receiving push notifications.

To setup the required APNs components, follow the Amplify guide to create:

  • An app ID.
  • An SSL certificate, which authorizes you to send push notifications to your app through APNs.
  • A registration for your test device, such as an iPhone, with your Apple Developer account.
  • An iOS distribution certificate, which enables you to install your app on your test device.
  • A provisioning profile, which allows your app to run on your test device.

Setting up your development workstation

Note: As this post focuses on using APNs for an iOS device, you must perform the following steps on an Apple device.

On your development workstation, install all React Native CLI prerequisites identified at https://reactnative.dev/docs/environment-setup, including:

  • NodeJS: JavaScript runtime, used to run your application
  • Watchman: watches files and records when they change, used to update your React Native app automatically rather than manually triggering a rebuild while developing
  • Xcode: integrated development environment for creating iOS applications and more, installed on macOS
  • CocoaPods: a dependency manager for Swift and Objective-C Cocoa projects

You will need to set up the Amplify CLI, which is used to configure the application with Amplify. Be sure to configure the Amplify CLI with credentials and other settings following the documentation here before proceeding.

Set up a new React Native App

Create a new React Native App to begin:

npx react-native init MyApp --template react-native-template-typescript
cd MyApp

Firstly, use the Amplify CLI to initialize the new app:

amplify init
Scanning for plugins...
Plugin scan successful
Note: It is recommended to run this command from the root of your app directory
? Enter a name for the project MyApp
? Enter a name for the environment dev
? Choose your default editor: Visual Studio Code
? Choose the type of app that you're building javascript
Please tell us about your project
? What javascript framework are you using react-native
? Source Directory Path:  src
? Distribution Directory Path: /
? Build Command:  npm run-script build
? Start Command: npm run-script start
Using default provider  awscloudformation

For more information on AWS Profiles, see:
https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-profiles.html

? Do you want to use an AWS profile? Yes
? Please choose the profile you want to use default

Then install the required React and CocoaPod dependencies:

npm install aws-amplify-react-native aws-amplify @aws-amplify/pushnotification @react-native-community/push-notification-ios @react-native-community/netinfo
npx pod-install

We are now going to add authentication for the user to sign into the app, and send/receive push notifications. Adding authentication with the default configuration creates a Cognito User Pool in the cloud, and allows Amplify applications to use the identity of the authenticated user.

Note that authentication with Cognito User Pools is not strictly required for Amazon Pinpoint Push Notifications to function; you can instead use Cognito Identity Pools for unauthenticated integration with the Pinpoint APIs in your mobile app.

amplify add auth
Do you want to use the default authentication and security configuration? Default configuration
Warning: you will not be able to edit these selections. 
How do you want users to be able to sign in? Username
Do you want to configure advanced settings? No, I am done.

To finish the creation of the Amazon Cognito User Pool, push the changes to AWS:

amplify push
? Are you sure you want to continue? (Y/n)

The Amplify Analytics module is required before Push Notifications can be configured. The Analytics module creates and configures the Amazon Pinpoint endpoint required for the PushNotification library to be able to register for and be a target for notifications.

amplify add analytics
? Select an Analytics provider Amazon Pinpoint
? Provide your Pinpoint resource name: MyAppAnalytics
? Apps need authorization to send analytics events. Do you want to allow guests and unauthenticated users to send analytics events? (we recommend you allow this when getting started) (Y/n) Yes

To finish the creation of the Amazon Pinpoint endpoint by the Amplify CLI, push the changes to AWS. Here is a list of the cloud resources that will be created when you push the stack to the cloud:

  • Amazon Cognito User Pool
  • Amazon Cognito Federated Identity Pool
  • Amazon Pinpoint Project (linked to User Pool)
  • Associated AWS IAM Roles

For specific details, you can look in the “amplify” directory in your project for the AWS CloudFormation templates created by the CLI.

amplify push
? Are you sure you want to continue? (Y/n)

Next, use the Amplify CLI to add notifications to the project. During the setup for notifications, Amplify will ask you for the path to your *.p12 certificate, which is generated in Xcode using your Apple Developer Account. Please refer to the instructions here:

Amplify Notifications: https://docs.amplify.aws/lib/push-notifications/getting-started/q/platform/js#setup-for-ios
Pinpoint Push Notification Setup: https://docs.aws.amazon.com/pinpoint/latest/developerguide/apns-setup.html

amplify add notifications
? Choose the push notification channel to enable. APNS
? Provide your Pinpoint resource name: <Choose default or provide a name>
? Choose authentication method used for APNs Certificate
? The certificate file path (.p12): <path to APNs p12 certificate>
? The certificate password (if any):
MAC verified OK
✔ The APNS channel has been successfully enabled.

Note that when the Notifications category is added, Amplify CLI also adds the Auth category, which creates and configures an Amazon Cognito User to allow authenticated access to the Amazon Pinpoint endpoint.

Next, you will need to open the iOS workspace of the application in Xcode, found in ios/MyApp.xcworkspace. To enable notifications to function on iOS, the following settings for the project should be changed in Xcode so that the @react-native-community/push-notification-ios module can function:

  1. Add the Background Mode – Remote Notifications capability – https://github.com/react-native-push-notification-ios/push-notification-ios#add-capabilities–background-mode—remote-notifications
  2. Add support for the notification and register events that will be used – https://github.com/react-native-push-notification-ios/push-notification-ios#augment-appdelegate

Finally, update the app's workspace settings in Xcode by setting the Bundle Identifier you created when configuring your Apple Developer account. Once your Team is specified in Xcode, it will populate the associated Signing Certificate field.

xcode project view signing tab

Next you can configure your React Native app to send and receive Push Notifications.

Register and Receive Notifications

To register a device with Amazon Pinpoint to receive Push Notifications, the required libraries should be imported:

import Amplify from 'aws-amplify';
import awsconfig from './aws-exports';
import {withAuthenticator} from 'aws-amplify-react-native';
import Analytics from '@aws-amplify/analytics';
import PushNotification from '@aws-amplify/pushnotification';
import Auth from '@aws-amplify/auth';

Note that Pinpoint requires the Analytics category of Amplify to be configured and imported into your app.

With the required libraries loaded, the Amplify components should be configured with the AWS details created by the amplify push command that are stored in the aws-exports.js file.

Amplify.configure(awsconfig);
Auth.configure(awsconfig);
Analytics.configure(awsconfig);
PushNotification.configure(awsconfig);

Next, register functions to be called when the device registers with APNs, and when a notification is received.

The onRegister event will be triggered when iOS registers a new token with the APNs service, allowing the app to retrieve the token and register it with Amazon Pinpoint via the Analytics.updateEndpoint function. Take care to update the userId key in the Analytics configuration.

PushNotification.onRegister((token: any) => {
  _token = token;
  Analytics.updateEndpoint({
    address: token,
    optOut: 'NONE',
    userId: '<userId>',
  })
    .then((data) => {
      console.log('endpoint updated', JSON.stringify(data));
    })
    .catch((error) => {
      console.log('error updating endpoint', error);
    });
});

The onNotification event will be triggered when the app is open and receives a notification from Amazon Pinpoint via APNs. Here we can execute actions based on the notification if required:

PushNotification.onNotification((notification: any) => {
  // display notification in debug log in XCode
  console.log('in app notification received', notification);
});

If the app is not open when the notification is received and the user taps the notification prompt, the onNotificationOpened event will be triggered, allowing actions to be executed.

PushNotification.onNotificationOpened((notification: any) => {
  // display notification in debug log in XCode
  console.log('the notification is opened from iOS', notification);
});

Finally, we must prompt the user to allow Push Notifications from the app:

PushNotification.requestIOSPermissions();

Send Push Notifications

To send Push Notifications, we can use the Amazon Pinpoint SDK. Push notifications are ordinarily sent from a back-end process or application; however, you can send notifications from anywhere the Amazon Pinpoint SDK is used.

Code examples in JavaScript and Python for using the SendMessages API are available in the Amazon Pinpoint documentation. These examples can be executed from a backend server process or via AWS Lambda.

The address token value used in these examples must match that of the target device token assigned by APNs. This will be visible in the console output of the mobile app when the onRegister event is triggered.
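
As a rough sketch of what such a backend call can look like with boto3 in Python (the project ID below is a placeholder, and the device token should be the value logged by the onRegister handler above):

import boto3

pinpoint = boto3.client('pinpoint', region_name='us-east-1')

device_token = 'example-apns-device-token'              # token printed by the onRegister handler

response = pinpoint.send_messages(
    ApplicationId='7353f53e6885409fa32d07cedexample',   # your Pinpoint project ID
    MessageRequest={
        'Addresses': {
            # Use 'APNS_SANDBOX' instead if the app was built with a development provisioning profile
            device_token: {'ChannelType': 'APNS'}
        },
        'MessageConfiguration': {
            'APNSMessage': {
                'Title': 'Order update',
                'Body': 'Your order has shipped!',
                'Action': 'OPEN_APP',
            }
        },
    },
)
print(response['MessageResponse']['Result'][device_token]['DeliveryStatus'])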

The IAM Role used by your backend server process or AWS Lambda must have the following permission policy applied:

{
    "Effect": "Allow",
    "Action": "mobiletargeting:SendMessages", 
    "Resource": "arn:aws:mobiletargeting:<region>:<accountID>:apps/<projectID>/*"
}

Testing Push Notifications

Now run the project locally:

npx react-native run-ios

This will start Metro for React Native in a new Terminal window. Metro is the JavaScript bundler for React Native; it transpiles and bundles your JavaScript code so that it can run in the app on client devices.

command line running metro bundler

You should now be able to connect your test device to your development workstation, change the target device and run the application from Xcode (Push Notifications are not available in the iOS Simulator in Xcode).

You can now test sending and receiving notifications.

Adding Pinpoint Features

Now that you have a functioning mobile app able to receive Push Notifications, Amazon Pinpoint can be used by app owners to engage users through customised Campaigns or Journeys. A campaign sends tailored messages on a schedule that you define. With Journeys, you can send messages to your customers based on their attributes, behaviours, and activities.

Thanks to AWS Amplify, the application has already deployed an Amazon Pinpoint Project for you, so your next steps for engaging users will be:

  1. Creating a Segment of your users that you would like to target.
  2. Create a Campaign or Journey to start communicating with your Segment using Push Notifications.
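
If you prefer to script these steps rather than use the console, a minimal boto3 sketch might look like the following; the project ID is a placeholder and the segment dimension shown is only an example:

import boto3

pinpoint = boto3.client('pinpoint', region_name='us-east-1')
project_id = '7353f53e6885409fa32d07cedexample'     # placeholder Pinpoint project ID

# 1. Create a segment of iOS endpoints (any supported dimension could be used here).
segment = pinpoint.create_segment(
    ApplicationId=project_id,
    WriteSegmentRequest={
        'Name': 'iOS users',
        'Dimensions': {
            'Demographic': {
                'Platform': {'DimensionType': 'INCLUSIVE', 'Values': ['iOS']}
            }
        },
    },
)

# 2. Create a campaign that sends a push notification to that segment immediately.
pinpoint.create_campaign(
    ApplicationId=project_id,
    WriteCampaignRequest={
        'Name': 'Welcome push',
        'SegmentId': segment['SegmentResponse']['Id'],
        'Schedule': {'StartTime': 'IMMEDIATE', 'Frequency': 'ONCE'},
        'MessageConfiguration': {
            'APNSMessage': {
                'Title': 'Welcome!',
                'Body': 'Thanks for enabling notifications.',
                'Action': 'OPEN_APP',
            }
        },
    },
)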

Amazon Pinpoint can be combined with other AWS services for more advanced scenarios, such as Predictive User Engagement. An example solution that integrates Amazon Personalize, created by the AWS Solutions team, can be found at https://aws.amazon.com/solutions/implementations/predictive-user-engagement/

Cleanup

To clean up the project, use:

amplify delete

This command deletes the resources created by the Amplify CLI.

Conclusion

This post has explored how Amazon Pinpoint and the Amplify Framework can be used to more easily add Push Notifications to your mobile apps. Using the example provided, you can quickly get started with integrating the required components and configuring your AWS account, so you can run campaigns to engage with your users.

Feedback

We hope you like the Push Notification features in Amazon Pinpoint and the Amplify Framework! Let us know how we are doing, and submit any feedback in the Amplify JavaScript GitHub repository. You can read more about this feature on the Amplify Framework website. Also, check out our community site to find the events, posts, and contributors to the Amplify community.

 

Advanced Amazon Pinpoint Templates using Message Template Helpers

Post Syndicated from davelem original https://aws.amazon.com/blogs/messaging-and-targeting/advanced-amazon-pinpoint-templates-using-message-template-helpers/

Personalization of customer messages is a proven way to increase engagement of promotional and transactional communications. In order to make these communications repeatable and scalable, building personalization through templates is often required.

Using the Advanced Template Capabilities feature of Amazon Pinpoint, it’s now possible to create highly customized templates used for email, SMS, and Push Notifications.

Pinpoint templates are personalized using Handlebars.js. The new message template helpers are an expansion on the default Handlebars.js features. Please refer to handlebarsjs.com for more information on the default functionality of Handlebars.js.

In this blog we will build an Order Confirmation template that will demonstrate a few of the available message template helpers.

Prerequisites

Before creating a template, you need to have an existing Amazon Pinpoint Project with the email channel enabled. The following will walk you through creating a project if you don’t already have one: Create and configure a Pinpoint Project

Architecture Overview

Step 1: Create a CSV file with sample Endpoint and Imported Segment

In Amazon Pinpoint, an endpoint represents a specific method of contacting a customer. This could be their email address (for email messages) or their phone number (for SMS messages) or a custom endpoint type. Endpoints can also contain custom attributes, and you can associate multiple endpoints with a single user. In this step, we create a simple CSV file which we will use to create a Segment in Pinpoint.
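
For context, creating the same kind of endpoint through the API (instead of the CSV import used below) might look roughly like the following sketch; the project ID is a placeholder and the email address is illustrative. Note how each custom attribute holds a list of values, which is what lets the template iterate over multiple products later:

import boto3

pinpoint = boto3.client('pinpoint', region_name='us-east-1')

pinpoint.update_endpoint(
    ApplicationId='7353f53e6885409fa32d07cedexample',    # placeholder Pinpoint project ID
    EndpointId='287b3858-3097-40e3-9af4-19bd4509a8f2',
    EndpointRequest={
        'ChannelType': 'EMAIL',
        'Address': 'mary.smith@example.com',             # illustrative address
        'Attributes': {
            'FirstName': ['Mary'],
            'OrderNumber': ['460-ITS-2320'],
            # Every custom attribute is a list, so repeated CSV columns become list items
            'ProductName': ['non lectus aliquam', 'sapien placerat ante'],
            'Amount': ['68.88', '32.89'],
        },
        'User': {'UserId': '66af7a81-77f2-485f-b115-d8c3a00f7077'},
    },
)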

The data below contains the sample Order and Product data we will use in our Order Confirmation Email.

  1. Create a .CSV file named AdvancedTemplatesSegment.csv with the following data:
    ChannelType,Address,Id,Attributes.FirstName,Attributes.LastName,Attributes.OrderDate,Attributes.OrderNumber,Attributes.ProductNumber,Attributes.ProductNumber,Attributes.ProductNumber,Attributes.ProductNumber,Attributes.ProductNumber,Attributes.ProductName,Attributes.ProductName,Attributes.ProductName,Attributes.ProductName,Attributes.ProductName,Attributes.Amount,Attributes.Amount,Attributes.Amount,Attributes.Amount,Attributes.Amount,Attributes.ItemCount,Attributes.ItemCount,Attributes.ItemCount,Attributes.ItemCount,Attributes.ItemCount,Attributes.CLVTier,User.UserId,Metrics.Age
    EMAIL,[email protected],287b3858-3097-40e3-9af4-19bd4509a8f2,Mary,Smith,2021-01-15T18:07:13Z,460-ITS-2320,DWG8799598,XTC5517773,XRO7471152,EAT5122843,LNP9056489,non lectus aliquam,sapien placerat ante,semper sapien a libero nam dui,vitae consectetuer eget rutrum,nisl ut volutpat sapien arcu,68.88,32.89,53.19,45.38,47.31,20,76,33,15,53,High,66af7a81-77f2-485f-b115-d8c3a00f7077,84
    EMAIL,[email protected],b42e6c5f-3e15-4fdd-b61c-499508271082,,,2021-01-30T22:33:22Z,296-OZA-6579,VMC0637283,RGM6575767,BTM9430068,XCV9343127,GVU2858284,a libero nam dui,sit amet consectetuer adipiscing,at ipsum,ut dolor morbi,nullam molestie,74.86,83.18,15.42,97.03,37.42,13,94,50,54,84,Low,dadc1be9-daf4-46ce-9069-13565e03eaa0,61

    NOTE: The file above has a few attributes that are the key to personalizing our email and including multiple items in our Order Confirmation table:

    • Attributes.FirstName – This will allow us to personalize with a salutation if available.
    • Attributes.CLVTier – This is an attribute that could be supplied by a Machine Learning model to determine the customer’s CLV Tier. We will be using it to provide coupons specific to a given CLV Tier. See Predictive Segmentation Using Amazon Pinpoint and Amazon SageMaker for an example solution that demonstrates using Machine Learning to analyze information in Pinpoint.
    • Attributes.ProductNumber – Note that we have multiple columns that repeat for the product information in the order. Pinpoint attributes are actually stored as a list, so if you pass multiple columns with the same name it will add items to the attribute list. This is the key to how we are able to display a table of information, but note that it does require making sure the attributes are aligned in the proper columns. For example, Attributes.ProductNumber[0] needs to align with Attributes.ProductName[0]. See Using variables with message template helpers for more details.
  2. Search for [email protected] above and replace with two valid email addresses. Note that if your account is still in the sandbox these will need to be verified email addresses. If you only have access to a single email address you can use labels by adding a plus sign (+) followed by a string of text after the local part of the address and before the at (@) sign. For example: [email protected] and [email protected]
  3. Create a Pinpoint Segment
    1. Open the Amazon Pinpoint console at http://console.aws.amazon.com/pinpoint, and then choose the project that you created as part of the Prerequisites.
    2. In the navigation pane, choose Segments, and then choose Create a segment.
    3. Select Import a Segment.
    4. Browse to or Drag and Drop the .CSV file you created in the previous step.
    5. Use the default Segment Name and select Create Segment.

Step 2: Build The Message Template

  1. Open the Amazon Pinpoint console at http://console.aws.amazon.com/pinpoint.
  2. In the navigation pane, choose Message templates, and then choose Create template.
  3. Select Email as the Channel.
  4. For Template name use: AdvancedTemplateExample.
  5. For Subject use: AdvancedTemplateExample.
  6. Paste the following code into the HTML Editor. We will take some time later on to dig into the specific Handlebars helpers:
    {{#* inline "salutation"}}
        {{#if Attributes.FirstName.[0]}}
            Dear {{Attributes.FirstName.[0]}},<br />
        {{else}}
            Dear Valued Customer,<br />
        {{/if}}
    {{/inline}}
    
    {{#* inline "clvcoupon"}}
        {{#if Attributes.CLVTier.[0]}}
            {{#eq Attributes.CLVTier.[0] "High"}}
                As a thank-you for your continued support, please use this coupon code for <strong>30%</strong> off your next order: <strong>WELOVEYOU30</strong>
            {{/eq}}
        {{/if}}
    {{/inline}}
    
    {{#* inline "footer"}}
        <hr />	
        Accent Athletics - 1234 Anywhere Ave, Anywhere USA, 12345 - <a href="https://www.example.com/preferences/index.html?pid={{ApplicationId}}&uid={{User.UserId}}&h={{sha256 (join User.UserId "d67c37ed538b751d850de18" "+" prefix="" suffix="")}}">Manage Preferences</a>
        <hr />
    {{/inline}}
    
    
    <!DOCTYPE html>
        <html lang="en">
        <head>
        <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
    </head>
    <body>
      {{> salutation}}
      Thank-you for your order! {{> clvcoupon}}<br /><br />
      Order Date: {{now format="d MMM yyyy HH:mm:ss" tz=America/Los_Angeles}}<br /><br />
      <table>
      <thead>
        <tr style="background-color: #f2f2f2;">
          <th style="text-align:left; width:75px">
            Product #
          </th>
          <th style="text-align:left; width:200px;">
            Name
          </th>
          <th style="text-align:center; width:75px;">
            Count
          </th>
          <th style="text-align:center; width:75px;">
            Amount
          </th>
        </tr>
      </thead>
      <tbody>
      {{#each Attributes.ProductNumber}}
        {{#eq (modulo @index 2) "1.0"}}
            <tr style="background-color: #f2f2f2;">
        {{else}}
            <tr>
        {{/eq}}
          <td style="text-align:left;">{{this}}</td>
          <td style="text-align:left;">{{#with (lookup ../Attributes.ProductName @index)}}{{this}}{{/with}}</td>
          <td style="text-align:center;">{{#with (lookup ../Attributes.ItemCount @index)}}{{this}}{{/with}}</td>
          <td style="text-align:center;">${{#with (lookup ../Attributes.Amount @index)}}{{this}}{{/with}}</td>
        </tr>
      {{else}}
        <tr>
          <td style="text-align:left;">{{Attributes.ProductNumber}}</td>
          <td style="text-align:center;">{{Attributes.ProductName}}</td>
          <td style="text-align:center;">{{Attributes.ItemCount}}</td>
          <td style="text-align:center;">{{Attributes.Amount}}</td>
        </tr>
      {{/each}}
      </tbody>
      </table>
      {{> footer}}
    </body>
    </html>

  7. Click Create to finish creating your template. If you prefer to manage templates as code, see the sketch below.
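
A template with the same name, subject, and HTML body could also be created with the Amazon Pinpoint API. This is a minimal sketch; it assumes the Handlebars HTML from step 6 is saved in a local file, and the file name is hypothetical.

import boto3

pinpoint = boto3.client("pinpoint")

# Read the Handlebars HTML from step 6; the file name is hypothetical.
with open("advanced_template.html") as f:
    html_body = f.read()

pinpoint.create_email_template(
    TemplateName="AdvancedTemplateExample",
    EmailTemplateRequest={
        "Subject": "AdvancedTemplateExample",
        "HtmlPart": html_body,
    },
)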

Step 3: Create an Amazon Pinpoint Campaign

By sending a campaign, we can verify that our Amazon Pinpoint project is configured correctly, and that we created the segment and template correctly.
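
If you prefer to script this verification step, the campaign can also be created through the Amazon Pinpoint API. The sketch below assumes you have already looked up the project ID and the ID of the imported segment (for example, in the console or with get_segments); both values are placeholders.

import boto3

pinpoint = boto3.client("pinpoint")

APPLICATION_ID = "your-pinpoint-project-id"   # placeholder
SEGMENT_ID = "your-imported-segment-id"       # placeholder; look it up in the console or via get_segments

pinpoint.create_campaign(
    ApplicationId=APPLICATION_ID,
    WriteCampaignRequest={
        "Name": "AdvancedTemplateTest",
        "SegmentId": SEGMENT_ID,
        # Reference the message template created in Step 2 by name.
        "TemplateConfiguration": {"EmailTemplate": {"Name": "AdvancedTemplateExample"}},
        "Schedule": {"StartTime": "IMMEDIATE", "Frequency": "ONCE"},
    },
)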

To create the segment and campaign:

  1. Open the Amazon Pinpoint console at http://console.aws.amazon.com/pinpoint, and then choose the project that you created in step 1.
  2. In the navigation pane, choose Campaigns, and then choose Create a campaign.
  3. Name the campaign “AdvancedTemplateTest.” Under Choose a channel for this campaign, choose Email, and then choose Next.
  4. On the Choose a segment page, choose the segment that you created in Step 1, and then choose Next.
  5. In Create your message, choose the template we just created, AdvancedTemplateExample. Note: You will see an Alert with: “This template contains a reference to an attribute from another project…” This is expected, as Pinpoint scans the template for attributes, allowing you to specify default values in case the endpoint doesn’t contain a value for the attribute. In this blog post we are using the {{#if}} conditional helper to handle any missing data, i.e. {{#if Attributes.FirstName.[0]}}.
  6. On the Choose when to send the campaign page, keep all of the default values, and then choose Next.
  7. On the Review and launch page, choose Launch campaign.

Within a few seconds, you should receive an email.

So what just happened?

Let’s take a deeper dive into each of the helpers we included in the template:

{{#* inline "salutation"}}
    {{#if Attributes.FirstName.[0]}}
        Dear {{Attributes.FirstName.[0]}},<br /><br />
    {{else}}
        Dear Valued Customer,<br /><br />
    {{/if}}
{{/inline}}

First you will notice that we are making use of Inline Partials. Using Inline Partials allows you to build a library of frequently used snippets of content. In this case we frequently use a salutation in our communications. You can build and maintain your own frequently used snippets and include them at the beginning of the template.

Later in the message we can simply include: {{> salutation}} to include a salutation in our email.

In this example we also see the {{#if}} helper which is used to evaluate if a first name is available on the endpoint. If the name is found, a greeting is returned that passes the user’s first name in the response. Otherwise, the else statement returns an alternative greeting.

{{#* inline "clvcoupon"}}
    {{#if Attributes.CLVTier.[0]}}
        {{#eq Attributes.CLVTier.[0] "High"}}
            As a thank-you for your continued support, please use this coupon code for <strong>30%</strong> off your next order: <strong>WELOVEYOU30</strong>
        {{/eq}}
    {{/if}}
{{/inline}}

Again, we are using Inline Partials to organize our code. Additionally we are using {{#if}} to see if the user has a CLVTier attribute and if so, we use the {{#eq}} conditional helper to see if their CLVTier is “High” as we only want this coupon to display for customers that fall into that tier.

Note that CLVTier is an attribute that is populated along with the Endpoint when we created the Segment above. You could also use a solution such as Predictive Segmentation using Amazon Pinpoint and Amazon SageMaker to incorporate Machine Learning to classify your existing users.

{{#* inline "footer"}}
    <hr />
    Accent Athletics - 1234 Anywhere Ave, Anywhere USA, 12345 - <a href="https://www.example.com/preferences/index.html?pid={{ApplicationId}}&uid={{User.UserId}}&h={{sha256 (join User.UserId "d67c37ed538b751d850de18" "+" prefix="" suffix="")}}">Manage Preferences</a>
    <hr />
{{/inline}}

In the example above we are using the {{sha256}} and {{join}} helpers to create a secure link to the Preference Center deployed as part of the Amazon Pinpoint Preference Center solution.
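
For context, the h query parameter is a SHA-256 digest of the user ID joined to a shared secret. A preference center could recompute and compare that digest server-side, as in this sketch; it assumes the join helper produces "<UserId>+<secret>", that {{sha256}} returns the hex digest, and it uses the placeholder secret from the template above, not a real key.

import hashlib

def is_valid_preference_link(user_id: str, supplied_hash: str,
                             shared_secret: str = "d67c37ed538b751d850de18") -> bool:
    # Recompute the hash the template embeds in the Manage Preferences URL
    # and compare it with the h= query parameter.
    expected = hashlib.sha256(f"{user_id}+{shared_secret}".encode("utf-8")).hexdigest()
    return expected == supplied_hash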

{{> salutation}}
Thank-you for your order! {{> clvcoupon}}<br /><br /><hr />
Order Date: {{now format="d MMM yyyy HH:mm:ss" tz=America/Los_Angeles}}<br /><br />

This is where all of our hard work implementing Inline Partials really starts to pay off. To include our salutation and coupon we simply need to specify: {{> salutation}} and {{> clvcoupon}}

The {{now}} string helper allows us to display the current date and time in the format of our choosing. Refer to the Amazon Pinpoint message template helpers documentation for more details on the date pattern and the available time zones.

<tbody>
{{#each Attributes.ProductNumber}}
  {{#eq (modulo @index 2) "1.0"}}
    <tr style="background-color: #f2f2f2;">
  {{else}}
    <tr>
  {{/eq}}
    <td style="text-align:left;">{{this}}</td>
    <td style="text-align:left;">{{#with (lookup ../Attributes.ProductName @index)}}{{this}}{{/with}}</td>
    <td style="text-align:center;">{{#with (lookup ../Attributes.ItemCount @index)}}{{this}}{{/with}}</td>
    <td style="text-align:center;">${{#with (lookup ../Attributes.Amount @index)}}{{this}}{{/with}}</td>
  </tr>
{{else}}
  <tr>
    <td style="text-align:left;">{{Attributes.ProductNumber}}</td>
    <td style="text-align:center;">{{Attributes.ProductName}}</td>
    <td style="text-align:center;">{{Attributes.ItemCount}}</td>
    <td style="text-align:center;">{{Attributes.Amount}}</td>
  </tr>
{{/each}}
</tbody>

This particular section has a lot going on. We will break each part down for further explanation:

  • {{#each}} – Allows us to loop through each of the values in our attribute. In this case our ProductNumber attribute will contain: ["order#1", "order#2", "order#3", etc.]
    Note that if you only have one item in the attribute array, Pinpoint will simplify it into a single string attribute. That is why we have the {{else}} part of the {{#each}} statement; it allows us to reference the attribute as a single string in case we don’t have a collection of values in the attribute.
  • {{#eq (modulo @index 2) “1.0”}} – In order to alternate our background color for even/odd rows, we are making use of the {{modulo}} operator from the Math and encoding helpers which will return the remainder of two given numbers allowing us to determine if this is an odd or even row.
    @index is a native Handlebars.js property that contains the current index we are on in the loop.
  • {{this}} – When iterating through a collection using {{#each}}, {{this}} allows you to reference the current item in the collection
  • {{#with (lookup ../Attributes.ProductName @index)}}{{this}}{{/with}} – Lookup is a built-in Handlebars helper that allows us to find values in another collection. We are using the index combined with lookup to find the Product Name that goes along with the Product Number we are currently on. The same pattern is used for the remaining columns of the table. The ability to look up values in another attribute collection is the key to how we are able to display a table of information, but note that it does require making sure the attributes are aligned in the proper columns. For example, Attributes.ProductNumber[0] needs to align with Attributes.ProductName[0]. See Using variables with message template helpers for more details.
{{> footer}}

And just to wrap things up, let’s pull in the footer we defined with Inline Partials.

Next Steps

Using the techniques above, you can create sophisticated and personalized communications using Amazon Pinpoint.

Think about your existing communications to see if you can use personalization to increase customer engagement for your promotional and transactional messages.

Amazon Pinpoint is a flexible and scalable outbound and inbound marketing communications service. Learn more here: https://aws.amazon.com/pinpoint/

Complying with DMARC across multiple accounts using Amazon SES

Post Syndicated from Brendan Paul original https://aws.amazon.com/blogs/messaging-and-targeting/complying-with-dmarc-across-multiple-accounts-using-amazon-ses/

Introduction

For enterprises of all sizes, email is a critical piece of infrastructure that supports large volumes of communication from an organization. As such, companies need a robust solution to deal with the complexities this may introduce. In some cases, companies have multiple domains that support several different business units and need a distributed way of managing email sending for those domains. For example, you might want different business units to have the ability to send emails from subdomains, or give a marketing company the ability to send emails on your behalf. Amazon Simple Email Service (Amazon SES) is a cost-effective, flexible, and scalable email service that enables developers to send mail from any application. One of the benefits of Amazon SES is that you can configure Amazon SES to authorize other users to send emails from addresses or domains that you own (your identities) using their own AWS accounts. When allowing other accounts to send emails from your domain, it is important to ensure this is done securely. Amazon SES allows you to send emails to your users using popular authentication methods such as DMARC. In this blog, we walk you through 1/ how to comply with DMARC when using Amazon SES and 2/ how to enable other AWS accounts to send authenticated emails from your domain.

DMARC: what is it, why is it important?

DMARC stands for “Domain-based Message Authentication, Reporting & Conformance”, and it is an email authentication protocol (DMARC.org). DMARC gives domain owners and email senders a way to protect their domain from being used by malicious actors in phishing or spoofing attacks. Email spoofing can be used as a way to compromise users’ financial or personal information by taking advantage of their trust of well-known brands. DMARC makes it easier for senders and recipients to determine whether or not an email was actually sent by the domain that it claims to have been sent by.

Solution Overview

In this solution, you will learn how to set up DKIM signing on Amazon SES, implement a DMARC Policy, and enable other accounts in your organization to send emails from your domain using Sending Authorization. When you set up DKIM signing, Amazon SES will attach a digital signature to all outgoing messages, allowing recipients to verify that the email came from your domain. You will then set your DMARC Policy, which tells an email receiver what to do if an email is not authenticated. Lastly, you will set up Sending Authorization so that other AWS accounts can send authenticated emails from your domain.

Prerequisites

In order to complete the example illustrated in this blog post, you will need to have:

  1. A domain in an Amazon Route53 Hosted Zone or third-party provider. Note: You will need to add/update records for the domain. For this blog we will be using Route53.
  2. An AWS Organization
  3. A second AWS account to send Amazon SES Emails within a different AWS Organizations OU. If you have not worked with AWS Organizations before, review the Organizations Getting Started Guide

How to comply with DMARC (DKIM and SPF) in Amazon SES

In order to comply with DMARC, you must authenticate your messages with either DKIM (DomainKeys Identified Mail), SPF (Sender Policy Framework), or both. DKIM allows you to send email messages with a cryptographic key, which enables email providers to determine whether or not the email is authentic. SPF defines what servers are allowed to send emails for their domain. To use SPF for DMARC compliance you need to set up a custom MAIL FROM domain in Amazon SES. To authenticate your emails with DKIM in Amazon SES, you have the option of using Easy DKIM or providing your own DKIM authentication token (BYODKIM).

In this blog, you will be setting up a sending identity.

Setting up DKIM Signing in Amazon SES

  1. Navigate to the Amazon SES Console 
  2. Select Verify a New Domain and enter the name of your domain
  3. Select Generate DKIM Settings
  4. Choose Verify This Domain
    1. This will generate the DNS records needed to complete domain verification, DKIM signing, and routing incoming mail.
    2. Note: When you initiate domain verification using the Amazon SES console or API, Amazon SES gives you the name and value to use for the TXT record. Add a TXT record to your domain’s DNS server using the specified Name and Value. Amazon SES domain verification is complete when Amazon SES detects the existence of the TXT record in your domain’s DNS settings.
  5. If you are using Route 53 as your DNS provider, choose the Use Route 53 button to update the DNS records automatically
    1. If you are not using Route 53, go to your third-party provider and add the TXT record to verify the domain as well as the three CNAME records to enable DKIM signing. You can also add the MX record at the end to route incoming mail to Amazon SES.
    2. A list of common DNS Providers and instructions on how to update the DNS records can be found in the Amazon SES documentation
  6. Choose Create Record Sets if you are using Route53 as shown below or choose Close after you have added the necessary records to your third-party DNS provider.

 

Note: in the case that you previously verified a domain, but did NOT generate the DKIM settings for your domain, follow the steps below. Skip these steps if this is not the case:

  1. Go to the Amazon SES Console, and select your domain
  2. Select the DKIM dropdown
  3. Choose Generate DKIM Settings and copy the three values in the record set shown
    1. You may also download the record set as a CSV file
  4. Navigate to the Route53 console or your third-party DNS provider. Instructions on how to update the DNS records in your third-party can be found in the Amazon SES documentation
  5. Select the domain you are using
  6. Choose Create Record

  7. Enter the values that Amazon SES has generated for you, and add the three CNAME records to your domain
  8. Wait a few minutes, and go back to your domain in the Amazon SES Console
  9. Check that the DKIM status is verified

You also want to set up a custom MAIL FROM domain that you will use later on. To do so, follow the steps in the documentation.
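
If you prefer to drive these steps from code, the same DKIM and MAIL FROM setup can be done with the Amazon SES API. The following boto3 sketch uses example.com and mail.example.com as placeholders; the DKIM tokens it prints still need to be published as CNAME records in your DNS.

import boto3

ses = boto3.client("ses")

DOMAIN = "example.com"  # placeholder: replace with your own domain

# Generate the three DKIM tokens; each one becomes a CNAME record
# <token>._domainkey.<domain> -> <token>.dkim.amazonses.com
tokens = ses.verify_domain_dkim(Domain=DOMAIN)["DkimTokens"]
for token in tokens:
    print(f"{token}._domainkey.{DOMAIN} CNAME {token}.dkim.amazonses.com")

# Configure the custom MAIL FROM domain used for SPF alignment.
ses.set_identity_mail_from_domain(
    Identity=DOMAIN,
    MailFromDomain=f"mail.{DOMAIN}",
    BehaviorOnMXFailure="UseDefaultValue",
)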

Setting up a DMARC policy on your domain

DMARC policies are TXT records you place in DNS to define what happens to incoming emails that don’t align with the validations provided when setting up DKIM and SPF. With this policy, you can choose to allow the email to pass through, quarantine the email into a folder like junk or spam, or reject the email.

As a best practice, you should start with a DMARC policy that doesn’t reject all email traffic and collect reports on emails that don’t align to determine if they should be allowed. You can also set a percentage on the DMARC policy to perform filtering on a subset of emails to, for example, quarantine only 50% of the emails that don’t align. Once you are in a state where you can begin to reject non-compliant emails, flip the policy to reject failed authentications. When you set the DMARC policy for your domain, any subdomains that are authorized to send on behalf of your domain will inherit this policy and the same rule will apply. For more information on setting up a DMARC policy, see our documentation.

In a scenario where you have multiple subdomains sending emails, you should be setting the DMARC policy for the organizational domain that you own. For example, if you own the domain example.com and also want to use the sub-domain sender.example.com to send emails you can set the organizational DMARC policy (as a DNS TXT record) to:

Name: _dmarc.example.com
Type: TXT
Value: "v=DMARC1;p=quarantine;pct=50;rua=mailto:[email protected]"

This DMARC policy states that 50% of emails coming from example.com that fail authentication should be quarantined and you want to send a report of those failures to [email protected]. For your sender.example.com sub-domain, this policy will be inherited unless you specify another DMARC policy for the sub-domain. In the case where you want to be stricter on the sub-domain, you could add another DMARC policy like you see in the following table.

 

Name: _dmarc.sender.example.com
Type: TXT
Value: "v=DMARC1;p=reject;pct=100;rua=mailto:[email protected];ruf=mailto:[email protected]"

This policy would apply to emails coming from sender.example.com and would reject any email that fails authentication. It would also send aggregate feedback to [email protected] and detailed message-specific failure information to [email protected] for further analysis.
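
If your domain is hosted in Route 53, the DMARC TXT record can also be created with the API instead of the console. This is a minimal sketch; the hosted zone ID and the report mailbox are placeholders, and Route 53 expects the TXT value to be wrapped in double quotes.

import boto3

route53 = boto3.client("route53")

HOSTED_ZONE_ID = "Z0123456789ABCDEFGHIJ"  # placeholder hosted zone ID
DMARC_VALUE = '"v=DMARC1;p=quarantine;pct=50;rua=mailto:dmarc-reports@example.com"'  # placeholder report address

route53.change_resource_record_sets(
    HostedZoneId=HOSTED_ZONE_ID,
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "_dmarc.example.com",
                "Type": "TXT",
                "TTL": 300,
                # Route 53 requires TXT record values to be quoted.
                "ResourceRecords": [{"Value": DMARC_VALUE}],
            },
        }]
    },
)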

Sending Authorization in Amazon SES – Allowing Other Accounts to Send Authenticated Emails

Now that you have configured Amazon SES to comply with DMARC in the account that owns your identity, you may want to allow other accounts in your organization the ability to send emails in the same way. Using Sending Authorization, you can authorize other users or accounts to send emails from identities that you own and manage. An example of where this could be useful is an organization that has multiple business units. Using sending authorization, a business unit’s application could send emails to their customers from the top-level domain. This application would be able to leverage the authentication settings of the identity owner without additional configuration. Another advantage is that if the business unit has its own subdomain, the top-level domain’s DKIM settings can apply to this subdomain, so long as you are using Easy DKIM in Amazon SES and have not set up Easy DKIM for the specific subdomains.

Setting up sending authorization across accounts

Before you set up sending authorization, note that working across multiple accounts can impact bounces, complaints, pricing, and quotas in Amazon SES. Amazon SES documentation provides a good understanding of the impacts when using multiple accounts. Specifically, delegated senders are responsible for bounces and complaints and can set up notifications to monitor such activities. These also count against the delegated senders account quotas. To set up Sending Authorization across accounts:

  1. Navigate to the Amazon SES Console from the account that owns the Domain
  2. Select Domains under Identity Management
  3. Select the domain that you want to set up sending authorization with
  4. Select View Details
  5. Expand Identity Policies and Click Create Policy
  6. You can either create a policy using the policy generator or create a custom policy. For the purposes of this blog, you will create a custom policy.
  7. For the custom policy, you will allow a particular Organizational Unit (OU) from your AWS Organization access to your domain. You can also limit access to particular accounts or other IAM principals. Use the following policy to allow a particular OU to access the domain:

{
  "Version": "2012-10-17",
  "Id": "AuthPolicy",
  "Statement": [
    {
      "Sid": "AuthorizeOU",
      "Effect": "Allow",
      "Principal": "*",
      "Action": [
        "SES:SendEmail",
        "SES:SendRawEmail"
      ],
      "Resource": "<Arn of Verified Domain>",
      "Condition": {
        "ForAnyValue:StringLike": {
          "aws:PrincipalOrgPaths": "<Organization Id>/<Root OU Id>/<Organizational Unit Id>"
        }
      }
    }
  ]
}

8. Make sure to replace the placeholder values with your verified domain ARN and the organization path of the OU you want to limit access to.

 

You can find more policy examples in the documentation. Note that you can configure sending authorization such that all accounts under your AWS Organization are authorized to send via a certain subdomain.
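
The same identity policy can also be attached programmatically with the Amazon SES PutIdentityPolicy API. Here is a minimal boto3 sketch; the identity, identity ARN, and organization path are placeholders you would replace with your own values.

import json
import boto3

ses = boto3.client("ses")

# Placeholder values for illustration.
IDENTITY = "example.com"
IDENTITY_ARN = "arn:aws:ses:us-east-1:111122223333:identity/example.com"
ORG_PATH = "o-exampleorgid/r-exampleroot/ou-exampleou"

policy = {
    "Version": "2012-10-17",
    "Id": "AuthPolicy",
    "Statement": [{
        "Sid": "AuthorizeOU",
        "Effect": "Allow",
        "Principal": "*",
        "Action": ["SES:SendEmail", "SES:SendRawEmail"],
        "Resource": IDENTITY_ARN,
        "Condition": {
            "ForAnyValue:StringLike": {"aws:PrincipalOrgPaths": ORG_PATH}
        },
    }],
}

# Attach the policy to the verified identity that the other accounts will send from.
ses.put_identity_policy(
    Identity=IDENTITY,
    PolicyName="AuthPolicy",
    Policy=json.dumps(policy),
)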

Testing

You can now test the ability to send emails from your domain in a different AWS account. You will do this by creating a Lambda function to send a test email. Before you create the Lambda function, you will need to create an IAM role for the Lambda function to use.

Creating the IAM Role:

  1. Log in to your separate AWS account
  2. Navigate to the IAM Management Console
  3. Select Roles and choose Create Role
  4. Under Choose a use case select Lambda
  5. Choose Next: Permissions
  6. In the search bar, type SES and select the check box next to AmazonSESFullAccess
  7. Choose Next: Tags, and then Next: Review
  8. Give the role a name of your choosing, and choose Create Role

Navigate to Lambda Console

  1. Select Create Function
  2. Choose the box marked Author from Scratch
  3. Give the function a name of your choosing (Ex: TestSESfunction)
  4. In this demo, you will be using Python 3.8 runtime, but feel free to modify to your language of choice
  5. Select the Change default execution role dropdown, and choose the Use an existing role radio button
  6. Under Existing Role, choose the role that you created in the previous step, and create the function

Edit the function

  1. Navigate to the Function Code portion of the page and open the function's Python file
  2. Replace the default code with the code shown below, ensuring that you put your own values in based on your resources
  3. Values needed:
    1. Test Email Address: an email address you have access to
      1. NOTE: If you are still operating in the Amazon SES Sandbox, this will need to be a verified email in Amazon SES. To verify an email in Amazon SES, follow the process here. Alternatively, here is how you can move out of the Amazon SES Sandbox
    2. SourceArn: The arn of your domain. This can be found in Amazon SES Console → Domains → <YourDomain> → Identity ARN
    3. ReturnPathArn: The same as your Source ARN
    4. Source: This should be your Mail FROM Domain @ your domain
      1. Your Mail FROM Domain can be found under Domains → <YourDomain> → Mail FROM Domain dropdown
      2. Ex: [email protected]
    5. Use the following function code for this example

import json
import boto3
from botocore.exceptions import ClientError

client = boto3.client('ses')
def lambda_handler(event, context):
    # Try to send the email.
    try:
        #Provide the contents of the email.
        response = client.send_email(
            Destination={
                'ToAddresses': [
                    '<[email protected]>',
                ],
            },
            Message={
                'Body': {
                    'Html': {
                        'Charset': 'UTF-8',
                        'Data': 'This email was sent with Amazon SES.',
                    },
                },
                'Subject': {
                    'Charset': 'UTF-8',
                    'Data': 'Amazon SES Test',
                },
            },
            SourceArn='<your-ses-identity-ARN>',
            ReturnPathArn='<your-ses-identity-ARN>',
            Source='<[email protected]>',
             )
    # Display an error if something goes wrong.
    except ClientError as e:
        print(e.response['Error']['Message'])
    else:
        # send_email returns the Amazon SES message ID on success.
        print("Email sent! Message ID:")
        print(response['MessageId'])

  4. Once you have replaced the appropriate values, choose the Deploy button to deploy your changes.

Run a Test invocation

  1. After you have deployed your changes, select the “Test” Panel above your function code

  2. You can leave all of these keys and values as default, as the function does not use any event parameters
  3. Choose the Invoke button in the top right corner
  4. You should see a successful execution result above the test event window.

Verifying that the Email has been signed properly

Depending on your email provider, you may be able to check the DKIM signature directly in the application. As an example, for Outlook, right click on the message, and choose View Source from the menu. You should see a line that shows the Authentication Results and whether or not the DKIM/SPF signature passed. For Gmail, go to your Gmail Inbox on the Gmail web app. Choose the message you wish to inspect, and choose the More Icon. Choose View Original from the drop-down menu. You should then see the SPF and DKIM “PASS” Results.

Cleanup

To clean up the resources in your account,

  1. Navigate to the Route53 Console
  2. Select the Hosted Zone you have been working with
  3. Select the CNAME, TXT, and MX records that you created earlier in this blog and delete them
  4. Navigate to the SES Console
  5. Select Domains
  6. Select the Domain that you have been working with
  7. Click the drop down Identity Policies and delete the one that you created in this blog
  8. If you verified a domain for the sake of this blog: navigate to the Domains tab, select the domain and select Remove
  9. Navigate to the Lambda Console
  10. Select Functions
  11. Select the function that you created in this exercise
  12. Select Actions and delete the function

Conclusion

In this blog post, we demonstrated how to delegate sending and management of your sub-domains to other AWS accounts while also complying with DMARC when using Amazon SES. In order to do this, you set up a sending identity so that Amazon SES automatically adds a DKIM signature to your messages. Additionally, you created a custom MAIL FROM domain to comply with SPF. Lastly, you authorized another AWS account to send emails from a sub-domain managed in a different account, and tested this using a Lambda function. Allowing other accounts the ability to manage and send email from your sub-domains provides flexibility and scalability for your organization without compromising on security.

Now that you have set up DMARC authentication for multiple accounts in your environment, head to the AWS Messaging & Targeting Blog to see examples of how you can combine Amazon SES with other AWS Services!

If you have more questions about Amazon Simple Email Service, check out our FAQs or our Developer Guide.

If you have feedback about this post, submit comments in the Comments section below.

Forwarding emails automatically based on content with Amazon Simple Email Service

Post Syndicated from Murat Balkan original https://aws.amazon.com/blogs/messaging-and-targeting/forwarding-emails-automatically-based-on-content-with-amazon-simple-email-service/

Introduction

Email is one of the most popular channels consumers use to interact with support organizations. In its most basic form, consumers will send their email to a catch-all email address where it is further dispatched to the correct support group. Often, this requires a person to inspect content manually. Some IT organizations even have a dedicated support group that handles triaging the incoming emails before assigning them to specialized support teams. Triaging each email can be challenging, and delays in email routing and support processes can reduce customer satisfaction. By utilizing Amazon Simple Email Service’s deep integration with Amazon S3, AWS Lambda, and other AWS services, you can automate the task of categorizing and routing emails. This automation results in increased operational efficiencies and reduced costs.

This blog post shows you how to build a serverless application that receives emails with Amazon SES and delivers them to an Amazon S3 bucket. The application uses Amazon Comprehend to identify the dominant language from the message body. It then looks up the detected language in an Amazon DynamoDB table to find the email address of the support group that specializes in that language. As the last step, it forwards the email via Amazon SES to its destination. Archiving incoming emails to Amazon S3 also enables further processing or auditing.

Architecture

By completing the steps in this post, you will create a system that uses the architecture illustrated in the following image:

Architecture showing how to forward emails by content using Amazon SES

The flow of events starts when a customer sends an email to the generic support email address like [email protected]. This email is listened to by Amazon SES via a recipient rule. As per the rule, incoming messages are written to a specified Amazon S3 bucket with a given prefix.

This bucket and prefix are configured with S3 Events to trigger a Lambda function on object creation events. The Lambda function reads the email object, parses the contents, and sends them to Amazon Comprehend for language detection.

The Lambda function then looks up the detected language code in an Amazon DynamoDB table that maps language codes to support group email addresses. One support group could answer English emails, while another answers French emails. The Lambda function determines the destination address and re-sends the same email by performing an email forward operation. If the lookup does not return a destination address, or the language could not be detected, the email is forwarded to a catch-all email address specified during the application deployment.
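
To make the flow concrete, here is a highly condensed sketch of what such a Lambda handler could look like. It is not the function from the sample repository: the table name matches the SAM template described below, the sender and catch-all addresses are placeholders, and a production forwarder would need more careful MIME and header handling.

import boto3
import email

s3 = boto3.client("s3")
comprehend = boto3.client("comprehend")
dynamodb = boto3.resource("dynamodb")
ses = boto3.client("ses")

TABLE_NAME = "language-lookup"                 # mapping table created by the SAM template
FROM_ADDRESS = "info@example.com"              # placeholder: your verified sender
CATCHALL_ADDRESS = "catchall@example.com"      # placeholder: the catch-all destination


def lambda_handler(event, context):
    # Fetch the raw email that Amazon SES stored in S3.
    record = event["Records"][0]["s3"]
    raw = s3.get_object(Bucket=record["bucket"]["name"],
                        Key=record["object"]["key"])["Body"].read()
    message = email.message_from_bytes(raw)

    # Detect the dominant language of (a slice of) the message body.
    part = message.get_payload(0) if message.is_multipart() else message
    text = str(part.get_payload())[:4500]
    languages = comprehend.detect_dominant_language(Text=text)["Languages"]
    code = languages[0]["LanguageCode"] if languages else None

    # Look up the support group for that language; fall back to the catch-all address.
    item = None
    if code:
        item = dynamodb.Table(TABLE_NAME).get_item(Key={"language": code}).get("Item")
    destination = item["destination"] if item else CATCHALL_ADDRESS

    # Rewrite the relevant headers and forward the original message.
    for header in ("From", "Return-Path", "To"):
        del message[header]
    message["From"] = FROM_ADDRESS
    message["To"] = destination
    ses.send_raw_email(Source=FROM_ADDRESS,
                       Destinations=[destination],
                       RawMessage={"Data": message.as_bytes()})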

In this example, Amazon SES hosts the destination email addresses used for forwarding, but this is not a requirement. External email servers can also receive the forwarded emails.

Prerequisites

To use Amazon SES for receiving email messages, you need to verify a domain that you own. Refer to the documentation to verify your domain with the Amazon SES console. If you do not have a domain name, you can register one with Amazon Route 53.

Deploying the Sample Application

Clone this GitHub repository to your local machine and install and configure AWS SAM with a test AWS Identity and Access Management (IAM) user.

You will use AWS SAM to deploy the remaining parts of this serverless architecture.

The AWS SAM template creates the following resources:

  • An Amazon DynamoDB mapping table (language-lookup) that contains information about language codes and associates them with destination email addresses.
  • An AWS Lambda function (BlogEmailForwarder) that reads the email content, parses it, detects the language, looks up the forwarding destination email address, and sends the email.
  • An Amazon S3 bucket, which will store the incoming emails.
  • IAM roles and policies.

To start the AWS SAM deployment, navigate to the root directory of the repository you downloaded and where the template.yaml AWS SAM template resides. AWS SAM also requires you to specify an Amazon Simple Storage Service (Amazon S3) bucket to hold the deployment artifacts. If you haven’t already created a bucket for this purpose, create one now. You can refer to the documentation to learn how to create an Amazon S3 bucket. The bucket should have read and write access by an AWS Identity and Access Management (IAM) user.

At the command line, enter the following command to package the application:

sam package --template template.yaml --output-template-file output_template.yaml --s3-bucket BUCKET_NAME_HERE

In the preceding command, replace BUCKET_NAME_HERE with the name of the Amazon S3 bucket that should hold the deployment artifacts.

AWS SAM packages the application and copies it into this Amazon S3 bucket.

When the AWS SAM package command finishes running, enter the following command to deploy the package:

sam deploy --template-file output_template.yaml --stack-name blogstack --capabilities CAPABILITY_IAM --parameter-overrides FromEmailAddress=info@YOUR_DOMAIN_NAME_HERE CatchAllEmailAddress=catchall@YOUR_DOMAIN_NAME_HERE

In the preceding command, replace YOUR_DOMAIN_NAME_HERE with the domain name you validated with Amazon SES. This domain also applies to other commands and configurations that will be introduced later.

This example uses “blogstack” as the stack name; you can change this to any other name you want. When you run this command, AWS SAM shows the progress of the deployment.

Configure the Sample Application

Now that you have deployed the application, you will configure it.

Configuring Receipt Rules

To deliver incoming messages to Amazon S3 bucket, you need to create a Rule Set and a Receipt rule under it.

Note: This blog uses Amazon SES console to create the rule sets. To create the rule sets with AWS CloudFormation, refer to the documentation.

  1. Navigate to the Amazon SES console. From the left navigation choose Rule Sets.
  2. Choose the Create a Receipt Rule button in the right pane.
  3. Add info@YOUR_DOMAIN_NAME_HERE as the first recipient address by entering it into the text box and choosing Add Recipient.

 

 

Choose the Next Step button to move on to the next step.

  4. On the Actions page, select S3 from the Add action drop-down to reveal S3 action’s details. Select the S3 bucket that was created by the AWS SAM template. It is in the format of your_stack_name-inboxbucket-randomstring. You can find the exact name in the outputs section of the AWS SAM deployment under the key name InboxBucket or by visiting the AWS CloudFormation console. Set the Object key prefix to info/. This tells Amazon SES to add this prefix to all messages destined to this recipient address. This way, you can re-use the same bucket for different recipients.

Choose the Next Step button to move on to the next step.

In the Rule Details page, give this rule a name at the Rule name field. This example uses the name info-recipient-rule. Leave the rest of the fields with their default values.

Choose the Next Step button to move on to the next step.

  5. Review your settings on the Review page and finalize rule creation by choosing Create Rule.

  6. In this example, you will be hosting the destination email addresses in Amazon SES rather than forwarding the messages to an external email server. This way, you will be able to see the forwarded messages in your Amazon S3 bucket under different prefixes. To host the destination email addresses, you need to create different rules under the default rule set. Create three additional rules for the catchall@YOUR_DOMAIN_NAME_HERE, english@YOUR_DOMAIN_NAME_HERE, and french@YOUR_DOMAIN_NAME_HERE email addresses by repeating steps 2 to 5. For Amazon S3 prefixes, use catchall/, english/, and french/ respectively. If you prefer to create these receipt rules programmatically, see the sketch below.
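
As an alternative to the console, each of these receipt rules could also be created with the Amazon SES API. The sketch below uses placeholder names for the rule set and bucket; repeat it with the catchall/, english/, and french/ recipients and prefixes.

import boto3

ses = boto3.client("ses")

RULE_SET = "default-rule-set"                        # placeholder: your active rule set
INBOX_BUCKET = "blogstack-inboxbucket-randomstring"  # placeholder: bucket from the SAM stack outputs

# One rule per recipient/prefix pair; repeat for catchall@, english@ and french@.
ses.create_receipt_rule(
    RuleSetName=RULE_SET,
    Rule={
        "Name": "info-recipient-rule",
        "Enabled": True,
        "Recipients": ["info@YOUR_DOMAIN_NAME_HERE"],
        "Actions": [{
            "S3Action": {
                "BucketName": INBOX_BUCKET,
                "ObjectKeyPrefix": "info/",
            }
        }],
    },
)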

 

Configuring Amazon DynamoDB Table

To configure the Amazon DynamoDB table that is used by the sample application

  1. Navigate to Amazon DynamoDB console and reach the tables view. Inspect the table created by the AWS SAM application.

The language-lookup table is where languages and their support group mappings are kept. You need to create an item for each language, and an item that holds the default destination email address to use when no language match is found. Amazon Comprehend supports more than 60 different languages. You can visit the documentation for the supported languages and add their language codes to this lookup table to enhance this application.

  1. To start inserting items, choose the language-lookup table to open the table overview page.
  2. Select the Items tab and choose Create item. From the dropdown, select Text. Add the following JSON content and choose Save to create your first mapping object. While adding the following object, replace the destination attribute’s value with an email address you own. The email messages will be forwarded to that address.

{
  "language": "en",
  "destination": "english@YOUR_DOMAIN_NAME_HERE"
}

Lastly, create an item for French language support.

{
  "language": "fr",
  "destination": "french@YOUR_DOMAIN_NAME_HERE"
}

Testing

Now that the application is deployed and configured, you will test it.

  1. Use your favorite email client to send the following email to the info@YOUR_DOMAIN_NAME_HERE address.

Subject: I need help

Body:

Hello, I’d like to return the shoes I bought from your online store. How can I do this?

After the email is sent, navigate to the Amazon S3 console to inspect the contents of the Amazon S3 bucket that is backing the Amazon SES Rule Sets. You can also check the AWS Lambda logs in the Amazon CloudWatch console to confirm that the Lambda function was triggered and ran successfully. You should receive an email with the same content at the address you defined for the English language.

  2. Next, send another email with the same content, this time in French.

Subject: j’ai besoin d’aide

Body:

Bonjour, je souhaite retourner les chaussures que j’ai achetées dans votre boutique en ligne. Comment puis-je faire ceci?

 

If a message is not matched to a language in the lookup table, the Lambda function forwards it to the catch-all email address that you provided during the AWS SAM deployment.

You can inspect the new email objects under the english/, french/, and catchall/ prefixes to observe the forwarding behavior.

Continue experimenting with the sample application by sending different email contents to the info@YOUR_DOMAIN_NAME_HERE address or by adding other language codes and email address combinations to the mapping table. You can find the available languages and their codes in the documentation. When adding support for a new language, don’t forget to associate a new email address and Amazon S3 bucket prefix by defining a new rule.

Cleanup

To clean up the resources you used in your account,

  1. Navigate to the Amazon S3 console and delete the inbox bucket’s contents. You will find the name of this bucket in the outputs section of the AWS SAM deployment under the key name InboxBucket or by visiting the AWS CloudFormation console.
  2. Navigate to AWS CloudFormation console and delete the stack named “blogstack”.
  3. After the stack is deleted, remove the domain from Amazon SES. To do this, navigate to the Amazon SES Console and choose Domains from the left navigation. Select the domain you want to remove and choose Remove button to remove it from Amazon SES.
  4. From the Amazon SES Console, navigate to the Rule Sets from the left navigation. On the Active Rule Set section, choose View Active Rule Set button and delete all the rules you have created, by selecting the rule and choosing Action, Delete.
  5. On the Rule Sets page choose Disable Active Rule Set button to disable listening for incoming email messages.
  6. On the Rule Sets page, Inactive Rule Sets section, delete the only rule set, by selecting the rule set and choosing Action, Delete.
  7. Navigate to CloudWatch console and from the left navigation choose Logs, Log groups. Find the log group that belongs to the BlogEmailForwarderFunction resource and delete it by selecting it and choosing Actions, Delete log group(s).
  8. You will also delete the Amazon S3 bucket you used for packaging and deploying the AWS SAM application.

 

Conclusion

This solution shows how to use Amazon SES to classify email messages by the dominant content language and forward them to the respective support groups. You can use the same techniques to implement similar scenarios. For example, you can forward emails based on custom key entities, like product codes, or remove PII from emails before forwarding by using Amazon Comprehend.

With its native integrations with AWS services, Amazon SES allows you to enhance your email applications with different AWS Cloud capabilities easily.

To learn more about email forwarding with Amazon SES, visit the documentation and the AWS blogs.

Create a serverless feedback collector application using Amazon Pinpoint’s two-way SMS functionality

Post Syndicated from Murat Balkan original https://aws.amazon.com/blogs/messaging-and-targeting/create-a-serverless-feedback-collector-application-by-using-amazon-pinpoints-two-way-sms-functionality/

Introduction

Two-way SMS communication is used by many companies to create interactive engagements with their customers. Traditional SMS notifications are one-way. While this is valid for many use cases, like one-time password (OTP) notifications, security notifications, or reminders, other use cases may benefit from collecting information over the same channel. Two-way SMS allows customers to create this feedback mechanism and enhance business interactions and overall customer experience.

SMS is chosen for its simplicity and availability across different sets of devices. By combining the two-way SMS mechanism with the vast breadth of services Amazon Web Services (AWS) offers, companies can create effective architectures to better interact and serve their customers.

This blog post shows you how a serverless online appointment application can use Amazon Pinpoint’s two-way SMS functionality to collect customer feedback for completed appointments. You will learn how Amazon Pinpoint interacts with other AWS serverless services with its out-of-the-box integrations to create a scalable messaging application.

Architecture

By completing the steps in this post, you can create a system that uses the architecture illustrated in the following image:

The architecture of a feedback collector application that is composed of serverless AWS services

The flow of events starts when an Amazon DynamoDB table item, representing an online appointment, changes its status to COMPLETED. An AWS Lambda function, which is subscribed to these changes over DynamoDB Streams, detects this change and sends an SMS to the customer by using the Amazon Pinpoint API's sendMessages operation.

Amazon Pinpoint delivers the SMS to the recipient and returns a unique message ID to the AWS Lambda function. The Lambda function then adds this message ID to a DynamoDB table called message-lookup. This table is used for tracking the different feedback requests sent during a multi-step conversation and associating them with the appointment IDs. At this stage, the Lambda function also populates another table, feedbacks, which will hold the feedback responses that arrive as SMS reply messages.
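
For illustration, the outbound side of this flow boils down to a single send_messages call. The sketch below is not the repository's FeedbackSender code; the project ID, origination number, and question text are placeholders.

import boto3

pinpoint = boto3.client("pinpoint")

APPLICATION_ID = "your-pinpoint-project-id"   # placeholder
LONG_CODE = "+1XXXXXXXXXX"                    # placeholder: the long code requested later in this post

def send_feedback_request(customer_phone: str) -> str:
    # Send the first feedback question and return the Pinpoint message ID,
    # which is then stored in the message-lookup table with the appointment ID.
    response = pinpoint.send_messages(
        ApplicationId=APPLICATION_ID,
        MessageRequest={
            "Addresses": {customer_phone: {"ChannelType": "SMS"}},
            "MessageConfiguration": {
                "SMSMessage": {
                    "Body": "How would you rate your appointment from 1 to 5?",
                    "MessageType": "TRANSACTIONAL",
                    "OriginationNumber": LONG_CODE,
                }
            },
        },
    )
    return response["MessageResponse"]["Result"][customer_phone]["MessageId"]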

Each time a recipient replies to an SMS, Amazon Pinpoint publishes the reply event to an Amazon SNS topic to which an Amazon SQS queue is subscribed. Amazon Pinpoint also adds a message ID to this event, which allows you to bind it to the original sendMessages operation call.

A second AWS Lambda function polls these reply events from the Amazon SQS queue. It checks whether the reply is in the correct format (i.e. a number) and is associated with a previous request. If all conditions are met, the AWS Lambda function checks the ConversationStage attribute's value in the message-lookup table. Based on the current stage and the SMS answer received, the AWS Lambda function determines the next step.

For example, if the feedback score received is less than 5, a follow-up SMS is sent to the user asking if they’ll be happy to receive a call from the customer support team.

All SMS replies from the users are recorded in the feedbacks table for further analysis.
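
For reference, the FeedbackReceiver side could parse the SNS-wrapped reply events roughly as follows. This is only a sketch: the payload field names shown follow the Amazon Pinpoint two-way SMS event format and should be verified against your own messages, and the stage lookup and branching logic is omitted.

import json

def lambda_handler(event, context):
    for record in event["Records"]:
        sns_envelope = json.loads(record["body"])      # the SQS body is the SNS notification
        reply = json.loads(sns_envelope["Message"])    # the SNS message is the Pinpoint reply event

        body = reply.get("messageBody", "").strip()
        original_message_id = reply.get("previousPublishedMessageId")
        sender = reply.get("originationNumber")

        # Only numeric answers tied to a known outbound message are processed; the real
        # function would now look up original_message_id in the message-lookup table
        # and branch on the ConversationStage attribute.
        if not body.isdigit() or not original_message_id:
            continue
        score = int(body)
        print(f"Received score {score} from {sender} for message {original_message_id}")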

Deploying the Sample Application

  1. Clone this GitHub repository to your local machine and install and configure AWS SAM with a test AWS IAM user.

You will use AWS SAM to deploy the remaining parts of this serverless architecture.

The AWS SAM template creates the following resources:

    • An Amazon DynamoDB table (appointments) that contains information about appointments, customers and their appointment status.
    • An Amazon DynamoDB table (feedbacks) that holds the received feedbacks from customers.
    • An Amazon DynamoDB table (message-lookup) that holds the Amazon Pinpoint message ids and associate them to appointments to track a multi-step conversation.
    • Two AWS Lambda functions (FeedbackSender and FeedbackReceiver)
    • An Amazon SNS topic that collects state change events from Amazon Pinpoint.
    • An Amazon SQS queue that queues the incoming messages.
    • An Amazon Pinpoint Application with an associated SMS channel.

This architecture consists of two Lambda functions, which are represented as two different apps in the AWS SAM template. These functions are named FeedbackSender and FeedbackReceiver. The FeedbackSender function listens to the Amazon DynamoDB stream associated with the appointments table and sends the SMS message requesting feedback. The second Lambda function, FeedbackReceiver, polls the Amazon SQS queue and updates the feedbacks table in Amazon DynamoDB.

Note: You’ll incur some costs by deploying this stack into your account.

  2. To start the SAM deployment, navigate to the root directory of the repository you downloaded and where the template.yaml AWS SAM template resides. AWS SAM also requires you to specify an Amazon Simple Storage Service (Amazon S3) bucket to hold the deployment artifacts. If you haven’t already created a bucket for this purpose, create one now. The bucket should have read and write access by an AWS Identity and Access Management (IAM) user.

At the command line, enter the following command to package the application:

sam package --template template.yaml --output-template-file output_template.yaml --s3-bucket BUCKET_NAME_HERE

In the preceding command, replace BUCKET_NAME_HERE with the name of the Amazon S3 bucket that should hold the deployment artifacts.

AWS SAM packages the application and copies it into this Amazon S3 bucket.

When the AWS SAM package command finishes running, enter the following command to deploy the package:

sam deploy --template-file output_template.yaml --stack-name BlogStackPinpoint --capabilities CAPABILITY_IAM

When you run this command, AWS SAM shows the progress of the deployment. When the deployment finishes, navigate to the Amazon Pinpoint console and choose the project named “BlogApplication”. This example uses “BlogStackPinpoint” as the stack name; you can change this to any other name you want.

  3. From the left navigation, choose Settings, SMS and voice. On the SMS and voice settings page, choose the Request phone number button under Number settings.

Screenshot of request phone number screen

  4. Choose a target country. Set the Default message type as Transactional, and choose the Request long codes button to buy a long code.

Note: In United States, you can also request a Toll Free Number(TFN)

Screenshot showing long code addition

A long code will be added to the Number settings list.

  5. Choose the newly added number to reach the SMS Settings page and enable the option Enable two-way SMS. At the Incoming messages destination, select Choose an existing SNS topic, and from the drop-down select the Amazon SNS topic that was created by the BlogStackPinpoint stack.

Choose Save to save your SMS settings.

 

Testing the Sample Application

Now that the application is deployed and configured, test it by creating sample records in the Amazon DynamoDB table. Navigate to Amazon DynamoDB console and reach the tables view. Inspect the tables that were created by the AWS SAM application.

Here, the appointments table is where the appointments and their statuses are kept. It tracks the appointment lifecycle events with items identified by unique IDs. In this sample scenario, we are assuming that an appointment application creates a record with CREATED status when a new appointment is planned. After the appointment is finished, the same application updates the status to COMPLETED, which triggers the feedback collection process. Feedback results are collected in the feedbacks table. Amazon Pinpoint message IDs, conversation stages, and appointment IDs are kept in the message-lookup table.

  1. To start testing the end-to-end flow, choose the appointments table to open table overview page.
  2. Next, select the Items tab and choose Create item. From the dropdown, select Text. Add the following JSON and choose Save to create your first appointment object. While adding the following object, replace the CustomerPhone attribute’s value with a phone number you own. The feedback request messages will be delivered to that number. Note: This number should match the country of the long code you provisioned.

{

"CustomerName": "Customer A",

"CustomerPhone": "+12345678900",

"AppointmentStatus":"CREATED",

"id": "1"

}

  3. To trigger sending the feedback SMS, you need to set an existing item’s status to “COMPLETED”. To do this, select the item and choose Edit from the Actions menu.

Replace the item’s current JSON with the following.

{

"AppointmentStatus": "COMPLETED",

"CustomerName": "Customer A",

"CustomerPhone": "+12345678900",

"id": "1"

}

  4. Before choosing the Save button, double-check that you have set the CustomerPhone attribute’s value to a valid phone number.

After the change, you should receive an SMS message asking for feedback. Provide a numeric reply that is less than five to this message. This will trigger a follow-up question asking for consent to receive an in-person callback.

 

During your SMS conversation with the application, inspect the feedbacks table. The feedback you have given over this two-way SMS channel should have been reflected into the table.

If you want to repeat the process, make sure to increment the id field for any additional appointment records.

Cleanup

To clean up the resources you used in your account, simply navigate to AWS Cloudformation console and delete the stack named “BlogStackPinpoint”.

After the stack is deleted, you also need to delete the Long code from the Pinpoint Console by choosing the number and pressing Remove phone number button. You can also delete the Amazon S3 bucket you used for packaging and deploying the AWS SAM application.

Conclusion

This architecture shows how Amazon Pinpoint can be used to make two-way SMS communication with your customers. You can implement Two-way SMS functionality in other use cases such as appointment reminders, polls, Q&A services, and more.

To learn more about Pinpoint and its two-way SMS mechanism, you can visit the Pinpoint documentation.

 

Send SMS at scale to Indian recipients using Amazon Pinpoint

Post Syndicated from Meng Kang original https://aws.amazon.com/blogs/messaging-and-targeting/send-sms-at-scale-to-indian-recipients-using-amazon-pinpoint/

SMS has one of the highest open rates of all customer communications channels, and is popular with application builders for both transactional use cases like appointment reminders and asynchronous use cases like an SMS chatbot. Amazon Pinpoint supports SMS in over 200 countries and territories, but SMS sending requirements can vary by recipient destination. SMS sending requirements, depending on locale, can include restrictions on the origination identity used, message content, or the routes used to deliver to recipients. Amazon Pinpoint is making it easier for you to send application-to-person (A2P) SMS to Indian recipients using domestic routes. Amazon Pinpoint now supports submitting the Principal Entity ID (PEID) and Template ID using the Send Message API.
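
As a quick illustration of that API change, the sketch below passes the DLT registration details on an SMS send with boto3. All IDs and the destination number are placeholders; use your own project ID, DLT-registered Sender ID, Principal Entity ID, and Template ID.

import boto3

pinpoint = boto3.client("pinpoint")

APPLICATION_ID = "your-pinpoint-project-id"  # placeholder

pinpoint.send_messages(
    ApplicationId=APPLICATION_ID,
    MessageRequest={
        "Addresses": {"+919100000000": {"ChannelType": "SMS"}},  # placeholder recipient
        "MessageConfiguration": {
            "SMSMessage": {
                "Body": "Your registered DLT template content goes here",
                "MessageType": "TRANSACTIONAL",
                "SenderId": "SENDER",                  # placeholder: DLT-registered Sender ID
                "EntityId": "1234567890123456789",     # placeholder: Principal Entity ID from DLT
                "TemplateId": "1234567890123456789",   # placeholder: Template ID from DLT
            }
        },
    },
)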

Introduction to new regulations and DLT platform

In 2018, the Telecom Regulatory Authority of India (TRAI) released The Telecom Commercial Communication Customer Preference Regulation (TCCCPR) to regulate text messaging in India. To implement this, TRAI adopted a block-chain technology called the Distributed Ledger Technology (DLT) platform. Every business entity who wants to send Application-to-Person (A2P) SMS to their end users in India using domestic routes will need to register their business and use case on the DLT platform. DLT registration is a concept that is unique to the SMS industry in India. If you don’t send text messages to recipients in India then DLT registration doesn’t apply to you. You will not be able to send A2P SMS to Indian recipients using domestic routes if you do not register on the DLT platform. Please note that if you are an international enterprise and you would like to send A2P SMS to recipients in India, you can leverage Amazon Pinpoint’s International Long Distance Operator (ILDO) routes. See more information here.

Changes in sending A2P SMS

DLT has brought many changes to the way SMS is sent in India. These include a new registration process, new message categorization, as well as restrictions around how to send messages. See below for an overview of this process.

Registration with TRAI: Prior to DLT, only service providers/telemarketers were required to register with TRAI. With the updated regulations, business owners/principal entities that want to send SMS to their customers in India have to sign up and complete the registration process on the DLT platform.

Sender ID & Template Registration: Prior to DLT, bulk SMS service providers approved Sender IDs and templates. With the updated regulations, business owners/principal entities have to register Sender IDs and templates on the DLT platform and get them approved.

Customer Preference and Consent: Prior to DLT, customers were either on the National Do Not Disturb Registry (DND) or not. The new regulation gives control to consumers/mobile subscribers by offering a time window in which they can manage their preferences based on a specific time or day and allow receipt of certain kinds of promotional messages. This means that customers can choose to receive promotional texts from a company even if they have activated DND.

Types of Message: With the new regulation, the DLT-defined domestic routes are as follows.

  • Promotional SMS: these include offers and discounts sent to users who have not opted in, and SMS delivered to non-DND numbers where the customer has not explicitly opted in. These can only be delivered between 10:00 AM and 9:00 PM IST. Operators have also stopped supporting delivery notifications and receipts for promotional SMS. Promotional messages are sent using 6-digit numeric Sender IDs.
  • Transactional SMS: this route is reserved for financial organizations sending One-Time Passwords (OTPs). Transactional messages use 6-character alphabetic Sender IDs.
  • Service Implicit: these include service-related informative messages other than OTPs. Amazon Pinpoint classifies these under the “transactional” route type, so customers will continue to select the “transactional” route type to send these messages.
  • Service Explicit: these include promotional messages customers have opted to receive from a particular business. Amazon Pinpoint classifies these under the “transactional” route type, so customers will continue to select the “transactional” route type to send these messages.

Validation (Scrubbing) Functionality: With DLT, customers’ mobile numbers are filtered in real time against the preferences each customer has set. This means that if a customer gave consent to receive SMS on Monday but withdrew that permission on Wednesday, the SMS will not be delivered on Thursday. Customer preferences are updated in real time and the results are immediately available on DLT. When a message is sent, the DLT platform validates the PEID used against the registered Sender ID and validates the SMS template against the registered brand name, so that only approved businesses using approved Sender IDs can reach the end recipient.

DLT registration timeline

TRAI is the entity enforcing DLT implementation in India. TRAI has rolled these changes out over several phases spanning many months, with each phase increasing the level of restriction on message sending:

Phase 1 (Initiated June 2020): The first phase is to complete Principal Entity ID (PEID) and SMS Header (Sender ID) registration on the DLT platform. Only a registered SMS Header or Sender ID can be used to send SMS to India using domestic routes.

Phase 2 (Initiated Nov 2020): The second phase is to register Template IDs on the DLT platform. For each SMS sent, the PEID and Template ID are validated against the registered Sender ID.

Phase 3 (Initiated April 2021): The third phase is to validate that the content of the message matches exactly what was registered on the DLT platform. The brand name is a mandatory field to include in the content for template registration.

Using Pinpoint to send SMS to India

To send SMS messages to India from Amazon Pinpoint, you’ll first need to complete DLT registration. To do so, follow the steps described on the Special requirements for sending SMS messages to recipients in India documentation.

Next, to make sure your SMS messages are delivered successfully over local routes, you need to do the following when using Amazon Pinpoint to send the SMS message (a sample request sketch follows the list below).

  • Use a Sender ID which has been registered on the DLT platform that matches your message content.
  • In the Pinpoint Send Message API, provide values for the following parameters:
    • EntityId – The entity ID or Principal Entity (PE) ID received from the regulatory body for sending SMS in your country.
    • TemplateId – The template ID received from the regulatory body for sending SMS in your country.
  • Choose one of the following route types:
    • Promotional – Choose this type for promotional messages, which use a numeric sender ID.
    • Transactional – Choose this type for transactional messages, which use a case-sensitive alphanumeric sender ID.
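
The following is a minimal sketch of such a request using the AWS SDK for JavaScript (v2). The project ID, recipient number, Sender ID, EntityId, and TemplateId values are placeholders that you would replace with your own DLT-registered values; the message body must match your registered template exactly.

var AWS = require("aws-sdk");
// Use the AWS Region that hosts your Amazon Pinpoint project
var pinpoint = new AWS.Pinpoint({ region: "us-east-1" });

var params = {
  ApplicationId: "YOUR_PINPOINT_PROJECT_ID", // placeholder project ID
  MessageRequest: {
    Addresses: {
      // Placeholder recipient number in E.164 format
      "+91XXXXXXXXXX": { ChannelType: "SMS" }
    },
    MessageConfiguration: {
      SMSMessage: {
        // Must exactly match the content registered on the DLT platform
        Body: "12345 is your OTP code for ABC (ABC) - (ABC 123). Share with your agent only. - ABC Pvt. Ltd.",
        MessageType: "TRANSACTIONAL",       // route type: TRANSACTIONAL or PROMOTIONAL
        SenderId: "MYBRND",                 // DLT-registered Sender ID (placeholder)
        EntityId: "YOUR_DLT_PE_ID",         // Principal Entity ID from the DLT platform (placeholder)
        TemplateId: "YOUR_DLT_TEMPLATE_ID"  // Template ID from the DLT platform (placeholder)
      }
    }
  }
};

pinpoint.sendMessages(params, function (err, data) {
  if (err) console.error(err);
  else console.log(JSON.stringify(data.MessageResponse.Result, null, 2));
});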

When adding the content to your message, thoroughly review it to ensure that it exactly matches the content in the DLT-registered template. If you include additional carriage returns, spaces, punctuation, or mismatched sentence case, carriers will block your SMS. Variables in a template (for example, {#var#}) cannot exceed 30 characters each. The following are some common reasons for message rejection:

No template was found that matched the content sent.
Content sent: <#> 12345 is your OTP to verify mobile number. Your OTP is valid for 15 minutes — ABC Pvt. Ltd.
Matched template: None
Issue: No DLT-registered template includes <#> or {#var#} at the beginning, so the content sent could not be matched to a template.

The value of a variable exceeds 30 characters.
Content sent: 12345 is your OTP code for ABC (ABC Company – India Private Limited) – (ABC 123456789). Share with your agent only. – ABC Pvt. Ltd.
Matched template: {#var#} is your OTP code for {#var#} ({#var#}) – ({#var#} {#var#}). Share with your agent only. – ABC Pvt. Ltd.
Issue: The value of “ABC Company – India Private Limited” in the content sent exceeds a single {#var#} character limit of 30.

The message sentence case does not match the sentence case in the template.
Content sent: 12345 is your OTP code for ABC (ABC Company – India Private Limited) – (ABC 123456789). Share with your agent only. – ABC Pvt. Ltd.
Matched template: {#var#} is your OTP code for {#var#} ({#var#}) – ({#var#} {#var#}). Share with your agent only. – ABC PVT. LTD.
Issue: The company name appended to the matched DLT template is fully capitalized ("ABC PVT. LTD."), while the content sent uses a different case ("ABC Pvt. Ltd.").

Start using Amazon Pinpoint to send SMS messages to Indian recipients by following the steps described on the Special requirements for sending SMS messages to recipients in India documentation.

Maintain consistency in emails with custom content using Amazon SES templates

Post Syndicated from Seth Theeke original https://aws.amazon.com/blogs/messaging-and-targeting/maintain-consistency-in-emails-with-custom-content-with-amazon-ses-templates/

When sending emails, content creators often want to add custom content such as images or videos while maintaining consistency in their messages. They also want to send those emails automatically once new content is ready. In this blog, we will show you how to create templates for emails with a common theme by combining Amazon Simple Email Service (Amazon SES) templates, AWS Lambda, and Amazon Simple Storage Service (Amazon S3).

Promotional content (such as logos, images, videos, and more) can be stored, managed, and hosted in Amazon S3. You can then embed this content into promotional emails without making any changes to email templates or email processing. You can trigger a Lambda function to send promotional emails with the newly added content by using the Amazon SES SDK.

This post shows readers how to:

  • Create an Amazon SES email template with tags to be replaced by image URLs
  • Upload those templates to Amazon SES
  • Set up an AWS CloudFormation stack using the AWS Cloud Development Kit (AWS CDK)
  • Create a Lambda function and Amazon S3 bucket to send emails using the AWS SDK for JavaScript

Solution Architecture

The following diagram shows the architecture you will build as part of this post. You will use the AWS CDK to provision an Amazon S3 bucket, a Lambda function, and AWS Identity and Access Management (IAM) permissions. You will also use the AWS Command Line Interface (AWS CLI) to upload and manage your Amazon SES templates.

The architecture allows you to upload content to Amazon S3, which triggers a Lambda function. That function forms an Amazon SES request using the template you uploaded, embedding the S3 object URL as a parameter; the email is then sent to the user and the image renders in their email client.

Metadata

Time to Read: ~ 20 minutes

Time to Complete: ~ 15 minutes

Cost to Complete: Free Tier

Learning Level: Intermediate

Services Used: Amazon Simple Email Service, Amazon Simple Storage Service, AWS Lambda

Prerequisites

For this walkthrough, you should have the following prerequisites:

  • An AWS account with the AWS CLI installed and configured
  • A verified Amazon SES email identity to use as both the sender and recipient while testing
  • Node.js and npm installed
  • The AWS CDK CLI installed

Solution Overview

You will walk through creating Amazon SES templates and then a CloudFormation stack using the AWS CDK. You will create a template file, a Lambda directory, and a CDK application directory, which should all be at the same level in your package structure in order to follow these steps exactly.

Step 1: Create an Amazon SES template in JSON with a tag representing your image URL and upload via the CLI

Step 2: Initialize an AWS CDK package using the AWS CDK CLI and add necessary dependencies

Step 3: Initialize a NodeJS AWS Lambda package

Step 4: Provision an Amazon S3 bucket and Lambda in your AWS CDK app

Step 5: Configure your Lambda to be triggered when objects are added to S3

Step 6: Configure your Lambda’s IAM role to allow sending emails via Amazon SES

Step 7: Write necessary code for AWS Lambda to send emails via Amazon SES

Step 8: Deploy and Test!

Step 1: Create an Amazon SES Email template

Amazon SES Email templates are defined as a JSON object containing:

  • TemplateName – the name of the template; it must be unique across your email templates and will be passed to Amazon SES by your Lambda function
  • SubjectPart – represents the subject of the email
  • HtmlPart – represents the body of the email
  • TextPart – when email clients cannot render HTML, this is displayed instead of HtmlPart

More detailed information about email templates can be found in the Amazon SES Developer Guide.

1.     Open your text editor and save the empty file as email-template.json

2.     Paste the following into your JSON file and save your changes

{
  "Template": {
    "TemplateName": "MyTemplate",
    "SubjectPart": "Greetings Customer",
    "HtmlPart": "<img src={{imageURL}} alt=\"logo\" width=\"100\" height=\"100\">",
    "TextPart": "Dear Customer,\r\nCheck out our website for new promotional content."
  }
}

This template has a single tag called imageURL, which will be replaced at send time with your content’s S3 URL.

3.     Run the following AWS CLI command to upload your template to Amazon SES

aws ses create-template --cli-input-json file://email-template.json

4.     Once your template has been uploaded, you can confirm its creation by logging into the AWS Console, navigating to Amazon SES, and then selecting Email Templates
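
Alternatively, you can confirm from the command line that the template now exists by retrieving it with the AWS CLI:

aws ses get-template --template-name MyTemplate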

Step 2: Initialize an AWS CDK package and add necessary dependencies

In this section, you will use the AWS CDK CLI to initialize a new code package and add the dependencies for Amazon S3, Lambda, Lambda event sources, and IAM.

1.     Create a new directory at the same level as email-template.json called email-infrastructure

2.     Navigate to the email-infrastructure directory and run the following CDK CLI command to generate the skeleton for your CDK application

cdk init app --language=typescript

3.     Add dependencies for Amazon S3, Lambda, and IAM by adding the following lines to your dependencies section of your package.json and then run npm install

"@aws-cdk/aws-lambda": "1.86.0"
"@aws-cdk/aws-lambda-event-sources": "1.86.0"
"@aws-cdk/aws-s3": "1.86.0"
"@aws-cdk/aws-iam": "1.86.0"

Make sure to install versions of these dependencies that match the version of your aws-cdk core library so you don’t run into compatibility issues.

Step 3: Create a NodeJS Lambda package

In this step, you will create the bare bones of the Lambda function that will call Amazon SES; you will revisit it in step 7 to implement the handler.

1.     Create a new directory at the same level as email-template.json called email-lambda

2.     Add a package.json file in the email-lambda directory that looks like the following

{
    "name": "email-lambda",
    "version": "1.0.0",
    "main": "index.js",
    "dependencies": {
        "aws-sdk": "2.831.0"
    }
}

3.     Add a file called index.js; this will be your Lambda handler and will look like the following for now. Make sure to insert your verified email address into the testAddress variable; it will be used later as both the to and from address for testing.

var AWS = require("aws-sdk");
// Amazon SES client; the apiVersion pins the classic SES API version
var ses = new AWS.SES({apiVersion: "2010-12-01"});
// Verified identity used as both sender and recipient while testing
var testAddress = "INSERT_VERIFIED_EMAIL_HERE";

exports.handler = async function(event) {
    // Log the incoming event for debugging; the real logic is added in step 7
    console.log(JSON.stringify(event));
    return "200";
}

4.     Finish this step by running npm install in the email-lambda directory to install the aws-sdk dependency you will use in a later step. Your top-level directory structure should look like the following:

  • email-template.json – contains your email template from step 1
  • email-infrastructure – contains your CDK stack from step 2
  • email-lambda – contains your email lambda function code from step 3

Step 4: Provision an Amazon S3 bucket and Lambda function in your CDK app

In this step, you will add an Amazon S3 bucket and a NodeJS Lambda function to your CDK application based on what you set up in previous steps. After this, you will connect all the pieces together.

1.     Import all the services you need into your CDK stack construct. The imports you will need are listed below; copy them into the imports section of your stack file.

import * as s3 from '@aws-cdk/aws-s3';
import * as lambda from '@aws-cdk/aws-lambda';
import * as lambdaEventSource from '@aws-cdk/aws-lambda-event-sources';
import * as iam from '@aws-cdk/aws-iam';
import path = require('path');

2.     Add an S3 bucket to your CDK app by adding an instance of the Bucket construct

const promotionalContentBucket = new s3.Bucket(this, "DOC-EXAMPLE-BUCKET");

3.     Similarly, add your Lambda function by creating an instance of the Function construct referencing your Lambda function package by path

const emailLambda = new lambda.Function(this, "EmailLambda", {
    code: lambda.Code.fromAsset(path.join(__dirname, "../../email-lambda")),
    handler: "index.handler",
    runtime: lambda.Runtime.NODEJS_12_X
});

4.     Execute npm run build in your CDK directory to ensure you’ve set up the package correctly. You can get additional help from the Troubleshooting Guide for CDK.

Step 5: Configure Lambda with an S3 Event Source

In this step, you will configure your Lambda function to be triggered when objects are added to your Amazon S3 bucket by using the Lambda event sources module for CDK.

1.     Create an instance of the S3EventSource construct in your stack for OBJECT_CREATED events only, because for this post you don’t want to trigger a Lambda invocation when an object is removed

const s3EventSource = new lambdaEventSource.S3EventSource(promotionalContentBucket, {
    events: [s3.EventType.OBJECT_CREATED]
});

2.     Now that you have an event source defined, you need to add the event source to your Lambda function

emailLambda.addEventSource(s3EventSource);

3.     Add the Amazon S3 bucket’s domain name as an environment variable so you can reference objects by URL, by adding the environment property to your Lambda function as shown below (a consolidated sketch of the full Function construct follows this snippet).

environment: {
    "BUCKET_DOMAIN_NAME": promotionalContentBucket.bucketDomainName
}
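
For reference, the complete Function construct with the environment property in place looks like the following sketch; it simply combines the snippets from steps 4 and 5, so adjust the asset path if your directory layout differs.

const emailLambda = new lambda.Function(this, "EmailLambda", {
    code: lambda.Code.fromAsset(path.join(__dirname, "../../email-lambda")),
    handler: "index.handler",
    runtime: lambda.Runtime.NODEJS_12_X,
    environment: {
        // Lets the handler build object URLs for uploaded content
        "BUCKET_DOMAIN_NAME": promotionalContentBucket.bucketDomainName
    }
});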

Step 6: Configure the email Lambda IAM role

In this step, you will add an IAM policy statement to your email Lambda’s execution role so it can call Amazon SES.

1.     Add an IAM PolicyStatement construct with effect ALLOW on all resources for the ses:SendTemplatedEmail action

const emailPolicyStatement = new iam.PolicyStatement({
    effect: iam.Effect.ALLOW,
    actions: ["ses:SendTemplatedEmail"],
    resources: ["*"]
});

2.     Finish this step by adding your newly created policy to the Lambda execution role

emailLambda.addToRolePolicy(emailPolicyStatement);

Step 7: Add Lambda implementation to send templated emails

You will use the Amazon SES client in the AWS SDK for NodeJS to send emails with the SendTemplatedEmail API. For simplicity, this implementation assumes a batch of size 1 for each Lambda invocation (a sketch that handles multiple records follows the handler code below).

1.     Replace the function handler in your email Lambda function with the code below. This reads the first record of the S3 event, prepares parameters for the Amazon SES SDK call, and invokes the sendTemplatedEmail function with the imageURL embedded into the template you created earlier.

exports.handler =  async function(event) {  
    console.log(JSON.stringify(event));
    let s3Object = event.Records[0];
    let sendEmailParams = {
        Destination: {
            ToAddresses: [testAddress]
        },
        Template: 'MyTemplate',
        TemplateData: JSON.stringify({
            "imageURL": process.env.BUCKET_DOMAIN_NAME + "/" + s3Object.s3.object.key,
        }),
        Source: testAddress
    };
    let response = await ses.sendTemplatedEmail(sendEmailParams).promise();
    return response;
}
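
Amazon S3 can deliver more than one record in a single event. If you want to handle that case, a slightly more general variant of the handler (a sketch, not part of the original walkthrough) sends one email per record:

exports.handler = async function(event) {
    console.log(JSON.stringify(event));
    // Build one sendTemplatedEmail call per S3 record and send them in parallel
    let sends = event.Records.map(function(record) {
        let sendEmailParams = {
            Destination: { ToAddresses: [testAddress] },
            Template: 'MyTemplate',
            TemplateData: JSON.stringify({
                "imageURL": process.env.BUCKET_DOMAIN_NAME + "/" + record.s3.object.key
            }),
            Source: testAddress
        };
        return ses.sendTemplatedEmail(sendEmailParams).promise();
    });
    return Promise.all(sends);
}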

2.     Deploy your stack with the CDK CLI by running cdk deploy; this may take a couple of minutes. If you run into problems, see the Troubleshooting Guide for CDK.

Step 8: Test your System

At this point, you should have an Amazon SES template uploaded to your account, as well as a CloudFormation stack that contains an Amazon S3 bucket and a Lambda function that is triggered when objects are added to that bucket and has permission to invoke Amazon SES APIs. Now you will test the system by adding an image to your Amazon S3 bucket.

1.     Log in to the AWS Console

2.     Navigate to Amazon S3

3.     Select your promotional content bucket from the list of buckets

4.     Click Upload on the right-hand side of the screen

5.     Add an image from your computer by clicking Add files

6.     Scroll down to the bottom and expand Additional Upload Options

7.     Scroll down to Access Control List

8.     Select the check box for Read for Everyone (public access) so the image is accessible when the user opens their email

9.     Scroll down to the bottom and select Upload

10.     Done! You should receive an email in your inbox shortly that renders the image you just uploaded. If you don’t see the email, check the Lambda logs for errors.

Cleaning up

To avoid incurring future charges, delete your CloudFormation stack by running cdk destroy or manually through the AWS Console. Keep in mind that, by default, Amazon S3 buckets are not deleted with the stack, so you will need to navigate to Amazon S3 in the AWS Console, empty the bucket of any objects, and then manually delete it.
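
If you prefer the bucket to be removed together with the stack, one option (a sketch, assuming the @aws-cdk/core module is available as cdk) is to set a removal policy on the bucket when you define it; note that CloudFormation can still only delete the bucket once it is empty.

// Add to the imports at the top of your stack file if it is not already there
import * as cdk from '@aws-cdk/core';

// Replace the bucket definition from step 4 with this variant
const promotionalContentBucket = new s3.Bucket(this, "DOC-EXAMPLE-BUCKET", {
    // DESTROY tells CloudFormation to delete the bucket along with the stack
    removalPolicy: cdk.RemovalPolicy.DESTROY
});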

Conclusion

Congratulations! You now understand how to combine Amazon SES templates with Amazon S3 and Lambda to inject custom images into emails without the need for any servers, and you have launched this stack using the AWS Cloud Development Kit.

Author Bio

My name is Seth Theeke, I work as a Software Development Engineer in Amazon Freight. I’ve been working with AWS since 2016 and hold a Developer Associate Certification. I love soccer and I love software engineering, the simplest things in life!