Tag Archives: Amazon Simple Email Service (SES)

Amazon Prime Day 2022 – AWS for the Win!

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/amazon-prime-day-2022-aws-for-the-win/

As part of my annual tradition to tell you about how AWS makes Prime Day possible, I am happy to be able to share some chart-topping metrics (check out my 2016, 2017, 2019, 2020, and 2021 posts for a look back).

My purchases this year included a first aid kit, some wood brown filament for my 3D printer, and a non-stick frying pan! According to our official news release, Prime members worldwide purchased more than 100,000 items per minute during Prime Day, with best-selling categories including Amazon Devices, Consumer Electronics, and Home.

Powered by AWS
As always, AWS played a critical role in making Prime Day a success. A multitude of two-pizza teams worked together to make sure that every part of our infrastructure was scaled, tested, and ready to serve our customers. Here are a few examples:

Amazon Aurora – On Prime Day, 5,326 database instances running the PostgreSQL-compatible and MySQL-compatible editions of Amazon Aurora processed 288 billion transactions, stored 1,849 terabytes of data, and transferred 749 terabytes of data.

Amazon EC2 – For Prime Day 2022, Amazon increased the total number of normalized instances (an internal measure of compute power) on Amazon Elastic Compute Cloud (Amazon EC2) by 12%. This resulted in an overall server equivalent footprint that was only 7% larger than that of Cyber Monday 2021 due to the increased adoption of AWS Graviton2 processors.

Amazon EBS – For Prime Day, the Amazon team added 152 petabytes of EBS storage. The resulting fleet handled 11.4 trillion requests per day and transferred 532 petabytes of data per day. Interestingly enough, due to increased efficiency of some of the internal Amazon services used to run Prime Day, Amazon actually used about 4% less EBS storage and transferred 13% less data than it did during Prime Day last year.

Amazon SES – In order to keep Prime Day shoppers aware of the deals and to deliver order confirmations, Amazon Simple Email Service (SES) peaked at 33,000 Prime Day email messages per second.

Amazon SQS – During Prime Day, Amazon Simple Queue Service (SQS) set a new traffic record by processing 70.5 million messages per second at peak.

Amazon DynamoDB – DynamoDB powers multiple high-traffic Amazon properties and systems including Alexa, the Amazon.com sites, and all Amazon fulfillment centers. Over the course of Prime Day, these sources made trillions of calls to the DynamoDB API. DynamoDB maintained high availability while delivering single-digit millisecond responses and peaking at 105.2 million requests per second.

Amazon SageMaker – The Amazon Robotics Pick Time Estimator, which uses Amazon SageMaker to train a machine learning model to predict the amount of time future pick operations will take, processed more than 100 million transactions during Prime Day 2022.

Package Planning – In North America, and on the highest traffic Prime 2022 day, package-planning systems performed 60 million AWS Lambda invocations, processed 17 terabytes of compressed data in Amazon Simple Storage Service (Amazon S3), stored 64 million items across Amazon DynamoDB and Amazon ElastiCache, served 200 million events over Amazon Kinesis, and handled 50 million Amazon Simple Queue Service events.

Prepare to Scale
Every year I reiterate the same message: rigorous preparation is key to the success of Prime Day and our other large-scale events. If you are preparing for a similar chart-topping event of your own, I strongly recommend that you take advantage of AWS Infrastructure Event Management (IEM). As part of an IEM engagement, my colleagues will provide you with architectural and operational guidance that will help you to execute your event with confidence!

Jeff;

Analyzing Amazon SES event data with AWS Analytics Services

Post Syndicated from Oscar Mendoza original https://aws.amazon.com/blogs/messaging-and-targeting/analyzing-amazon-ses-event-data-with-aws-analytics-services/

In this post, we will walk through using AWS services such as Amazon Kinesis Data Firehose, Amazon Athena, and Amazon QuickSight to monitor Amazon SES email sending events with the granularity and level of detail required to gain insights into how your customers engage with the emails you send.

Nowadays, email marketers rely on internal applications to create their campaigns and other communications, such as newsletters or promotional content. From those activities, they need to collect as much information as possible to analyze and improve their pipeline and drive better interaction with their customers. Data points such as bounces, rejections, successful receptions, delivery delays, complaints, or open rates can be a powerful tool for understanding customers. Applications usually work with high-level data points, without the detailed logging or granular information that could further improve the effectiveness of campaigns.

Amazon Simple Email Service (SES) is a smart choice for companies that want a cost-effective, flexible, and scalable email service solution that integrates easily with their own products. Amazon SES provides methods to control your sending activity through built-in integration with Amazon CloudWatch metrics, and also provides a mechanism to collect email sending event data.

In this post, we propose an architecture and step-by-step guide to track your email sending activities at a granular level, where you can configure several types of email sending events, including sends, deliveries, opens, clicks, bounces, complaints, rejections, rendering failures, and delivery delays. We will use the configuration set feature of Amazon SES to send detailed event logs to our analytics services, where we can store and query them and build dashboards for a detailed view.

Overview of solution

This architecture uses Amazon SES built-in features and AWS analytics services to provide a quick and cost-effective solution to address your mail tracking requirements. The following services will be implemented or configured:

  • Amazon SES configuration sets
  • Amazon Kinesis Data Firehose
  • Amazon S3
  • AWS Glue Data Catalog
  • Amazon Athena
  • Amazon QuickSight

The following diagram shows the architecture of the solution:

Figure 1. Serverless Architecture to Analyze Amazon SES events

The flow of events starts when a customer uses Amazon SES to send an email. Each send event is captured by the configuration set feature and forwarded to a Kinesis Data Firehose delivery stream, which buffers and stores the events in an Amazon S3 bucket.

After the events are stored, you need to create a database and table schema in the AWS Glue Data Catalog so that Amazon Athena can properly query those events in S3. Finally, we will use Amazon QuickSight to create an interactive dashboard to search and visualize all your sending activity at an email level of detail.

Prerequisites

For this walkthrough, you should have the following prerequisites:

  • An AWS account
  • A verified identity (domain or email address) in Amazon SES
  • The AWS CLI installed and configured

Walkthrough

Step 1: Use AWS CloudFormation to deploy some additional prerequisites

You can get started with our sample AWS CloudFormation template that includes some prerequisites. This template creates an Amazon S3 bucket, a Kinesis Data Firehose delivery stream, and the IAM role that Amazon SES needs to publish events to Kinesis Data Firehose.

To download the CloudFormation template, run one of the following commands, depending on your operating system:

In Windows:

curl https://raw.githubusercontent.com/aws-samples/amazon-ses-analytics-blog/main/SES-Blog-PreRequisites.yml -o SES-Blog-PreRequisites.yml

In macOS:

wget https://raw.githubusercontent.com/aws-samples/amazon-ses-analytics-blog/main/SES-Blog-PreRequisites.yml

To deploy the template, use the following AWS CLI command:

aws cloudformation deploy --template-file ./SES-Blog-PreRequisites.yml --stack-name ses-dashboard-prerequisites --capabilities CAPABILITY_NAMED_IAM

After the template finishes creating resources, you see the IAM Service role and the Delivery Stream on the stack Outputs tab. You are going to use these resources in the following steps.

Figure 2. CloudFormation template outputs

Step 2: Creating a configuration set in SES and setting the default configuration set for a verified identity

SES can track the number of send, delivery, open, click, bounce, and complaint events for each email you send. You can use event publishing to send information about these events to other AWS services. In this case, we are going to send the events to Kinesis Data Firehose. To do this, a configuration set is required.

To create a configuration set, complete the following steps:

  1. In the AWS Management Console, navigate to Amazon Simple Email Service.
  2. Choose Configuration sets.
  3. Click on Create set.

    Figure 3. Amazon SES Create Configuration Set

  4. Set a Configuration set name.
  5. Leave the other settings at their default values.

    Figure 4. Configuration Set Name

  6. Once the configuration set is created, select Event destinations.

    Figure 5. Configuration set created successfully

  7. Click on Add destination
  8. Select the event types you would like to analyze and then click on Next.

    Figure 6. Sending Events to analyze

  9. Select Amazon Kinesis Data Firehose as the destination, choose the delivery stream and the IAM role created previously, click on Next, and on the review page, click on Add destination.

    Figure 7. Destination for Amazon SES sending events

  10. Once you have created the configuration set and added the event destination, you can define the Default configuration set for the verified identity (domain or email address). In the SES console, choose Verified identities.

    Figure 8. Amazon SES Verified Identity

  11. Choose the verified identity from which you want to collect events and select the Configuration set tab. Click on Edit.

    Figure 9. Edit Configuration Set for Verified Identity

  12. Select the Assign a default configuration set checkbox and choose the configuration set created previously.

    Figure 10. Assign default configuration set

  13. Once you have completed the previous steps, your events will be sent to Amazon S3. Because of the buffer configuration on the Kinesis Data Firehose delivery stream, data is loaded to Amazon S3 every 5 minutes or every 5 MiB, whichever comes first. You can check the structure created in the bucket and see JSON logs with SES event data. To generate a first event, you can send a test message through the configuration set, as shown in the sketch after this list.

    Figure 11. Amazon S3 bucket structure
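
    To verify the pipeline, you can send a test message through the new configuration set and then watch the event objects arrive in the bucket. The following is a minimal sketch using boto3 and the SES v2 API; the sender address and the configuration set name are placeholder assumptions that you should replace with your own values.

    import boto3

    ses = boto3.client("sesv2")

    # Sending through the configuration set makes SES publish this
    # message's events to the Kinesis Data Firehose destination.
    ses.send_email(
        FromEmailAddress="sender@example.com",  # must be one of your verified identities
        Destination={"ToAddresses": ["success@simulator.amazonses.com"]},  # SES mailbox simulator
        Content={
            "Simple": {
                "Subject": {"Data": "SES analytics test"},
                "Body": {"Text": {"Data": "Hello from Amazon SES"}},
            }
        },
        ConfigurationSetName="ses-analytics-config-set",  # assumed name from Step 2
    )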

Step 3: Using Amazon Athena to query the SES event logs

Amazon SES publishes email sending event records to Amazon Kinesis Data Firehose in JSON format. The top-level JSON object contains an eventType string, a mail object, and either a Bounce, Complaint, Delivery, Send, Reject, Open, Click, Rendering Failure, or DeliveryDelay object, depending on the type of event.

  1. In order to simplify the analysis of email sending events, create the sesmaster table by running the following script in Amazon Athena. Don’t forget to replace the location in the following script with your own bucket containing the email sending event data.
    CREATE EXTERNAL TABLE sesmaster (
    eventType string,
    complaint struct < arrivaldate: string,
    complainedrecipients: array < struct < emailaddress: string >>,
    complaintfeedbacktype: string,
    feedbackid: string,
    `timestamp`: string,
    useragent: string >,
    bounce struct < bouncedrecipients: array < struct < action: string,
    diagnosticcode: string,
    emailaddress: string,
    status: string >>,
    bouncesubtype: string,
    bouncetype: string,
    feedbackid: string,
    reportingmta: string,
    `timestamp`: string >,
    mail struct < timestamp: string,
    source: string,
    sourcearn: string,
    sendingaccountid: string,
    messageid: string,
    destination: string,
    headerstruncated: boolean,
    headers: array < struct < name: string,
    value: string >>,
    commonheaders: struct < `from`: array < string >,
    to: array < string >,
    messageid: string,
    subject: string >,
    tags: struct < ses_source_tls_version: string,
    ses_operation: string,
    ses_configurationset: string,
    ses_source_ip: string,
    ses_outgoing_ip: string,
    ses_from_domain: string,
    ses_caller_identity: string >>,
    send string,
    delivery struct < processingtimemillis: int,
    recipients: array < string >,
    reportingmta: string,
    smtpresponse: string,
    `timestamp`: string >,
    open struct < ipaddress: string,
    `timestamp`: string,
    userAgent: string >,
    reject struct < reason: string >,
    click struct < ipAddress: string,
    `timestamp`: string,
    userAgent: string,
    link: string >
    )
    ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'
    WITH SERDEPROPERTIES (
    "mapping.ses_caller_identity" = "ses:caller-identity",
    "mapping.ses_configurationset" = "ses:configuration-set",
    "mapping.ses_from_domain" = "ses:from-domain",
    "mapping.ses_operation" = "ses:opeation",
    "mapping.ses_outgoing_ip" = "ses:outgoing-ip",
    "mapping.ses_source_ip" = "ses:source-ip",
    "mapping.ses_source_tls_version" = "ses:source-tls-version"
    )
    LOCATION 's3://aws-s3-ses-analytics-<aws-account-number>/'
    

    The sesmaster table uses the org.openx.data.jsonserde.JsonSerDe SerDe library to deserialize the JSON data.

    We have leveraged the support for JSON arrays and maps and the support for nested data structures. Those features ease the process of preparation and visualization of data.

    In the sesmaster table, the following mappings were applied to avoid errors caused by JSON field names containing colons.

    • “mapping.ses_configurationset”=”ses:configuration-set”
    • “mapping.ses_source_ip”=”ses:source-ip”
    • “mapping.ses_from_domain”=”ses:from-domain”
    • “mapping.ses_caller_identity”=”ses:caller-identity”
    • “mapping.ses_outgoing_ip”=”ses:outgoing-ip”
  2. Once the sesmaster table is ready, it is a good strategy to create curated views of its data. The first view, called vwSESMaster, contains all the records of email sending events and all the fields that are unique to each event. Create the vwSESMaster view by running the following script in Amazon Athena.
    CREATE OR REPLACE VIEW vwSESMaster AS
    SELECT
    eventtype as eventtype
    , mail.messageId as mailmessageid
    , mail.timestamp as mailtimestamp
    , mail.source as mailsource
    , mail.sendingAccountId as mailsendingAccountId
    , mail.commonHeaders.subject as mailsubject
    , mail.tags.ses_configurationset as mailses_configurationset
    , mail.tags.ses_source_ip as mailses_source_ip
    , mail.tags.ses_from_domain as mailses_from_domain
    , mail.tags.ses_outgoing_ip as mailses_outgoing_ip
    , delivery.processingtimemillis as deliveryprocessingtimemillis
    , delivery.reportingmta as deliveryreportingmta
    , delivery.smtpresponse as deliverysmtpresponse
    , delivery.timestamp as deliverytimestamp
    , delivery.recipients[1] as deliveryrecipient
    , open.ipaddress as openipaddress
    , open.timestamp as opentimestamp
    , open.userAgent as openuseragent
    , bounce.bounceType as bouncebounceType
    , bounce.bouncesubtype as bouncebouncesubtype
    , bounce.feedbackid as bouncefeedbackid
    , bounce.timestamp as bouncetimestamp
    , bounce.reportingMTA as bouncereportingmta
    , click.ipAddress as clickipaddress
    , click.timestamp as clicktimestamp
    , click.userAgent as clickuseragent
    , click.link as clicklink
    , complaint.timestamp as complainttimestamp
    , complaint.userAgent as complaintuseragent
    , complaint.complaintFeedbackType as complaintcomplaintfeedbacktype
    , complaint.arrivalDate as complaintarrivaldate
    , reject.reason as rejectreason
    FROM
    sesmaster

    The sesmaster table contains some fields that are represented by nested arrays, so it is necessary to flatten them into multiple rows. The following list shows the event types and the fields that need to be flattened.

    • Event type SEND: field mail.commonHeaders
    • Event type BOUNCE: field bounce.bouncedrecipients
    • Event type COMPLAINT: field complaint.complainedrecipients

    To flatten those arrays into multiple rows, we used CROSS JOIN in conjunction with the UNNEST operator, applying the following strategy for all three events:

    • Create a temporary view with the mail.messageID and the field to be flattened.
    • Create another temporary view with the array flattened into multiple rows.
    • Create the final view joining the sesmaster table with the second temporary view by event type and mail.messageID.

    To create those views, follow these steps.

  3. Run the following scripts in Amazon Athena to flatten the mail.commonHeaders array in the SEND event type.
    CREATE OR REPLACE VIEW vwSendMailTmpSendTo AS 
    SELECT
    mail.messageId as messageid
    , mail.commonHeaders.to as recipients
    FROM
    sesmaster
    WHERE 
    eventtype='Send'
    
    CREATE OR REPLACE VIEW vwsendmailrecipients AS 
    SELECT
    messageid
    , recipient
    FROM
    ("vwSendMailTmpSendTo"
    CROSS JOIN UNNEST(recipients) t (recipient))
    
    CREATE OR REPLACE VIEW vwSentMails AS
    SELECT 
    eventtype as eventtype
    , mail.messageId as mailmessageid
    , mail.timestamp as mailtimestamp
    , mail.source as mailsource
    , mail.sendingAccountId as mailsendingAccountId
    , mail.commonHeaders.subject as mailsubject
    , mail.tags.ses_configurationset as mailses_configurationset
    , mail.tags.ses_source_ip as mailses_source_ip
    , mail.tags.ses_from_domain as mailses_from_domain
    , mail.tags.ses_outgoing_ip as mailses_outgoing_ip
    , dest.recipient as mailto
    FROM
    sesmaster as sm
    ,vwsendmailrecipients as dest
    WHERE
    sm.eventtype = 'Send'
    and sm.mail.messageid = dest.messageid
  4. Run the following scripts in Amazon Athena to flatten the bounce.bouncedrecipients array in the BOUNCE event type.
    CREATE OR REPLACE VIEW vwbouncemailtmprecipients AS 
    SELECT
    mail.messageId as messageid
    , bounce.bouncedrecipients
    FROM
    sesmaster
    WHERE (eventtype = 'Bounce')
    
    CREATE OR REPLACE VIEW vwbouncemailrecipients AS 
    SELECT
    messageid
    , recipient.action
    , recipient.diagnosticcode
    , recipient.emailaddress
    FROM
    (vwbouncemailtmprecipients
    CROSS JOIN UNNEST(bouncedrecipients) t (recipient))
    
    CREATE OR REPLACE VIEW vwBouncedMails AS
    SELECT
    eventtype as eventtype
    , mail.messageId as mailmessageid
    , mail.timestamp as mailtimestamp
    , mail.source as mailsource
    , mail.sendingAccountId as mailsendingAccountId
    , mail.commonHeaders.subject as mailsubject
    , mail.tags.ses_configurationset as mailses_configurationset
    , mail.tags.ses_source_ip as mailses_source_ip
    , mail.tags.ses_from_domain as mailses_from_domain
    , mail.tags.ses_outgoing_ip as mailses_outgoing_ip
    , bounce.bounceType as bouncebounceType
    , bounce.bouncesubtype as bouncebouncesubtype
    , bounce.feedbackid as bouncefeedbackid
    , bounce.timestamp as bouncetimestamp
    , bounce.reportingMTA as bouncereportingmta
    , bd.action as bounceaction
    , bd.diagnosticcode as bouncediagnosticcode
    , bd.emailaddress as bounceemailaddress
    FROM
    sesmaster as sm
    ,vwbouncemailrecipients as bd
    WHERE
    sm.eventtype = 'Bounce'
    and sm.mail.messageid = bd.messageid
    
  5. Run the following scripts in Amazon Athena to flatten the complaint.complainedrecipients array in the COMPLAINT event type.
    CREATE OR REPLACE VIEW vwcomplainttmprecipients AS 
    SELECT
    mail.messageId as messageid
    , complaint.complainedrecipients
    FROM
    sesmaster
    WHERE (eventtype = 'Complaint')
    
    CREATE OR REPLACE VIEW vwcomplainedrecipients AS 
    SELECT
    messageid
    , recipient.emailaddress
    FROM
    (vwcomplainttmprecipients 
    CROSS JOIN UNNEST(complainedrecipients) t (recipient))
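
    The final complaint view is not shown above; following the same pattern as vwBouncedMails, a sketch that completes it (using the view name referenced in the list below) looks like this:

    CREATE OR REPLACE VIEW vwComplainedEmails AS
    SELECT
    eventtype as eventtype
    , mail.messageId as mailmessageid
    , mail.timestamp as mailtimestamp
    , mail.source as mailsource
    , mail.sendingAccountId as mailsendingAccountId
    , mail.commonHeaders.subject as mailsubject
    , complaint.timestamp as complainttimestamp
    , complaint.complaintFeedbackType as complaintcomplaintfeedbacktype
    , complaint.arrivalDate as complaintarrivaldate
    , cd.emailaddress as complaintemailaddress
    FROM
    sesmaster as sm
    ,vwcomplainedrecipients as cd
    WHERE
    sm.eventtype = 'Complaint'
    and sm.mail.messageid = cd.messageid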
    

    At the end, we have one table and four views that can be used in Amazon QuickSight to analyze email sending events (a quick way to sanity-check them from code is sketched after this list):

    • Table sesmaster
    • View vwSESMaster
    • View vwSentMails
    • View vwBouncedMails
    • View vwComplainedEmails
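
    Before moving on to QuickSight, you can quickly confirm that the table and views return data. The following is a minimal sketch using the Athena API via boto3; the database name and the query-results bucket are assumptions that you should replace with your own values.

    import time

    import boto3

    athena = boto3.client("athena")

    # Run a small aggregation against the curated view.
    execution = athena.start_query_execution(
        QueryString="SELECT eventtype, COUNT(*) AS events FROM vwsesmaster GROUP BY eventtype",
        QueryExecutionContext={"Database": "sesanalytics"},  # assumed database name
        ResultConfiguration={"OutputLocation": "s3://my-athena-query-results/"},  # assumed bucket
    )
    query_id = execution["QueryExecutionId"]

    # Poll until the query finishes, then print the result rows.
    state = "RUNNING"
    while state in ("QUEUED", "RUNNING"):
        time.sleep(1)
        state = athena.get_query_execution(QueryExecutionId=query_id)["QueryExecution"]["Status"]["State"]

    if state == "SUCCEEDED":
        rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
        for row in rows:
            print([col.get("VarCharValue") for col in row["Data"]])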

Step 4: Analyze and visualize data with Amazon QuickSight

In this blog post, we use Amazon QuickSight to analyze and visualize email sending events from the sesmaster table and the four curated views created previously. Amazon QuickSight can directly access data through Athena. Its pay-per-session pricing enables you to put analytical insights into the hands of everyone in your organization.

Let’s set this up together. We first need to select our table and our views to create new data sources backed by Athena, and then we use these data sources to populate the visualizations. We are creating just one example visualization; feel free to create your own based on your information needs.

Before we can use the data in Amazon QuickSight, we need to first grant access to the underlying S3 bucket. If you haven’t done so already for other analyses, see our documentation on how to do so.

  1. On the Amazon QuickSight home page, choose Datasets from the menu on the left side, then choose New dataset from the upper-right corner and pick Athena as the data source. In the following dialog box, give the data source a descriptive name and choose Create data source.

    Figure 12. Create New Athena Data Source

  2. In the following dialog box, select the Catalog and the Database containing your sesmaster table and curated views. Let’s select the sesmaster table in order to create some basic key performance indicators. Select the table sesmaster and click on the Select button.

    Figure 13. Select Sesmaster Table

  3. Our sesmaster table is now a data source for Amazon QuickSight, and we can turn to visualizing the data.

    Figure 14. QuickSight Visualize Data

  4. You can see the list of fields on the left. The canvas on the right is still empty. Before we populate it with data, let’s select Key Performance Indicator from the available visual types.

    Figure 15. QuickSight Visual Types

  5. To populate the graph, drag and drop the fields from the field list on the left onto their respective destinations. In our case, we put the field send onto the value well and use count as aggregation.

    Figure 16. Add Send field to visualization

  6. Add another visual from the left-upper side and select Key Performance Indicator as visual type.
    Figure 17. Add a new visual

    Figure 18. Key Performance Indicator Visual Type

  7. Put the field Delivery onto the value well and use count as aggregation.

    Figure 19. Add Delivery Field to visualization

  8. Repeat the same procedure (steps 1 to 4) to count the number of Open, Click, Bounce, Complaint, and Reject events. After resizing and rearranging the visuals, you should get an analysis like the one shown in the image below.

    Figure 20. Preview of Key Performance Indicators

  9. Let’s add another dataset by clicking the pencil on the right of the current Dataset.

    Figure 21. Add a New Dataset

  10. In the following dialog box, select Add Dataset.

    Figure 22. Add a New Dataset

  11. Select the view called vwsesmaster and click Select.
    Figure 23. Add vwsesmaster dataset

    Now you can see all the available fields of the vwsesmaster view.

    Figure 24. New fields from vwsesmaster dataset

  12. Let’s create a new visual and select the Table visual type.

    Figure 25. QuickSight Visual Types

  13. Drag and drop the fields from the field list on the left onto their respective destinations. In our case, we put the fields eventtype, mailmessageid, and mailsubject onto the Group By well, but you can add as many fields as you need.

    Figure 26. Add eventtype, mailmessageid and mailsubject fields

  14. Now let’s create a filter for this visual in order to filter by type of event. Be sure you select the table and then click on Filter on the left menu.

    Figure 27. Add a Filter

  15. Click on Create One and select the field eventtype in the pop-up window. Now select the eventtype filter to see the following options.

    Figure 28. Create eventtype filter

  16. Click on the dots on the right of the eventtype filter and select Add to Sheet.

    Figure 29. Add filter to sheet

  17. Leave all the default values, scroll down, and select Apply.

    Figure 30. Apply filters with default values

  18. Now you can filter the vwsesmaster view by eventtype.

    Figure 31. Filter vwsesmasterview by eventtype

  19. You can continue customizing your visualization with all the available data in the sesmaster table and the vwsesmaster view, and even add more datasets to include data from the vwSentMails, vwBouncedMails, and vwComplainedEmails views. Below, you can see some other visualizations created from those views.
    Figure 32. Final visualization 1

    Figure 33. Final visualization 2

    Figure 34. Final visualization 3

Clean up

To avoid ongoing charges, clean up the resources you created as part of this post:

  1. Delete the visualizations created in Amazon QuickSight.
  2. Unsubscribe from Amazon QuickSight if you are not using it for other projects.
  3. Delete the views and tables created in Amazon Athena.
  4. Delete the Amazon SES configuration set.
  5. Delete the Amazon SES events stored in S3.
  6. Delete the CloudFormation stack in order to delete the Amazon Kinesis Delivery Stream.

Conclusion

In this blog, we showed how you can use AWS native services and features to quickly create an email tracking solution based on Amazon SES events, giving you a more detailed view of your sending activities. The solution uses a fully serverless architecture, so there is no underlying infrastructure to manage, and it flexibly accommodates small, medium, or heavy Amazon SES usage.

We showed you some samples of dashboards and analyses that cover most customer requirements, but you can of course evolve this solution and customize it according to your needs, adding or removing charts, filters, or events on the dashboard. Refer to the AWS documentation for the available Amazon SES events, their structure, and how to create analyses and dashboards in Amazon QuickSight.

From a performance and cost-efficiency perspective, there are still several configurations that can improve the solution, for example using a columnar file format like Parquet, compressing with Snappy, or setting your S3 partition strategy according to your email sending volume. Another improvement could be importing data into SPICE to read data in Amazon QuickSight. Using SPICE results in the data being loaded from Athena only once, until it is either manually refreshed or automatically refreshed on a schedule.

You can use this walkthrough to configure your first SES dashboard and start visualizing event details. You can adjust the services described in this blog according to your company requirements.

About the authors

Oscar Mendoza is a Solutions Architect at AWS based in Bogotá, Colombia. Oscar works with our customers to provide guidance in architectural best practices and to build Well-Architected solutions on the AWS platform. He enjoys spending time with his family and his dog and playing music.
Luis Eduardo Torres is a Solutions Architect at AWS based in Bogotá, Colombia. He helps companies to build their business using the AWS cloud platform. He has a great interest in analytics and has been leading the analytics track of the AWS Podcast in Spanish.
Santiago Benavídez is a Solutions Architect at AWS based in Buenos Aires, Argentina, with more than 13 years of experience in IT, currently helping DNB/ISV customers to achieve their business goals using the breadth and depth of AWS services, designing highly available, resilient, and cost-effective architectures.

Analyze Amazon SES events at scale using Amazon Redshift

Post Syndicated from Manash Deb original https://aws.amazon.com/blogs/big-data/analyze-amazon-ses-events-at-scale-using-amazon-redshift/

Email is one of the most important methods for business communication across many organizations. It’s also one of the primary methods for many businesses to communicate with their customers. With the ever-increasing necessity to send emails at scale, monitoring and analysis have become a major challenge.

Amazon Simple Email Service (Amazon SES) is a cost-effective, flexible, and scalable email service that enables you to send and receive emails from your applications. You can use Amazon SES for several use cases, such as transactional, marketing, or mass email communications.

An important benefit of Amazon SES is its native integration with other AWS services, such as Amazon CloudWatch and Amazon Redshift, which allows you to monitor and analyze your email sending at scale seamlessly. You can store your email events in Amazon Redshift, which is a widely used, fast, and fully managed cloud data warehouse. You can then analyze these events using SQL to gain business insights such as marketing campaign success, email bounces, complaints, and so on.

In this post, you will learn how to implement an end-to-end solution to automate this email analysis and monitoring process.

Solution overview

The following architecture diagram highlights the end-to-end solution, which you can provision automatically with an AWS CloudFormation template.

In this solution, you publish Amazon SES email events to an Amazon Kinesis Data Firehose delivery stream that publishes data to Amazon Redshift. You then connect to the Amazon Redshift database and use a SQL query tool to analyze Amazon SES email events that meet the given criteria. We use the Amazon Redshift SUPER data type to store the event (JSON data) in Amazon Redshift. The SUPER data type handles semi-structured data, which can have varying table attributes and types.
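
A quick way to explore the stored events without managing database connections is the Amazon Redshift Data API. The following is a minimal sketch using boto3; the cluster identifier, database, user, and table name are assumptions based on the stack defaults described in this post, so adjust them to your deployment.

import boto3

rsd = boto3.client("redshift-data")

# Submit an ad hoc query over the SES events table. The Data API is
# asynchronous: poll describe_statement until the status is FINISHED,
# then fetch rows with get_statement_result.
response = rsd.execute_statement(
    ClusterIdentifier="ses-events-cluster",  # assumed cluster name
    Database="dev",                          # assumed database name
    DbUser="awsuser",                        # assumed master user name
    Sql="SELECT COUNT(*) FROM ses;",         # assumed table name; navigate the SUPER column per your table definition
)
print(response["Id"])  # statement ID used by describe_statement / get_statement_result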

The alarm system uses Amazon CloudWatch logs that Kinesis Data Firehose generates when a data load to Amazon Redshift fails. We have set up a metric filter that pattern matches the CloudWatch log events to determine the error condition and triggers a CloudWatch alarm. This in turn sends out email notifications using Amazon Simple Notification Service (Amazon SNS).
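
If you are recreating this notification path by hand, the following sketch shows the metric-filter half with boto3. The log group name and filter pattern are assumptions about how your delivery stream logs errors, so match them to what Kinesis Data Firehose actually writes in your account.

import boto3

logs = boto3.client("logs")

# Emit a count metric for every log event that looks like a delivery
# error; a CloudWatch alarm on SESEvents/RedshiftDeliveryErrors can
# then notify an SNS topic. All names here are illustrative.
logs.put_metric_filter(
    logGroupName="/aws/kinesisfirehose/ses-events-stream",  # assumed log group
    filterName="RedshiftDeliveryErrors",
    filterPattern='{ $.errorCode = "*" }',  # assumed JSON error shape
    metricTransformations=[
        {
            "metricName": "RedshiftDeliveryErrors",
            "metricNamespace": "SESEvents",
            "metricValue": "1",
        }
    ],
)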

Prerequisites

As a prerequisite for deploying the solution in this post, you need to set up Amazon SES in your account. For more information, see Getting Started with Amazon Simple Email Service.

Solution resources and features

The architecture built by AWS CloudFormation supports AWS best practices for high availability and security. The CloudFormation template takes care of the following key resources and features:

  • Amazon Redshift cluster – An Amazon Redshift cluster with encryption at rest enabled using an AWS Key Management Service (AWS KMS) customer managed key (CMK). This cluster acts as the destination for Kinesis Data Firehose and stores all the Amazon SES email sending events in the table ses.
  • Kinesis Data Firehose configuration – A Kinesis Data Firehose delivery stream that acts as the event destination for all Amazon SES email sending metrics. The delivery stream is set up with Amazon Redshift as the destination. Server-side encryption is enabled using an AWS KMS CMK, and destination error logging has been enabled as per best practices.
  • Amazon SES configuration – A configuration set in Amazon SES that is used to map Kinesis Data Firehose as the event destination to publish email metrics.

To use the configuration set when sending emails, you can specify a default configuration set for your verified identity, or include a reference to the configuration set in the headers of the email.
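
For the header-based approach, the following is a minimal sketch that sets the SES-specific X-SES-CONFIGURATION-SET header on a raw message; the sender address and configuration set name are placeholders you should replace with your own values.

import boto3
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "sender@example.com"  # must be a verified identity
msg["To"] = "success@simulator.amazonses.com"  # SES mailbox simulator
msg["Subject"] = "SES Redshift events test"
# SES reads this header and applies the named configuration set, which
# routes the message's events to the Kinesis Data Firehose destination.
msg["X-SES-CONFIGURATION-SET"] = "ses-events-config-set"  # assumed SESConfigSetName value
msg.set_content("Hello from Amazon SES")

boto3.client("ses").send_raw_email(RawMessage={"Data": msg.as_bytes()})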

  • Exploring and analyzing the data – We use Amazon Redshift query editor v2 for exploring and analyzing the data.
  • Alarms and notifications for ingestion failures – A data load error notification system using CloudWatch and Amazon SNS generates email-based notifications in the event of a failure during data load from Kinesis Data Firehose to Amazon Redshift. The setup creates a CloudWatch log metric filter.

A CloudWatch alarm based on the metric filter triggers an SNS notification when in alarm state. For more information, see Using Amazon CloudWatch alarms.

Deploy the CloudFormation template

The provided CloudFormation template automatically creates all the required resources for this solution in your AWS account. For more information, see Getting started with AWS CloudFormation.

  1. Sign in to the AWS Management Console.
  2. Choose Launch Stack to launch AWS CloudFormation in your AWS account.
  3. For Stack name, enter a meaningful name for the stack, for example, ses_events.
  4. Provide the following values for the stack parameters:
    1. ClusterName – The name of the Amazon Redshift cluster.
    2. DatabaseName – The name of the first database to be created when the Amazon Redshift cluster is created.
    3. DeliveryStreamName – The name of the Firehose delivery stream.
    4. MasterUsername – The user name that is associated with the primary user account for the Amazon Redshift cluster.
    5. NodeType – The type of node to be provisioned. (Default dc2.large)
    6. NotificationEmailId – The email notification list that is used to configure an SNS topic for sending CloudWatch alarm and event notifications.
    7. NumberofNodes – The number of compute nodes in the Amazon Redshift cluster. For multi-node clusters, the NumberofNodes parameter must be greater than 1.
    8. OnPremisesCIDR – IP range (CIDR notation) for your existing infrastructure to access the target and replica Amazon Redshift clusters.
    9. SESConfigSetName – Name of the Amazon SES configuration set.
    10. SubnetId – Subnet ID where source Amazon Redshift cluster is created.
    11. Vpc – VPC in which Amazon Redshift cluster is launched.
  5. Choose Next.
  6. Review all the information and select I acknowledge that AWS CloudFormation might create IAM resources.
  7. Choose Create stack.

You can track the progress of the stack creation on the Events tab. Wait for the stack to complete and show the status CREATE_COMPLETE.

Test the solution

To send a test email, we use the Amazon SES mailbox simulator. Set the configuration-set header to the one created by the CloudFormation template.

We use the Amazon Redshift query editor V2 to query the Amazon Redshift table (created by the CloudFormation template) and see if the events have shown up.

If the data load of the event stream fails from Kinesis Data Firehose to Amazon Redshift, the failure notification system is triggered, and you receive an email notification via Amazon SNS.

Clean up

Some of the AWS resources deployed by the CloudFormation stacks in this post incur a cost as long as you continue to use them.

You can delete the CloudFormation stack to delete all AWS resources created by the stack. To clean up all your stacks, use the AWS CloudFormation console to remove the stacks that you created in reverse order.

  1. On the Stacks page on the AWS CloudFormation console, choose the stack to delete.
  2. In the stack details pane, choose Delete.
  3. Choose Delete stack when prompted.

After stack deletion begins, you can’t stop it. The stack proceeds to the DELETE_IN_PROGRESS state. When the stack deletion is complete, the stack changes to the DELETE_COMPLETE state. The AWS CloudFormation console doesn’t display stacks in the DELETE_COMPLETE state by default. To display deleted stacks, you must change the stack view filter. For more information, see Viewing deleted stacks on the AWS CloudFormation console.

If the delete fails, the stack enters the DELETE_FAILED state. For solutions, see Delete stack fails.

Conclusion

In this post, we walked through the process of setting up Amazon SES and Amazon Redshift to deploy an email reporting service that can scale to support millions of events. We used Amazon Redshift to store semi-structured messages using the SUPER data type in database tables to support varying message sizes and formats. With this solution, you can easily run analytics at scale and analyze your email event data for deliverability-related issues such as bounces or complaints.

Use the CloudFormation template provided to speed up provisioning of the cloud resources required for the solution (Amazon SES, Kinesis Data Firehose, and Amazon Redshift) in your account while following security best practices. Then you can analyze Amazon SES events at scale using Amazon Redshift.


About the Authors

Manash Deb is a Software Development Engineer in the AWS Directory Service team. He has worked on building end-to-end applications with different databases and technologies for over 15 years. He loves learning new technologies and solving, automating, and simplifying customer problems on AWS.

Arnab Ghosh is a Solutions Architect for AWS in North America, helping enterprise customers build resilient and cost-efficient architectures. He has over 13 years of experience in architecting, designing, and developing enterprise applications that solve complex business problems.

Sanjoy Thanneer is a Sr. Technical Account Manager with AWS based out of New York. He has over 20 years of experience working in database and analytics domains. He is passionate about helping enterprise customers build scalable, resilient, and cost-efficient applications.

Justin Morris is an Email Deliverability Manager for the Simple Email Service team. With over 10 years of experience in the IT industry, he has developed a natural talent for diagnosing and resolving customer issues and continuously looks for growth opportunities to learn new technologies and services.

How to set up an Amazon QuickSight dashboard for Amazon Pinpoint and Amazon SES engagement events

Post Syndicated from satyaso original https://aws.amazon.com/blogs/messaging-and-targeting/how-to-set-up-amazon-quicksight-dashboard-for-amazon-pinpoint-and-amazon-ses-events/

In this post, we will walk through using Amazon Pinpoint and Amazon QuickSight to create customizable messaging campaign reports. Amazon Pinpoint is a flexible and scalable outbound and inbound marketing communications service that allows customers to connect with users over channels like email, SMS, push, or voice. Amazon QuickSight is a scalable, serverless, embeddable, machine learning-powered business intelligence (BI) service built for the cloud. This solution allows event and user data from Amazon Pinpoint to flow into Amazon QuickSight. Once in QuickSight, customers can build their own reports that show campaign performance at a more granular level.

Engagement Event Dashboard

Customers want to view the results of their messaging campaigns in ever-increasing levels of granularity and ensure their users see value from the email, SMS, or push notifications they receive. Customers also want to analyze how different user segments respond to different messages, and how to optimize subsequent user communication. Previously, customers could only view this data in Amazon Pinpoint analytics, which offers robust reporting on events, funnels, and campaigns. However, it does not allow analysis across these different parameters or the building of custom reports, for example, showing campaign revenue across different user segments, or showing what events were generated after a user viewed a campaign in a funnel analysis. Customers would need to extract this data themselves and do the analysis in Excel.

Prerequisites

  • The Digital User Engagement Events Database solution must be set up first.
  • Customers should be prepared to purchase Amazon QuickSight, because it has its own costs that are not covered by Amazon Pinpoint pricing.

Solution Overview

This solution uses the Athena tables created by the Digital User Engagement Events Database solution. The AWS CloudFormation template given in this post automatically sets up the different architecture components to capture detailed notifications about Amazon Pinpoint engagement events and log those in Amazon Athena in the form of Athena views. You still need to manually configure the Amazon QuickSight dashboards to link to these newly generated Athena views. Please follow the steps below for further information.

Use case(s)

The event dashboard solution supports the following use cases:

  • Deep dives into engagement insights (e.g., SMS events, email events, campaign events, journey events).
  • The ability to view engagement events at the individual user level.
  • Data/process mining to turn raw event data into useful marketing insights.
  • User engagement benchmarking and end-user event funneling.
  • Computing campaign conversions (post-campaign user analysis to show campaign effectiveness).
  • Building funnels that show user progression.

Getting started with solution deployment

Prerequisite tasks to be completed before deploying the logging solution

Step 1 – Create an AWS account, create a Pinpoint project, and implement the Event Database solution.
As part of this step, customers need to implement the DUE Event Database solution, as the current solution (the DUE event dashboard) is an extension of the DUE Event Database solution. The basic assumption here is that the customer has already configured an Amazon Pinpoint project or Amazon SES within the required AWS Region before implementing this step.

The steps required to implement an event dashboard solution are as follows.

a/ Follow the steps mentioned in the Event Database solution to implement the complete stack. Before installing the complete stack, copy and save the Athena events database name. In my case it is due_eventdb. The database name is required as an input parameter for the current Event Dashboard solution.

b/ Once the solution is deployed, navigate to the Outputs tab of the CloudFormation stack, then copy and save the following information, which will be required as input parameters in step 2 of the current Event Dashboard solution.

Step 2 – Deploy the CloudFormation template for the Event Dashboard solution
This step generates a number of new Amazon Athena views that will serve as a data source for Amazon QuickSight. Continue with the following actions.

  • Download the CloudFormation template (“Event-dashboard.yaml”) from AWS samples.
  • Navigate to the CloudFormation page in the AWS console, click “Create stack” in the upper right, and select the option “With new resources (standard)”.
  • Leave “Prerequisite – Prepare template” set to “Template is ready” and, for the “Specify template” option, select “Upload a template file”. On the same page, click on “Choose file”, browse to find the “Event-dashboard.yaml” file, and select it. Once the file is uploaded, click “Next” and deploy the stack.

  • Enter the following information under the section “Specify stack details”:
    • EventAthenaDatabaseName – As mentioned in Step 1-a.
    • S3DataLogBucket – As mentioned in Step 1-b.
  • This solution creates 5 additional Athena views, which are:
    • All_email_events
    • All_SMS_events
    • All_custom_events (custom events can be mobile app/web app/push events)
    • All_campaign_events
    • All_journey_events

Step 3 – Create the Amazon QuickSight engagement dashboard
This step walks you through the process of creating an Amazon QuickSight dashboard for Amazon Pinpoint engagement events using the Athena views you created in step 2.

  1. To set up Amazon QuickSight for the first time, please follow this link (this process is not needed if you have already set up Amazon QuickSight). Please make sure you are an Amazon QuickSight administrator.
  2. Go to or search for Amazon QuickSight in the AWS console.
  3. Create a New Analysis and then select “New dataset”.
  4. Select Athena as the data source.
  5. As a next step, you need to select which analyses you need for the respective events. This solution provides the option to create 5 different sets of analyses, as mentioned in Step 2: a/ all email events, b/ all SMS events, c/ all custom events (mobile/web app, web push, etc.), d/ all campaign events, and e/ all journey events. Dashboards can be created from a QuickSight analysis, and the same dashboards can be shared with the organization’s stakeholders. The following are the steps to create analyses and dashboards for the different types of events.
  6. Email Events –
    • For all email events, name the analysis “All-emails-events” (this can be any customer-preferred nomenclature), select the primary Athena workgroup, and then create the data source.
    • Once you create the data source, QuickSight lists all the views and tables available under the specified database (in our case it is due_eventdb). Select the email_all_events view as the data source.
    • Select the event data location for analysis. There are two options: a/ Import to SPICE for quicker analysis, or b/ Directly query your data. Select the preferred option and then click on “Visualize the data”.
    • Import to SPICE for quicker analysis – SPICE is the Amazon QuickSight Super-fast, Parallel, In-memory Calculation Engine. It’s engineered to rapidly perform advanced calculations and serve data. In Enterprise edition, data stored in SPICE is encrypted at rest. (1 GB of storage is available for free; for extra storage, customers need to pay extra. Please refer to the cost section in this document.)
    • Directly query your data – This option makes QuickSight query the source database (in the current case, Athena) directly, and QuickSight will not store any data.
    • Now that you have selected a data source, you will be taken to a blank QuickSight canvas (blank analysis page). Drag and drop the visualization type you need onto the auto-graph pane. Note that Amazon QuickSight is a business intelligence platform, so you are free to choose the desired visualization types to observe the individual engagement events.
    • As part of this blog, we show how to create some simple analysis graphs to visualize the engagement events.
    • As an initial step, select the tabular visualization.
    • Select all the event dimensions that you want to include as table columns. An Amazon QuickSight table can be extended to show as many columns as needed; how much data marketers want to visualize depends entirely on the business requirement.
    • Further filtering on the table can be done using QuickSight filters; you can apply a filter on specific granular values. For example, to filter on the destination email ID: 1/ select Filter from the left-hand menu, 2/ add the destination field as the filtering criterion, 3/ tick the destination value you are filtering for, or search for the destination email ID, and 4/ the results in the table are filtered according to the criterion.
    • As a next step, add another visual from the top-left corner (“Add -> Add Visual”), then select the Donut Chart from the visual types pane. Donut charts are well suited to displaying aggregations.
    • Then select “event_type” as the Group to visualize the aggregated events. This helps marketers/business users figure out how many email events occurred and the aggregated success ratio, click ratio, complaint ratio, bounce ratio, etc. for the emails/campaigns sent to end users.
    • To create a QuickSight dashboard from the QuickSight analysis, click the Share menu option at the top-right corner, then select “Publish dashboard”. Provide the required dashboard name while publishing the dashboard. The same dashboard can be shared with multiple audiences in the organization.
    • The following is the final version of the dashboard. As mentioned above, QuickSight dashboards can be shared with other stakeholders, and the complete dashboard can be exported as an Excel sheet.
  7. SMS Events –
    • As shown above, SMS events can be analyzed using QuickSight, and dashboards can be created from the analysis. Repeat all of the sub-steps listed in step 6. The following is a sample SMS dashboard.
  8. Custom Events –
    • After you integrate your application (app) with Amazon Pinpoint, Amazon Pinpoint can stream event data about user activity, different types of custom events, and message deliveries for the app, e.g., Session.start, Product_page_view, _session.stop. Repeat all of the sub-steps listed in step 6 to create a custom event dashboard.
  9. Campaign events
    • As shown before, campaign events can also be included in the same dashboard, or you can create a new dashboard only for campaign events.

Cost for the Event Dashboard solution
You are responsible for the cost of the AWS services used while running this solution. As of the date of publication, the cost of running this solution with default settings in the US West (Oregon) Region is approximately $65 a month. The cost estimate includes the cost of AWS Lambda, Amazon Athena, and Amazon QuickSight. The estimate assumes querying 1 TB of data in a month, two authors managing Amazon QuickSight every month, four Amazon QuickSight readers viewing the events dashboard an unlimited number of times in a month, and 50 GB of QuickSight SPICE capacity per month. Prices are subject to change. For full details, see the pricing webpage for each AWS service you will be using in this solution.

Clean up

When you’re done with this exercise, complete the following steps to delete your resources and stop incurring costs:

  1. On the CloudFormation console, select your stack and choose Delete. This cleans up all the resources created by the stack.
  2. Delete the Amazon QuickSight dashboards and datasets that you have created.

Conclusion

In this blog post, I have demonstrated how marketers, business users, and business analysts can use Amazon QuickSight dashboards to evaluate and act on user engagement data from Amazon SES and Amazon Pinpoint event streams. Customers can also use this solution to understand how Amazon Pinpoint campaigns lead to business conversions, in addition to analyzing multi-channel communication metrics at the individual user level.

Next steps

The personas for this blog are both the tech team and the marketing analyst team, as it involves a code deployment to create very simple Athena views, as well as the steps to create an Amazon QuickSight dashboard to analyze Amazon SES and Amazon Pinpoint engagement events at the individual user level. Customers may then create their own Amazon QuickSight dashboards to illustrate conversion ratios and propensity trends in real time by integrating campaign events with app-level events such as purchase conversions, order placement, and so on.

Extending the solution

You can download the AWS CloudFormation templates and code for this solution from our public GitHub repository and modify them to fit your needs.


About the Author


Satyasovan Tripathy works at Amazon Web Services as a Senior Specialist Solutions Architect. He is based in Bengaluru, India, and specializes in the AWS Digital User Engagement product portfolio. He likes reading and traveling outside of work.

Amazon Simple Email Service Celebrates 50 Years of Email

Post Syndicated from Matt Strzelecki original https://aws.amazon.com/blogs/messaging-and-targeting/amazon-simple-email-service-celebrates-50-years-of-email/

Email as we know it turns 50 years old this month (October 2021). The first email sent over a network — the beginning of email as we use it today — was sent in October 1971, by MIT graduate Ray Tomlinson (April 23, 1941–March 5, 2016). Tomlinson was the first to use the @ symbol to identify a message recipient on a remote computer system. Using this address format, he became the first person to send an email between two computers. That first email traveled 10 feet between two computers in Cambridge, Massachusetts. Tomlinson stated when interviewed that the first email was “something like QWERTYUIOP”.

Tomlinson leveraged existing software at the time, including SNDMSG and CPYNET, which allowed people to send messages to others who used the same computer, to send the first email over a network – back then multiple users would share computers, rather than having their own dedicated computers. His work enabled the exchange of messages between computers for the first time. Creating email was a side project at work for Tomlinson, and when he showed his work to another employee for the first time, he reportedly said: “Don’t tell anyone! This isn’t what we’re supposed to be working on.”

Ray Tomlinson was inducted into the Internet Hall of Fame in 2012, and his work is ranked fourth in Boston Globe’s top 150 MIT-related “Ideas, Inventions, and Innovators”.

According to the Guinness Book of Records, the first unsolicited email was sent in May 1978 to 397 recipients, advertising an upcoming product demonstration of computers. That’s right—spam is almost as old as email itself! In 1991, the first email was sent from space by astronauts on the NASA shuttle Atlantis. That message began with “Hello Earth!” and was delivered to Mission Control at the Johnson Space Center in Houston, Texas.

Over the past 50 years, there’s been a lot of firsts in email. For us at Amazon Simple Email Service (Amazon SES), our email first was when we launched our service back in January 2011. We initially started as a service that delivered email for Amazon.com, and grew over time into launching as a public service in Amazon Web Services (AWS).

Customers told us that building large-scale email solutions to send marketing and transactional messages was often a complex and costly challenge for businesses. Amazon SES eliminates these challenges and enables businesses to benefit from the years of experience and sophisticated email infrastructure Amazon.com has built to serve its own large-scale customer base. With Amazon.com being our first customer, from day one – scalability, reliability, and deliverability have been our highest priorities. This same service has also powered the email sending capabilities of Amazon Pinpoint since 2017, as well as email-related features in several other AWS services.

Today, Amazon SES is a cost-effective, flexible, and scalable email service that enables developers to send mail from within any application – supporting multiple email use cases, including transactional, marketing, or mass email communications, as well as inbound email.

We encourage our readers to share their own stories of their email firsts, or any other interesting email anecdotes. #QWERTYUIOP #50yrsofemail

Replace traditional email mailbox polling with real-time reads using Amazon SES and Lambda

Post Syndicated from agardezi original https://aws.amazon.com/blogs/messaging-and-targeting/replace-traditional-email-mailbox-polling-with-real-time-reads-using-amazon-ses-and-lambda/

Integrating email into an automated processing workflow can be challenging. Traditionally, applications have had to use the POP protocol to connect to mail servers, poll for emails to arrive in a mailbox, and then process the messages inline and perform actions on them. This can be an inefficient mechanism that is prone to errors which cause the workflow to miss messages. Since this method requires polling, it is not well suited to real-time processing of messages and it introduces inefficiencies in the design. Amazon Simple Email Service (Amazon SES) is a cost-effective, scalable, and flexible email service with support for different workflows, including the ability to perform spam checks and virus scans. In this blog you will see how to use Amazon SES with AWS Lambda and Amazon S3 in order to automate the processing of emails in real time and integrate with an application without the need for polling.

The use case explored in this blog focuses on automation for CRM or order-processing platforms and the processing of email related to customer contact or direct email requests. An example of this use case is copying a client engagement email to Salesforce (or any other database), where it is recorded and can later be categorized or attached to the appropriate client account or opportunity. When designing an application that needs to read emails from a mailbox, a developer would traditionally have to use a mail library (like JavaMail if using Java) to call the mailbox, authenticate, and then pull messages into an application object. This would mean polling the mailbox every 10-15 minutes to check for new messages, handling errors when the mailbox is unavailable, and maintaining a fully functioning mailbox. This solution helps you implement automated processing of emails arriving in a mailbox without the need to poll the mailbox. The entire solution is implemented in a serverless fashion.

Solution

This blog post shows how to use SES to perform automated processing of email in an application workflow. I will use the option in SES to save received emails in S3 and trigger a Lambda function to process the message without having to poll a mailbox. This sample application uses email to receive simple orders, which are automatically processed and their details stored in DynamoDB. The following diagram shows the high-level architecture:

Step 1: Create an S3 Bucket for Email Storage

Start by creating an S3 bucket where received emails will be stored so the full email can be processed by the Lambda function. The bucket must have a policy attached so SES can put objects in the bucket on your behalf:

{
  "Version":"2012-10-17",
  "Statement":[
    {
      "Sid":"AllowSESPuts",
      "Effect":"Allow",
      "Principal":{
        "Service":"ses.amazonaws.com"
      },
      "Action":"s3:PutObject",
      "Resource":"arn:aws:s3:::myBucket/*",
      "Condition":{
        "StringEquals":{
          "aws:Referer":"111122223333"
        }
      }
    }
  ]
}

Make the following changes to the preceding policy example:

  1. Replace myBucket with the name of the Amazon S3 bucket that you want to write to.
  2. Replace 111122223333 with your AWS account ID.

You can find out more about the policy here.

Step 2: Create DynamoDB Table to Simulate Application

Next, add a DynamoDB table. The DynamoDB table will store the incoming order info. For this sample we will keep it simple and have a table with email as a partition key. Here is the data model:

{   
    "email_order_received": {
        "email": "string",
        "itemname": "string",
        "quantity": "number"
    }   
}
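
If you prefer to create this table with code rather than the console, a minimal boto3 sketch might look like the following (the on-demand billing mode is an assumption; provisioned capacity works too):

import boto3

dynamodb = boto3.client("dynamodb")

# Create the demo table; "email" is the partition key from the data model above
dynamodb.create_table(
    TableName="email_order_received",
    AttributeDefinitions=[
        {"AttributeName": "email", "AttributeType": "S"}
    ],
    KeySchema=[
        {"AttributeName": "email", "KeyType": "HASH"}
    ],
    BillingMode="PAY_PER_REQUEST",  # assumption: on-demand capacity
)

# Block until the table is ready for reads and writes
dynamodb.get_waiter("table_exists").wait(TableName="email_order_received")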

Step 3: Create Lambda Function triggered by SES to Process Email

Now that the DynamoDB table is ready, create the Lambda function to process the email and send data to the DynamoDB table. The Lambda function needs an execution role that has permissions to access the S3 bucket, the DynamoDB table, and to create the CloudWatch log group. It also needs a resource-based policy so SES can invoke the Lambda function. In the final step, when we configure SES to call the Lambda function, SES automatically adds the necessary permissions to the function, as detailed here. This is a sample policy statement:

{
  "Version": "2012-10-17",
  "Id": "default",
  "Statement": [
    {
      "Sid": "allowSesInvoke",
      "Effect": "Allow",
      "Principal": {
        "Service": "ses.amazonaws.com"
      },
      "Action": "lambda:InvokeFunction",
      "Resource": "arn:aws:lambda:eu-west-1:111122223333:function:email-event-ses",
      "Condition": {
        "StringEquals": {
          "AWS:SourceAccount": "111122223333"
        }
      }
    }
  ]
}

Sample Lambda code in Python:

import boto3
import email


def lambda_handler(event, context):
    s3 = boto3.client("s3")
    dynamodb = boto3.resource("dynamodb")
    table = dynamodb.Table('email_order_received')

    # Check the SES spam and virus filter verdicts before processing
    receipt = event["Records"][0]["ses"]["receipt"]
    if (
        receipt["spfVerdict"]["status"] == "FAIL" or
        receipt["dkimVerdict"]["status"] == "FAIL" or
        receipt["spamVerdict"]["status"] == "FAIL" or
        receipt["virusVerdict"]["status"] == "FAIL"
       ):
        print("Dropping spam")
    else:
        print("Not spam")
        # Bucket and prefix must match the S3 action of your SES receipt rule
        email_bucket = "email-handling-test"
        bucketkey = "monitor/" + event["Records"][0]["ses"]["mail"]["messageId"]

        fileObj = s3.get_object(Bucket=email_bucket, Key=bucketkey)

        msg = email.message_from_bytes(fileObj['Body'].read())
        sender = msg['From']
        itemname = msg['Subject']
        body = ""
        if msg.is_multipart():
            for part in msg.walk():
                content_type = part.get_content_type()
                disp = str(part.get('Content-Disposition'))
                # look for plain text parts, but skip attachments
                if content_type == 'text/plain' and 'attachment' not in disp:
                    # fall back to UTF-8 when no charset is declared
                    charset = part.get_content_charset() or 'utf-8'
                    body = part.get_payload(decode=True).decode(encoding=charset, errors="ignore")
                    # stop at the first text/plain part
                    break
        else:
            # not multipart - plain text, no attachments
            charset = msg.get_content_charset() or 'utf-8'
            body = msg.get_payload(decode=True).decode(encoding=charset, errors="ignore")

        table.put_item(
            Item={
                'email': sender,
                'itemname': itemname,
                'quantity': body
            }
        )
        print("Inserted order data into DynamoDB")

When you add a Lambda action to a receipt rule, Amazon SES sends an event record to Lambda every time it receives an incoming message. This event contains information about the email headers of the incoming message, as well as the results of tests (spam filtering and virus scanning) that Amazon SES performs on incoming messages; however, it omits the body of the incoming email. This is why the Lambda function has to process the body from the email stored in S3. You can see details of the event here. In this demo app we assume the item name is in the subject, the quantity of the items is in the body, and this data is written to the DynamoDB table.

Step 4: Configure SES to Send Emails to S3 and Trigger Lambda Function

The final step is to configure Amazon SES. Start by verifying a domain so SES can use it to send and receive emails. Domain verification helps ensure you are the owner of the domain and are thus authorised to manage the sending and receiving of the emails from addresses in the domain. To verify your domain:

  1. In the SES console, in the navigation pane under Identity Management, choose Domains.
  2. Choose Verify a New Domain.
  3. In the Verify a New Domain dialog, enter your domain name.
  4. Choose Verify This Domain.
  5. In the dialog box you will see a Domain Verification Record Set. You need to add this record to your domain DNS server. You will also have to add the email receiving record (MX record) to your domain DNS server.
  6. If your DNS server is Route 53 and it is registered under the same account, then SES also gives you the option to update your DNS server from within the SES console.

Once the domain is verified, its status goes from “pending verification” to “verified”, and it can now be used to send and receive emails.

Next, create a receipt rule set. The rule set lets you specify what SES does with emails it receives for domains you own. You can create rules for individual addresses or for any address under the domain. To create the rule set:

  1. In the left navigation pane, under Email Receiving, choose Rule Sets.
  2. Choose Create Rule.
  3. Enter the recipient email address you want to configure the rule for. You can add up to 100 recipient addresses, or cover any address in the domain by using the domain name as a wildcard.
  4. Once the addresses have been added, add the actions for the rule. Add two actions:
    1. The first is of type S3, to save a copy of the email to the S3 bucket created in step 1. Select the bucket name created in step 1 from the drop-down list. You can also add a prefix to the filename to categorise the output of different rules.
    2. The second is of type Lambda, to trigger the Lambda function that processes the email. Select the Lambda function created in step 3 from the drop-down list.

Once the SES rule is configured, we have the full workflow in place. Now any email sent to the [email protected] address will be processed by the Lambda function. In this way you can configure email processing to be part of your application workflow without having to perform polling.
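
If you would rather script the receipt rule than configure it in the console, a boto3 sketch might look like the following; the rule set name, rule name, and recipient address are placeholders, while the bucket, prefix, and function ARN match the earlier examples:

import boto3

ses = boto3.client("ses")  # email receiving uses the SES v1 API

# Placeholder rule set; skip this call if you already have an active rule set
ses.create_receipt_rule_set(RuleSetName="order-processing-rule-set")

ses.create_receipt_rule(
    RuleSetName="order-processing-rule-set",
    Rule={
        "Name": "process-orders",
        "Enabled": True,
        # Placeholder recipient; the domain must be verified in SES
        "Recipients": ["orders@example.com"],
        "Actions": [
            {   # First save the full message to S3...
                "S3Action": {
                    "BucketName": "email-handling-test",
                    "ObjectKeyPrefix": "monitor",
                }
            },
            {   # ...then invoke the processing Lambda asynchronously
                "LambdaAction": {
                    "FunctionArn": "arn:aws:lambda:eu-west-1:111122223333:function:email-event-ses",
                    "InvocationType": "Event",
                }
            },
        ],
    },
)

# Activate the rule set so SES starts applying it to incoming mail
ses.set_active_receipt_rule_set(RuleSetName="order-processing-rule-set")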

Clean-up

To clean up the resources used in your account:

  1. Navigate to Amazon S3 and delete the contents of the bucket you created where your emails are stored.
  2. Once the bucket is empty, delete the bucket.
  3. Navigate to the DynamoDB console and delete the table you created above. Make sure you select the option to “Delete all CloudWatch alarms for this table”
  4. Remove the domain from Amazon SES. To do this, navigate to the Amazon SES console and choose Domains from the left navigation. Select the domain you want to remove and choose the Remove button to remove it from Amazon SES.
  5. From the Amazon SES console, navigate to Rule Sets from the left navigation. In the Active Rule Set section, choose the View Active Rule Set button and delete all the rules you have created by selecting each rule and choosing Actions, Delete.
  6. On the Rule Sets page, choose the Disable Active Rule Set button to stop listening for incoming email messages.
  7. On the Rule Sets page, in the Inactive Rule Sets section, delete the only rule set by selecting it and choosing Actions, Delete.
  8. Navigate to the Lambda console and delete the Lambda function you created earlier. Select the function and choose Delete from the Actions menu.
  9. Navigate to the CloudWatch console and from the left navigation choose Logs, Log groups. Find the log group that belongs to these resources and delete it by selecting it and choosing Actions, Delete log group(s).

Conclusion

In this post, we have shown you how to integrate email processing into an application workflow without having to resort to polling a mailbox.

By using SES to receive emails you can create a modular serverless architecture that allows emails to be processed and checked for spam and viruses, and the output can then be sent to any downstream system or stored in a database for application use.


About the Author

Syed Ali Abbas Gardezi is a Sr. Solution Architect for AWS based in London, United Kingdom. He works with AWS GSI Partners architecting, designing and implementing various large-scale IT solutions. Before joining AWS, he worked in several architecture roles at a tier 1 financial organisation in London.

How to use domain with Amazon SES in multiple accounts or regions

Post Syndicated from Leonardo Azize original https://aws.amazon.com/blogs/messaging-and-targeting/how-to-use-domain-with-amazon-ses-in-multiple-accounts-or-regions/

Sometimes customers want to use their email domain with Amazon Simple Email Service (Amazon SES) across multiple accounts, or in the same account but across multiple regions.

For example, AnyCompany is an insurance company with marketing and operations business units. The operations department sends transactional emails every time customers perform insurance simulations. The marketing department sends email advertisements to existing and prospective customers. Since they are different organizations inside AnyCompany, they want to have their own Amazon SES billing. At the same time, they still want to use the same AnyCompany domain.

Other use-cases include customers who want to set up multi-region redundancy, need to satisfy data residency requirements, or need to send emails on behalf of several different clients. In all of these cases, customers can use different regions, in the same or across different accounts.

This post shows how to verify and configure your domain on Amazon SES across multiple accounts or multiple regions.

Overview of solution

You can use the same domain with Amazon SES across multiple accounts or regions. Your options are: different accounts but the same region, different accounts and different regions, and the same account but different regions.

In all of these scenarios, you will have two SES instances running, each sending email for the example.com domain; let’s call them SES1 and SES2. Every time you configure a domain in Amazon SES, it generates a series of DNS records, unique to your domain, that you will have to add to your domain’s authoritative DNS server. Those records are different for each SES instance.

You will need to modify your DNS to add one TXT record, with multiple values, for domain verification. If you decide to use DomainKeys Identified Mail (DKIM), you will modify your DNS to add six CNAME records, three records from each SES instance.

When you configure a domain on Amazon SES, you can also configure a MAIL FROM domain. If you decide to do so, you will need to modify your DNS to add one TXT record for Sender Policy Framework (SPF) and one MX record for bounce and complaint notifications that email providers send you.

Furthermore, your domain can be configured to support DMARC for email spoofing detection. It will rely on the SPF or DKIM configured above. Below we walk you through these steps.

  • Verify domain
    You will take TXT values from both SES1 and SES2 instances and add them to DNS, so SES can validate that you own the domain
  • Comply with DMARC
    You will add a TXT value with the DMARC policy that applies to your domain. This is not tied to any specific SES instance
  • Custom MAIL FROM Domain and SPF
    You will take the TXT and MX records related to your MAIL FROM domain from both SES1 and SES2 instances and add them to DNS, so SES can comply with DMARC

Here is a sample matrix of the various configurations:

The scenarios are: two accounts in the same region; two accounts in different regions; one account in two regions. The records below are the same in all three scenarios except where noted.

TXT record for domain verification* (same in all three scenarios)

1 record with multiple values:

_amazonses.example.com = “VALUE FROM SES1”
                         “VALUE FROM SES2”

CNAMEs for DKIM verification (same in all three scenarios)

6 records, 3 from each SES instance:

record1-SES1._domainkey.example.com = VALUE FROM SES1
record2-SES1._domainkey.example.com = VALUE FROM SES1
record3-SES1._domainkey.example.com = VALUE FROM SES1
record1-SES2._domainkey.example.com = VALUE FROM SES2
record2-SES2._domainkey.example.com = VALUE FROM SES2
record3-SES2._domainkey.example.com = VALUE FROM SES2

TXT record for DMARC (same in all three scenarios)

1 record; it is not tied to any SES instance or region:

_dmarc.example.com = DMARC VALUE

MAIL FROM MX record to define the message sender for SES

Same region: 1 record for the entire region

mail.example.com = 10 feedback-smtp.us-east-1.amazonses.com

Different regions: 2 records, one for each region

mail1.example.com = 10 feedback-smtp.us-east-1.amazonses.com
mail2.example.com = 10 feedback-smtp.eu-west-1.amazonses.com

MAIL FROM TXT record for SPF

Same region: 1 record for the entire region

mail.example.com = “v=spf1 include:amazonses.com ~all”

Different regions: 2 records, one for each region

mail1.example.com = “v=spf1 include:amazonses.com ~all”
mail2.example.com = “v=spf1 include:amazonses.com ~all”

* Assuming your DNS supports multiple values for a TXT record

Setup SES1 and SES2

In this blog, we call SES1 your primary or existing SES instance. We assume that you have already set up SES1, but if not, you can still follow the instructions and set up both instances at the same time. The settings on SES2 will differ slightly, and you will need to add new DNS entries to support the two-instance setup.

In this document we will use the configurations from the “Verification,” “DKIM,” and “Mail FROM Domain” sections of the SES Domains screen to configure SES2 and set up DNS correctly for the two-instance configuration.

Verify domain

Amazon SES requires that you verify your domain in DNS to confirm that you own it and to prevent others from using it. When you verify an entire domain, you are verifying all email addresses from that domain, so you don’t need to verify email addresses from that domain individually.

You can instruct multiple SES instances, across multiple accounts or regions, to verify your domain. The process to verify your domain requires you to add some records to your DNS provider. In this post I assume Amazon Route 53 is the authoritative DNS server for the example.com domain.

Verifying a domain for SES purposes involves initiating the verification in the SES console, then adding DNS records and values to confirm you have ownership of the domain. SES will automatically check DNS to complete the verification process. We assume you have done this step for the SES1 instance and already have a _amazonses.example.com TXT record with one value in your DNS. In this section you will add a second value, from SES2, to that TXT record. If you do not have SES1 set up in DNS, complete these steps twice, once for SES1 and again for SES2. This will prove to both SES instances that you own the domain and are entitled to send email from them.

Initiate Verification in SES Console

Just like you did on SES1, initiate a verification process in the second SES instance (SES2) for the same domain; in our case, example.com.

  1. Sign in to the AWS Management Console and open the Amazon SES console.
  2. In the navigation pane, under Identity Management, choose Domains.
  3. Choose Verify a New Domain.
  4. In the Verify a New Domain dialog box, enter the domain name (for example, example.com).
  5. If you want to set up DKIM signing for this domain, choose Generate DKIM Settings.
  6. Click on Verify This Domain.
  7. In the Verify a New Domain dialog box, you will see a Domain Verification Record Set containing a Name, a Type, and a Value. Copy Name and Value and store them for the step below, where you will add this value to DNS.
    (This information is also available by choosing the domain name after you close the dialog box.)

To complete domain verification, add a TXT record with the displayed Name and Value to your domain’s DNS server. For information about Amazon SES TXT records and general guidance about how to add a TXT record to a DNS server, see Amazon SES domain verification TXT records.
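
The same verification can also be initiated programmatically. A minimal boto3 sketch, assuming the region of the SES instance you are verifying and the example.com domain:

import boto3

# Point the client at the region of the SES instance you are verifying (SES2 here)
ses = boto3.client("ses", region_name="eu-west-1")

# Starts domain verification and returns the TXT value for _amazonses.example.com
response = ses.verify_domain_identity(Domain="example.com")
print("TXT value for _amazonses.example.com:", response["VerificationToken"])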

Add DNS Values for SES2

To complete domain verification for your second account, edit the current _amazonses TXT record and add the Value from SES2 to it. If you do not have an _amazonses TXT record, create it, and add the Domain Verification values from both SES1 and SES2 to it. We show how to add the record in Route 53 DNS, but the steps should be similar in any DNS management service you use; a scripted alternative follows the steps below.

  1. Sign in to the AWS Management Console and open the Amazon Route 53 console.
  2. In the navigation pane, choose Hosted zones.
  3. Choose the domain name you are verifying.
  4. Choose the _amazonses TXT record you created when you verified your domain for SES1.
  5. Under Record details, choose Edit record.
  6. In the Value box, go to the end of the existing attribute value, and then press Enter.
  7. Add the attribute value for the additional account or region.
  8. Choose Save.
  9. To validate, run the following command:
    dig TXT _amazonses.example.com +short
  10. You should see the two values returned:
    "4AjLMzUu4nSjrz4QVqDD8rXq8X2AHr+JhGSl4foiMmU="
    "abcde12345Sjrz4QVqDD8rXq8X2AHr+JhGSl4foiMmU="

Please note:

  1. if your DNS provider does not allow underscores in record names, you can omit _amazonses from the Name.
  2. to help you easily identify this record within your domain’s DNS settings, you can optionally prefix the Value with “amazonses:”.
  3. some DNS providers automatically append the domain name to DNS record names. To avoid duplication of the domain name, you can add a period to the end of the domain name in the DNS record. This indicates that the record name is fully qualified and the DNS provider need not append an additional domain name.
  4. if your DNS server does not support two values for a TXT record, you can have one record named _amazonses.example.com and another one called example.com.

Finally, after some time SES will complete its validation of the domain name and you should see the “pending validation” change to “verified”.

Verify DKIM

DomainKeys Identified Mail (DKIM) is a standard that allows senders to sign their email messages with a cryptographic key. Email providers then use these signatures to verify that the messages weren’t modified by a third party while in transit.

An email message that is sent using DKIM includes a DKIM-Signature header field that contains a cryptographically signed representation of the message. A provider that receives the message can use a public key, which is published in the sender’s DNS record, to decode the signature. Email providers then use this information to determine whether messages are authentic.

When you enable DKIM, it generates CNAME records that you need to add to your DNS. Because it generates different values for each SES instance, you can use DKIM with multiple accounts and regions.

To complete the DKIM verification, copy the three (3) DKIM Names and Values from SES1 and three (3) from SES2 and add them to your DNS authoritative server as CNAME records.

You will know you are successful when, after some time, SES completes the DKIM verification and the “pending verification” status changes to “verified”.

Configuring for DMARC compliance

Domain-based Message Authentication, Reporting and Conformance (DMARC) is an email authentication protocol that uses Sender Policy Framework (SPF) and/or DomainKeys Identified Mail (DKIM) to detect email spoofing. In order to comply with DMARC, you need to set up a “_dmarc” DNS record and either SPF or DKIM, or both. The DNS record for compliance with DMARC is set up once per domain, but SPF and DKIM require DNS records for each SES instance.

  1. Set up a “_dmarc” record in DNS for your domain; this is done once per domain. See instructions here
  2. To validate it, run the following command:
    dig TXT _dmarc.example.com +short
    "v=DMARC1;p=quarantine;pct=25;rua=mailto:[email protected]"
  3. For DKIM and SPF, follow the instructions below

Custom MAIL FROM Domain and SPF

Sender Policy Framework (SPF) is an email validation standard that’s designed to prevent email spoofing. Domain owners use SPF to tell email providers which servers are allowed to send email from their domains. SPF is defined in RFC 7208.

To comply with Sender Policy Framework (SPF) you will need to use a custom MAIL FROM domain. When you enable a MAIL FROM domain in the SES console, the service generates two records that you need to configure in your DNS to document who is authorized to send messages for your domain. One record is MX and the other is TXT; see the screenshot for mail.example.com. Save these records and enter them in your DNS authoritative server for example.com.

Configure MAIL FROM Domain for SES2

  1. Open the Amazon SES console at https://console.aws.amazon.com/ses/.
  2. In the navigation pane, under Identity Management, choose Domains.
  3. In the list of domains, choose the domain and proceed to the next step.
  4. Under MAIL FROM Domain, choose Set MAIL FROM Domain.
  5. On the Set MAIL FROM Domain window, do the following:
    • For MAIL FROM domain, enter the subdomain that you want to use as the MAIL FROM domain. In our case mail.example.com.
    • For Behavior if MX record not found, choose one of the following options:
      • Use amazonses.com as MAIL FROM – If the custom MAIL FROM domain’s MX record is not set up correctly, Amazon SES will use a subdomain of amazonses.com. The subdomain varies based on the AWS Region in which you use Amazon SES.
      • Reject message – If the custom MAIL FROM domain’s MX record is not set up correctly, Amazon SES will return a MailFromDomainNotVerified error. Emails that you attempt to send from this domain will be automatically rejected.
    • Click Set MAIL FROM Domain.

You will need to complete this step on SES1, as well as SES2. The MAIL FROM records are regional and you will need to add them both to your DNS authoritative server.
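
Scripted, this configuration might look like the sketch below for the different-regions case; the domains and regions follow the examples above, and the fallback behavior is a choice rather than a requirement:

import boto3

# Repeat for each SES instance; only the region and subdomain change
for region, mail_from in [("us-east-1", "mail1.example.com"),
                          ("eu-west-1", "mail2.example.com")]:
    ses = boto3.client("ses", region_name=region)
    ses.set_identity_mail_from_domain(
        Identity="example.com",
        MailFromDomain=mail_from,
        # Fall back to amazonses.com if the MX record is misconfigured
        BehaviorOnMXFailure="UseDefaultValue",
    )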

Set MAIL FROM records in DNS

From both SES1 and SES2, take the MX and TXT records provided by the MAIL FROM configuration and add them to the DNS authoritative server. If SES1 and SES2 are in the same region (us-east-1 in our example), you will publish exactly one MX record (mail.example.com in our example) into DNS, pointing to the endpoint for that region. If SES1 and SES2 are in different regions, you will create two different records (mail1.example.com and mail2.example.com) in DNS, each pointing to the endpoint for its specific region.

Verify MX record

Example of MX record where SES1 and SES2 are in the same region

dig MX mail.example.com +short
10 feedback-smtp.us-east-1.amazonses.com.

Example of MX records where SES1 and SES2 are in different regions

dig MX mail1.example.com +short
10 feedback-smtp.us-east-1.amazonses.com.

dig MX mail2.example.com +short
10 feedback-smtp.eu-west-1.amazonses.com.

Verify if it works

On both SES instances (SES1 and SES2), check that validations are complete. In the SES Console:

  • In the Verification section, Status should be “verified” (in green)
  • In the DKIM section, DKIM Verification Status should be “verified” (in green)
  • In the MAIL FROM Domain section, MAIL FROM domain status should be “verified” (in green)

If you have it all verified on both accounts or regions, it is correctly configured and ready to use.

Conclusion

In this post, we explained how to verify and use the same domain for Amazon SES in multiple accounts and regions while maintaining DMARC, DKIM, and SPF compliance and the security features related to email exchange.

While each customer has different needs, Amazon SES is flexible enough to let customers decide, organize, and control how they want to use Amazon SES to send email.

Author bio

Leonardo Azize Martins is a Cloud Infrastructure Architect at Professional Services for Public Sector.

His background is in development and infrastructure for web applications, working at large enterprises.

When not working, Leonardo enjoys spending time with family, reading technical content, watching movies and series, and playing with his daughter.

Contributor

Daniel Tet is a senior solutions architect at AWS specializing in Low-Code and No-Code solutions. For over twenty years, he has worked on projects for Franklin Templeton, Blackrock, Stanford Children’s Hospital, Napster, and Twitter. He has a Bachelor of Science in Computer Science and an MBA. He is passionate about making technology easy for common people; he enjoys camping and adventures in nature.

 

Amazon SES configuration for an external SMTP provider with Auth0

Post Syndicated from Raghavarao Sodabathina original https://aws.amazon.com/blogs/messaging-and-targeting/amazon-ses-configuration-for-an-external-smtp-provider-with-auth0/

Many organizations use an external identity provider to manage user identities. With an identity provider (IdP), customers can manage their user identities outside of AWS and give these external user identities permissions to use AWS resources in customer AWS accounts. The most common requirement when setting up an external identity provider is sending outgoing emails, such as verification emails using a link or code, welcome emails, MFA enrollment, password changes, and blocked account emails. That said, most external identity providers’ built-in email infrastructure is limited to testing emails only, and customers need to set up an external SMTP provider for outgoing emails.

Managing and running email servers on-premises, or deploying an EC2 instance dedicated to running an SMTP server, is costly and complex. Customers have to manage operational issues such as hardware, software installation, configuration, patching, and backups.

In this blog post, we will provide step-by-step guidance showing how you can set up Amazon SES as an external SMTP provider with Auth0 to take advantage of Amazon SES capabilities like sending email securely, globally, and at scale.

Amazon Simple Email Service (SES) is a cost-effective, flexible, and scalable email service that enables developers to send email from within any application. You can configure Amazon SES quickly to support several email use cases, including transactional, marketing, or mass email communications.

Auth0 is an identity provider that provides flexible, drop-in solution to add authentication and authorization services (Identity as a Service, or IDaaS) to customer applications. Auth0’s built-in email infrastructure should be used for testing emails only. Auth0 allows you to configure your own SMTP email provider so you can more completely manage, monitor, and troubleshoot your email communications.

Overview of solution

In this blog post, we’ll show you how to perform the below steps to complete the integration between Amazon SES and Auth0

  • Amazon SES setup for sending emails with SMTP credentials and API credentials
  • Auth0 setup to configure Amazon SES as an external SMTP provider
  • Testing the Configuration

The following diagram shows the architecture of the solution.

Prerequisites

Amazon SES Setup

As a first step, you must configure a “Sandbox” account within Amazon SES and verify a sender email address for initial testing. Once all the setup steps are successful, you can convert this account to production, and the SES service will accept all emails. For more details on this topic, please see the Amazon SES documentation.

1. Log in to the Amazon SES console and choose the Verify a New Email Address button.

2. Once the verification is completed, the status will change to green under Verification Status.

3. You need to create SMTP credentials, which will be used by Auth0 for sending emails. To create the credentials, click on SMTP settings from the left menu and press the Create My SMTP Credentials button.

Please note down the Server Name as it will be required during Auth0 setup.

4. Enter a meaningful username like autho-ses-user and click the Create button at the bottom right of the page.

5. You can see the SMTP username and password on the screen, and you can also download the SMTP credentials as a csv file, as shown below.

Please note the SMTP User name and SMTP Password, as they will be required during the Auth0 setup.

6. You need the Access key ID and Secret access key of the autho-ses-user IAM user created earlier to configure Amazon SES with API credentials in Auth0.

  • Navigate to the AWS IAM console and click on Users in the left menu
  • Click on the autho-ses-user IAM user and then click on Security credentials

  • Choose the Create access key button to create a new Access key ID and Secret access key. You can see them on the screen, and you can also download them as a csv file, as shown below.

Please note down the Access key ID and Secret access key, as they will be required during the Auth0 setup.
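
Optionally, before entering these credentials in Auth0, you can sanity-check them yourself. Here is a minimal sketch using Python’s smtplib; the endpoint, addresses, and credentials are placeholders, and both addresses must be verified while your account is in the sandbox:

import smtplib
from email.message import EmailMessage

SMTP_HOST = "email-smtp.us-west-1.amazonaws.com"  # your SES SMTP endpoint
SMTP_USER = "YOUR_SMTP_USERNAME"                  # placeholder credentials
SMTP_PASS = "YOUR_SMTP_PASSWORD"

msg = EmailMessage()
msg["From"] = "sender@example.com"       # verified sender address
msg["To"] = "recipient@example.com"      # must also be verified in sandbox
msg["Subject"] = "SES SMTP credential test"
msg.set_content("If you received this, the SMTP credentials work.")

# Port 465 uses an implicit TLS connection, matching the Auth0 settings below
with smtplib.SMTP_SSL(SMTP_HOST, 465) as server:
    server.login(SMTP_USER, SMTP_PASS)
    server.send_message(msg)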

Auth0 Setup

To ensure that emails can be sent from Auth0 to your Amazon SES SMTP, you need to configure Amazon SES details into Auth0. There are two ways you can use Amazon SES credentials with Auth0, one with SMTP and the other with API credentials.

1. Navigate to the Auth0 Dashboard, select Branding and then Email Provider from the left menu. Enable the Use my own email provider button as shown below.

2. Let us start with Auth0 configuration with Amazon SES SMTP credentials.

  • Click on the SMTP Provider option as shown below

  • Provide the SMTP Provider settings as shown below and then click the Save button to complete the setup.
    • From: Your from email address.
    • Host: Your Amazon SES Server name as created in step 2 of Amazon SES setup. For this example, it is email-smtp.us-west-1.amazonaws.com
    • Port: 465
    • User Name: Your Amazon SES SMTP user name as created in step 4 of Amazon SES setup.
    • Password: Your Amazon SES SMTP password as created in step 4 of Amazon SES setup.

  • Choose the Send test email button to test the Auth0 configuration with Amazon SES SMTP credentials.
  • You can look at the Auth0 logs to validate your test as shown below.

  • If you have configured it successfully, you should receive an email from Auth0 as shown below.

3. Now, complete Auth0 configuration with Amazon SES API credentials.

  • Click on Amazon SES as shown below

  • Provide the Amazon SES settings as shown below and then click the Save button to complete the setup.
    • From: Your from email address.
    • Key Id: Your autho-ses-user IAM user’s Access key ID as created in step 5 of the Amazon SES setup.
    • Secret access key: Your autho-ses-user IAM user’s Secret access key as created in step 5 of Amazon SES setup.
    • Region: For this example, choose us-west-1.

  • Click on the Send test email button to test Auth0 configuration with Amazon SES API credentials.
  • You can look at the Auth0 logs, and if you have configured it successfully, you should receive an email from Auth0 as illustrated in the Auth0 configuration with Amazon SES SMTP credentials section.

Conclusion

In this blog post, we have demonstrated how to set up Amazon SES as an external SMTP email provider with Auth0, since Auth0’s built-in email infrastructure is limited to testing emails. We have also demonstrated how quickly and easily you can set up Amazon SES with SMTP credentials and API credentials. With this solution you can set up your own Amazon SES instance as an email provider for Auth0. You can also get a jump start by checking the Amazon SES Developer Guide, which provides guidance on Amazon SES, an easy, cost-effective way for you to send and receive email using your own email addresses and domains.

About the authors

Raghavarao Sodabathina

Raghavarao Sodabathina is an Enterprise Solutions Architect at AWS. His areas of focus are Data Analytics, AI/ML, and the Serverless Platform. He engages with customers to create innovative solutions that address customer business problems and accelerate the adoption of AWS services. In his spare time, Raghavarao enjoys spending time with his family, reading books, and watching movies.

 

Pawan Matta

Pawan Matta is a Boston-based Gametech Solutions Architect for AWS. He enjoys working closely with customers and supporting their digital native business. His core areas of focus are management and governance and cost optimization. In his free time, Pawan loves watching cricket and playing video games with friends.

Apple Mail’s iOS15 Privacy Protection Impact to Senders

Post Syndicated from Matt Strzelecki original https://aws.amazon.com/blogs/messaging-and-targeting/apple-mails-ios15-privacy-protection-impact-to-senders-2/

On June 7th at Apple’s Worldwide Developers Conference (WWDC 2021), Apple announced that Apple Mail users can now choose to use Apple Mail Privacy Protection. Apple Mail Privacy Protection allows iOS to privately load remote message content, which hides recipients’ mail activity information such as IP and user agent details, including geolocation and the device(s) used to engage with the message. It eliminates the open as a reliable metric for evaluating user engagement on the sender’s side, because all tracking pixels and images are cached and fired as the message hits Apple Mail. Apple is doing this to protect user information and increase privacy, while also facilitating a richer user experience: Apple Mail users can confidently open, read, and engage with messages without all their email interactions being tracked through remote images and tracking pixels. As a result, all messages with Apple Mail Privacy Protection enabled will register an open regardless of whether the recipient has actually read the email. The end user will also have more confidence in the security of the message, including its links.

When a user starts Apple Mail on their iOS device, emails to that user are initiated for download to their device but are first cached by Apple, including all images and pixels, on a proxy server that does not expose individual recipient IP addresses but rather a generic IP of the Apple cache. This happens regardless of whether the user actually opens the mail at that time. If the user opens the email, it pulls the message from the Apple cache rather than from the original sending source, typically an email service provider (ESP). As a result, senders will not have open tracking insight, as all tracking images and pixels fire when the messages are downloaded to the Apple cache.

Apple Mail Privacy Protection applies to email opened in the Apple Mail app. If a user engages with messages through another mail application, such as the Gmail app, Apple Mail Privacy Protection is not applied. Apple Mail Privacy Protection is not enabled by default, but when the user first launches the Apple Mail app in iOS 15, they are prompted to enable privacy protection, which most users will likely choose to turn on.

Impact to Marketers

There will be a major impact on marketers who rely heavily on open rates as a conversion metric for user engagement, as open data will be skewed: tracking pixels in messages will fire regardless of whether a recipient actually engages with the message. However, other data points and user activity will still be available, such as click-through rates, onsite activity, and conversion history. These types of metrics will need to be relied upon to supplement open tracking data. Additionally, email deliverability best practices will be more important than ever to help maintain healthy lists and a responsive user base. Best practices such as confirmed opt-in list building, list maintenance and hygiene, consistent sending patterns and cadence, and honoring opt-outs and complaints will be even more important for marketers to adhere to as they adjust to the new Mail Privacy Protection feature.

While Mail Privacy Protection reduces visibility of open rates, there are benefits to the user experience as user trust in messages received through Apple Mail increases. For example, users who previously chose to receive text-only messages to protect their privacy will now receive the richer content of the full message, providing a better experience while engaging with it. Full images and content will be delivered to recipients, who will have a much higher sense of security in reading, ingesting, and acting on the email and its content. Prior to Apple Mail Privacy Protection, skepticism of URLs and links within messages could lead to more deletes or false positives, potentially also resulting in more complaints and/or unsubscribes.

There are other benefits of Apple’s Mail Privacy Protection to marketers, such as validation of email addresses. Because emails are cached as the messages are initiated for download to a device, the tracking image or pixel fires and validates the existence of that email address. This does not mean you should use this feature as a validation tool: mailbox providers such as Gmail will still evaluate senders in part on list hygiene, and high invalid request rates will still lead to a negative sender reputation with those providers. Confirmed opt-in practices are going to be even more crucial for managing healthy, long-term lists than they were prior to Apple Mail Privacy Protection. If you are unsure about opt-in status, look into creating a re-confirmation campaign and only add back recipients that re-confirm the opt-in by clicking a confirmation link in the message.

Conclusion

Email is still the most used communication tool, whether business-to-business, business-to-consumer, or peer-to-peer, especially when it comes to marketing. Marketers need to continue to evolve and be creative when sending messages to their recipients, because email, as it relates to privacy and security, will continue to evolve and leave behind marketers who don’t keep pace. While Apple’s Mail Privacy Protection reduces open rate visibility, it provides its user base with more security and confidence in messages delivered to their devices. That confidence can allow marketers to focus on developing richer content for a better user experience and driving conversions rather than just opens.

Developing and managing a list with proper confirmed opt-in methods are crucial to developing long-term email lists and the trust of your recipients. The implementation of Apple Mail Privacy Protection reinforces this principle.

Lastly, email privacy and security will continue to advance, and marketers along with email service providers should not try to “get around” these privacy features; rather, they need to understand that these features are intended to help the end user and your customers. Work within the ideology of providing customers what they want to receive, nothing more and nothing less, and you can help your emails thrive. Stay tuned for more updates as they become available.

Building well-architected serverless applications: Building in resiliency – part 2

Post Syndicated from Julian Wood original https://aws.amazon.com/blogs/compute/building-well-architected-serverless-applications-building-in-resiliency-part-2/

This series of blog posts uses the AWS Well-Architected Tool with the Serverless Lens to help customers build and operate applications using best practices. In each post, I address the serverless-specific questions identified by the Serverless Lens along with the recommended best practices. See the introduction post for a table of contents and explanation of the example application.

Reliability question REL2: How do you build resiliency into your serverless application?

This post continues part 1 of this reliability question. Previously, I cover managing failures using retries, exponential backoff, and jitter. I explain how DLQs can isolate failed messages. I show how to use state machines to orchestrate long-running transactions rather than handling these in application code.

Required practice: Manage duplicate and unwanted events

Duplicate events can occur when a request is retried or multiple consumers process the same message from a queue or stream. A duplicate can also happen when a request is sent twice at different time intervals with the same parameters. Design your applications to process multiple identical requests to have the same effect as making a single request.

Idempotency refers to the capacity of an application or component to identify repeated events and prevent duplicated, inconsistent, or lost data. This means that receiving the same event multiple times does not change the result beyond the first time the event was received. An idempotent application can, for example, handle multiple identical refund operations. The first refund operation is processed. Any further refund requests to the same customer with the same payment reference should not be processed again.

When using AWS Lambda, you can make your function idempotent. The function’s code must properly validate input events and identify if the events were processed before. For more information, see “How do I make my Lambda function idempotent?”

When processing streaming data, your application must anticipate and appropriately handle processing individual records multiple times. There are two primary reasons why records may be delivered more than once to your Amazon Kinesis Data Streams application: producer retries and consumer retries. For more information, see “Handling Duplicate Records”.

Generate unique attributes to manage duplicate events at the beginning of the transaction

Create, or use an existing, unique identifier at the beginning of a transaction to ensure idempotency. These identifiers are also known as idempotency tokens. A number of Lambda triggers include a unique identifier as part of the event (for example, the messageId on an SQS message).

You can also create your own identifiers. These can be business-specific, such as transaction ID, payment ID, or booking ID. You can use an opaque random alphanumeric string, unique correlation identifiers, or the hash of the content.

A Lambda function, for example, can use these identifiers to check whether the event has been processed previously.

Depending on the final destination, duplicate events might write to the same record with the same content instead of generating a duplicate entry. This may therefore not require additional safeguards.

Use an external system to store unique transaction attributes and verify for duplicates

Lambda functions can use Amazon DynamoDB to store and track transactions and idempotency tokens to determine if the transaction has been handled previously. DynamoDB Time to Live (TTL) allows you to define a per-item timestamp to determine when an item is no longer needed. This helps to limit the storage space used. Base the TTL on the event source. For example, the message retention period for SQS.

Using DynamoDB to store idempotent tokens

You can also use DynamoDB conditional writes to ensure a write operation only succeeds if an item attribute meets one or more expected conditions. For example, you can use this to fail a refund operation if a payment reference has already been refunded. This signals to the application that it is a duplicate transaction. The application can then catch this exception and return the same result to the customer as if the refund was processed successfully.
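
As a sketch of this pattern (the table name, attribute names, and 24-hour TTL are illustrative assumptions), a function can record an idempotency token with a conditional write and treat a condition failure as a duplicate:

import time

import boto3
from botocore.exceptions import ClientError

# Illustrative table; enable TTL on the "expires_at" attribute
table = boto3.resource("dynamodb").Table("idempotency_tokens")

def process_once(token, handler):
    """Run handler() only if this token has not been seen before."""
    try:
        table.put_item(
            Item={
                "token": token,
                # let DynamoDB TTL expire the token after 24 hours
                "expires_at": int(time.time()) + 24 * 60 * 60,
            },
            # The write fails if an item with this token already exists
            ConditionExpression="attribute_not_exists(#t)",
            ExpressionAttributeNames={"#t": "token"},
        )
    except ClientError as err:
        if err.response["Error"]["Code"] == "ConditionalCheckFailedException":
            # Duplicate event: return the previously stored result instead
            return "duplicate"
        raise
    return handler()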

Third-party APIs can also support idempotency directly. For example, Stripe allows you to add an Idempotency-Key: <key> header to the request. Stripe saves the resulting status code and body of the first request made for any given idempotency key, regardless of whether it succeeded or failed. Subsequent requests with the same key return the same result.
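
For example, a refund request with an idempotency key might look like the following sketch using the requests library; the API key and payment reference are placeholders, and the endpoint and header follow Stripe’s public documentation:

import uuid

import requests

# Derive the key from your payment reference, or generate one and store it
idempotency_key = str(uuid.uuid4())

response = requests.post(
    "https://api.stripe.com/v1/refunds",
    auth=("sk_test_placeholder", ""),           # placeholder API key
    headers={"Idempotency-Key": idempotency_key},
    data={"payment_intent": "pi_placeholder"},  # placeholder payment reference
)
# Retrying with the same Idempotency-Key returns the original result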

Validate events using a pre-defined and agreed upon schema

Implicitly trusting data from clients, external sources, or machines could lead to malformed data being processed. Use a schema to validate that your event conforms to what you are expecting. Process the event using the schema within your application code or at the event source when applicable. Events not adhering to your schema should be discarded.

For API Gateway, I cover validating incoming HTTP requests against a schema in “Implementing application workload security – part 1”.

Amazon EventBridge rules match event patterns. EventBridge provides schemas for all events that are generated by AWS services. You can create or upload custom schemas or infer schemas directly from events on an event bus. You can also generate code bindings for event schemas.

SNS supports message filtering. This allows a subscriber to receive a subset of the messages sent to the topic using a filter policy. For more information, see the documentation.

JSON Schema is a tool for validating the structure of JSON documents. There are a number of implementations available.
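
A minimal sketch using the jsonschema package, with an illustrative schema:

from jsonschema import ValidationError, validate

# Illustrative schema for an order event
order_schema = {
    "type": "object",
    "properties": {
        "order_id": {"type": "string"},
        "quantity": {"type": "integer", "minimum": 1},
    },
    "required": ["order_id", "quantity"],
    "additionalProperties": False,
}

def handle_event(event):
    try:
        validate(instance=event, schema=order_schema)
    except ValidationError as err:
        # Discard events that do not conform to the agreed schema
        print(f"Dropping malformed event: {err.message}")
        return None
    # ...continue processing the validated event
    return event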

Best practice: Consider scaling patterns at burst rates

Load testing your serverless application allows you to monitor the performance of an application before it is deployed to production. Serverless applications can be simpler to load test, thanks to the automatic scaling built into many of the services. For more information, see “How to design Serverless Applications for massive scale”.

In addition to your baseline performance, consider evaluating how your workload handles initial burst rates. This ensures that your workload can sustain burst rates while scaling to meet possibly unexpected demand.

Perform load tests using a burst strategy with random intervals of idleness

Perform load tests using a burst of requests for a short period of time. Also introduce burst delays to allow your components to recover from unexpected load. This allows you to future-proof the workload for key events when you do not know peak traffic levels.

There are a number of AWS Marketplace and AWS Partner Network (APN) solutions available for performance testing, including Gatling FrontLine, BlazeMeter, and Apica.

In regulating inbound request rates – part 1, I cover running a performance test suite using Gatling, an open source tool.

Gatling performance results

Amazon does have a network stress testing policy that defines which high volume network tests are allowed. Tests that purposefully attempt to overwhelm the target and/or infrastructure are considered distributed denial of service (DDoS) tests and are prohibited. For more information, see “Amazon EC2 Testing Policy”.

Review service account limits with combined utilization across resources

AWS accounts have default quotas, also referred to as limits, for each AWS service. These are generally Region-specific. You can request increases for some limits while other limits cannot be increased. Service Quotas is an AWS service that helps you manage your limits for many AWS services. Along with looking up the values, you can also request a limit increase from the Service Quotas console.

Service Quotas dashboard

As these limits are shared within an account, review the combined utilization across resources, including the following (a quota look-up sketch follows the list):

  • Amazon API Gateway: number of requests per second across all APIs. (link)
  • AWS AppSync: throttle rate limits. (link)
  • AWS Lambda: function concurrency reservations and pool capacity to allow other functions to scale. (link)
  • Amazon CloudFront: requests per second per distribution. (link)
  • AWS IoT Core message broker: concurrent requests per second. (link)
  • Amazon EventBridge: API requests and target invocations limit. (link)
  • Amazon Cognito: API limits. (link)
  • Amazon DynamoDB: throughput, indexes, and request rates limits. (link)
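
You can also look these values up programmatically. Here is a sketch with boto3; the quota code for Lambda concurrent executions is an assumption, so list the service’s quotas first to confirm it:

import boto3

quotas = boto3.client("service-quotas")

# List all Lambda quotas to find the code for the limit you care about
for quota in quotas.list_service_quotas(ServiceCode="lambda")["Quotas"]:
    print(quota["QuotaCode"], quota["QuotaName"], quota["Value"])

# Assumed code for "Concurrent executions" -- verify with the listing above
response = quotas.get_service_quota(ServiceCode="lambda", QuotaCode="L-B99A9384")
print("Concurrent executions limit:", response["Quota"]["Value"])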

Evaluate key metrics to understand how workloads recover from bursts

There are a number of key Amazon CloudWatch metrics to evaluate and alert on to understand whether your workload recovers from bursts (a sample alarm sketch follows the list).

  • AWS Lambda: Duration, Errors, Throttling, ConcurrentExecutions, UnreservedConcurrentExecutions. (link)
  • Amazon API Gateway: Latency, IntegrationLatency, 5xxError, 4xxError. (link)
  • Application Load Balancer: HTTPCode_ELB_5XX_Count, RejectedConnectionCount, HTTPCode_Target_5XX_Count, UnHealthyHostCount, LambdaInternalError, LambdaUserError. (link)
  • AWS AppSync: 5XX, Latency. (link)
  • Amazon SQS: ApproximateAgeOfOldestMessage. (link)
  • Amazon Kinesis Data Streams: ReadProvisionedThroughputExceeded, WriteProvisionedThroughputExceeded, GetRecords.IteratorAgeMilliseconds, PutRecord.Success, PutRecords.Success (if using Kinesis Producer Library), GetRecords.Success. (link)
  • Amazon SNS: NumberOfNotificationsFailed, NumberOfNotificationsFilteredOut-InvalidAttributes. (link)
  • Amazon Simple Email Service (SES): Rejects, Bounces, Complaints, Rendering Failures. (link)
  • AWS Step Functions: ExecutionThrottled, ExecutionsFailed, ExecutionsTimedOut. (link)
  • Amazon EventBridge: FailedInvocations, ThrottledRules. (link)
  • Amazon S3: 5xxErrors, TotalRequestLatency. (link)
  • Amazon DynamoDB: ReadThrottleEvents, WriteThrottleEvents, SystemErrors, ThrottledRequests, UserErrors. (link)
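
As one example, here is a minimal sketch that creates an alarm on the Lambda Throttles metric; the function name, threshold, and SNS topic ARN are placeholders:

import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_alarm(
    AlarmName="orders-function-throttles",  # placeholder names throughout
    Namespace="AWS/Lambda",
    MetricName="Throttles",
    Dimensions=[{"Name": "FunctionName", "Value": "orders-function"}],
    Statistic="Sum",
    Period=60,                # evaluate one-minute windows
    EvaluationPeriods=1,
    Threshold=1,              # alarm on any throttle in the window
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    TreatMissingData="notBreaching",
    AlarmActions=["arn:aws:sns:us-east-1:111122223333:ops-alerts"],
)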

Conclusion

This post continues from part 1 and looks at managing duplicate and unwanted events with idempotency and an event schema. I cover how to consider scaling patterns at burst rates by managing account limits, and show relevant metrics to evaluate.

Build resiliency into your workloads. Ensure that applications can withstand partial and intermittent failures across components that may only surface in production. In the next post in the series, I cover the performance efficiency pillar from the Well-Architected Serverless Lens.

For more serverless learning resources, visit Serverless Land.

How to Log Amazon SES details using Amazon CloudWatch

Post Syndicated from Rajat Kashyap original https://aws.amazon.com/blogs/messaging-and-targeting/how-to-log-amazon-ses-details-using-amazon-cloudwatch/

One of the use cases Amazon Simple Email Service (Amazon SES) users try to implement is centralizing various SES notifications from different domains or email addresses to get insights into how many of those emails were delivered, bounced, or received complaints. These details can be valuable in helping you make appropriate business decisions to further strengthen your mail delivery process. Currently, SES provides various metrics, such as the number of sends and rejects, but there is no direct way to log information like the From email identity, recipient email identity, subject, timestamp, source IP address, or messageId when sending emails.

In this blog post, you will learn how to capture detailed notifications about your bounces, complaints, and deliveries and log them in Amazon CloudWatch. With a centralized logging solution, customers can keep track of domains or email addresses for which they received complaints, identify email issues, stay informed, and even build custom dashboards. Logging the notifications in CloudWatch will help you store them for the long term, and will also allow you to set up a process to back up this data and set up life-cycle policies for data retention.

Let’s quickly understand how Amazon SES categorizes whether an email was delivered, bounced, or received a complaint.

Types of notifications

A bounce occurs when a message cannot be delivered to the intended recipient. There are two types of bounce: hard bounces and soft bounces.

  • Hard bounces occur when email cannot be delivered because of a persistent issue, such as when a recipient’s email address or domain does not exist. Amazon SES will no longer attempt to deliver the message.
  • Soft bounces occur when there is a temporary issue preventing the email from being delivered, such as when the recipient’s mailbox is full, when the connection to the receiving email server times out, or when there are too many simultaneous connections to the receiving mail server. When a soft bounce occurs, Amazon SES will attempt to redeliver the message.

A complaint occurs when a recipient:

  • Reports that they don’t want to receive an email.
  • Clicks the “Report spam” button in their email client and complains to their email provider that such emails belong in the spam category.

A delivery occurs when:

  • An email is delivered successfully to the recipient’s mail server.

Prerequisites

For this post, you should be familiar with the following:

Architecture Overview

The AWS CloudFormation template given in this post automatically sets up the different architecture components to capture detailed notifications about your bounces, complaints, and deliveries and log them in Amazon CloudWatch. You still have to perform some manual tasks to configure and validate components. For details, please follow the steps below in sequence.


Getting Started with Solution Deployment

Prerequisite tasks to be completed before deploying the logging solution:

  1. Domain and email address are verified.
  2. Create an Amazon Simple Notification Service (Amazon SNS) topic to capture detailed notifications for bounces, complaints, or deliveries; please refer to Create SNS Topic.
  3. Set up notifications at the domain or From email address level for the notifications you want to log into CloudWatch.
    1. Click on the Verified Identity domain name for which you want to set up bounce notifications.
    2. In the Notifications tab, in the Feedback Notifications section, click the Edit button to navigate to the next page.
    3. From the drop-down, update the Bounce Notifications topic with the SNS topic ARN that you created in the prior step and click Save Changes.

Note: For this blog post, you will learn how to log bounce notifications. To capture complaint or delivery notifications, you can configure an SNS topic and redeploy the CloudFormation template multiple times, choosing the Complaint and Delivery event types, to capture all notifications.

Once the prerequisite tasks are completed, the logging solution is ready to be deployed.

As a very first step, download the ses_bounce_logging_blog.yml CloudFormation file from the link below. Once you have saved it on your local machine, follow the next steps to install the solution.

https://github.com/aws-samples/digital-user-engagement-reference-architectures/blob/master/cloudformation/ses_bounce_logging_blog.yml

Steps to run the CloudFormation template:

  1. Go to the CloudFormation console and click Create Stack.
  2. Select the Upload a template file radio button and click Choose file to upload the ses_bounce_logging_blog.yml file you downloaded earlier.
  3. Click Next on the Create Stack screen.
  4. Specify a stack name, for example ses-bounce-logging.
  5. Change the default value of CloudWatchGroupName if needed.
  6. Select the event type (“Bounce”, “Complaint”, or “Delivery”) you wish to track.
  7. Enter the Amazon Resource Name (ARN) of the Amazon SNS topic created in the prior step into the SNSTopicName parameter field, and click Next.
  8. Click Next on the Configure stack options screen.
  9. Select “I acknowledge that AWS CloudFormation might create IAM resources” and click Create Stack.

Wait for the CloudFormation template to complete, then verify that the resources in the CloudFormation stack have been created. Click on individual resources to verify that:

  • The IAM role was created.
  • The Lambda function that captures and logs bounce notifications was created.
  • The Lambda function’s subscription to the SNS topic was created and confirmed.
  • You can also verify the SNS and Lambda integration in the Lambda console, or from the SDK as sketched below.
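As a quick sanity check outside the console, a minimal boto3 sketch (with a placeholder topic ARN) lists the subscriptions on your topic so you can confirm the Lambda endpoint is attached:

import boto3

sns = boto3.client('sns')

# List the subscriptions attached to the bounce notification topic
response = sns.list_subscriptions_by_topic(
    TopicArn='arn:aws:sns:us-east-1:123456789012:ses-bounce-topic'  # placeholder ARN
)
for subscription in response['Subscriptions']:
    print(subscription['Protocol'], subscription['Endpoint'])  # expect a 'lambda' entry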

How to test the solution?

You can test the bounce scenario using the Amazon SES mailbox simulator. When you send an email to a mailbox simulator address, a simulated detailed notification is delivered to the Amazon SNS topic you configured in the prerequisite section. You can send the test email using the AWS CLI, an AWS SDK, or the Amazon SES console, from the domain that you configured to receive notifications.

bounce@simulator.amazonses.com (in scope for this blog post; used to test bounce notifications)

Use the following addresses if you want to test the other scenarios:

complaint@simulator.amazonses.com

success@simulator.amazonses.com
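For example, a minimal boto3 sketch (assuming a verified from-address on your domain) that triggers the bounce scenario looks like this:

import boto3

ses = boto3.client('ses')

# Send a test message to the SES mailbox simulator's bounce address
ses.send_email(
    Source='sender@your-verified-domain.com',  # placeholder: your verified address
    Destination={'ToAddresses': ['bounce@simulator.amazonses.com']},
    Message={
        'Subject': {'Data': 'Bounce test'},
        'Body': {'Text': {'Data': 'Testing SES bounce notifications.'}}
    }
)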

How to send a bounce test email using the AWS console

  1. Go to Amazon SES and select the checkbox for your verified domain identity.
  2. Click the Send a Test Email button.
  3. Fill in the required information, such as the From-address and the test scenario (Bounce), and click Send Test Email.
  4. As soon as the bounce notification is received by the SNS topic, it is passed to the Lambda function and logged in the /aws/ses/bounce_logs CloudWatch log group.
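To confirm that the notification landed, a small boto3 sketch (assuming the template’s default log group name) pulls recent events from the log group:

import boto3

logs = boto3.client('logs')

# Fetch recent bounce notification events from the CloudWatch log group
response = logs.filter_log_events(logGroupName='/aws/ses/bounce_logs', limit=10)
for event in response['events']:
    print(event['message'])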

Clean up

When you’re done with this exercise, complete the following steps to delete your resources and stop incurring costs:

  1. Delete the SNS topic that you created.
  2. On the CloudFormation console, select your stack and choose Delete.

This cleans up all the resources created by the stack.

Conclusion

In this blog post, we showed you how to capture bounce notifications and build a logging solution around them. We explained how to combine Amazon Simple Notification Service, AWS Lambda, and Amazon CloudWatch to create the solution. To enhance visualization, you can create metric filters in Amazon CloudWatch, which let you graph the notifications and make them searchable. Because the notifications are stored in Amazon CloudWatch, you can export the logs to Amazon S3 for long-term retention. You can also modify the CloudFormation template in this blog and redeploy it to capture complaint or delivery notifications for your business use cases.

About the Authors

Rajat Kashyap is a Solutions Architect at AWS. He is a containers, DevOps, big data, analytics, and AI/ML enthusiast and loves helping customers design secure, reliable, scalable, and cost-effective solutions on AWS. As a trusted customer advisor, he helps organizations understand best practices for advanced AWS cloud-based solutions and helps them migrate and modernize their workloads using Well-Architected design principles and best practices.

Ashish Mehra is a Solutions Architecture Manager at AWS. He is a Middleware, Serverless, IoT and Containers enthusiast and loves helping customers design secure, reliable and cost-effective solutions on AWS.

Complying with DMARC across multiple accounts using Amazon SES

Post Syndicated from Brendan Paul original https://aws.amazon.com/blogs/messaging-and-targeting/complying-with-dmarc-across-multiple-accounts-using-amazon-ses/

Introduction

For enterprises of all sizes, email is a critical piece of infrastructure that supports large volumes of communication from an organization. As such, companies need a robust solution to deal with the complexities this may introduce. In some cases, companies have multiple domains that support several different business units and need a distributed way of managing email sending for those domains. For example, you might want different business units to have the ability to send emails from subdomains, or give a marketing company the ability to send emails on your behalf. Amazon Simple Email Service (Amazon SES) is a cost-effective, flexible, and scalable email service that enables developers to send mail from any application. One of the benefits of Amazon SES is that you can configure Amazon SES to authorize other users to send emails from addresses or domains that you own (your identities) using their own AWS accounts. When allowing other accounts to send emails from your domain, it is important to ensure this is done securely. Amazon SES allows you to send emails to your users using popular authentication methods such as DMARC. In this blog, we walk you through 1/ how to comply with DMARC when using Amazon SES and 2/ how to enable other AWS accounts to send authenticated emails from your domain.

DMARC: what is it, why is it important?

DMARC stands for “Domain-based Message Authentication, Reporting & Conformance”, and it is an email authentication protocol (DMARC.org). DMARC gives domain owners and email senders a way to protect their domain from being used by malicious actors in phishing or spoofing attacks. Email spoofing can be used as a way to compromise users’ financial or personal information by taking advantage of their trust of well-known brands. DMARC makes it easier for senders and recipients to determine whether or not an email was actually sent by the domain that it claims to have been sent by.

Solution Overview

In this solution, you will learn how to set up DKIM signing on Amazon SES, implement a DMARC Policy, and enable other accounts in your organization to send emails from your domain using Sending Authorization. When you set up DKIM signing, Amazon SES will attach a digital signature to all outgoing messages, allowing recipients to verify that the email came from your domain. You will then set your DMARC Policy, which tells an email receiver what to do if an email is not authenticated. Lastly, you will set up Sending Authorization so that other AWS accounts can send authenticated emails from your domain.

Prerequisites

In order to complete the example illustrated in this blog post, you will need to have:

  1. A domain in an Amazon Route53 Hosted Zone or with a third-party provider. Note: You will need to add/update DNS records for the domain. For this blog, we will be using Route53.
  2. An AWS Organization
  3. A second AWS account to send Amazon SES emails from, within a different AWS Organizations OU. If you have not worked with AWS Organizations before, review the Organizations Getting Started Guide

How to comply with DMARC (DKIM and SPF) in Amazon SES

In order to comply with DMARC, you must authenticate your messages with either DKIM (DomainKeys Identified Mail), SPF (Sender Policy Framework), or both. DKIM allows you to sign email messages with a cryptographic key, which enables email providers to determine whether or not the email is authentic. SPF defines which servers are allowed to send emails for a domain. To use SPF for DMARC compliance, you need to set up a custom MAIL FROM domain in Amazon SES. To authenticate your emails with DKIM in Amazon SES, you have the option of using Easy DKIM or providing your own DKIM keys (BYODKIM).

In this blog, you will set up Easy DKIM on a sending identity.

Setting up DKIM Signing in Amazon SES

  1. Navigate to the Amazon SES Console 
  2. Select Verify a New Domain and enter your domain name
  3. Select Generate DKIM Settings
  4. Choose Verify This Domain
    1. This will generate the DNS records needed to complete domain verification, DKIM signing, and routing incoming mail.
    2. Note: When you initiate domain verification using the Amazon SES console or API, Amazon SES gives you the name and value to use for the TXT record. Add a TXT record to your domain’s DNS server using the specified Name and Value. Amazon SES domain verification is complete when Amazon SES detects the existence of the TXT record in your domain’s DNS settings.
  5. If you are using Route 53 as your DNS provider, choose the Use Route 53 button to update the DNS records automatically
    1. If you are not using Route 53, go to your third-party provider and add the TXT record to verify the domain as well as the three CNAME records to enable DKIM signing. You can also add the MX record at the end to route incoming mail to Amazon SES.
    2. A list of common DNS Providers and instructions on how to update the DNS records can be found in the Amazon SES documentation
  6. Choose Create Record Sets if you are using Route53, or choose Close after you have added the necessary records to your third-party DNS provider.

 

Note: if you previously verified a domain but did NOT generate the DKIM settings for it, follow the steps below; otherwise, skip them:

  1. Go to the Amazon SES Console, and select your domain
  2. Select the DKIM dropdown
  3. Choose Generate DKIM Settings and copy the three values in the record set shown
    1. You may also download the record set as a CSV file
  4. Navigate to the Route53 console or your third-party DNS provider. Instructions on how to update the DNS records in your third-party can be found in the Amazon SES documentation
  5. Select the domain you are using
  6. Choose Create Record

  7. Enter the values that Amazon SES has generated for you, adding the three CNAME records to your domain
  8. Wait a few minutes, and go back to your domain in the Amazon SES Console
  9. Check that the DKIM status is verified

You also want to set up a custom MAIL FROM domain that you will use later on. To do so, follow the steps in the documentation.
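For reference, a custom MAIL FROM domain requires two DNS records. The sketch below assumes the subdomain mail.example.com and the us-east-1 Region; adjust the Region in the MX value to the one you use with Amazon SES:

Name: mail.example.com   Type: MX    Value: 10 feedback-smtp.us-east-1.amazonses.com
Name: mail.example.com   Type: TXT   Value: "v=spf1 include:amazonses.com ~all"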

Setting up a DMARC policy on your domain

DMARC policies are TXT records you place in DNS to define what happens to incoming emails that don’t align with the validations provided when setting up DKIM and SPF. With this policy, you can choose to allow the email to pass through, quarantine the email into a folder like junk or spam, or reject the email.

As a best practice, you should start with a DMARC policy that doesn’t reject all email traffic and collect reports on emails that don’t align to determine if they should be allowed. You can also set a percentage on the DMARC policy to perform filtering on a subset of emails to, for example, quarantine only 50% of the emails that don’t align. Once you are in a state where you can begin to reject non-compliant emails, flip the policy to reject failed authentications. When you set the DMARC policy for your domain, any subdomains that are authorized to send on behalf of your domain will inherit this policy and the same rule will apply. For more information on setting up a DMARC policy, see our documentation.

In a scenario where you have multiple subdomains sending emails, you should be setting the DMARC policy for the organizational domain that you own. For example, if you own the domain example.com and also want to use the sub-domain sender.example.com to send emails you can set the organizational DMARC policy (as a DNS TXT record) to:

Name: _dmarc.example.com
Type: TXT
Value: "v=DMARC1;p=quarantine;pct=50;rua=mailto:report@example.com"

This DMARC policy states that 50% of emails coming from example.com that fail authentication should be quarantined, and that a report of those failures should be sent to report@example.com. For your sender.example.com sub-domain, this policy is inherited unless you specify another DMARC policy for the sub-domain. If you want to be stricter on the sub-domain, you could add another DMARC policy, as shown in the following record.

 

Name: _dmarc.sender.example.com
Type: TXT
Value: "v=DMARC1;p=reject;pct=100;rua=mailto:report@example.com;ruf=mailto:failure@example.com"

This policy applies to emails coming from sender.example.com and rejects any email that fails authentication. It also sends aggregate feedback to report@example.com and detailed message-specific failure information to failure@example.com for further analysis.

Sending Authorization in Amazon SES – Allowing Other Accounts to Send Authenticated Emails

Now that you have configured Amazon SES to comply with DMARC in the account that owns your identity, you may want to allow other accounts in your organization the ability to send emails in the same way. Using Sending Authorization, you can authorize other users or accounts to send emails from identities that you own and manage. An example of where this could be useful is if you are an organization which has different business units in that organization. Using sending authorization, a business unit’s application could send emails to their customers from the top-level domain. This application would be able to leverage the authentication settings of the identity owner without additional configuration. Another advantage is that if the business unit has its own subdomain, the top-level domain’s DKIM settings can apply to this subdomain, so long as you are using Easy DKIM in Amazon SES and have not set up Easy DKIM for the specific subdomains.

Setting up sending authorization across accounts

Before you set up sending authorization, note that working across multiple accounts can impact bounces, complaints, pricing, and quotas in Amazon SES. Amazon SES documentation provides a good understanding of the impacts when using multiple accounts. Specifically, delegated senders are responsible for bounces and complaints and can set up notifications to monitor such activities. These also count against the delegated senders account quotas. To set up Sending Authorization across accounts:

  1. Navigate to the Amazon SES Console from the account that owns the Domain
  2. Select Domains under Identity Management
  3. Select the domain that you want to set up sending authorization with
  4. Select View Details
  5. Expand Identity Policies and Click Create Policy
  6. You can either create a policy using the policy generator or create a custom policy. For the purposes of this blog, you will create a custom policy.
  7. For the custom policy, you will allow a particular Organization Unit (OU) from our AWS Organization access to our domain. You can also limit access to particular accounts or other IAM principals. Use the following policy to allow a particular OU to access the domain:

{
  "Version": "2012-10-17",
  "Id": "AuthPolicy",
  "Statement": [
    {
      "Sid": "AuthorizeOU",
      "Effect": "Allow",
      "Principal": "*",
      "Action": [
        "SES:SendEmail",
        "SES:SendRawEmail"
      ],
      "Resource": "<Arn of Verified Domain>",
      "Condition": {
        "ForAnyValue:StringLike": {
          "aws:PrincipalOrgPaths": "<Organization Id>/<Root OU Id>/<Organizational Unit Id>"
        }
      }
    }
  ]
}

8. Make sure to replace the placeholder values with the ARN of your verified domain and the Org path of the OU you want to limit access to.

 

You can find more policy examples in the documentation. Note that you can configure sending authorization such that all accounts under your AWS Organization are authorized to send via a certain subdomain.
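If you prefer to attach the policy programmatically rather than through the console, a minimal boto3 sketch (with placeholder identity, ARN, and org-path values) looks like this:

import boto3
import json

ses = boto3.client('ses')

# The same sending authorization policy as above, with placeholder values
policy = {
    "Version": "2012-10-17",
    "Id": "AuthPolicy",
    "Statement": [{
        "Sid": "AuthorizeOU",
        "Effect": "Allow",
        "Principal": "*",
        "Action": ["SES:SendEmail", "SES:SendRawEmail"],
        "Resource": "arn:aws:ses:us-east-1:123456789012:identity/example.com",
        "Condition": {
            "ForAnyValue:StringLike": {
                "aws:PrincipalOrgPaths": "o-example/r-example/ou-example"
            }
        }
    }]
}

# Attach the policy to the verified identity
ses.put_identity_policy(
    Identity='example.com',  # placeholder: your verified domain
    PolicyName='AuthPolicy',
    Policy=json.dumps(policy)
)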

Testing

You can now test the ability to send emails from your domain in a different AWS account. You will do this by creating a Lambda function to send a test email. Before you create the Lambda function, you will need to create an IAM role for the Lambda function to use.

Creating the IAM Role:

  1. Log in to your separate AWS account
  2. Navigate to the IAM Management Console
  3. Select Role and choose Create Role
  4. Under Choose a use case select Lambda
  5. Choose Next: Permissions
  6. In the search bar, type SES and select the check box next to AmazonSESFullAccess
  7. Choose Next: Tags and then Next: Review
  8. Give the role a name of your choosing, and choose Create Role

Navigate to Lambda Console

  1. Select Create Function
  2. Choose the box marked Author from Scratch
  3. Give the function a name of your choosing (Ex: TestSESfunction)
  4. In this demo, you will be using Python 3.8 runtime, but feel free to modify to your language of choice
  5. Select the Change default execution role dropdown, and choose the Use an existing role radio button
  6. Under Existing Role, choose the role that you created in the previous step, and create the function

Edit the function

  1. Navigate to the Function Code portion of the page and open the function's Python file
  2. Replace the default code with the code shown below, ensuring that you put your own values in based on your resources
  3. Values needed:
    1. Test Email Address: an email address you have access to
      1. NOTE: If you are still operating in the Amazon SES Sandbox, this will need to be a verified email in Amazon SES. To verify an email in Amazon SES, follow the process here. Alternatively, here is how you can move out of the Amazon SES Sandbox
    2. SourceArn: The arn of your domain. This can be found in Amazon SES Console → Domains → <YourDomain> → Identity ARN
    3. ReturnPathArn: The same as your Source ARN
    4. Source: This should be your Mail FROM Domain @ your domain
      1. Your Mail FROM Domain can be found under Domains → <YourDomain> → Mail FROM Domain dropdown
      2. Ex: sender@mail.yourdomain.com
    5. Use the following function code for this example

import json
import boto3
from botocore.exceptions import ClientError

client = boto3.client('ses')

def lambda_handler(event, context):
    # Try to send the email.
    try:
        # Provide the contents of the email.
        response = client.send_email(
            Destination={
                'ToAddresses': [
                    '<your-test-email-address>',
                ],
            },
            Message={
                'Body': {
                    'Html': {
                        'Charset': 'UTF-8',
                        'Data': 'This email was sent with Amazon SES.',
                    },
                },
                'Subject': {
                    'Charset': 'UTF-8',
                    'Data': 'Amazon SES Test',
                },
            },
            SourceArn='<your-ses-identity-ARN>',
            ReturnPathArn='<your-ses-identity-ARN>',
            Source='<your-mail-from-address>',
        )
    # Display an error if something goes wrong.
    except ClientError as e:
        print(e.response['Error']['Message'])
    else:
        # send_email returns the SES message ID on success.
        print("Email sent! Message ID:")
        print(response['MessageId'])

  1. Once you have replaced the appropriate values, choose the Deploy button to deploy your changes

Run a Test invocation

  1. After you have deployed your changes, select the “Test” Panel above your function code

  1. You can leave all of these keys and values as default, as the function does not use any event parameters
  2. Choose the Invoke button in the top right corner
  3. You should see a success message above the test event window.

Verifying that the Email has been signed properly

Depending on your email provider, you may be able to check the DKIM signature directly in the application. As an example, for Outlook, right-click the message and choose View Source from the menu. You should see a line that shows the Authentication-Results and whether or not the DKIM/SPF checks passed. For Gmail, go to your Gmail inbox on the Gmail web app. Choose the message you wish to inspect, choose the More icon, and then choose View Original from the drop-down menu. You should then see the SPF and DKIM “PASS” results.
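For illustration, a passing result looks roughly like the following header; the receiving server name and domains will differ for your setup:

Authentication-Results: mx.example-receiver.com;
       spf=pass smtp.mailfrom=mail.example.com;
       dkim=pass header.d=example.com;
       dmarc=pass header.from=example.com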

Cleanup

To clean up the resources in your account,

  1. Navigate to the Route53 Console
  2. Select the Hosted Zone you have been working with
  3. Select the CNAME, TXT, and MX records that you created earlier in this blog and delete them
  4. Navigate to the SES Console
  5. Select Domains
  6. Select the Domain that you have been working with
  7. Click the drop down Identity Policies and delete the one that you created in this blog
  8. If you verified a domain for the sake of this blog: navigate to the Domains tab, select the domain and select Remove
  9. Navigate to the Lambda Console
  10. Select Functions
  11. Select the function that you created in this exercise
  12. Select Actions and delete the function

Conclusion

In this blog post, we demonstrated how to delegate sending and management of your sub-domains to other AWS accounts while also complying with DMARC when using Amazon SES. In order to do this, you set up a sending identity so that Amazon SES automatically adds a DKIM signature to your messages. Additionally, you created a custom MAIL FROM domain to comply with SPF. Lastly, you authorized another AWS account to send emails from a sub-domain managed in a different account, and tested this using a Lambda function. Allowing other accounts the ability to manage and send email from your sub-domains provides flexibility and scalability for your organization without compromising on security.

Now that you have set up DMARC authentication for multiple accounts in your environment, head to the AWS Messaging & Targeting Blog to see examples of how you can combine Amazon SES with other AWS Services!

If you have more questions about Amazon Simple Email Service, check out our FAQs or our Developer Guide.

If you have feedback about this post, submit comments in the Comments section below.

Forwarding emails automatically based on content with Amazon Simple Email Service

Post Syndicated from Murat Balkan original https://aws.amazon.com/blogs/messaging-and-targeting/forwarding-emails-automatically-based-on-content-with-amazon-simple-email-service/

Introduction

Email is one of the most popular channels consumers use to interact with support organizations. In its most basic form, consumers will send their email to a catch-all email address where it is further dispatched to the correct support group. Often, this requires a person to inspect content manually. Some IT organizations even have a dedicated support group that handles triaging the incoming emails before assigning them to specialized support teams. Triaging each email can be challenging, and delays in email routing and support processes can reduce customer satisfaction. By utilizing Amazon Simple Email Service’s deep integration with Amazon S3, AWS Lambda, and other AWS services, the task of categorizing and routing emails is automated. This automation results in increased operational efficiencies and reduced costs.

This blog post shows you how to build a serverless application that receives emails with Amazon SES and delivers them to an Amazon S3 bucket. The application uses Amazon Comprehend to identify the dominant language of the message body, and then looks that language up in an Amazon DynamoDB table to find the email address of the support group that specializes in it. As the last step, it forwards the email via Amazon SES to its destination. Archiving incoming emails to Amazon S3 also enables further processing or auditing.

Architecture

By completing the steps in this post, you will create a system that uses the architecture illustrated in the following image:

Architecture showing how to forward emails by content using Amazon SES

The flow of events starts when a customer sends an email to a generic support address, such as info@YOUR_DOMAIN_NAME_HERE. Amazon SES receives this email via a recipient rule; as per the rule, incoming messages are written to a specified Amazon S3 bucket with a given prefix.

This bucket and prefix are configured with S3 Events to trigger a Lambda function on object creation events. The Lambda function reads the email object, parses the contents, and sends them to Amazon Comprehend for language detection.

The Lambda function looks up the detected language code in an Amazon DynamoDB table that maps language codes to support group email addresses for those languages. One support group could answer English emails, while another support group answers French emails. The Lambda function determines the destination address and re-sends the same email by performing an email forward operation. If the lookup does not return a destination address, or the language could not be detected, the email is forwarded to a catch-all email address specified during the application deployment. A minimal sketch of this detection-and-lookup step follows.
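To make the flow concrete, here is a minimal boto3 sketch of the detection and lookup step; it assumes the language-lookup table and attribute names used later in this post, and is an illustration rather than the deployed function’s exact code:

import boto3

comprehend = boto3.client('comprehend')
dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table('language-lookup')

def find_destination(message_body, catchall_address):
    # Detect the dominant language of the email body
    result = comprehend.detect_dominant_language(Text=message_body)
    languages = result.get('Languages', [])
    if not languages:
        return catchall_address
    code = languages[0]['LanguageCode']
    # Look up the support group mapped to this language code
    item = table.get_item(Key={'language': code}).get('Item')
    return item['destination'] if item else catchall_address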

In this example, Amazon SES hosts the destination email addresses used for forwarding, but this is not a requirement; external email servers can also receive the forwarded emails.

Prerequisites

To use Amazon SES for receiving email messages, you need to verify a domain that you own. Refer to the documentation to verify your domain with the Amazon SES console. If you do not have a domain name, you can register one with Amazon Route 53.

Deploying the Sample Application

Clone this GitHub repository to your local machine and install and configure AWS SAM with a test AWS Identity and Access Management (IAM) user.

You will use AWS SAM to deploy the remaining parts of this serverless architecture.

The AWS SAM template creates the following resources:

  • An Amazon DynamoDB mapping table (language-lookup) contains information about language codes and associates them with destination email addresses.
  • An AWS Lambda function (BlogEmailForwarder) that reads the email content, parses it, detects the language, looks up the forwarding destination email address, and sends the email onward.
  • An Amazon S3 bucket, which will store the incoming emails.
  • IAM roles and policies.

To start the AWS SAM deployment, navigate to the root directory of the repository you downloaded, where the template.yaml AWS SAM template resides. AWS SAM also requires you to specify an Amazon Simple Storage Service (Amazon S3) bucket to hold the deployment artifacts. If you haven’t already created a bucket for this purpose, create one now; refer to the documentation to learn how to create an Amazon S3 bucket. The bucket should be readable and writable by an AWS Identity and Access Management (IAM) user.

At the command line, enter the following command to package the application:

sam package --template template.yaml --output-template-file output_template.yaml --s3-bucket BUCKET_NAME_HERE

In the preceding command, replace BUCKET_NAME_HERE with the name of the Amazon S3 bucket that should hold the deployment artifacts.

AWS SAM packages the application and copies it into this Amazon S3 bucket.

When the AWS SAM package command finishes running, enter the following command to deploy the package:

sam deploy --template-file output_template.yaml --stack-name blogstack --capabilities CAPABILITY_IAM --parameter-overrides FromEmailAddress=info@YOUR_DOMAIN_NAME_HERE CatchAllEmailAddress=catchall@YOUR_DOMAIN_NAME_HERE

In the preceding command, replace YOUR_DOMAIN_NAME_HERE with the domain name you verified with Amazon SES. The same domain applies to other commands and configurations introduced later.

This example uses “blogstack” as the stack name; you can change this to any other name you want. When you run this command, AWS SAM shows the progress of the deployment.

Configure the Sample Application

Now that you have deployed the application, you will configure it.

Configuring Receipt Rules

To deliver incoming messages to the Amazon S3 bucket, you need to create a rule set and a receipt rule under it.

Note: This blog uses Amazon SES console to create the rule sets. To create the rule sets with AWS CloudFormation, refer to the documentation.

  1. Navigate to the Amazon SES console. From the left navigation choose Rule Sets.
  2. Choose the Create a Receipt Rule button in the right pane.
  3. Add info@YOUR_DOMAIN_NAME_HERE as the first recipient address by entering it into the text box and choosing Add Recipient.

Choose the Next Step button to move on to the next step.

  1. On the Actions page, select S3 from the Add action drop-down to reveal the S3 action’s details. Select the S3 bucket that was created by the AWS SAM template; it is in the format your_stack_name-inboxbucket-randomstring. You can find the exact name in the outputs section of the AWS SAM deployment under the key InboxBucket, or by visiting the AWS CloudFormation console. Set the Object key prefix to info/. This tells Amazon SES to add this prefix to all messages destined for this recipient address; this way, you can re-use the same bucket for different recipients.

Choose the Next Step button to move on to the next step.

In the Rule Details page, give this rule a name in the Rule name field. This example uses the name info-recipient-rule. Leave the rest of the fields with their default values.

Choose the Next Step button to move on to the next step.

  1. Review your settings on the Review page and finalize rule creation by choosing Create Rule

  1. In this example, you will host the destination email addresses in Amazon SES rather than forwarding the messages to an external email server. This way, you can see the forwarded messages in your Amazon S3 bucket under different prefixes. To host the destination email addresses, you need to create additional rules under the default rule set. Create three more rules for the catchall@YOUR_DOMAIN_NAME_HERE, english@YOUR_DOMAIN_NAME_HERE, and french@YOUR_DOMAIN_NAME_HERE email addresses by repeating steps 2 to 5. For the Amazon S3 prefixes, use catchall/, english/, and french/ respectively. (A scripted alternative is sketched below.)
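As an alternative to repeating the console steps, a minimal boto3 sketch creates one such rule; it assumes your active rule set is named default-rule-set and uses the English address as an example, so adjust the recipient, prefix, and bucket name for the other two rules:

import boto3

ses = boto3.client('ses')

# Create a receipt rule that writes mail for the English support
# address into the inbox bucket under the english/ prefix
ses.create_receipt_rule(
    RuleSetName='default-rule-set',  # assumption: your active rule set name
    Rule={
        'Name': 'english-recipient-rule',
        'Enabled': True,
        'Recipients': ['english@YOUR_DOMAIN_NAME_HERE'],
        'Actions': [{
            'S3Action': {
                'BucketName': 'your-inbox-bucket-name',  # the bucket created by AWS SAM
                'ObjectKeyPrefix': 'english/'
            }
        }]
    }
)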

 

Configuring Amazon DynamoDB Table

To configure the Amazon DynamoDB table that is used by the sample application

  1. Navigate to the Amazon DynamoDB console and open the tables view. Inspect the table created by the AWS SAM application.

The language-lookup table is where languages and their support group mappings are kept. You need to create an item for each language, plus an item that holds the default destination email address used when no language match is found. Amazon Comprehend supports more than 60 languages; visit the documentation for the supported languages and add their language codes to this lookup table to enhance the application.

  1. To start inserting items, choose the language-lookup table to open the table overview page.
  2. Select the Items tab and choose Create item. From the dropdown, select Text. Add the following JSON content and choose Save to create your first mapping object. While adding the following object, replace the destination attribute’s value with an email address you own; email messages will be forwarded to that address.

{
  "language": "en",
  "destination": "english@YOUR_DOMAIN_NAME_HERE"
}

Lastly, create an item for French language support.

{
  "language": "fr",
  "destination": "french@YOUR_DOMAIN_NAME_HERE"
}
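If you would rather seed the table from code than from the console, a short boto3 sketch (using the same table and attribute names) does the equivalent:

import boto3

dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table('language-lookup')

# Map each supported language code to its support group address
table.put_item(Item={'language': 'en', 'destination': 'english@YOUR_DOMAIN_NAME_HERE'})
table.put_item(Item={'language': 'fr', 'destination': 'french@YOUR_DOMAIN_NAME_HERE'})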

Testing

Now that the application is deployed and configured, you will test it.

  1. Use your favorite email client to send the following email to the info@YOUR_DOMAIN_NAME_HERE address.

Subject: I need help

Body:

Hello, I’d like to return the shoes I bought from your online store. How can I do this?

After the email is sent, navigate to the Amazon S3 console to inspect the contents of the Amazon S3 bucket that is backing the Amazon SES rule set. You can also check the AWS Lambda logs in the Amazon CloudWatch console to confirm that the Lambda function was triggered and ran successfully. You should receive an email with the same content at the address you defined for the English language.

  1. Next, send another email with the same content, this time in French.

Subject: j’ai besoin d’aide

Body:

Bonjour, je souhaite retourner les chaussures que j’ai achetées dans votre boutique en ligne. Comment puis-je faire ceci?

 

If a message is not matched to a language in the lookup table, the Lambda function forwards it to the catch-all email address that you provided during the AWS SAM deployment.

Inspect the new email objects under the english/, french/, and catchall/ prefixes to observe the forwarding behavior.

Continue experimenting with the sample application by sending different email contents to the info@YOUR_DOMAIN_NAME_HERE address or by adding other language code and email address combinations to the mapping table. You can find the available languages and their codes in the documentation. When adding support for a new language, don’t forget to associate a new email address and Amazon S3 bucket prefix by defining a new rule.

Cleanup

To clean up the resources you used in your account,

  1. Navigate to the Amazon S3 console and delete the inbox bucket’s contents. You will find the name of this bucket in the outputs section of the AWS SAM deployment under the key name InboxBucket or by visiting the AWS CloudFormation console.
  2. Navigate to AWS CloudFormation console and delete the stack named “blogstack”.
  3. After the stack is deleted, remove the domain from Amazon SES. To do this, navigate to the Amazon SES console and choose Domains from the left navigation. Select the domain you want to remove, and choose Remove to remove it from Amazon SES.
  4. From the Amazon SES Console, navigate to the Rule Sets from the left navigation. On the Active Rule Set section, choose View Active Rule Set button and delete all the rules you have created, by selecting the rule and choosing Action, Delete.
  5. On the Rule Sets page choose Disable Active Rule Set button to disable listening for incoming email messages.
  6. On the Rule Sets page, Inactive Rule Sets section, delete the only rule set, by selecting the rule set and choosing Action, Delete.
  7. Navigate to CloudWatch console and from the left navigation choose Logs, Log groups. Find the log group that belongs to the BlogEmailForwarderFunction resource and delete it by selecting it and choosing Actions, Delete log group(s).
  8. Finally, delete the Amazon S3 bucket you used for packaging and deploying the AWS SAM application.

 

Conclusion

This solution shows how to use Amazon SES to classify email messages by their dominant language and forward them to the respective support groups. You can use the same techniques to implement similar scenarios, such as forwarding emails based on custom key entities like product codes, or using Amazon Comprehend to remove PII information from emails before forwarding.

With its native integrations with AWS services, Amazon SES allows you to enhance your email applications with different AWS Cloud capabilities easily.

To learn more about email forwarding with Amazon SES, visit the documentation and the AWS blogs.

Maintain consistency in emails with custom content using Amazon SES templates

Post Syndicated from Seth Theeke original https://aws.amazon.com/blogs/messaging-and-targeting/maintain-consistency-in-emails-with-custom-content-with-amazon-ses-templates/

When sending emails, content creators often want to add custom content such as images or videos while maintaining consistency in their messages. They also want to send those emails automatically once new content is ready. In this blog, we will show you how to create templates for emails with a common theme by combining Amazon Simple Email Service (Amazon SES) templates, AWS Lambda, and Amazon Simple Storage Service (Amazon S3).

Promotional content (such as logos, images, videos, and more) can be stored, managed, and hosted in Amazon S3. You can then embed this content into promotional emails without making any changes to email templates or email processing. You can trigger a Lambda function to send promotional emails with the newly added content using the Amazon SES SDK.

This post shows readers how to:

  • Create an Amazon SES email template with tags to be replaced by image URLs
  • Upload those templates to Amazon SES
  • Set up an AWS CloudFormation stack using the AWS Cloud Development Kit (AWS CDK)
  • Create a Lambda function and Amazon S3 bucket to send emails using the AWS SDK for Javascript

Solution Architecture

The preceding image shows the architecture you will build as part of this post. You will use the AWS CDK to provision an Amazon S3 bucket, a Lambda function, and AWS Identity and Access Management (IAM) permissions. You will also use the AWS Command Line Interface (AWS CLI) to upload and manage your Amazon SES templates.

The architecture allows you to upload content to S3, which triggers a Lambda function. That Lambda function forms an Amazon SES request using the template you uploaded, embedding the S3 content URL as a parameter; the email is then sent to the user and the content renders in their email client.

Metadata

Time to Read: ~ 20 minutes

Time to Complete: ~ 15 minutes

Cost to Complete: Free Tier

Learning Level: Intermediate

Services Used: Amazon Simple Email Service, Amazon Simple Storage Service, AWS Lambda

Prerequisites

For this walkthrough, you should have the following prerequisites:

Solution Overview

You will walk through creating Amazon SES templates and then a CloudFormation stack using the AWS CDK. You will create a template file, a Lambda directory, and a CDK application directory, which should all be at the same level in your package structure in order to follow these steps exactly.

Step 1: Create an Amazon SES template in JSON with a tag representing your image URL, and upload it via the CLI

Step 2: Initialize an AWS CDK package using the AWS CDK CLI and add necessary dependencies

Step 3: Initialize a NodeJS AWS Lambda package

Step 4: Provision an Amazon S3 bucket and Lambda in your AWS CDK app

Step 5: Configure your Lambda to be triggered when objects are added to S3

Step 6: Configure your Lambda’s IAM role to allow sending emails via Amazon SES

Step 7: Write necessary code for AWS Lambda to send emails via Amazon SES

Step 8: Deploy and Test!

Step 1: Create an Amazon SES Email template

Amazon SES Email templates are defined as a JSON object containing:

  • TemplateName – the name of the template; it must be unique across email templates and will be passed by your Lambda function to Amazon SES
  • SubjectPart – represents the subject of the email
  • HtmlPart – represents the body of the email
  • TextPart – when email clients cannot render HTML, this is displayed instead of HtmlPart

More detailed information about email templates can be found in the Amazon SES Developer Guide.

1.     Open your text editor and save the empty file as email-template.json

2.     Paste the following into your json file and save your changes

{
  "Template": {
    "TemplateName": "MyTemplate",
    "SubjectPart": "Greetings Customer",
    "HtmlPart": "<img src={{imageURL}} alt=\"logo\" width=\"100\" height=\"100\">",
    "TextPart": "Dear Customer,\r\nCheck out our website for new promotional content."
  }
}

This template has a single tag called imageURL, which will be replaced during execution with your content’s S3 URL.

3.     Run the following AWS CLI command to upload your template to Amazon SES

aws ses create-template --cli-input-json file://email-template.json

4.     Once your template has been uploaded, you can confirm its creation by logging into the AWS Console, navigating to Amazon SES, and then selecting Email Templates

Step 2: Initialize an AWS CDK package and add necessary dependencies

In this section, you will be using the AWS CDK CLI to initialize a new code package and add the dependencies for Lambda and Amazon S3.

1.     Create a new directory at the same level as email-template.json called email-infrastructure

2.     Navigate to the email-infrastructure directory and run the following CDK CLI command to generate the skeleton for your CDK application

cdk init app --language=typescript

3.     Add dependencies for Amazon S3, Lambda, and IAM by adding the following lines to the dependencies section of your package.json, and then run npm install

"@aws-cdk/aws-lambda": "1.86.0"
"@aws-cdk/aws-lambda-event-sources": "1.86.0"
"@aws-cdk/aws-s3": "1.86.0"
"@aws-cdk/aws-iam": "1.86.0"

Make sure to install the version of these dependencies that matches the version of your aws-cdk package so you don’t run into compatibility issues.

Step 3: Create a NodeJS Lambda package

In this step, you will create the skeleton of the Lambda function that calls Amazon SES; you will revisit it in step 7 to implement the handler.

1.     Create a new directory at the same level as email-template.json called email-lambda

2.     Add a package.json file in the email-lambda directory that looks like the following

{
    "name": "email-lambda",
    "version": "1.0.0",
    "main": "index.js",
    "dependencies": {
        "aws-sdk": "2.831.0"
    }
}

3.     Add a file called index.js; this will be your Lambda handler and will look like the following for now. Make sure to insert your verified email address into the testAddress variable; it will be used later as both your to and from address for testing.

var AWS = require("aws-sdk");
var ses = new AWS.SES({apiVersion: "2010-12-01"});
var testAddress = "INSERT_VERIFIED_EMAIL_HERE";

exports.handler = async function(event) { 
    console.log(JSON.stringify(event));
    return "200";
}

4.     Finish this step by running npm install in the email-lambda directory to install the aws-sdk dependencies you will use in a later step. Your top-level directory structure should look like the following:

  • email-template.json – contains your email template from step 1
  • email-infrastructure – contains your CDK stack from step 2
  • email-lambda – contains your email lambda function code from step 3

Step 4: Provision an Amazon S3 bucket and Lambda function in your CDK app

In this step, you will add an Amazon S3 bucket and a NodeJS Lambda function to your CDK application, based on what you set up in previous steps. After this, you will connect all the pieces together.

1.     Import all the services you need into your CDK stack construct. The imports you will need are listed below; copy them into the imports section in your editor.

import * as s3 from '@aws-cdk/aws-s3';
import * as lambda from '@aws-cdk/aws-lambda';
import * as lambdaEventSource from '@aws-cdk/aws-lambda-event-sources';
import * as iam from '@aws-cdk/aws-iam';
import path = require('path');

2.     Add an S3 bucket to your CDK app by adding an instance of the Bucket construct

const promotionalContentBucket = new s3.Bucket(this, "DOC-EXAMPLE-BUCKET");

3.     Similarly, add your Lambda function by creating an instance of the Function construct referencing your Lambda function package by path

const emailLambda = new lambda.Function(this, "EmailLambda", {
    code: lambda.Code.fromAsset(path.join(__dirname, "../../email-lambda")),
    handler: "index.handler",
    runtime: lambda.Runtime.NODEJS_12_X
});

4.     Execute npm run build in your CDK directory to ensure you’ve set up the package correctly. You can get additional help from the Troubleshooting Guide for CDK

Step 5: Configure Lambda with an S3 Event Source

In this step, you will configure your Lambda function to be triggered when objects are added to your Amazon S3 bucket by using the Lambda event sources module for CDK.

1.     Create an instance of the S3EventSource construct in your stack for OBJECT_CREATED events only, because for this post you don’t want to trigger a Lambda invocation when an object is removed

const s3EventSource = new lambdaEventSource.S3EventSource(promotionalContentBucket, {
    events: [s3.EventType.OBJECT_CREATED]
});

2.     Now that you have an event source defined, you need to add the event source to your Lambda function

emailLambda.addEventSource(s3EventSource);

3.     Add the Amazon S3 bucket’s domain name as an environment variable, so you can reference objects by URL, by adding the environment property to your Lambda function as shown below.

environment: {
    "BUCKET_DOMAIN_NAME": promotionalContentBucket.bucketDomainName
}

Step 6: Configure the email Lambda IAM role

In this step, you will add an IAM policy statement to your email Lambda’s execution role so it can call Amazon SES.

1.     Add an IAM PolicyStatement construct with effect ALLOW on all resources with SendTemplatedEmail action

const emailPolicyStatement = new iam.PolicyStatement({
    effect: iam.Effect.ALLOW,
    actions: ["ses:SendTemplatedEmail"],
    resources: ["*"]
});

2.     Finish this step by adding your newly created policy to the Lambda execution role

emailLambda.addToRolePolicy(emailPolicyStatement)

Step 7: Add Lambda implementation to send templated emails

You will use the AWS NodeJS Amazon SES SDK to send emails using the SendTemplatedEmail API. For simplicity, the implementation assumes a batch size of 1 for each Lambda invocation.

1.     Replace the function handler in your email Lambda function with the code below. It reads the S3 event’s first record, prepares the parameters for the Amazon SES SDK call, and invokes the sendTemplatedEmail function with the imageURL embedded into your previously created template.

exports.handler =  async function(event) {  
    console.log(JSON.stringify(event));
    let s3Object = event.Records[0];
    let sendEmailParams = {
        Destination: {
            ToAddresses: [testAddress]
        },
        Template: 'MyTemplate',
        TemplateData: JSON.stringify({
            "imageURL": process.env.BUCKET_DOMAIN_NAME + "/" + s3Object.s3.object.key,
        }),
        Source: testAddress
    };
    let response = await ses.sendTemplatedEmail(sendEmailParams).promise();
    return response;
}

2.     Deploy your stack with the CDK CLI by running cdk deploy; this may take a couple of minutes. If you run into problems, see the Troubleshooting Guide for CDK.

Step 8: Test your System

At this point, you should have an Amazon SES template uploaded to your account, as well as a CloudFormation stack that contains an Amazon S3 bucket and a Lambda function that is triggered when objects are added to that bucket and has permission to invoke Amazon SES APIs. Now you will test the system by adding an image to your Amazon S3 bucket.

1.     Log in to the AWS Console

2.     Navigate to Amazon S3

3.     Select your promotional content bucket from the list of buckets

4.     Click Upload on the right-hand side of the screen

5.     Add an image from your computer by clicking Add files

6.     Scroll down to the bottom and expand Additional Upload Options

7.     Scroll down to Access Control List

8.     Select the check boxes for Read for Everyone(public access) so the images are accessible when the user opens their email

9.     Scroll down to the bottom and select Upload

10.     Done! You should have an email in your inbox shortly that renders the image you just uploaded. Check the Lambda logs for errors if you don’t see the email.

Cleaning up

To avoid incurring future charges, delete your CloudFormation stack by running cdk destroy, or manually through the AWS Console. Keep in mind that, by default, Amazon S3 buckets are not deleted, so you will need to navigate to Amazon S3 in the AWS Console, empty the bucket, and then manually delete it.

Conclusion

Congratulations! You now have an understanding of how to combine Amazon SES templates with Amazon S3 and Lambda to inject custom images into emails without the need for any servers and have launched this stack using the AWS Cloud Development Kit.

Author Bio

My name is Seth Theeke, and I work as a Software Development Engineer in Amazon Freight. I’ve been working with AWS since 2016 and hold a Developer Associate certification. I love soccer and I love software engineering, the simplest things in life!

Opt-in to the new Amazon SES console experience

Post Syndicated from Simon Poile original https://aws.amazon.com/blogs/messaging-and-targeting/amazon-ses-console-opt-in/

Amazon Web Services (AWS) is pleased to announce the launch of the newly redesigned Amazon Simple Email Service (SES) console. With its streamlined look and feel, the new console makes it even easier for customers to leverage the speed, reliability, and flexibility that Amazon SES has to offer. Customers can access the new console experience via an opt-in link on the classic console.

Amazon SES now offers a new, optimized console to provide customers with a simpler, more intuitive way to create and manage their resources, collect sending activity data, and monitor reputation health. It also has a more robust set of configuration options and new features and functionality not previously available in the classic console.

Here are a few of the improvements customers can find in the new Amazon SES console:

Verified identities

Streamlines how customers manage their sender identities in Amazon SES. This is done by replacing the classic console’s identity management section with verified identities. Verified identities are a centralized place in which customers can view, create, and configure both domain and email address identities on one page. Other notable improvements include:

  • DKIM-based verification
    DKIM-based domain verification replaces the previous verification method which was based on TXT records. DomainKeys Identified Mail (DKIM) is an email authentication mechanism that receiving mail servers use to validate email. This new verification method offers customers the added benefit of enhancing their deliverability with DKIM-compliant email providers, and helping them achieve compliance with DMARC (Domain-based Message Authentication, Reporting and Conformance).
  • Amazon SES mailbox simulator
    The new mailbox simulator makes it significantly easier for customers to test how their applications handle different email sending scenarios. From a dropdown, customers select which scenario they’d like to simulate. Scenario options include bounces, complaints, and automatic out-of-office responses. The mailbox simulator provides customers with a safe environment in which to test their email sending capabilities.

Configuration sets

The new console makes it easier for customers to experience the benefits of using configuration sets. Configuration sets enable customers to capture and publish event data for specific segments of their email sending program. It also isolates IP reputation by segment by assigning dedicated IP pools. With a wider range of configuration options, such as reputation tracking and custom suppression options, customers get even more out of this powerful feature.

  • Default configuration set
    One important feature to highlight is the introduction of the default configuration set. By assigning a default configuration set to an identity, customers ensure that the assigned configuration set is always applied to messages sent from that identity at the time of sending. This enables customers to associate a dedicated IP pool or set up event publishing for an identity without having to modify their email headers.

Account dashboard

There is also an account dashboard for the new SES console. This feature provides customers with fast access to key information about their account, including sending limits and restrictions, and overall account health. A visual representation of the customer’s daily email usage helps them ensure that they aren’t approaching their sending limits. Additionally, customers who use the Amazon SES SMTP interface to send emails can visit the account dashboard to obtain or update their SMTP credentials.

Reputation metrics

The new reputation metrics page provides customers with high-level insight into historic bounce and complaint rates. This is viewed at both the account level and the configuration set level. Bounce and complaint rates are two important metrics that Amazon SES considers when assessing a customer’s sender reputation, as well as the overall health of their account.

The redesigned Amazon SES console, with its easy-to-use workflows, will not only enhance customers’ onboarding experience, it will also improve their ongoing day-to-day usage. The Amazon SES team remains committed to investing on behalf of our customers and empowering them to be productive anywhere, anytime. We invite you to opt in to the new Amazon SES console experience and let us know what you think.

Amazon SES celebrates 10 years of email sending and deliverability

Post Syndicated from Simon Poile original https://aws.amazon.com/blogs/messaging-and-targeting/amazon-ses-celebrates-10-years-of-email-sending-and-deliverability/

Amazon Simple Email Service (Amazon SES) turns 10 years old today. Back on January 25th 2011, Amazon Web Services (AWS) had only 15 services. Today, AWS has grown to over 180 services. Jeff Barr launched Amazon SES as part of his web evangelist blog. Much of what he wrote about then is still true today. Even 10 years later, email is an important channel for customer communications. Developers still want to rely on a trusted global partner to deliver email at scale. However, mailbox providers are even more protective of their end users’ security. They actively work to ensure that any perceived, unwanted email doesn’t make it to the inbox.

Inbox providers use several factors to determine the legitimacy of email traffic. Over the last decade, we have worked diligently to measure many of those factors in Amazon SES to help our customers achieve great deliverability. The focus for much of that work has been a combination of investments into reputation, engagement, and trust. I want to outline what we’ve accomplished to improve your email sending over the last 10 years.

Reputation

Reputation is the measurement mailbox providers use to determine how closely you follow their sending standards. Amazon SES measures perceived reputation through metrics such as bounce rate or complaint rate in the reputation dashboard. The reputation dashboard also shares overall Amazon SES account sending status like “Healthy” or “Under Review.” Some Inbox providers, or ISPs, also provide feedback to help us measure the effectiveness of a specific IP or domain in sending trustworthy traffic.

You can influence reputation in Amazon SES through:

  • Setting up dedicated IPs: Set up IPs in Amazon SES for your own specific sending with appropriate warm-up plans. Split IPs out by use case such as separating password resets from marketing messages.
  • Customer owned IPs (New in 2020): You can now transition IPs you’ve invested in through your own data center or with another ESP to Amazon SES without interruption.
  • Following sending volume best practices: Nothing can flag your IP addresses faster than unpredictable sending patterns. We help you manage this through sending quotas.
  • Use our SES email simulator: Test your application sending without messages leaving the sandbox.

 

Engagement

Engagement is the rate by which customers are interacting with your content. Amazon SES helps you measure engagement through conversion rates (such as open or click-through) and unsubscribe rates. These are measured in the event publishing click stream. This area is more of an art in our deliverability calculus because success varies by industry and use case.

You can influence engagement in Amazon SES through:

  • Customize content as much as possible, but follow content best practices to avoid setting off content filters. Mailbox providers often use behavioral, AI-based content filtering to determine whether your content is relevant based on engagement behavior.
  • Use consent and list management (New in 2020) with customized topics and opt-out pages. It’s important to offer recipients a way to select what emails they want to receive from you and give them an option to opt-out. This is a great new feature that we’ve added based on customer feedback.
  • Remove unengaged recipients from your lists. Some senders set a time limit, for example 60 days, after which inactive recipients are automatically removed from an active email list.

 

Trust

Earning trust in email sending is done through adherence to proper sending behavior, as measured by both individual ISPs and industry watch groups. Trust is closely related to reputation. We measure trust through messages in the reputation dashboard based on feedback loops, real-time blocklists (RBLs), and spam traps. You can also see the complaint rate associated with your sending in the complaint area of the reputation dashboard, which has statuses like healthy or under review.

You can influence trust in Amazon SES through:

  • Acting on feedback-loop complaints and promptly removing recipients who complain.
  • Keeping bounce and complaint rates low so that your IPs and domains stay off Real-time Blocklists (RBLs) and avoid spam traps.
  • Watching the reputation dashboard and reacting quickly when your sending status moves away from healthy.
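One concrete step that supports all of the above is enabling the account-level suppression list, so that addresses that bounce or complain are not mailed again from your account. A minimal sketch, assuming the AWS SDK for JavaScript v3 SESv2 client:

    // Enable the SES account-level suppression list for bounces and complaints.
    const {
      SESv2Client,
      PutAccountSuppressionAttributesCommand,
    } = require("@aws-sdk/client-sesv2");

    const client = new SESv2Client({ region: "us-east-1" });

    async function enableSuppressionList() {
      // Addresses that hard-bounce or complain are then suppressed
      // automatically on future sends from this account.
      await client.send(new PutAccountSuppressionAttributesCommand({
        SuppressedReasons: ["BOUNCE", "COMPLAINT"],
      }));
    }

    enableSuppressionList().catch(console.error);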

Deliverability is a multi-dimensional, constantly evolving part of email sending that goes well beyond standing up an SMTP (Simple Mail Transfer Protocol) endpoint. But we’re here to help. In addition to these investments in deliverability, we’ve also expanded Amazon SES to 18 regions, including the government cloud. It’s been an exciting time at AWS, and we look forward to supporting all of our customers in the years to come with Amazon SES.

Strategies for list management with Amazon Pinpoint and Amazon Simple Email Service

Post Syndicated from Heidi Gloudemans original https://aws.amazon.com/blogs/messaging-and-targeting/strategies-for-list-management-with-amazon-pinpoint-and-amazon-simple-email-service/

Managing customer lists is a large part of any outbound customer communication program. From customer acquisition to ongoing engagement, finding the best sources of subscribers and respecting their contact preferences are key to maintaining healthy customer lists. This article discusses recommendations for list building with Amazon Pinpoint and Amazon Simple Email Service (SES), covering the subscription process, what information to ask for, how to manage opt-outs, and how to optimize lists over time.

Customer acquisition

Customer acquisition is the first part of any list activity. There are a few guidelines that all outbound marketers should follow during the list-building process. First, do not use third-party, leased, or purchased lists. Using third-party lists for email risks complaints, damage to your sender reputation, and hitting monitoring mechanisms like spam traps. Email service providers like Amazon Pinpoint and Amazon SES will discontinue service for accounts whose poor sending behavior results from the use of third-party lists.

Second, if you plan on contacting customers across channels, make sure you acquire permission to contact users on each channel. There are many places you can acquire customer contacts, from your website and social media presence to a QR code on a physical sign. Amazon Pinpoint also offers a solution, the Amazon Pinpoint Preference Center, that you can deploy to gather and manage customer contact preferences across channels.

There are a few items that you want to include in any customer acquisition form. First, tell the customer how often they can expect communication from you. Is it a weekly newsletter? Monthly? Better still, give them the option to select how often you communicate with them. Next, tell them what value they can expect from registration: for example, special deals, early access to sales, or even just product and industry news. While you can provide some incentives, avoid offering high-value incentives for registration. Over-the-top enticements like free products reliably attract low-quality registrations and, as a result, low-quality lists.

In addition, make your sign-up forms as concise as possible. Only put high-value content behind registration, and minimize the amount of information customers must provide to register. A full profile makes it easier to segment your customers later, but every additional field adds friction to the sign-up process and can cost you registrations.

If you can, allow the customer to indicate their content preferences later or during onboarding communications. If you use Amazon SES, list management supports a contact list with up to 20 topics per account; a sketch of setting this up follows this paragraph. For example, if you are a sportswear company, topics could include hiking, biking, or running. Then send each customer only the topics that they are interested in receiving. Make sure you retain preference data: requirements vary by country, and some require you to prove that you received permission to contact a customer.
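Here is a minimal sketch of creating such a contact list and registering a contact with topic preferences, assuming the AWS SDK for JavaScript v3 SESv2 client; the list and topic names are hypothetical:

    // Create a contact list with interest topics, then add a subscriber.
    const {
      SESv2Client,
      CreateContactListCommand,
      CreateContactCommand,
    } = require("@aws-sdk/client-sesv2");

    const client = new SESv2Client({ region: "us-east-1" });

    async function setUpList() {
      await client.send(new CreateContactListCommand({
        ContactListName: "SportswearNews", // hypothetical
        Topics: [
          { TopicName: "hiking", DisplayName: "Hiking", DefaultSubscriptionStatus: "OPT_OUT" },
          { TopicName: "biking", DisplayName: "Biking", DefaultSubscriptionStatus: "OPT_OUT" },
          { TopicName: "running", DisplayName: "Running", DefaultSubscriptionStatus: "OPT_OUT" },
        ],
      }));

      // Register a contact who opted into only the hiking topic.
      await client.send(new CreateContactCommand({
        ContactListName: "SportswearNews",
        EmailAddress: "customer@example.com",
        TopicPreferences: [
          { TopicName: "hiking", SubscriptionStatus: "OPT_IN" },
        ],
      }));
    }

    setUpList().catch(console.error);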

Managing your customer contacts

Onboarding communication is an essential first step once a customer has submitted their initial registration. Some organizations use the first communication as a registration confirmation step, called a “double opt-in.” In addition to driving engagement and an initial call to action (“Confirm your email”), double opt-in emails have the added benefit of verifying that the address was submitted by its owner rather than by a bot.
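One common way to implement the confirmation step is to put a signed token in the confirmation link, so that only someone with access to the mailbox can complete registration. A generic Node.js sketch, not an SES or Pinpoint API; the URL and secret are hypothetical:

    // Issue and verify HMAC-signed double opt-in confirmation links.
    const crypto = require("crypto");

    const SECRET = process.env.OPT_IN_SECRET; // hypothetical application secret

    function confirmationToken(email) {
      return crypto.createHmac("sha256", SECRET)
        .update(email.toLowerCase())
        .digest("hex");
    }

    // Embed this URL in the "Confirm your email" button of the first message.
    function confirmationUrl(email) {
      return "https://example.com/confirm" +
        `?email=${encodeURIComponent(email)}&token=${confirmationToken(email)}`;
    }

    // Call this from the /confirm handler before marking the contact active.
    function verifyConfirmation(email, token) {
      const expected = confirmationToken(email);
      return token.length === expected.length &&
        crypto.timingSafeEqual(Buffer.from(token), Buffer.from(expected));
    }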

From the first message you send to a new customer to the last, you should always include an unsubscribe option. Requirements differ by country, so research them and educate your organization. Amazon SES now supports subscription management, with unsubscribe URLs in the footers of your emails, and it also supports capturing contact preferences on a custom landing page where customers can adjust their contact list preferences.

Removing unengaged users

Both Amazon Pinpoint and Amazon SES provide visibility into the success of outbound communications through open rates and click-through rates at the account or campaign level. If time passes with limited engagement from an individual customer (that is, they do not open your emails or engage with the content), there is a risk that the recipient’s mailbox provider will start marking your messages as spam. Work with your business to determine the period after which you should automatically remove unengaged users from your contact list; many organizations remove customers after 60 to 90 days of non-engagement. If you need to quickly query customers that are not engaging with your communications from your data store of choice, enable event stream data in Amazon Pinpoint or Amazon SES using Amazon Kinesis. Amazon Pinpoint also has a solution, the Digital User Engagement Events Database, that creates a data store for that purpose.
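A sketch of that pruning step follows, assuming you have already derived each address’s last engagement time from your event stream; the contact-list name and input shape are hypothetical, and the AWS SDK for JavaScript v3 SESv2 client is used:

    // Remove contacts with no engagement in the last 90 days from a contact list.
    const { SESv2Client, DeleteContactCommand } = require("@aws-sdk/client-sesv2");

    const client = new SESv2Client({ region: "us-east-1" });
    const NINETY_DAYS_MS = 90 * 24 * 60 * 60 * 1000;

    async function pruneUnengaged(contacts) {
      const cutoff = Date.now() - NINETY_DAYS_MS;
      for (const { email, lastEngagedAt } of contacts) {
        if (new Date(lastEngagedAt).getTime() < cutoff) {
          await client.send(new DeleteContactCommand({
            ContactListName: "SportswearNews", // hypothetical
            EmailAddress: email,
          }));
        }
      }
    }

    pruneUnengaged([
      { email: "quiet@example.com", lastEngagedAt: "2020-06-01T00:00:00Z" },
    ]).catch(console.error);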

Amazon SES and Amazon Pinpoint both have the concept of global and account-level suppression lists. Global suppression lists are managed across AWS accounts, while account-level suppression lists are associated with your AWS account. If a customer explicitly unsubscribes from your list using Amazon SES list management, or complains through their inbox provider, they are automatically added to your account-level suppression list, and customers on suppression lists are no longer sent messages from your account. Respecting contact preferences like unsubscribing is an opportunity to earn trust with that specific customer, the market, and the recipient’s mailbox provider.
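If you also collect opt-outs through other channels (for example, a support ticket), you can add those addresses to the account-level suppression list yourself. A minimal sketch with the AWS SDK for JavaScript v3 SESv2 client:

    // Manually suppress an address that opted out through another channel.
    const { SESv2Client, PutSuppressedDestinationCommand } = require("@aws-sdk/client-sesv2");

    const client = new SESv2Client({ region: "us-east-1" });

    async function suppress(email) {
      await client.send(new PutSuppressedDestinationCommand({
        EmailAddress: email,
        Reason: "COMPLAINT", // the API accepts BOUNCE or COMPLAINT
      }));
    }

    suppress("optout@example.com").catch(console.error);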

Conclusion

There are a number of additional best practices that drive customer engagement across communication channels, including message headlines, copy, graphics and images, and adaptive design across various endpoints and clients. However, nothing is as important as maintaining the trust of end customers with their contact information. Sourcing contact information with the customer’s permission, respecting contact preferences, and maintaining list hygiene are the cornerstones of a successful customer communications program. Learn more today about Amazon Pinpoint, Amazon SES, and customer communications.

How Amazon Simple Email Service supported the growth of email in 2020

Post Syndicated from Simon Poile original https://aws.amazon.com/blogs/messaging-and-targeting/how-amazon-simple-email-service-supported-the-growth-of-email-in-2020/

Over the last 12 months, organizations of all types have increasingly needed to stay connected to their customers. With the move to virtual interactions accelerating across industries, email has remained a trusted channel for customer communications. Amazon Simple Email Service (SES) has seen record outbound email traffic in 2020, supporting critical customer communications during COVID and commercial moments like Black Friday and Cyber Monday.

The importance of email during COVID

Unlike real-time channels like voice or live chat, email is asynchronous: it can be read and consumed at the customer’s leisure. In some geographies, like North America, an email address also represents an individual’s unique identity, persisting longer than mobile phone numbers or social networking accounts. Email’s importance was well established before 2020, but during the COVID crisis most organizations took particular care to send only the right messages.

Many organizations voluntarily decreased promotional or marketing email during the pandemic, in recognition of the increased stress most individuals were facing in their personal lives. However, even with the drop in marketing email across organizational types, there was an increased need to communicate and maintain customer engagement. Most organizations went through three distinct customer communication phases with email in 2020: React, Respond, and Reimagine.

  • React – These were the initial emails sent to acknowledge the COVID crisis early in 2020, reinforcing commitments to customer health and employee safety or communicating new cleaning protocols.
  • Respond – These messages communicated the status of the business or event: transitions to remote work, temporary closures, and the cancellation of in-person events.
  • Reimagine – Throughout the crisis, organizations reimagined how to do business. Healthcare providers began offering video consultations, and restaurants shifted to pickup and takeout only. Email was vital for taking customers on the journey into this “new normal,” even as some businesses started to reopen.

To send these customer communications at scale, many organizations worked with Amazon SES.

How Amazon SES scaled and supported customers in 2020

Amazon SES saw several sending spikes that aligned with organizations working to communicate with their customers during COVID. Nine times in 2020, transactions per second (TPS) in Amazon SES exceeded 150% of the previous record, which was set on Black Friday 2019. Spikes above that 150% mark also occurred on Black Friday and Cyber Monday 2020.

In addition to supporting those surges in throughput, the Amazon SES team responded to customer feedback by expanding the global footprint of Amazon SES. Since January, Amazon SES has increased the number of supported regions from 7 to 14, including the US government cloud, with the additional regions deployed during 2020 while the team worked remotely. This regional expansion lets customers adhere to local data sovereignty requirements for email sending while also improving performance.

Customers also told us they needed tools to help them manage compliance with important governance laws like CAN-SPAM and GDPR. Amazon SES released list management to help organizations manage their customers’ contact information and preferences.

Looking forward

As we move into 2021, email will remain at the forefront of customer communication channels. Enterprise customers like Netflix and Duolingo rely on Amazon SES to deliver their email at scale. For more information on how you can use Amazon SES, visit our website.

Auto-reply to incoming emails using Amazon Simple Email Service (SES)

Post Syndicated from Ilya Pupko original https://aws.amazon.com/blogs/messaging-and-targeting/auto-reply-to-incoming-emails-using-amazon-simple-email-service-ses/

Both Amazon Pinpoint and Amazon Simple Email Service (SES) are known for their ability to send transactional and promotional emails at scale and with ease. However, neither is typically set up to receive email replies, and owners often assume that the “no-reply” addresses they use need little consideration. As a result, a customer who does reply gets an unhelpful server rejection indicating that the address is invalid. They also cannot unsubscribe via a simple reply, an otherwise established common practice, and no automated guidance tells them that the address is unmonitored or whom to contact for assistance. In summary: a very unprofessional experience.

If you have full control over the DNS and are not already receiving email at the subdomain used for these messages, you can follow this short guide. It walks you through all the setup needed to send automated, templated responses to mail arriving at any address on the domain, including the address you use to send emails. Following this post ensures that your Amazon SES and Amazon Pinpoint setup reflects common configuration and business best practices, with a professional auto-reply for emails sent to your configured sending addresses.

Solution overview

The proposed solution does not rely on any additional services, and it adds no charges beyond those directly associated with receiving and sending the emails and the minimal AWS Lambda function providing the automated logic. It relies on the built-in capability of SES to receive emails, native Amazon Pinpoint templates, and Lambda for basic orchestration.

[Diagram: Lambda-based auto-reply flow]

Note: in this walkthrough and the related code, we use Amazon Pinpoint templates because they can be managed and maintained directly in the console, but you can choose to use SES templates (via the CreateTemplate API) or, if it makes better sense in your scenario, even hardcode the template into the AWS Lambda function itself.

To complete the setup, all you must do is follow these steps:

      1. Confirm (Sub-) Domain setup in SES (even if you use Amazon Pinpoint to send your emails out, the SES portion of the console should show the validated domain as well). See SES Developer Guide.
      2. Ensure that your SES domain is verified and you are out of the sandbox. If still in the sandbox, you can only send emails to the Amazon SES mailbox simulator addresses and email addresses/domains that you have pre-verified. See Moving out of the Amazon SES sandbox.
      3. Configure SES to receive incoming emails. Please note that this must be done on the whole subdomain you use, not just a single email address. See Setting up Amazon SES email receiving.
      4. Create/add the template you want to use via Amazon Pinpoint. Switch the console over to Amazon Pinpoint, select Message templates, click Create, select Email, and fill out the rest of the self-explanatory fields.
        1. The plaintext portion is optional – you can either skip it or fill it out and enable it in the Lambda function we deploy in the next step.
        2. Similarly, if you prefer to use an SES template, you can; just use the associated line in that same code.
        3. The same goes for a hardcoded template, if you prefer that.
      5. Use the pre-defined CloudFormation template to create the required SES receipt rule and Lambda function, which processes the incoming email and sends back the response, all using the code shared in the dedicated portion of our GitHub AWS Digital User Engagement Reference Architectures repository. Specifically:
          1. Download the YAML from SES_Auto_Reply.yaml.
          2. Go to CloudFormation in the AWS Management Console. (Remember to choose the region you want it deployed in.)
          3. Click Create Stack and then choose With new resources.
          4. Leave the default “Template is ready”, but switch to “Upload a template file” and choose the file you just downloaded.
          5. Follow the wizard to give the stack a name and enter the name of the template you created in step 4.
          6. Optionally, you can also set the default response address, the addresses and/or domains you want to limit the auto-response to, and the incoming email rule set it should be stored under (the default is fine unless you have manually adjusted it in the past).
      6. Once deployed, the behavior is active immediately, and you can further adjust any of these elements.


Conclusion and what’s next?

This architecture, once deployed, sends out the templated auto-response using the SES/Pinpoint domain/email address it received the original email on.

The new rule is added to the SES email receiving rule set to allow further customization:

  1. The rule can be limited to a specific email address or domain, or it can apply across all domains.
  2. It can also have a default response address set, or it can reuse the address that the original rejected email was sent to.
  3. It can be moved down in priority, with other rules taking precedence and possibly even overriding it.
  4. It can have other actions added to it, such as notifying Amazon SNS for additional tracking.

The Lambda function looks up the chosen Amazon Pinpoint template and uses it to reply. Here are some of the customizations you may want to consider within this function and the template (a simplified sketch of the function follows this list):

  1. When sending the automated reply, by default, the template’s configured subject is appended with the original incoming email subject. You can adjust this to better fit your company’s brand.
  2. By default, the function supports two optional template tags, %%NAME%% and %%ID%%. If %%NAME%% appears in the template, it is automatically replaced with the original email’s FROM address; if %%ID%% appears, it is replaced with the SES message ID of the original email, to help with any required audits.
  3. It is assumed that no additional tracking or actions are needed on such rejected and auto-replied emails, but you can further modify the flow by moving the rule around and adding more actions (as mentioned above), and you can even specify a particular SES configuration set for the outgoing emails.
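For orientation, here is a simplified, illustrative sketch of the kind of logic the function performs; the production version lives in the GitHub repository linked above, and the SDK shapes and environment variable below are assumptions on my part (AWS SDK for JavaScript v3, Pinpoint and SESv2 clients):

    // Simplified sketch of the auto-reply function: read the inbound SES event,
    // fetch the Pinpoint template, substitute tags, and reply.
    const { PinpointClient, GetEmailTemplateCommand } = require("@aws-sdk/client-pinpoint");
    const { SESv2Client, SendEmailCommand } = require("@aws-sdk/client-sesv2");

    const pinpoint = new PinpointClient({});
    const ses = new SESv2Client({});
    const TEMPLATE_NAME = process.env.TEMPLATE_NAME; // assumed to be set by the stack

    exports.handler = async (event) => {
      // Metadata of the inbound message delivered by the SES receipt rule.
      const mail = event.Records[0].ses.mail;

      // Look up the Pinpoint email template chosen at deploy time.
      const { EmailTemplateResponse: tpl } = await pinpoint.send(
        new GetEmailTemplateCommand({ TemplateName: TEMPLATE_NAME })
      );

      // Substitute the optional %%NAME%% and %%ID%% tags.
      const html = (tpl.HtmlPart || "")
        .replace(/%%NAME%%/g, mail.source)
        .replace(/%%ID%%/g, mail.messageId);

      // Reply from the address the original email was sent to.
      await ses.send(new SendEmailCommand({
        FromEmailAddress: mail.destination[0],
        Destination: { ToAddresses: [mail.source] },
        Content: {
          Simple: {
            Subject: { Data: `${tpl.Subject} - ${mail.commonHeaders.subject}` },
            Body: { Html: { Data: html } },
          },
        },
      }));
    };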

Are you using this flow as a baseline for a more complex business flow, or do you have other questions about it? We want to hear back – please comment here or file an issue in the GitHub repository. If you want to file a pull request to make it even more useful for others, please do so; we appreciate community participation.

If you liked this article, know that we are continually expanding our Amazon Pinpoint and SES architecture references and publishing new solutions for these and other services. For the most recent SES documentation, please see the official SES documentation site, and for Amazon Pinpoint, please see the Amazon Pinpoint documentation site.

Analyze and improve email campaigns with Amazon Simple Email Service and Amazon QuickSight

Post Syndicated from Apoorv Gakhar original https://aws.amazon.com/blogs/messaging-and-targeting/analyze-and-improve-email-campaigns-with-amazon-simple-email-service-and-amazon-quicksight/

Email is a popular channel for applications, used in both marketing campaigns and other outbound customer communications. The challenge with email is that it can become increasingly complex to manage for companies that send large quantities of messages each month. This is especially true when companies need to measure detailed email engagement metrics to track campaign success.

As a marketer, you want to monitor several metrics, including open rates, click-through rates, bounce rates, and delivery rates. If you do not track your email results, you could potentially be wasting your campaign resources. Monitoring and interpreting your sending results can help you deliver the best content possible to your subscribers’ inboxes, and it can also ensure that your IP reputation stays high. Mailbox providers prioritize inbox placement for senders that deliver relevant content. As a business professional, tracking your emails can also help you stay on top of hot leads and important clients. For example, if someone has opened your email multiple times in one day, it might be a good idea to send out another follow-up email to touch base.

Building a large-scale email solution is a complex and expensive challenge for any business: you would need to build the infrastructure, assemble your network, and warm up your IP addresses. Alternatively, working with some third-party email solutions requires contract negotiations and upfront costs.

Fortunately, Amazon Simple Email Service (SES) has a highly scalable and reliable backend infrastructure that removes these obstacles. It offers content filtering techniques, reputation management features, and a vast array of analytics and reporting functions. These features help email senders reach their audiences and make it easier to manage email channels across applications. Amazon SES also provides API operations for monitoring your sending activity, and you can publish sending events to Amazon CloudWatch, Amazon Kinesis Data Firehose, or Amazon Simple Notification Service (SNS).

In this post, you learn how to build and automate a serverless architecture that analyzes email events, exploring how to track important metrics such as email open and click rates.

Solution overview

The metrics that you can measure using Amazon SES are referred to as email sending events. You can retrieve Amazon SES event data through Amazon CloudWatch, or receive it through Amazon SNS. In this post, however, we use Amazon Kinesis Data Firehose to monitor sending activity.

You enable an Amazon SES configuration set with open and click metrics and publish email sending events to Amazon Kinesis Data Firehose as JSON records. A Lambda function parses the JSON records and writes the extracted content to an Amazon S3 bucket.
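For orientation, here is an abridged, illustrative sketch of the JSON that arrives for an open event; the full schema is in the SES event publishing documentation, and all values below are placeholders:

    // Abridged SES "Open" event record as seen by the transformation function.
    const sampleOpenEvent = {
      eventType: "Open",
      mail: {
        messageId: "EXAMPLE7c191be45",          // SES message ID
        destination: ["recipient@example.com"], // recipient list
        // ...additional message metadata...
      },
      open: {
        ipAddress: "192.0.2.1",                 // documentation IP, placeholder
        timestamp: "2021-01-01T00:00:00.000Z",
        userAgent: "Mozilla/5.0",
      },
    };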

Ingested data lands in an Amazon S3 bucket that we refer to as the raw zone. To make that data available for querying, you catalog its schema in the AWS Glue Data Catalog: you create and run an AWS Glue crawler that crawls your data source and constructs your Data Catalog. The Data Catalog uses pre-built classifiers for many popular source formats and data types, including JSON, CSV, and Parquet.

When the crawler is finished creating the table definition and schema, you analyze the data using Amazon Athena. It is an interactive query service that makes it easy to analyze data in Amazon S3 using SQL. Point to your data in Amazon S3, define the schema, and start querying using standard SQL, with most results delivered in seconds.

Now you can build visualizations, perform ad hoc analysis, and quickly get business insights from the Amazon SES event data using Amazon QuickSight. You can run SQL queries with Amazon Athena against the data stored in Amazon S3 and build business dashboards on top of them in Amazon QuickSight.

Deploying the architecture:

Configuring Amazon Kinesis Data Firehose to write to Amazon S3:

  1. Navigate to Amazon Kinesis in the AWS Management Console. Choose Kinesis Data Firehose and create a delivery stream.
  2. Enter “SES_Firehose_Demo” as the delivery stream name.
  3. Under the source category, select “Direct PUT or other sources”.
  4. On the next page, make sure to enable data transformation of source records with AWS Lambda. We use AWS Lambda to parse the notification contents so that we process only the information required for this use case.
  5. Click “Create New” to create the Lambda function.
  6. Choose the “General Kinesis Data Firehose Processing” Lambda blueprint; this opens the Lambda console. Enter the following values:
    • Name: SES-Firehose-Json-Parser
    • Execution role: Create a new role with basic Lambda permissions.
  7. Click “Create Function”. Now replace the Lambda code with the following provided code and save the function.
    • 'use strict';

      // Transform Amazon SES "Open" and "Click" events delivered by Kinesis Data
      // Firehose, keeping only the event type, destination address, and source IP.
      console.log('Loading function');

      exports.handler = (event, context, callback) => {
          /* Process the list of records and transform them */
          const output = event.records.map((record) => {
              // Firehose delivers each record base64-encoded.
              const payload = JSON.parse(Buffer.from(record.data, 'base64').toString());

              // Pick the IP address from the event section that matches the type.
              let sourceIp;
              if (payload.eventType === 'Click') {
                  sourceIp = payload.click.ipAddress;
              } else if (payload.eventType === 'Open') {
                  sourceIp = payload.open.ipAddress;
              } else {
                  // Mark any other event type as dropped instead of failing the batch.
                  return { recordId: record.recordId, result: 'Dropped', data: record.data };
              }

              const transformed = {
                  eventType: payload.eventType,
                  destinationEmailId: payload.mail.destination[0],
                  sourceIp: sourceIp,
              };

              return {
                  recordId: record.recordId,
                  result: 'Ok',
                  data: Buffer.from(JSON.stringify(transformed)).toString('base64'),
              };
          });

          console.log(`Processing completed. Records processed: ${output.length}.`);
          callback(null, { records: output });
      };

      Please note:

      For this post, we keep only three fields: eventType, destinationEmailId, and sourceIp. If you want to store other parameters, you can modify the code accordingly. For the full list of information included in these notifications, see the following document.

      https://docs.aws.amazon.com/ses/latest/DeveloperGuide/event-publishing-retrieving-firehose-examples.html

  8. Now, navigate back to your Amazon Kinesis Data Firehose console and choose the newly created Lambda function.
  9. Keep record format conversion disabled and click “Next”.
  10. In the destination, choose Amazon S3 and select a target Amazon S3 bucket. Create a new bucket if you do not want to use the existing bucket.
  11. Enter the following values for the Amazon S3 prefix and error prefix, which are applied when event data is published.
    • Prefix:
      fhbase/year=!{timestamp:yyyy}/month=!{timestamp:MM}/day=!{timestamp:dd}/hour=!{timestamp:HH}/
    • Error Prefix:
      fherroroutputbase/!{firehose:random-string}/!{firehose:error-output-type}/!{timestamp:yyyy/MM/dd}/
  12. You may use the above values for the Amazon S3 prefix and error prefix. If you use your own prefixes, make sure to update the corresponding target paths in AWS Glue later in this process.
  13. Keep the Amazon S3 backup option disabled and click “Next”.
  14. On the next page, under the Permissions section, select “Create a new role”. This opens a new tab; click “Allow” to create the role.
  15. Navigate back to the Amazon Kinesis Data Firehose console and click “Next”.
  16. Review the changes and click on “Create delivery stream”.

Configure Amazon SES to publish event data to Kinesis Data Firehose:

  1. Navigate to the Amazon SES console and select “Email Addresses” on the left side.
  2. Click “Verify a New Email Address” at the top, and enter the email address you will use to send the test email.
  3. Go to that email inbox and click the verification link. Navigate back to the Amazon SES console, and you will see a verified status on the email address you provided.
  4. In the Amazon SES console, select “Configuration Sets” on the left side.
  5. Create a new configuration set. Enter “SES_Firehose_Demo” as the configuration set name and click “Create”.
  6. Choose Kinesis Data Firehose as the destination and provide the following details.
    • Name: OpenClick
    • Event Types: Open and Click
  7. In the IAM Role field, select “Let SES make a new role”. This allows SES to create a new role with sufficient permissions for this use case.
  8. Click “Save”.

Sending a Test email:

  1. Navigate to Amazon SES console, click on “Email Addresses” on the left side.
  2. Select your verified email address and click on “Send a Test email”.
  3. Make sure you select the raw email format. You may use the following format to send a test email from the console. Make sure you send this email to a recipient inbox to which you have access.
    • X-SES-CONFIGURATION-SET: SES_Firehose_Demo
      X-SES-MESSAGE-TAGS: Email=NULL
      From: [email protected]
      To: [email protected]
      Subject: Test email
      Content-Type: multipart/alternative;
          boundary="----=_boundary"

      ------=_boundary
      Content-Type: text/html; charset=UTF-8
      Content-Transfer-Encoding: 7bit

      This is a test email.

      <a href="https://aws.amazon.com/">Amazon Web Services</a>
      ------=_boundary--
  4. Once the email arrives in the recipient’s inbox, open it and click the link inside. This generates open and click events and sends them back to SES.

Creating Glue Crawler:

  1. Navigate to the AWS Glue console, select “Crawlers” on the left side, and then click “Add crawler” at the top.
  2. Enter the crawler name as “SES_Firehose_Crawler” and click “Next”.
  3. Under Crawler source type, select “Data stores” and click “Next”.
  4. Select Amazon S3 as the data source and provide the required path. Include the path up to the “fhbase” folder.
  5. Select “no” under Add another data source section.
  6. In the IAM role section, select the option to “Create an IAM role”. Enter the name as “SES_Firehose-Crawler”. This automatically grants the necessary permissions to the newly created role.
  7. In the frequency section, select “Run on demand” and click “Next”. You may choose a different frequency for your use case.
  8. Click “Add database” and enter the name “ses_firehose_glue_db”. Click “Create”, and then click “Next”.
  9. Review your Glue crawler settings and click “Finish”.
  10. Run the crawler you just created. It crawls the data in the specified Amazon S3 bucket and creates a catalog and table definition.
  11. Now navigate to “tables” on the left, and verify a “fhbase” table is created after you run the crawler.

If you want to analyze the data stored so far, you can use Amazon Athena to test your queries. If not, you can move directly to Amazon QuickSight.

Analyzing the data using Amazon Athena:

  1. Open the Athena console and select the database created by AWS Glue.
  2. Click “set up a query result location in Amazon S3”.
  3. Navigate to the Amazon S3 bucket created in the earlier steps and create a folder called “AthenaQueryResult”. We store our Athena query results in this folder.
  4. Navigate back to Amazon Athena, select the Amazon S3 bucket with that folder location, and click “Save”.
  5. Run the following query to test the sample output, then modify your SQL to get the desired results.
    • SELECT * FROM "ses_firehose_glue_db"."fhbase"

Note: If you want to track opened emails by unique IP address, modify your SQL query accordingly. You receive a notification every time an email is opened, even if the same email was opened previously.

Visualizing the data in Amazon QuickSight dashboards:

  1. Now, let’s analyze this data in Amazon QuickSight via Amazon Athena.
  2. Log in to Amazon QuickSight and choose Manage data, then New dataset. Choose Amazon Athena as the new data source.
  3. Enter the data source name as “SES-Demo” and click “Create the data source”.
  4. From the drop-down, select the database “ses_firehose_glue_db” and the table “fhbase” that you created in AWS Glue.
  5. Add custom SQL based on your use case and click “Confirm query”.
  6. You can perform ad hoc analysis and modify your query according to your business needs. Click “Save & Visualize”.
  7. You can now visualize your event data on an Amazon QuickSight dashboard using various graph types. For this demo, the default graph is used, with two fields selected to populate it.

Conclusion:

This architecture shows how to track your email sending activity at a granular level. You set up Amazon SES to publish event data to Amazon Kinesis Data Firehose based on fine-grained email characteristics that you define. You can track several types of email sending events, including sends, deliveries, opens, clicks, bounces, complaints, rejections, rendering failures, and delivery delays. This information is useful for both operational and analytical purposes.

To get started with Amazon SES, follow this quick start guide and you can learn more about monitoring sending activity here.

About the Authors

Chirag Oswal is a solutions architect and AR/VR specialist working with the public sector in India. He works with AWS customers to help them adopt the cloud operating model at scale.

Apoorv Gakhar is a Cloud Support Engineer and an Amazon SES expert. He works with AWS customers to help them integrate their applications with various AWS services.

Additional Resources:

Amazon SES Dedicated IP Pools

Amazon Personalize optimizer using Amazon Pinpoint events

Template Personalization using Amazon Pinpoint