Updates to Layered Context Enable Teams to Quickly Understand Which Risk Signals Are Most Pressing

Post Syndicated from Pauline Logan original https://blog.rapid7.com/2023/11/28/updates-to-layered-context-enable-teams-to-quickly-understand-which-risk-signals-are-most-pressing/

Layered Context introduced a consolidated view of all security risks InsightCloudSec collects from the various layers of a cloud environment. This enabled our customers to go from visibility into individual security risks on a resource to an understanding of all the risks affecting that resource and its overall risk.

For example: let’s take a cloud resource that has a port left open to the public.

With only this level of detail, it is challenging to judge the risk level, because we don’t know enough about the resource in question, or even whether it was supposed to be open to the public. It’s not that an open port isn’t risky; we just need to know more to evaluate how risky it is. As we add more context, we start to get a clearer picture: the environment the resource is running in, whether it is connected to a business-critical application, whether it has any known vulnerabilities, whether identities with elevated permissions are associated with the resource, and so on.

By layering together all of this context, customers are able to effectively understand the actual risk associated with each and every one of their resources – in real time. This is of course helpful information to have in one consolidated view, but it can still be difficult to sift through potentially thousands of resources and prioritize the work that needs to be done to secure each one. To that end, we are excited to introduce a new risk score in Layered Context, which analyzes all the signals and context we know about a given cloud resource and automatically assigns a score and a severity, making it easy for our customers to understand the riskiest resources they should focus on.

Prioritizing Risk By Focusing on Toxic Combinations

Much like Layered Context itself, the new risk score combines a variety of risk signals, assigning a higher risk score to resources that suffer from toxic combinations, or multiple risk vectors that compound to present an increased likelihood or impact of compromise.

The risk score takes into account:

  • Business Criticality, with an understanding of what applications the resource is associated with, such as a crown-jewel or revenue-generating app
  • Public Accessibility, both from a network perspective as well as via user permissions (more on that in a second)
  • Potential Attack Paths, to understand how a bad actor could move laterally across your interconnected environment
  • Identity-related risk, including excessive and/or unused permissions and privileges
  • Misconfigurations, including whether or not the resource is in compliance with organizational standards
  • Threats, to factor in any malicious behavior that has been detected
  • And of course, Vulnerabilities, using Rapid7’s Active Risk model, which consumes data on exploitability and active exploitation in the wild

By identifying these toxic combinations, we can ensure the riskiest resources are given the highest priority. Each resource is assigned a score and a severity, making it easy for our customers to see where the riskiest resources exist in their environment and where to focus.
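
Rapid7 has not published the scoring formula itself, but the idea of compounding toxic combinations can be illustrated with a small, purely hypothetical sketch; the signals, weights, and multiplier below are invented for illustration only:

from dataclasses import dataclass

@dataclass
class ResourceSignals:
    business_critical: bool      # tied to a crown-jewel or revenue-generating app
    publicly_accessible: bool    # network- or permission-level public exposure
    on_attack_path: bool         # reachable via a potential attack path
    excessive_permissions: bool  # identity-related risk
    misconfigured: bool          # out of compliance with org standards
    active_threat: bool          # malicious behavior detected
    vulnerability_score: float   # 0-10, e.g. from an exploitability-aware model

def risk_score(s: ResourceSignals) -> float:
    score = (25 * s.business_critical + 20 * s.publicly_accessible
             + 15 * s.on_attack_path + 10 * s.excessive_permissions
             + 10 * s.misconfigured + 20 * s.active_threat
             + 2 * s.vulnerability_score)
    # Toxic combination: several high-impact signals together increase the
    # likelihood and impact of compromise, so amplify rather than just sum.
    compounding = sum([s.business_critical, s.publicly_accessible,
                       s.on_attack_path, s.active_threat])
    if compounding >= 2:
        score *= 1 + 0.25 * (compounding - 1)
    return min(score, 100.0)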

A Clear Understanding of How We Calculate Risk

Alongside our risk score, we are introducing a new view that breaks down all of the reasons why a resource was scored the way it was. It gives an overview of the most important information our customers need and clearly summarizes the factors that influenced the risk score, reducing the time required to understand why a resource is risky so security teams can focus on remediating the risks.

A Bit More on How we Determine Public Accessibility

As mentioned previously, the basis of much of our risk calculation in cloud resources stems from a simple question: “is this resource publicly accessible?” This is a critical detail in determining relative risk, but can be very difficult to ascertain given the complex and ephemeral nature of cloud environments. To address this, we’ve invested significant time and effort to ensure we’re assessing public accessibility as accurately as possible but also explaining why we’ve determined it that way, so it’s much easier to take remediation action. This determination can easily be viewed on a per resource basis from the Layered Context page.

We have lots of exciting releases coming up in the next few months. Alongside risk scoring, we are also extending our Attack Path Analysis feature to show the blast radius of an attack with improved topology visualizations. This will give our customers visibility not only into how an attacker could exploit a given resource but also into the potential for lateral movement between interconnected resources. Additionally, we’ll be updating the way we validate and show proof of public accessibility. Should a resource be publicly accessible, you will be able to easily view the proof details, which show exactly which combination of configurations is making the resource publicly accessible.

The new risk scoring capabilities in Layered Context will be on display at AWS re:Invent next week. Be sure to stop by booth #1270 to see them in action!

Simplify your SMS setup with the new Amazon Pinpoint SMS console

Post Syndicated from hamzarau original https://aws.amazon.com/blogs/messaging-and-targeting/send-sms-using-the-new-amazon-pinpoint-sms-console/

Amazon Pinpoint is a multichannel communication service that helps application developers engage their customers through communication channels such as SMS or text messaging, email, mobile push, voice, and in-app messaging.

Amazon Pinpoint SMS provides the global scale, resiliency, and flexibility required to deliver SMS and voice messaging in web, mobile, or business applications. SMS messaging is used for use cases like one-time passcode validation, time-sensitive alerts, and two-way chat because of its global reach and ubiquity. Today, Amazon Pinpoint SMS sends messages to over 240 countries and regions. In this post, we will review how to use the new Pinpoint SMS management console to get your SMS resources set up correctly the first time.

This blog walks through the setup and configuration steps for Pinpoint SMS using the management console. All setup and configuration can also be completed using the Pinpoint SMS APIs. For more information, visit the Pinpoint SMS documentation or complete the Amazon Pinpoint SMS workshop.

The Pinpoint SMS management console provides control over the existing functionality of the Pinpoint SMS APIs to create and manage your SMS and voice resources. In addition, the Pinpoint SMS console has a Quick start – SMS setup guide and a Request originator flow to guide you through the setup process and through requesting and managing your SMS resources.

If you require additional background on how SMS works using Amazon Pinpoint SMS, refer to How to Manage Global Sending of SMS with Amazon Pinpoint. Below are some important SMS concepts we’ll highlight in this blog post.

Important SMS Concepts and Resources

  • Phone pool: The phone pool resource is a collection of phone numbers and sender IDs that all share the same settings and provide failover if a number becomes unavailable.
  • Originator: An originator refers to either a phone number or sender ID.
  • Phone number: Also called an originator number, a phone number is a numeric string that identifies the sender. This can be a long code, short code, toll-free number (TFN), or 10-digit long code (10DLC). For more information, see choosing a phone number or sender ID.
  • Verified destination phone number: When your account is in Sandbox, you can only send SMS messages to phone numbers that have gone through the verification process. The phone number receives an SMS message with a verification code, and the received code must be entered into the console to complete the process.
  • Simulator phone number: A simulator phone number behaves like any other origination and destination phone number without sending the SMS message to mobile carriers. Simulator phone numbers do not require registration and are used for testing scenarios.
  • Sender ID: Also called an originator ID, a sender ID is an alphanumeric string that identifies the sender. For more information, see choosing a phone number or sender ID.
  • Registered phone number: Some countries require you to register your company’s identity before you can purchase phone numbers or sender IDs, and they also require a review of the messages that you send to recipients in their country. Registrations are processed by external third parties, so the amount of time to process a registration varies by phone number type and country. After all required registrations are complete, the status of your phone numbers changes to Active and they are available for use. For more information about which countries require registration, see supported countries and regions (SMS channel).

Getting started

Sign in to the AWS Management Console and search for Amazon Pinpoint. If you don’t have an existing AWS account, complete the following steps to create one.

In the Amazon Pinpoint console, you can choose between managing Pinpoint SMS and Pinpoint campaign orchestration. Pinpoint SMS is where application developers go to set up and configure their associated resources for SMS sending through any AWS service. Pinpoint campaign orchestration is for builders who want to manage their customer segments and send messages using campaigns or multi-step journeys. Campaign orchestration utilizes communication channels like Pinpoint SMS or Amazon SES (Simple Email Service) to deliver its messages. In this blog, we will discuss how to configure Pinpoint SMS using its management console.

Amazon Pinpoint SMS Console

Quick start – SMS setup guide

Once you’ve selected the Amazon Pinpoint SMS console, you will land on the Overview page. On this page, you get a summary of your SMS resources and the Quick start – SMS setup guide. This guide will walk you through creating the appropriate SMS resources to start sending SMS messages. The steps outlined in the Quick start guide are recommended but not required.

Step 1: Create a phone pool

A phone pool is a collection of phone numbers and sender IDs that all share the same settings and provide failover if a number becomes unavailable. Phone pools help manage number resiliency, remove complexity from sending applications, and provide a logical grouping to manage phone numbers and sender IDs. For example, phone pools can be grouped by use case, such as having a phone pool for OTP (one-time password) messages.

In the navigation pane, under Overview, in the Quick start section, choose Create pool. Under the pool setup section, enter a name for your pool in Pool name. To create a pool, you will need to select an origination identity, either a phone number or sender ID to associate with the pool. Additional origination identities can be added once the pool is created on the Phone pools page. If you don’t have an active phone number or sender ID in your account, we recommend selecting a simulator number, which can be used for testing and does not require any registration. Once you’ve selected an origination identity, you can choose Create phone pool to complete step 1.

Setting up phone pools for sending SMS
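
The console performs these steps for you, but the same resources can be created programmatically. Here is a minimal sketch of step 1 using the Pinpoint SMS and Voice v2 API via boto3; the values are illustrative:

import boto3

sms = boto3.client("pinpoint-sms-voice-v2")

# Request a US simulator number; simulator numbers need no registration
# and only deliver to simulator destination numbers.
number = sms.request_phone_number(
    IsoCountryCode="US",
    MessageType="TRANSACTIONAL",
    NumberCapabilities=["SMS"],
    NumberType="SIMULATOR",
)

# Create a pool with the simulator number as its first origination identity.
pool = sms.create_pool(
    OriginationIdentity=number["PhoneNumberId"],
    IsoCountryCode="US",
    MessageType="TRANSACTIONAL",
)
print(pool["PoolId"])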

Step 2: Create a configuration set

A configuration set is a set of rules that are applied when you send a message. For example, a configuration set can specify a destination for events related to a message. When SMS events occur (such as delivery or failure events), they are routed to the destination associated with the configuration set that you specified when you sent the message. You’re not required to use configuration sets when you send messages, but we recommend that you do. We support sending SMS and voice events to Amazon CloudWatch, Amazon Kinesis Data Firehose, and Amazon SNS.

In the navigation pane, under Overview, in the Quick start section, choose Create set. Under the Configuration set details section, enter a name in Configuration set name. For Event Destination setup, choose either the quick start option, which creates an AWS CloudFormation stack to automatically create and configure CloudWatch, Kinesis Data Firehose, and SNS to log all events, or the advanced option to manually select which event destinations you would like to set up. Once you’ve made the selection, choose Create Configuration set to complete step 2.

How to create a configuration set for sending SMS
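
A corresponding sketch of step 2 with boto3, routing all SMS events to a pre-existing SNS topic (the configuration set name and topic ARN are placeholders):

import boto3

sms = boto3.client("pinpoint-sms-voice-v2")

sms.create_configuration_set(ConfigurationSetName="otp-config-set")

# Route every SMS event type to an (assumed, pre-existing) SNS topic.
sms.create_event_destination(
    ConfigurationSetName="otp-config-set",
    EventDestinationName="all-events-to-sns",
    MatchingEventTypes=["ALL"],
    SnsDestination={"TopicArn": "arn:aws:sns:us-east-1:123456789012:sms-events"},
)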

Step 3: Test SMS sending

Send a test message using the SMS simulator. Select an originator to send from, and a destination number to send to. To track the status of your message, add a configuration set to publish SMS events.

In the navigation pane, under Overview, in the Quick start section, choose Test SMS sending. Under the Originator section, select either a phone pool, phone number, or sender ID in your account to send test messages from. Next, under the Destination phone number section, select either a simulator number or active destination number to send test messages to. If your account is in Sandbox, you can only send messages to simulator numbers or verified destination numbers. Once your account is in Production you can send messages to simulator numbers or any active destination number. You can (optionally) select a configuration set to track your SMS events. Next, under the Message body section, enter a sample message and send the test message.

Note – If you are sending from a US simulator number (or using a phone pool that only contains a US simulator number) you can only send messages to US simulator destination numbers. A simulator phone number behaves like any other phone number without sending the SMS message to mobile carriers.

SMS simulator in the SMS console
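
And a sketch of step 3 with boto3; the phone number and resource names below are placeholders, so substitute the simulator numbers and resources from your own account:

import boto3

sms = boto3.client("pinpoint-sms-voice-v2")

resp = sms.send_text_message(
    DestinationPhoneNumber="+15555550100",   # placeholder: use a simulator destination number from your console
    OriginationIdentity="pool-id-from-step-1",  # placeholder pool ID (or a phone number/sender ID)
    MessageBody="Test message from the Pinpoint SMS simulator",
    ConfigurationSetName="otp-config-set",   # configuration set from step 2 (assumed name)
)
print(resp["MessageId"])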

Step 4: Request production Access

Finally, if your account is in Sandbox, there are limits on the amount you can spend, and you can only send to verified destination phone numbers. Request that your account be moved from Sandbox to Production to remove these limits. To move to Production, open a case with AWS Support Center.

Conclusion

After requesting production access, you’ve completed the recommended steps to get your account configured. You have now set up and tested the following resources in your account:

  • Phone pool: A phone pool is a collection of phone numbers and sender IDs that all share the same settings and provide failover if a number becomes unavailable. Phone pools help manage number resiliency, remove complexity from sending applications, and provide a logical grouping to manage phone numbers and sender IDs.
    • Originator: As part of the pool setup, you are required to associate at least one originator with the phone pool. An originator refers to either a phone number or sender ID. If you selected a simulator number and would now like to request a new phone number or sender ID, you can do so by following the Request originator flow.
  • Configuration set: A configuration set allows you to organize, track, and configure logging of your SMS events, specifying where to publish them by adding event destinations.

Next steps

To request additional originators such as phone numbers or sender IDs, you can follow the Request Originator flow in the management console. If your originator requires registrations and is supported, you can self-service the phone number or sender ID registration in the management console.

An Overview of Bulk Sender Changes at Yahoo/Gmail

Post Syndicated from Dustin Taylor original https://aws.amazon.com/blogs/messaging-and-targeting/an-overview-of-bulk-sender-changes-at-yahoo-gmail/

In a move to safeguard user inboxes, Gmail and Yahoo Mail announced a new set of requirements for senders effective from February 2024. Let’s delve into the specifics and what Amazon Simple Email Service (Amazon SES) customers need to do to comply with these requirements.

What are the new email sender requirements?

The new requirements include long-standing best practices that all email senders should adhere to in order to achieve good deliverability with mailbox providers. What’s new is that Gmail, Yahoo Mail, and other mailbox providers will require alignment with these best practices for those who send more than 5,000 bulk messages per day, or whose messages are marked as spam by a significant number of recipients.

The requirements can be distilled into three categories: 1) stricter adherence to domain authentication, 2) giving recipients an easy way to unsubscribe from bulk mail, and 3) monitoring spam complaint rates and keeping them under a 0.3% threshold.

* This blog was originally published in November 2023, and updated on January 12, 2024 to clarify timelines, and to provide links to additional resources.

1. Domain authentication

Mailbox providers will require domain-aligned authentication with DKIM and SPF, and they will be enforcing DMARC policies for the domain used in the From header of messages. For example, gmail.com will be publishing a quarantine DMARC policy, which means that unauthorized messages claiming to be from Gmail will be sent to Junk folders.

Read Amazon SES: Email Authentication and Getting Value out of Your DMARC Policy to gain a deeper understanding of SPF and DKIM domain-alignment and maximize the value from your domain’s DMARC policy.

The following steps outline how Amazon SES customers can adhere to the domain authentication requirements:

Adopt domain identities: Amazon SES customers who currently rely primarily on email address identities will need to adopt verified domain identities to achieve better deliverability with mailbox providers. By using a verified domain identity with SES, your messages will have a domain-aligned DKIM signature.

Not sure what domain to use? Read Choosing the Right Domain for Optimal Deliverability with Amazon SES for additional best practice guidance regarding sending authenticated email. 

Configure a Custom MAIL FROM domain: To further align with best practices, SES customers should also configure a custom MAIL FROM domain so that SPF is domain-aligned.

The table below illustrates the three scenarios based on the type of identity you use with Amazon SES:

Scenarios using example.com in the From header | DKIM authenticated identifier | SPF authenticated identifier | DMARC authentication results
user@example.com as a verified email address identity | amazonses.com | email.amazonses.com | Fail – DMARC analysis fails because the sending domain does not have a matching DKIM signature or SPF record.
example.com as a verified domain identity | example.com | email.amazonses.com | Success – the DKIM signature aligns with the sending domain, which causes DMARC checks to pass.
example.com as a verified domain identity, with bounce.example.com as a custom MAIL FROM domain | example.com | bounce.example.com | Success – DKIM and SPF are both aligned with the sending domain.

Figure 1: Three scenarios based on the type of identity used with Amazon SES. Using a verified domain identity and configuring a custom MAIL FROM domain will result in both DKIM and SPF being aligned to the From header domain’s DMARC policy.

Be strategic with subdomains: Amazon SES customers should consider a strategic approach to the domains and subdomains used in the From header for different email sending use cases. For example, use the marketing.example.com verified domain identity for sending marketing mail, and use the receipts.example.com verified domain identity to send transactional mail.

Why? Marketing messages may have higher spam complaint rates and would need to adhere to the bulk sender requirements, but transactional mail, such as purchase receipts, would not necessarily have spam complaints high enough to be classified as bulk mail.

Publish DMARC policies: Publish a DMARC policy for your domain(s). The domain you use in the From header of messages needs a policy, set with the p= tag in the domain’s DMARC record in DNS. The policy can be set to “p=none” to adhere to the bulk sending requirements, and can later be changed to quarantine or reject once you have ensured that all email using the domain is authenticated with DKIM or SPF domain-aligned identifiers.
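
As a concrete illustration, a minimal starting record published at _dmarc.example.com might look like the following DNS TXT record (the reporting address is a placeholder):

; Illustrative DMARC record; start with p=none, then move to quarantine
; or reject once DKIM/SPF are domain-aligned for all mail streams.
_dmarc.example.com.  IN  TXT  "v=DMARC1; p=none; rua=mailto:dmarc-reports@example.com"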

2. Set up an easy unsubscribe for email recipients

Bulk senders are expected to include a mechanism to unsubscribe by adding an easy to find link within the message. The February 2024 mailbox provider rules will require senders to additionally add one-click unsubscribe headers as defined by RFC 2369 and RFC 8058. These headers make it easier for recipients to unsubscribe, which reduces the rate at which recipients will complain by marking messages as spam.

There are many factors that could result in your messages being classified as bulk by any mailbox provider. Volume over 5,000 messages per day is one factor, but the primary factor mailbox providers use is whether the recipient actually wants to receive the mail.

If you aren’t sure if your mail is considered bulk, monitor your spam complaint rates. If the complaint rates are high or growing, it is a sign that you should offer an easy way for recipients to unsubscribe.

How to adhere to the easy unsubscribe requirement

The following steps outline how Amazon SES customers can adhere to the easy unsubscribe requirement:

Add one-click unsubscribe headers to the messages you send: Amazon SES customers sending bulk or potentially unwanted messages will need to implement an easy way for recipients to unsubscribe, which they can do using the SES subscription management feature.

Mailbox providers are requiring that large senders give recipients the ability to unsubscribe from bulk email in one click using the one-click unsubscribe header; however, it is acceptable for the unsubscribe link in the message to direct the recipient to a landing page to confirm their opt-out preferences.

To set up one-click unsubscribe without using the SES subscription management feature, include both of these headers in outgoing messages:

  • List-Unsubscribe-Post: List-Unsubscribe=One-Click
  • List-Unsubscribe: <https://example.com/unsubscribe/example>

When a recipient unsubscribes using one-click, you receive this POST request:

POST /unsubscribe/example HTTP/1.1
Host: example.com
Content-Type: application/x-www-form-urlencoded
Content-Length: 26
List-Unsubscribe=One-Click
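
If you construct messages yourself rather than using the SES subscription management feature, a sketch of adding these two headers and sending the message as raw MIME through the SESv2 API could look like this (addresses and URL are placeholders):

import boto3
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "sender@example.com"          # placeholder addresses
msg["To"] = "recipient@example.com"
msg["Subject"] = "Monthly newsletter"
# The two one-click unsubscribe headers defined by RFC 8058:
msg["List-Unsubscribe"] = "<https://example.com/unsubscribe/example>"
msg["List-Unsubscribe-Post"] = "List-Unsubscribe=One-Click"
msg.set_content("Hello! Use the unsubscribe link to stop receiving these messages.")

sesv2 = boto3.client("sesv2")
# For raw content, SES reads the sender and recipients from the MIME headers.
sesv2.send_email(Content={"Raw": {"Data": msg.as_bytes()}})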

Gmail’s FAQ and Yahoo’s FAQ both clarify that the one-click unsubscribe requirement will not be enforced until June 2024 as long as the bulk sender has a functional unsubscribe link clearly visible in the footer of each message.

Honor unsubscribe requests within two days: Verify that your unsubscribe process immediately removes the recipient from receiving similar future messages. Mailbox providers are requiring that bulk senders give recipients the ability to unsubscribe from email in one click, and that senders process unsubscribe requests within two days.

If you adopt the SES subscription management feature, make sure you integrate the recipient opt-out preferences with the source of your email sending lists. If you implement your own one-click unsubscribe (for example, using Amazon API Gateway and an AWS Lambda function), make sure it is designed to suppress sending to email addresses in your source email lists.
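
As a hypothetical sketch of that pattern, a Lambda function behind API Gateway could record the one-click POST in a suppression table that your sending pipeline checks before every send; the table name and token lookup are assumptions:

import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("email-suppression-list")  # hypothetical table name

def lookup_email_for_token(token):
    # Stub: resolve the opaque unsubscribe token to a recipient address,
    # e.g. by verifying a signed token or reading a DynamoDB mapping.
    raise NotImplementedError

def handler(event, context):
    # API Gateway proxy event for POST /unsubscribe/{token}
    token = event["pathParameters"]["token"]
    email = lookup_email_for_token(token)
    table.put_item(Item={"email": email, "reason": "one-click-unsubscribe"})
    return {"statusCode": 200, "body": "Unsubscribed"}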

Review your email list building practices: Ensure responsible email practices by refraining from purchasing email lists, safeguarding opt-in forms from bot abuse, verifying recipients’ preferences through confirmation messages, and abstaining from automatically enrolling recipients in categories that were not requested.

Having good list opt-in hygiene is the best way to ensure that you don’t have high spam complaint rates before you adhere to the new required best practices. To learn more, read What is a Spam Trap, and Why You Should Care.

3. Monitor spam rates

Mailbox providers will require that all senders keep spam complaint rates below 0.3% to avoid having their email treated as spam by the mailbox provider. The following steps outline how Amazon SES customers can meet the spam complaint rate requirement:

Enroll with Google Postmaster Tools: Amazon SES customers should enroll with Google Postmaster Tools to monitor their spam complaint rates for Gmail recipients.

Gmail recommends spam complaint rates stay below 0.1%. If you send to a mix of Gmail recipients and recipients on other mailbox providers, the spam complaint rates reported by Gmail’s Postmaster Tools are a good indicator of your spam complaint rates at mailbox providers who don’t let you view metrics.

Enable Amazon SES Virtual Deliverability Manager: Enable Virtual Deliverability Manager (VDM) in your Amazon SES account. Customers can use VDM to monitor bounce and complaint rates for many mailbox providers. Amazon SES recommends that customers monitor reputation metrics and stay below a 0.1% complaint rate.

Segregate and secure your sending using configuration sets: In addition to segregating sending use cases by domain, Amazon SES customers should use configuration sets for each sending use case.

Using configuration sets will allow you to monitor your sending activity and implement restrictions with more granularity. You can even pause the sending of a configuration set automatically if spam complaint rates exceed your tolerance threshold.

Conclusion

These changes are planned for February 2024, but be aware that the exact timing and methods used by each mailbox provider may vary. If you experience any deliverability issues with any mailbox provider prior to February, it is in your best interest to adhere to these required best practices as a first step.

We hope that this blog clarifies any areas of confusion on this change and provides you with the information you need to be prepared for February 2024. Happy sending!

Transforming transactions: Streamlining PCI compliance using AWS serverless architecture

Post Syndicated from Abdul Javid original https://aws.amazon.com/blogs/security/transforming-transactions-streamlining-pci-compliance-using-aws-serverless-architecture/

Compliance with the Payment Card Industry Data Security Standard (PCI DSS) is critical for organizations that handle cardholder data. Achieving and maintaining PCI DSS compliance can be a complex and challenging endeavor. Serverless technology has transformed application development, offering agility, performance, cost, and security benefits.

In this blog post, we examine the benefits of using AWS serverless services and highlight how you can use them to help align with your PCI DSS compliance responsibilities. You can remove additional undifferentiated compliance heavy lifting by building modern applications with abstracted AWS services. We review an example payment application and workflow that uses AWS serverless services and showcases the potential reduction in effort and responsibility that a serverless architecture could provide to help align with your compliance requirements. We present the review through the lens of a merchant that has an ecommerce website and include key topics such as access control, data encryption, monitoring, and auditing—all within the context of the example payment application. We don’t discuss additional service provider requirements from the PCI DSS in this post.

This example can help you navigate the intricate landscape of PCI DSS compliance, letting you focus on building robust and secure payment solutions without getting lost in the complexities of compliance. It can also help reduce your compliance burden and empower you to develop your own secure, scalable applications. Join us in this journey as we explore how AWS serverless services can help you meet your PCI DSS compliance objectives.

Disclaimer

This document is provided for the purposes of information only; it is not legal advice, and should not be relied on as legal advice. Customers are responsible for making their own independent assessment of the information in this document. This document: (a) is for informational purposes only, (b) represents current AWS product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers or licensors. AWS products or services are provided “as is” without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

AWS encourages its customers to obtain appropriate advice on their implementation of privacy and data protection environments, and more generally, applicable laws and other obligations relevant to their business.

PCI DSS v4.0 and serverless

In April 2022, the Payment Card Industry Security Standards Council (PCI SSC) updated the payment security standard to “address emerging threats and technologies and enable innovative methods to combat new threats.” Two of the high-level goals of these updates are enhancing validation methods and procedures and promoting security as a continuous process. Adopting serverless architectures can help meet some of the new and updated requirements in version 4.0, such as enhanced software and encryption inventories. If a customer has access to change a configuration, it’s the customer’s responsibility to verify that the configuration meets PCI DSS requirements. There are more than 20 PCI DSS requirements applicable to Amazon Elastic Compute Cloud (Amazon EC2). To fulfill these requirements, customer organizations must implement controls such as file integrity monitoring, operating system level access management, system logging, and asset inventories. Using AWS abstracted services in this scenario can remove undifferentiated heavy lifting from your environment. With abstracted AWS services, because there is no operating system to manage, AWS becomes responsible for maintaining consistent time settings for an abstracted service to meet Requirement 10.6. This also shifts your compliance focus more towards your application code and data.

This makes more of your PCI DSS responsibility addressable through the AWS PCI DSS Attestation of Compliance (AOC) and Responsibility Summary. This attestation package is available to AWS customers through AWS Artifact.

Reduction in compliance burden

You can use three common architectural patterns within AWS to design payment applications and meet PCI DSS requirements: infrastructure, containerized, and abstracted. We look into EC2 instance-based architecture (infrastructure or containerized patterns) and modernized architectures using serverless services (abstracted patterns). While both approaches can help align with PCI DSS requirements, there are notable differences in how they handle certain elements. EC2 instances provide more control and flexibility over the underlying infrastructure and operating system, assisting you in customizing security measures based on your organization’s operational and security requirements. However, this also means that you bear more responsibility for configuring and maintaining security controls applicable to the operating systems, such as network security controls, patching, file integrity monitoring, and vulnerability scanning.

On the other hand, serverless architectures similar to the preceding example can reduce much of the infrastructure management requirements. This can relieve you, the application owner or cloud service consumer, of the burden of configuring and securing those underlying virtual servers. This can streamline meeting certain PCI requirements, such as file integrity monitoring, patch management, and vulnerability management, because AWS handles these responsibilities.

Using serverless architecture on AWS can significantly reduce the PCI compliance burden. Approximately 43 percent of the overall PCI compliance requirements, encompassing both technical and non-technical tests, are addressed by the AWS PCI DSS Attestation of Compliance.

Customer responsible: 52% | AWS responsible: 43% | N/A: 5%

The following table provides an analysis of each PCI DSS requirement against the serverless architecture in Figure 1, which shows a sample payment application workflow. You must evaluate your own use and secure configuration of AWS workloads and architectures for a successful audit.

PCI DSS 4.0 requirements | Test cases | Customer responsible | AWS responsible | N/A
Requirement 1: Install and maintain network security controls | 35 | 13 | 22 | 0
Requirement 2: Apply secure configurations to all system components | 27 | 16 | 11 | 0
Requirement 3: Protect stored account data | 55 | 24 | 29 | 2
Requirement 4: Protect cardholder data with strong cryptography during transmission over open, public networks | 12 | 7 | 5 | 0
Requirement 5: Protect all systems and networks from malicious software | 25 | 4 | 21 | 0
Requirement 6: Develop and maintain secure systems and software | 35 | 31 | 4 | 0
Requirement 7: Restrict access to system components and cardholder data by business need-to-know | 22 | 19 | 3 | 0
Requirement 8: Identify users and authenticate access to system components | 52 | 43 | 6 | 3
Requirement 9: Restrict physical access to cardholder data | 56 | 3 | 53 | 0
Requirement 10: Log and monitor all access to system components and cardholder data | 38 | 17 | 19 | 2
Requirement 11: Test security of systems and networks regularly | 51 | 22 | 23 | 6
Requirement 12: Support information security with organizational policies | 56 | 44 | 2 | 10
Total | 464 | 243 | 198 | 23
Percentage | | 52% | 43% | 5%

Note: The preceding table is based on the example reference architecture that follows. The actual extent of PCI DSS requirements reduction can vary significantly depending on your cardholder data environment (CDE) scope, implementation, and configurations.

Sample payment application and workflow

This example serverless payment application and workflow in Figure 1 consists of several interconnected steps, each using different AWS services. The steps are listed in the following text and include brief descriptions. They cover two use cases within this example application — consumers making a payment and a business analyst generating a report.

The example outlines a basic serverless payment application workflow using AWS serverless services. However, it’s important to note that the actual implementation and behavior of the workflow may vary based on specific configurations, dependencies, and external factors. The example serves as a general guide and may require adjustments to suit the unique requirements of your application or infrastructure.

Several factors, including but not limited to, AWS service configurations, network settings, security policies, and third-party integrations, can influence the behavior of the system. Before deploying a similar solution in a production environment, we recommend thoroughly reviewing and adapting the example to align with your specific use case and requirements.

Keep in mind that AWS services and features may evolve over time, and new updates or changes may impact the behavior of the components described in this example. Regularly consult the AWS documentation and ensure that your configurations adhere to best practices and compliance standards.

This example is intended to provide a starting point and should be considered as a reference rather than an exhaustive solution. Always conduct thorough testing and validation in your specific environment to ensure the desired functionality and security.

Figure 1: Serverless payment architecture and workflow

  • Use case 1: Consumers make a payment
    1. Consumers visit the e-commerce payment page to make a payment.
    2. The request is routed to the payment application’s domain using Amazon Route 53, which acts as a DNS service.
    3. The payment page is protected by AWS WAF to inspect the initial incoming request for any malicious patterns, web-based attacks (such as cross-site scripting (XSS) attacks), and unwanted bots.
    4. An HTTPS GET request (over TLS) is sent to the public target IP. Amazon CloudFront, a content delivery network (CDN), acts as a front-end proxy and caches and fetches static content from an Amazon Simple Storage Service (Amazon S3) bucket.
    5. AWS WAF inspects the incoming request for any malicious patterns; if the request is blocked, it doesn’t return static content from the S3 bucket.
    6. User authentication and authorization are handled by Amazon Cognito, providing secure login and a scalable customer identity and access management (CIAM) system.
    7. AWS WAF processes the request to protect against web exploits, then Amazon API Gateway forwards it to the payment application API endpoint.
    8. API Gateway launches AWS Lambda functions to handle payment requests. AWS Step Functions state machine oversees the entire process, directing the running of multiple Lambda functions to communicate with the payment processor, initiate the payment transaction, and process the response.
    9. The cardholder data (CHD) is temporarily cached in Amazon DynamoDB for troubleshooting and retry attempts in the event of transaction failures.
    10. A Lambda function validates the transaction details and performs necessary checks against the data stored in DynamoDB. A web notification is sent to the consumer for any invalid data.
    11. A Lambda function calculates the transaction fees.
    12. A Lambda function authenticates the transaction and initiates the payment transaction with the third-party payment provider.
    13. A Lambda function is initiated when a payment transaction with the third-party payment provider is completed. It receives the transaction status from the provider and performs multiple actions.
    14. Consumers receive real-time notifications through a web browser and email. The notifications, such as order confirmations or payment receipts, are initiated by a step function and can be integrated with external payment processors through an Amazon Simple Notification Service (Amazon SNS) or Amazon Simple Email Service (Amazon SES) web hook.
    15. A separate Lambda function clears the DynamoDB cache.
    16. The Lambda function makes entries into the Amazon Simple Queue Service (Amazon SQS) dead-letter queue for failed transactions to retry at a later time.
  • Use case 2: An admin or analyst generates the report for non-PCI data
    1. An admin accesses the web-based reporting dashboard using their browser to generate a report.
    2. The request is routed to AWS WAF to verify the source that initiated the request.
    3. An HTTPS GET request (over TLS) is sent to the public target IP. CloudFront fetches static content from an S3 bucket.
    4. AWS WAF inspects incoming requests for any malicious patterns; if the request is blocked, it doesn’t return static content from the S3 bucket. The validated traffic is sent to Amazon S3 to retrieve the reporting page.
    5. The backend requests of the reporting page pass through AWS WAF again to provide protection against common web exploits before being forwarded to the reporting API endpoint through API Gateway.
    6. API Gateway launches a Lambda function for report generation. The Lambda function retrieves data from DynamoDB storage for the reporting mechanism.
    7. The AWS Security Token Service (AWS STS) issues temporary credentials to the Lambda service in the non-PCI serverless account, allowing it to launch the Lambda function in the PCI serverless account. The Lambda function retrieves non-PCI data and writes it into DynamoDB.
    8. The Lambda function fetches the non-PCI data based on the report criteria from the DynamoDB table from the same account.

Additional AWS security and governance services that would be implemented throughout the architecture are shown in Figure 1, Label-25. For example, Amazon CloudWatch monitors and alerts on all the Lambda functions within the environment.

Label-26 demonstrates frameworks that can be used to build the serverless applications.

Scoping and requirements

Now that we’ve established the reference architecture and workflow, let’s delve into how it aligns with PCI DSS scope and requirements.

PCI scoping

Serverless services are inherently segmented by AWS, but they can be used within the context of an AWS account hierarchy to provide various levels of isolation as described in the reference architecture example.

Segregating PCI data and non-PCI data into separate AWS accounts can help in de-scoping non-PCI environments and reducing the complexity and audit requirements for components that don’t handle cardholder data.

PCI serverless production account

  • This AWS account is dedicated to handling PCI data and applications that directly process, transmit, or store cardholder data.
  • Services such as Amazon Cognito, DynamoDB, API Gateway, CloudFront, Amazon SNS, Amazon SES, Amazon SQS, and Step Functions are provisioned in this account to support the PCI data workflow.
  • Security controls, logging, monitoring, and access controls in this account are specifically designed to meet PCI DSS requirements.

Non-PCI serverless production account

  • This separate AWS account is used to host applications that don’t handle PCI data.
  • Since this account doesn’t handle cardholder data, the scope of PCI DSS compliance is reduced, simplifying the compliance process.

Note: You can use AWS Organizations to centrally manage multiple AWS accounts.

AWS IAM Identity Center (successor to AWS Single Sign-On) is used to manage user access to each account and is integrated with your existing identity provider. This helps to ensure you’re meeting PCI requirements on identity and access control for cardholder data and the environment.

Now, let’s look at the PCI DSS requirements that this architectural pattern can help address.

Requirement 1: Install and maintain network security controls

  • Network security controls are limited to AWS Identity and Access Management (IAM) and application permissions because there is no customer-controlled or customer-defined network. VPC-centric requirements aren’t applicable because there is no VPC. The configuration settings for serverless services can be covered under Requirement 6 for secure configuration standards. This supports compliance with Requirements 1.2 and 1.3.

Requirement 2: Apply secure configurations to all system components

  • AWS services are single function by default and exist with only the necessary functionality enabled for the functioning of that service. This supports compliance with much of Requirement 2.2.
  • Access to AWS services is considered non-console and is only available over HTTPS through the service API. This supports compliance with Requirement 2.2.7.
  • The wireless requirements under Requirement 2.3 are not applicable, because wireless environments don’t exist in AWS environments.

Requirement 3: Protect stored account data

  • AWS is responsible for destruction of account data configured for deletion based on DynamoDB Time to Live (TTL) values (see the sketch after this list). This supports compliance with Requirement 3.2.
  • DynamoDB and Amazon S3 offer secure storage of account data, encryption by default in transit and at rest, and integration with AWS Key Management Service (AWS KMS). This supports compliance with Requirements 3.5 and 4.2.
  • AWS is responsible for the generation, distribution, storage, rotation, destruction, and overall protection of encryption keys within AWS KMS. This supports compliance with Requirements 3.6 and 3.7.
  • Manual cleartext cryptographic keys aren’t available in this solution, so Requirement 3.7.6 is not applicable.
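
A minimal sketch of the TTL configuration referenced above, assuming a hypothetical table and attribute name:

import boto3

ddb = boto3.client("dynamodb")
ddb.update_time_to_live(
    TableName="payment-transactions-cache",  # assumed table name
    TimeToLiveSpecification={"Enabled": True, "AttributeName": "expires_at"},
)
# Each cached item must then carry an expires_at attribute holding an
# epoch-seconds timestamp after which DynamoDB deletes the item.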

Requirement 4: Protect cardholder data with strong cryptography during transmission over open, public networks

  • AWS Certificate Manager (ACM) integrates with API Gateway and enables the use of trusted certificates and HTTPS (TLS) for secure communication between clients and the API. This supports compliance with Requirement 4.2.
  • Requirement 4.2.1.2 is not applicable because there are no wireless technologies in use in this solution. Customers are responsible for ensuring strong cryptography exists for authentication and transmission over other wireless networks they manage outside of AWS.
  • Requirement 4.2.2 is not applicable because no end-user technologies exist in this solution. Customers are responsible for ensuring the use of strong cryptography if primary account numbers (PAN) are sent through end-user messaging technologies in other environments.

Requirement 5: Protect all systems and networks from malicious software

  • There are no customer-managed compute resources in this example payment environment, so Requirements 5.2 and 5.3 are the responsibility of AWS.

Requirement 6: Develop and maintain secure systems and software

  • Amazon Inspector now supports Lambda functions, adding continual, automated vulnerability assessments for serverless compute. This supports compliance with Requirement 6.2.
  • Amazon Inspector helps identify vulnerabilities and security weaknesses in the payment application’s code, dependencies, and configuration. This supports compliance with Requirement 6.3.
  • AWS WAF is designed to protect applications from common attacks, such as SQL injections, cross-site scripting, and other web exploits. AWS WAF can filter and block malicious traffic before it reaches the application. This supports compliance with Requirement 6.4.2.

Requirement 7: Restrict access to system components and cardholder data by business need to know

  • IAM and Amazon Cognito allow for fine-grained role- and job-based permissions and access control. Customers can use these capabilities to configure access following the principles of least privilege and need-to-know. IAM and Cognito support the use of strong identification, authentication, authorization, and multi-factor authentication (MFA). This supports compliance with much of Requirement 7.

Requirement 8: Identify users and authenticate access to system components

  • IAM and Amazon Cognito also support compliance with much of Requirement 8.
  • Some of the controls in this requirement are usually met by the identity provider for internal access to the cardholder data environment (CDE).

Requirement 9: Restrict physical access to cardholder data

  • AWS is responsible for the destruction of data in DynamoDB based on the customer configuration of content TTL values for Requirement 9.4.7. Customers are responsible for ensuring appropriate removal of data by enabling TTL on the relevant DynamoDB attributes.
  • Requirement 9 is otherwise not applicable for this serverless example environment because there are no physical media, electronic media not already addressed under Requirement 3.2, or hard-copy materials with cardholder data. AWS is responsible for the physical infrastructure under the Shared Responsibility Model.

Requirement 10: Log and monitor all access to system components and cardholder data

  • AWS CloudTrail provides detailed logs of API activity for auditing and monitoring purposes. This supports compliance with Requirement 10.2 and contains all of the events and data elements listed.
  • CloudWatch can be used for monitoring and alerting on system events and performance metrics. This supports compliance with Requirement 10.4.
  • AWS Security Hub provides a comprehensive view of security alerts and compliance status, consolidating findings from various security services, which helps in ongoing security monitoring and testing. Customers must enable the PCI DSS security standard, which supports compliance with Requirement 10.4.2.
  • AWS is responsible for maintaining accurate system time for AWS services. In this example, there are no compute resources for which customers can configure time. Requirement 10.6 is addressable through the AWS Attestation of Compliance and Responsibility Summary available in AWS Artifact.

Requirement 11: Test security of systems and networks regularly

  • Testing for rogue wireless activity within the AWS-based CDE is the responsibility of AWS. AWS is responsible for the management of the physical infrastructure under Requirement 11.2. Customers are still responsible for wireless testing for their environments outside of AWS, such as where administrative workstations exist.
  • AWS is responsible for internal vulnerability testing of AWS services, which supports compliance with Requirement 11.3.1.
  • Amazon GuardDuty is a threat detection service that continuously monitors for malicious activity and unauthorized access, providing continuous security monitoring. This supports the IDS requirements under Requirement 11.5.1 and covers the entire AWS-based CDE.
  • AWS Config allows customers to catalog, monitor, and manage configuration changes for their AWS resources. This supports compliance with Requirement 11.5.2.
  • Customers can use AWS Config to monitor the configuration of the S3 bucket hosting the static website. This supports compliance with Requirement 11.6.1.

Requirement 12: Support information security with organizational policies and programs

  • Customers can download the AWS AOC and Responsibility Summary package from Artifact to support Requirement 12.8.5 and the identification of which PCI DSS requirements are managed by the third-party service provider (TSPS) and which by the customer.

Conclusion

Using AWS serverless services when developing your payment application can significantly help reduce the number of PCI DSS requirements you need to meet by yourself. By offloading infrastructure management to AWS and using serverless services such as Lambda, API Gateway, DynamoDB, Amazon S3, and others, you can benefit from built-in security features and help align with your PCI DSS compliance requirements.

Contact us to help design an architecture that works for your organization. AWS Security Assurance Services is a Payment Card Industry-Qualified Security Assessor company (PCI-QSAC) and HITRUST External Assessor firm. We are a team of industry-certified assessors who help you to achieve, maintain, and automate compliance in the cloud by tying together applicable audit standards to AWS service-specific features and functionality. We help you build on frameworks such as PCI DSS, HITRUST CSF, NIST, SOC 2, HIPAA, ISO 27001, GDPR, and CCPA.

More information on how to build applications using AWS serverless technologies can be found at Serverless on AWS.

Abdul Javid

Abdul is a Senior Security Assurance Consultant and PCI DSS Qualified Security Assessor with AWS Security Assurance Services, and has more than 25 years of IT governance, operations, security, risk, and compliance experience. Abdul leverages his experience and knowledge to advise AWS customers on their compliance journey. Abdul earned an M.S. in Computer Science from IIT, Chicago and holds various industry-recognized, sought-after certifications in security and program and risk management from prominent organizations like AWS, HITRUST, ISACA, PMI, PCI DSS, and ISC2.

Ted Tanner

Ted is a Principal Assurance Consultant and PCI DSS Qualified Security Assessor with AWS Security Assurance Services, and has more than 25 years of IT and security experience. He uses this experience to provide AWS customers with guidance on compliance and security, and on building and optimizing their cloud compliance programs. He is co-author of the Payment Card Industry Data Security Standard (PCI DSS) v3.2.1 on AWS Compliance Guide and the soon-to-be-released v4.0 edition.

Tristan Watty

Dr. Watty is a Senior Security Consultant within the Professional Services team of Amazon Web Services based in Queens, New York. He is a passionate Tech Enthusiast, Influencer, and Amazonian with 15+ years of professional and educational experience with a specialization in Security, Risk, and Compliance. His zeal lies in empowering customers to develop and put into action secure mechanisms that steer them towards achieving their security goals. Dr. Watty also created and hosts an AWS Security Show named “Security SideQuest!” that airs on the AWS Twitch Channel.

Padmakar Bhosale

Padmakar is a Sr. Technical Account Manager with over 25 years of experience in the Financial, Banking, and Cloud Services. He provides AWS customers with guidance and advice on Payment Services, Core Banking Ecosystem, Credit Union Banking Technologies, Resiliency on AWS Cloud, AWS Accounts & Network levels PCI Segmentations, and Optimization of the Customer’s Cloud Journey experience on AWS Cloud.

How to prevent SMS Pumping when using Amazon Pinpoint or SNS

Post Syndicated from Akshada Umesh Lalaye original https://aws.amazon.com/blogs/messaging-and-targeting/how-to-prevent-sms-pumping-when-using-amazon-pinpoint-or-sns/

SMS fraud is, unfortunately, a common issue that all senders of SMS encounter as they adopt SMS as a communication channel. This post defines the most common types of fraud and provides concrete guidance on how to mitigate or eliminate each of them.

Introduction to SMS Pumping:

SMS Pumping, also known as an SMS Flood attack, or Artificially Inflated Traffic (AIT), occurs when fraudsters exploit a phone number input field to acquire a one-time passcode (OTP), an app download link, or any other content via SMS. In cases where these input forms lack sufficient security measures, attackers can artificially increase the volume of SMS traffic, thereby exploiting vulnerabilities in your application. The perpetrators dispatch SMS messages to a selection of numbers under the jurisdiction of a particular mobile network operator (MNO), ultimately receiving a portion of the resulting revenue. It is essential to understand how to detect these attacks and prevent them.

Common Evidence of SMS Pumping:

  • Dramatic Decrease in Conversion Rates: A common SMS use case is identity verification through one-time passwords (OTP), but this pattern can also be seen in other use cases that have a clear and consistent conversion rate. A drop in a normally stable conversion rate may be caused by an increase in volume that will never convert and can indicate an issue that requires investigation. Setting up an alert for anomalies in conversion rates is always a good practice.
  • SMS Requests or Deliveries from Unknown Countries: If your application normally sends SMS to a defined set of countries and you begin to receive requests for a different country, then this should be investigated.
  • Spike in Outgoing Messages: A significant and sudden increase in outgoing messages could indicate an issue that requires investigation.
  • Spike in Messages Sent to a Block of Adjacent Numbers: Fraudsters often deploy bots and programmatically loop through numbers in sequence, so you will probably notice an increase in messages to a group of nearby numbers, for example +11111111110 and +11111111111 (see the sketch after this list).
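
As an illustration of that last signal, destination numbers can be bucketed by a shared prefix to surface suspicious clusters; the prefix length and threshold here are assumptions to tune:

from collections import Counter

def adjacent_number_spikes(destinations, prefix_len=9, threshold=20):
    # Count sends per number prefix and flag prefixes with dense traffic.
    counts = Counter(d[:prefix_len] for d in destinations)
    return {p: n for p, n in counts.items() if n >= threshold}

# Example: +1111111111x numbers share the 9-character prefix "+11111111".
spikes = adjacent_number_spikes(["+11111111110", "+11111111111"], threshold=2)
print(spikes)  # {'+11111111': 2}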

How to Identify and Prevent SMS Pumping Attacks:

Now that we understand the common signs of SMS pumping, let’s discuss how to use AWS services to identify and confirm the fraud, and how to put measures in place to prevent it in the first place.

Identify:

Delivery Statistics (UTC)

If you are using Amazon Pinpoint, you can use the transactional messaging charts under the Analytics section to understand your SMS patterns.

Transactional Messaging Charts

  • Spikes in Messages Sent to a Block of Adjacent Numbers: If you are using SNS, you can use CloudWatch logs to analyze the destination numbers.

You can use a CloudWatch Logs Insights query on the below log groups:

sns/<region>/<Accountnumber>/DirectPublishToPhoneNumber
sns/<region>/<Accountnumber>/DirectPublishToPhoneNumber/failure

The query below will print all log entries with a destination number like +11111111111:
fields @timestamp, @message, @logStream, @log
| filter delivery.destination like '+11111111111'
| limit 20
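
If you prefer to run this check programmatically, the same query can be submitted with boto3. This is a minimal sketch; the log group name, region, account number, and one-hour window are assumptions to adapt:

import time
import boto3

logs = boto3.client("logs")

# Placeholder log group; substitute your region and account number.
LOG_GROUP = "sns/us-east-1/123456789012/DirectPublishToPhoneNumber"

query_id = logs.start_query(
    logGroupName=LOG_GROUP,
    startTime=int(time.time()) - 3600,  # look at the last hour
    endTime=int(time.time()),
    queryString=(
        "fields @timestamp, @message, @logStream, @log "
        "| filter delivery.destination like '+11111111111' "
        "| limit 20"
    ),
)["queryId"]

# Poll until the query finishes, then print each matching log event.
while True:
    result = logs.get_query_results(queryId=query_id)
    if result["status"] in ("Complete", "Failed", "Cancelled"):
        break
    time.sleep(1)

for row in result["results"]:
    print({field["field"]: field["value"] for field in row})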

If you are using Amazon Pinpoint, you can enable the event stream to analyse destination numbers.

If you have deployed the Digital User Engagement Events Database solution, you can use the sample Amazon Athena queries below, which display entries with a destination number like +11111111111:

SELECT * FROM "due_eventdb"."sms_success" where destination_phone_number like '%11111111111%'
SELECT * FROM "due_eventdb"."sms_failure" where destination_phone_number like '%11111111111%'

How to Prevent SMS Pumping: 

  • Use AWS WAF to validate phone number input: AWS WAF can inspect the phone numbers submitted to your application and allow only the countries you expect to serve.
      • Example: If you expect only users from India to sign up in your application, you can include rules such as “\+91[0-9]{10}”, which allows only Indian numbers as input.
      • Note: The SNS and Pinpoint APIs are not natively integrated with WAF. However, you can front your application with an Amazon API Gateway, which can be integrated with WAF.
      • How to Create a Regex Pattern Set with WAF – The below regex pattern set will allow sending messages only to Australia (+61) and India (+91) destination phone numbers (a programmatic sketch follows these console steps):
          1. Sign in to the AWS Management Console and navigate to AWS WAF console
          2. In the navigation pane, choose Regex pattern sets and then Create regex pattern set.
          3. Enter a name and description for the regex pattern set. You’ll use these to identify it when you want to use the set. For example, Allowed_SMS_Countries
          4. Select the Region where you want to store the regex pattern set
          5. In the Regular expressions text box, enter one regex pattern per line
          6. Review the settings for the regex pattern set, and choose Create regex pattern set
Regex pattern set details
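
If you manage WAF resources in code rather than through the console, here is a minimal boto3 sketch of the same pattern set. The exact number-length regexes are assumptions; adapt them to the numbering plans of your destination countries:

import boto3

wafv2 = boto3.client("wafv2")

# Scope is REGIONAL because the web ACL will protect an API Gateway stage.
response = wafv2.create_regex_pattern_set(
    Name="Allowed_SMS_Countries",
    Scope="REGIONAL",
    Description="Allow only Australian (+61) and Indian (+91) numbers",
    RegularExpressionList=[
        {"RegexString": r"\+61[0-9]{9}"},   # assumed AU mobile format
        {"RegexString": r"\+91[0-9]{10}"},  # assumed IN mobile format
    ],
)
print(response["Summary"]["ARN"])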

      • Create a Web ACL with the above Regex Pattern Set
          1. Sign in to the AWS Management Console and navigate to AWS WAF console
          2. In the navigation pane, choose Web ACLs and then Create web ACL
          3. Enter a Name, Description and CloudWatch metric name for Web ACL details
          4. Select Resource type as Regional resources
          5. Click Next

            Web ACL details

          6. Click on Add Rules > Add my own rules and rule groups
          7. Enter Rule name and select Regular rule

            Web ACL Rule Builder

          8. Select Inspect > Body, Content type as JSON, JSON match scope as Values, Content to inspect as Full JSON content
          9. Select Match type as Matches pattern from regex pattern set and select the Regex pattern set as “Allowed_SMS_Countries” created above
          10. Select Action as Allow
          11. Click Add Rule  

            Web ACL Rule builder statement

          12. Select Block for Default web ACL action for requests that don’t match any rules

            Web ACL Rules

          13. Set rule priority and Click Next

            Web ACL Rule priority

          14. Configure metrics and Click Next

            Web ACL metrics

          15. Review and Click Create web ACL

For more information, please refer to the Web ACL documentation.

  • Rate Limit Requests
    • AWS WAF provides an option to rate limit requests per originating IP. You can define the maximum number of requests allowed in a five-minute period that satisfy your criteria before the rule action limits further requests.
  • CAPTCHA
    • Implement CAPTCHA in your application request process to protect your application against common bot traffic
  • Turn off “Shared Routes”
  • Exponential Delay Verification Retries
    • Implement a delay between multiple messages sent to the same phone number. This doesn’t completely eliminate an attack, but it helps slow one down.
  • Set CloudWatch Alarms: Alarm on anomalies such as sudden spikes in the volume of outgoing messages.
  • Validate Phone Numbers – You can use the Pinpoint phone number validate API to check the values of CountryCodeIso2, CountryCodeNumeric, and PhoneType before sending SMS, and then only send SMS to countries that match your criteria
    Sample API Response:

{
  "NumberValidateResponse": {
    "Carrier": "ExampleCorp Mobile",
    "City": "Seattle",
    "CleansedPhoneNumberE164": "+12065550142",
    "CleansedPhoneNumberNational": "2065550142",
    "Country": "United States",
    "CountryCodeIso2": "US",
    "CountryCodeNumeric": "1",
    "OriginalPhoneNumber": "+12065550142",
    "PhoneType": "MOBILE",
    "PhoneTypeCode": 0,
    "Timezone": "America/Los_Angeles",
    "ZipCode": "98101"
  }
}
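
As a sketch of how this pre-send check might look in code, here is a minimal boto3 example; the allow-list of countries is an assumption for illustration:

import boto3

pinpoint = boto3.client("pinpoint")

ALLOWED_COUNTRIES = {"AU", "IN"}  # example allow-list; adapt to your use case

def can_send_sms(phone_number):
    """Return True only for mobile numbers in allowed countries."""
    response = pinpoint.phone_number_validate(
        NumberValidateRequest={"PhoneNumber": phone_number}
    )
    info = response["NumberValidateResponse"]
    return (
        info.get("CountryCodeIso2") in ALLOWED_COUNTRIES
        and info.get("PhoneType") == "MOBILE"
    )

if can_send_sms("+919999999999"):
    print("OK to send")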

Conclusion:

This post covered the basics of SMS pumping attacks, the different mechanisms that can be used to detect them, and some potential ways to mitigate them using services and features like the Pinpoint phone number validate API and AWS WAF.

Further Reading:
  • Using AWS WAF with Amazon API Gateway
  • Amazon Pinpoint phone number validate
  • Web access control lists (web ACLs)

Resources:
  • Amazon Pinpoint – https://aws.amazon.com/pinpoint/
  • Amazon API Gateway – https://aws.amazon.com/api-gateway/
  • Amazon Athena – https://aws.amazon.com/athena/

Automate marketing campaigns with real-time customer data using Amazon Pinpoint

Post Syndicated from Rushabh Lokhande original https://aws.amazon.com/blogs/messaging-and-targeting/automate-marketing-campaigns-with-real-time-customer-data-using-amazon-pinpoint/

Amazon Pinpoint offers marketers and developers one customizable tool to deliver customer communications across channels, segments, and campaigns at scale. Amazon Pinpoint makes it easy to run targeted campaigns and drive customer communications across different channels: email, SMS, push notifications, in-app messaging, or custom channels. Amazon Pinpoint campaigns enable you to define which users to target, determine which messages to send, schedule the best time to deliver the messages, and then track the results of your campaign.

In many cases, customer data resides in a third-party system such as a CRM, a customer data platform, a point-of-sale system, a database, or a data warehouse. This customer data represents a valuable asset for your organization, and your marketing team needs to leverage every piece of it to elevate the customer experience.

In this blog post we will demonstrate how you can leverage users’ clickstream data stored in a database to build user segments and launch campaigns using Amazon Pinpoint. We will also showcase the full architecture of the data pipeline, including other AWS services such as Amazon RDS, AWS Database Migration Service, Amazon Kinesis, and AWS Lambda.

Let us illustrate the case study with an example: a customer has digital touchpoints such as a website and a mobile app that collect users’ clickstream and behavioral data, which is stored in a MySQL database. The marketing team wants to leverage the collected data to deliver a personalized experience using Amazon Pinpoint capabilities.

You can find below the detail of a specific use case covered by the proposed solution:

  • All the clickstream and customer data are stored in MySQL DB
  • Your marketing team wants to create personalized Amazon Pinpoint campaigns based on each user’s status and experience. For example:
    • Activate campaigns for customers based on the specific offerings they are interested in
    • Communicate in each user’s preferred language

Please note that this use case is used to showcase the proposed solution’s capabilities; it is not limited to this specific scenario, since you can leverage any collected customer dimension or attribute to create a campaign that achieves a specific marketing goal.

In this post, we provide a guided journey on how marketers can collect, segment, and activate audience segments in real-time to increase their agility in managing campaigns.

Overview of solution

The use case covered in this post focuses on demonstrating the flexibility offered by Amazon Pinpoint in both the inbound (ingestion) and outbound (activation) streams of customer data. For the inbound stream, Amazon Pinpoint gives you a variety of ways to import your customer data, including:

  1. CSV/JSON import from the AWS console
  2. API operation to create a single or multiple endpoints
  3. Programmatically create and execute import jobs

We will focus on building a real-time inbound stream of customer data available within an Amazon RDS MySQL database specifically. It is important to mention that a similar approach can be implemented to ingest data from third-party systems, if any.
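
As an illustration of option 3 above, here is a minimal boto3 sketch of creating an import job; the project ID, S3 URL, and role ARN are placeholders you would replace:

import boto3

pinpoint = boto3.client("pinpoint")

# Placeholder identifiers; substitute your own project, bucket, and role.
response = pinpoint.create_import_job(
    ApplicationId="your-pinpoint-project-id",
    ImportJobRequest={
        "Format": "CSV",
        "S3Url": "s3://your-customer-data-bucket/endpoints/",
        "RoleArn": "arn:aws:iam::123456789012:role/PinpointImportRole",
        "RegisterEndpoints": True,  # create/update endpoints from the file
    },
)
print(response["ImportJobResponse"]["Id"])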

For the outbound stream, activating customer data using Amazon Pinpoint can be achieved using the following two methods:

  1. Campaign: a campaign is a messaging initiative that engages a specific audience segment.
  2. Journey: a journey is a customized, multi-step engagement experience.

The result of customer data activation cannot be complete without specifying the target channel. A channel represents the platform through which you engage your audience segment with messages. For example, Amazon Pinpoint customers can target prospective customers with notifications through channels such as LINE messages and email, delivering information such as sales and new product announcements to the appropriate audience.

Amazon Pinpoint supports the following channels:

  • Push notifications
  • Email
  • SMS
  • Voice
  • In-app messages

In addition to these channels, you can also extend the capabilities to meet your specific use case by creating custom channels. You can use custom channels to send messages to your customers through any service that has an API, including third-party services such as WhatsApp or Facebook Messenger. We will focus on developing an Amazon Pinpoint connector that uses a custom channel to target your customers on third-party services through an API.
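
To make the connector idea concrete, here is a minimal sketch of a custom channel Lambda handler; the third-party URL and payload shape are assumptions for illustration:

import json
import urllib.request

# Hypothetical third-party endpoint; replace with your service's webhook URL.
THIRD_PARTY_URL = "https://example.com/notify"

def lambda_handler(event, context):
    # Amazon Pinpoint invokes a custom channel Lambda with the targeted
    # endpoints keyed by endpoint ID under the "Endpoints" field.
    for endpoint_id, endpoint in event.get("Endpoints", {}).items():
        attributes = endpoint.get("User", {}).get("UserAttributes", {})
        payload = {
            "endpointId": endpoint_id,
            "address": endpoint.get("Address"),
            "language": attributes.get("language"),
        }
        request = urllib.request.Request(
            THIRD_PARTY_URL,
            data=json.dumps(payload).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        urllib.request.urlopen(request)  # call the third-party API
    return {"statusCode": 200}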

Solution Architecture

The below diagram illustrates the proposed architecture to address the use case. Moving from left to right:

Fig 1: Architecture Diagram for the Solution

  1. Amazon RDS: This hosts the customer database, where you can have one or many tables containing customer data.
  2. AWS Database Migration Service (DMS): This acts as the glue between Amazon RDS MySQL and the downstream services by replicating any record-level change that happens in the configured customer tables.
  3. Amazon Kinesis Data Streams: This is the destination endpoint for AWS DMS. It will carry all the changed records to the next stage of the pipeline.
  4. AWS Lambda (inbound): The inbound AWS Lambda function is triggered by the Kinesis data stream; it processes the mutated records and ingests them into Amazon Pinpoint (see the sketch after this list).
  5. Amazon Pinpoint: This acts as the centralized place to define customer segments and launch campaigns.
  6. AWS Lambda (outbound): This acts as the custom channel destination for the campaigns activated from Amazon Pinpoint.
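
To illustrate step 4, here is a minimal sketch of what the inbound function might look like; the project ID, the endpoint ID scheme, and the record shape (based on DMS’s default JSON output to Kinesis) are assumptions:

import base64
import json
import boto3

pinpoint = boto3.client("pinpoint")

# Placeholder; in the real stack this would come from an environment variable.
APPLICATION_ID = "your-pinpoint-project-id"

def lambda_handler(event, context):
    for record in event["Records"]:
        # AWS DMS writes one JSON document per change record to Kinesis.
        change = json.loads(base64.b64decode(record["kinesis"]["data"]))
        row = change.get("data", {})
        if not row.get("email"):
            continue
        pinpoint.update_endpoint(
            ApplicationId=APPLICATION_ID,
            EndpointId="email-{}".format(row["userid"]),
            EndpointRequest={
                "ChannelType": "EMAIL",
                "Address": row["email"],
                "User": {
                    "UserId": str(row["userid"]),
                    "UserAttributes": {
                        "language": [row.get("language") or "unknown"],
                        "favourites": [row.get("favourites") or "none"],
                    },
                },
            },
        )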

To illustrate how to set up this architecture, we’ll walk you through the following steps:

  1. Deploying an AWS CDK stack to provision the required AWS resources
  2. Validating the deployment
  3. Running a sample workflow to test the end-to-end data flow
  4. Cleaning up your resources

Prerequisites

Make sure that you complete the following steps as prerequisites:

The Solution

Launching your AWS CDK Stack

Step 1a: Open your device’s command line or Terminal.

Step 1b: Check out the Git repository to a local directory on your device:

git clone https://github.com/aws-samples/amazon-pinpoint-realtime-campaign-optimization-example.git

Step 2: Change directories to the new directory code location:

cd amazon-pinpoint-realtime-campaign-optimization-example

Step 3: Update your AWS account number and region:

  1. Edit config.py with your tool of choice or from the command line
  2. Look for the section “Account Setup” and update your account number and region

    Fig 2: Configuring config.py for account-id and region

  3. Look for the section “VPC Parameters” and update your VPC and subnet information

    Fig 3: Configuring config.py for VPC and subnet information

Step 4: Verify that you are in the directory where the app.py file is located:

ls -ltr app.py

Step 5: Create a virtual environment:

macOS/Linux:

python3 -m venv .env

Windows:

python -m venv .env

Step 6: Activate the virtual environment after the init process completes and the virtual environment is created:

macOS/Linux:

source .env/bin/activate

Windows:

.env\Scripts\activate.bat

Step 7: Install the required dependencies:

pip3 install -r requirements.txt

Step 8: Bootstrap the cdk app using the following command:

cdk bootstrap aws://<AWS_ACCOUNTID>/<AWS_REGION>

Replace the placeholders AWS_ACCOUNTID and AWS_REGION with your AWS account ID and the region to deploy to.
This step provisions the initial resources, including an Amazon S3 bucket for storing files and IAM roles that grant permissions needed to perform deployments.

Fig 4: Bootstrapping CDK environment

Please note that if you have already bootstrapped the same account previously, you cannot bootstrap it again; in that case, skip this step or use a new AWS account.

Step 9: Make sure that your AWS profile is set up along with the region that you want to deploy to, as mentioned in the prerequisites. Then synthesize the templates. AWS CDK apps use code to define the infrastructure, and when run they produce, or “synthesize”, a CloudFormation template for each stack defined in the application:

cdk synthesize

Step 10: Deploy the solution. By default, some actions that could potentially make security changes require approval. In this deployment, you’re creating an IAM role. The following command overrides the approval prompts, but if you would like to manually accept the prompts, then omit the --require-approval never flag:

cdk deploy "*" --require-approval never

While the AWS CDK deploys the CloudFormation stacks, you can follow the deployment progress in your terminal.

Fig 5: AWS CDK Deployment progress in terminal

Once the deployment is successful, you’ll see the successful status as follows:

Fig 6: AWS CDK Deployment completion success

Step 11: Log in to the AWS Console, go to CloudFormation, and see the output of the ApplicationStack:

Fig 7: AWS CloudFormation stack output

Note the values of the PinpointProjectId, PinpointProjectName, and RDSSecretName variables. We’ll use them in the next steps.

Testing The Solution

In this section we will create a full data flow using the below steps:

  1. Ingest data in the customer_tb table within the Amazon RDS MySQL DB instance
  2. Validate that the AWS Database Migration Service task is replicating the changes to the Amazon Kinesis data stream
  3. Validate that endpoints are created within Amazon Pinpoint
  4. Create Amazon Pinpoint Segment and Campaign and activate data to Webhook.site endpoint URL

Step 1: Connect to MySQL DB instance and create customer database

    1. Sign in to the AWS Management Console and open the AWS Cloud9 console at https://console.aws.amazon.com/cloud9 
    2. Click Create environment
      • Name: mysql-cloud9-01 (for example)
      • Click Next
      • Environment type: Create a new EC2 instance for environment (direct access)
      • Instance type: t2.micro
      • Timeout: 30 minutes
      • Platform: Amazon Linux 2
      • Network settings under VPC settings select the same VPC where the MySQL DB instance was created. (this is the same VPC and Subnet from step 3.3)
      • Click Next
      • Review and click Create environment
    3. Select the created AWS Cloud9 from the AWS Cloud9 console at https://console.aws.amazon.com/cloud9  and click Open in Cloud9. You will have access to AWS Cloud9 Linux shell.
    4. From the Linux shell, update the operating system:
      sudo yum update -y
    5. From the Linux shell, install the MySQL client:
      sudo yum install -y mysql
    6. To connect to the created MySQL RDS DB instance, use the below command in the AWS Cloud9 Linux shell:
      mysql -h <<host>> -P 3308 --user=<<username>> --password=<<password>>
      • To get the values for host, username, and password:
        • Navigate to the AWS Secrets Manager service
        • Open the secret with the name created by the CDK application
        • Select ‘Reveal secret value’ and copy the respective values and replace in your command
      • After you enter the password for the user, you should see output similar to the following.
      • Welcome to the MariaDB monitor.  Commands end with ; or \g.
        Your MySQL connection id is 27
        Server version: 8.0.32 Source distribution
        Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
        Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
        MySQL [(none)]>

Step 2: Ingest data into the customer_tb table within the Amazon RDS MySQL DB instance. Once the connection to the MySQL DB instance is established, execute the following commands using the same AWS Cloud9 Linux shell connected to the MySQL RDS DB.

  • Create database pinpoint-test-db:
    CREATE DATABASE `pinpoint-test-db`;
  • Create table customer_tb:
    Use `pinpoint-test-db`;
    CREATE TABLE `customer_tb` (`userid` int NOT NULL,
                                `email` varchar(150) DEFAULT NULL,
                                `language` varchar(45) DEFAULT NULL,
                                `favourites` varchar(250) DEFAULT NULL,
                                PRIMARY KEY (`userid`));
  • You can verify the schema using the below SQL command:
    DESCRIBE `pinpoint-test-db`.customer_tb;
    Fig 8: Verify schema for customer_tb table

    • Insert records into the customer_tb table:
    Use `pinpoint-test-db`;
    insert into customer_tb values (1,'[email protected]','english','football');
    insert into customer_tb values (2,'[email protected]','english','basketball');
    insert into customer_tb values (3,'[email protected]','french','football');
    insert into customer_tb values (4,'[email protected]','french','football');
    insert into customer_tb values (5,'[email protected]','french','basketball');
    insert into customer_tb values (6,'[email protected]','french','football');
    insert into customer_tb values (7,'[email protected]','french',null);
    insert into customer_tb values (8,'[email protected]','english','football');
    insert into customer_tb values (9,'[email protected]','english','football');
    insert into customer_tb values (10,'[email protected]','english',null);
    • Verify records in customer_tb table:
    select * from `pinpoint-test-db`.`customer_tb`;
    Fig 9: Verify data for customer_tb table

    Step 3: Validate that the AWS Database Migration Service task is replicating the changes to the Amazon Kinesis Data Streams

      1. Sign in to the AWS Management Console and open the AWS DMS console at https://console.aws.amazon.com/dms/v2 
      2. From the navigation panel, choose Database migration tasks.
      3. Click on the replication task created by the CDK code, ‘dmsreplicationtask-*’
      4. Start the replication task

        Fig 10: Starting AWS DMS Replication Task

      5. Make sure that the Status is ‘Replication ongoing’

        Fig 11: AWS DMS Replication statistics

      6. Navigate to Table statistics and make sure that the number of Inserts is equal to 10 and the Load state is ‘Table completed’

        Fig 12: AWS DMS Replication statistics

    Step 4: Validate that endpoints are created within Amazon Pinpoint

    1. Sign in to the AWS Management Console and open the Amazon Pinpoint console at https://console.aws.amazon.com/pinpoint/ 
    2. Click on the Amazon Pinpoint project created by the CDK stack, “dev-pinpoint-project”
    3. From the left menu, click on Analytics and validate that the Active targetable endpoints are equal to 10 as shown below:
    Fig 13: Amazon Pinpoint endpoint summary

    Step 5: Create Amazon Pinpoint Segment and Campaign

    Step 5.1: Create Amazon Pinpoint Segment

    • Sign in to the AWS Management Console and open the Amazon Pinpoint console at https://console.aws.amazon.com/pinpoint/
    • Click on the Amazon Pinpoint project created by the CDK stack, “dev-pinpoint-project”
    • From the left menu, click on Segments and then Create a segment
    • Create the segment using the below configuration:
      • Name: English Speakers
      • Under Criteria:
        • Attribute: Language
        • Operator: Contains
        • Value: english
    Fig 14: Amazon Pinpoint segment summary

    • Click Create segment

    Step 5.2: Create Amazon Pinpoint Campaign

    • From the left menu, click on Campaigns and then Create a campaign
    • Set the Campaign name to “test campaign” and select the Custom option for Channel as shown below:
    Fig 15: Amazon Pinpoint create campaign

    • Click Next
    • Select English Speakers from Segment as shown below and click Next:
    Fig 16: Amazon Pinpoint segment

    • Choose the Lambda function channel type and select the outbound Lambda function with the name pattern ApplicationStack-lambdaoutboundfunction* from the dropdown, as shown below:
    Fig 17: Amazon Pinpoint message creation

    • Click Next
    • Choose the At a specific time option and Immediately to send the campaign, as shown below:
    Fig 18: Amazon Pinpoint campaign scheduling

    If you push more messages or records into Amazon RDS (from step 2.4), you will need to create a new campaign (from step 4.2) to process the new messages.

    • Click Next, review the configuration and click Launch campaign.
    • Navigate to dev-pinpoint-project and select the campaign created in the previous step. You should see the status as ‘Complete’
    Fig 19: Amazon Pinpoint campaign processing status

    • Navigate to the dev-pinpoint-project dashboard and select your campaign in the ‘Campaign metrics’ dashboard; you will see the statistics for the processing.
    Fig 20: Amazon Pinpoint campaign metrics

    Accomplishments

    This is a quick summary of what we accomplished:

    1. Created an Amazon RDS MySQL DB instance and defined the customer_tb table schema
    2. Created an Amazon Kinesis Data Stream
    3. Replicated database changes from the Amazon RDS MySQL DB to Amazon Kinesis Data Stream
    4. Created an AWS Lambda function, triggered by the Amazon Kinesis data stream, to ingest database records into Amazon Pinpoint as user endpoints using the AWS SDK
    5. Created an Amazon Pinpoint Project, segment and campaign
    6. Created an AWS Lambda function as custom channel for Amazon Pinpoint campaign
    7. Tested end-to-end data flow from Amazon RDS MySQL DB instance to third party endpoint

    Next Steps

    You have now gained a good understanding of the Amazon Pinpoint data flow, but there are still many areas left for exploration. What this workshop hasn’t covered is the operation of other communication channels such as email, SMS, push notifications, and voice outbound. You can enable the channels that are pertinent to your use case and send messages using campaigns or journeys.

    Clean Up

    Make sure that you clean up all of the AWS resources that you created in the AWS CDK stack deployment. You can delete these resources via the AWS CDK destroy command, as follows, or via the CloudFormation console.

    To destroy the resources using AWS CDK, follow these steps:

    • Follow Steps 1-5 from the ‘Launching your CDK Stack’ section.
    • Destroy the app by executing the following command:
    cdk destroy

    Summary

    In this post, you have gained a good understanding of Amazon Pinpoint’s flexible real-time data flow. By implementing the steps detailed in this blog post, you can achieve a seamless integration of your customer data from an Amazon RDS MySQL database to Amazon Pinpoint, where you can leverage segments and campaigns to activate data using custom channels to third-party services via API. The demonstrated use case focuses on an Amazon RDS MySQL database as the data source, but there are still many areas left for exploration. What this post hasn’t covered is integrating customer data from other types of data sources such as MongoDB, Microsoft SQL Server, or Google Cloud. Also, other communication channels such as email, SMS, push notifications, and voice outbound can be used in the activation layer. You can enable the channels that are pertinent to your use case and send messages using campaigns or journeys, gaining a complete view of your customers across all touchpoints, which leads to more relevant marketing campaigns.

    About the Authors

    Bret Pontillo is a Senior Data Architect with AWS Professional Services Analytics Practice. He helps customers implement big data and analytics solutions. Outside of work, he enjoys spending time with family, traveling, and trying new food.
    Rushabh Lokhande is a Data & ML Engineer with AWS Professional Services Analytics Practice. He helps customers implement big data, machine learning, and analytics solutions. Outside of work, he enjoys spending time with family, reading, running, and golf.
    Ghandi Nader is a Senior Partner Solution Architect focusing on the Adtech and Martech industry. He helps customers and partners innovate and align with the market trends related to the industry. Outside of work, he enjoys spending time cycling and watching formula one.

How to implement multi-tenancy with Amazon Pinpoint

Post Syndicated from Tristan Nguyen original https://aws.amazon.com/blogs/messaging-and-targeting/how-to-implement-multi-tenancy-with-amazon-pinpoint/

Navigating Multi-Tenancy in Amazon Pinpoint

Businesses are constantly evolving, often managing multiple product lines, customer segments, or even geographical locations. Furthermore, many business-to-business (B2B) companies that are Independent Software Vendors (ISVs) will often need to manage their customer’s marketing automation environment. This complexity necessitates a robust customer engagement strategy that can adapt and scale efficiently. However, managing disparate systems for each tenant is not only cumbersome but also resource-intensive, leading to increased operational costs and potential data silos. A multi-tenancy setup in Amazon Pinpoint addresses these challenges head-on, allowing businesses to streamline their customer engagement efforts under a unified architecture.

The question is not just whether to adopt multi-tenancy, but how to implement it in a way that aligns with your unique business requirements. Amazon Pinpoint offers multiple approaches to achieve this. This blog explores three:

  • Single Pinpoint Project: Simple but demands careful permissions management.
  • Multiple Pinpoint Projects: Granular control but limited by soft project quotas.
  • Multiple Accounts & Multiple Pinpoint Projects: Highly scalable but needs comprehensive monitoring.

We’ll delve into the pros, cons, and best use cases for each, as well as how to choose among the different multi-tenancy configurations depending on your communication channel needs, guiding you to make an informed architectural decision.

In this blog, we’ll cut through the complexity, helping you align your Amazon Pinpoint architecture with your business goals. Let’s get started.

Single Account / Single Project (SA/SP)

Overview

In a Single Pinpoint Project setup, all customer engagement activities reside within one project and multi-tenancy within this context will leverage customer endpoint attributes. This streamlined approach allows for easy management, especially for those new to Amazon Pinpoint. A configuration example for this case is shown below:

Single Account / Single Project (SA/SP)

When preparing one Pinpoint project to manage information for multiple tenants, tenant information can be tracked using custom user attributes on endpoints, and campaign information can be associated with each tenant by using tags on campaigns. The elements required for this configuration are shown below.

  • S3 buckets that hold customer data:
    • Prepare an S3 bucket to store the customer information lists to be imported into Pinpoint. Amazon Pinpoint allows you to import CSV files in S3 as segments. In order to make settings for each tenant in Amazon Pinpoint, we will include tenant information as custom user attributes in the CSV file (see the sample sketch after this list).
  • 1 Amazon Pinpoint Project:
    • Create one Amazon Pinpoint project.
    • Settings for each channel to be used for sending are also required.
    • Campaign information can be mapped to tenant information by using tags.
  • Amazon Kinesis: Stream Amazon Pinpoint event data via Amazon Kinesis so that it can be stored and analyzed downstream.
  • Athena and S3 buckets to analyze event data:
    • Store Amazon Pinpoint event data in S3 and analyze it via Athena. Take advantage of this solution.
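
Here is a minimal sketch of what such a tenant-aware import file could look like, written from Python; the “Tenant” attribute name and the sample rows are assumptions for illustration:

# Each line follows Amazon Pinpoint's CSV import header conventions;
# "Tenant" is a custom user attribute invented for this example.
csv_lines = [
    "ChannelType,Address,User.UserId,User.UserAttributes.Tenant",
    "EMAIL,user1@example.com,user-001,tenant-a",
    "SMS,+14255550100,user-002,tenant-b",
]
with open("endpoints.csv", "w") as f:
    f.write("\n".join(csv_lines) + "\n")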

One thing to keep in mind when adopting this configuration is that all customer endpoint information exists in the same Pinpoint project. It is possible to specify values that can be used to identify each tenant, such as custom attributes, and to restrict access with AWS Identity and Access Management (IAM) policies, but you must manage access rights and attributes on your own.

Also, to add an endpoint, you’ll need to specify its channel and address, and one project cannot have the same channel and address on two different endpoints. In short, if the channels and addresses of endpoints do not overlap between tenants, and you are able to build your own access permission controls, then this pattern is worth considering.

Since fewer components are required compared to the other patterns, this configuration is the easiest to start with. Some customers that want to build on top of the Pinpoint API and simplify the configuration on the Pinpoint side as much as possible may also choose this option. However, this approach can become complex to manage later on as you onboard more tenants. The issue presents itself when you want to create detailed per-tenant reporting in this configuration: you’ll have to apply dedicated tags to every campaign and journey to operationalize granular reporting for your Amazon Pinpoint project.

Lastly, take note of service limits per Amazon Pinpoint project/AWS account to ensure your use case will be scalable should the need arise.

Single Account / Multiple Projects (SA/MP)

Overview

For this architecture, you are still using a single AWS account to host your Amazon Pinpoint environment; however, you will be creating multiple projects, one for each customer or tenant. A configuration example for this case is shown in the diagram below.

Single Account / Multiple Projects (SA/MP)

In this example, we will create multiple Amazon Pinpoint Projects. One major difference from the case of the Single Pinpoint Project is that it is possible to completely separate customer endpoint information. When importing customer data segments, it is possible to manage each tenant in a separate state simply by importing them from S3 into the target Pinpoint Project. This makes it easy to control permissions via IAM policies.

Also, with Amazon Pinpoint, sending resources obtained in the account, such as email addresses, SMS numbers, and message templates, can be shared across all projects, and event data for each project can be aggregated via Amazon Kinesis. By adopting this configuration, you gain the benefit of separating endpoint information per project while still centralizing basic configuration management and operator workflows.

An example starter solution architecture for this configuration is shown below.

  • S3 buckets that hold customer data:
    • Similar to SA/SP, prepare an S3 bucket to store the list of customer information to be imported into Pinpoint. A CSV file to be imported must be prepared for each project.
  • Amazon DynamoDB Table:
    • Prepare a DynamoDB (or other key-value database) table to manage Pinpoint project information. Tenant information can also be stored as metadata in the DynamoDB table.
  • AWS Lambda:
    • Create a Pinpoint project using Lambda. Amazon Pinpoint allows you to create and configure projects using the Amazon Pinpoint API, the AWS SDK, or the AWS Command Line Interface (AWS CLI). Thus, it is possible to automate the creation of the Pinpoint project and associated campaigns/journeys (see the sketch after this list). Tenant information is also registered in DynamoDB at the time of creation.
  • Multiple Amazon Pinpoint Projects:
    • This is a Project created by Lambda above. There will now be a Pinpoint Project for each tenant, and endpoint information will be completely separated. It is also easy to control access rights for each project by using the IAM function.
    • Message templates: templates can be created and shared across projects.
    • By using Amazon Pinpoint’s event stream settings, campaign/journeys/app/channels events can be streamed to Amazon Kinesis. Multiple Amazon Pinpoint projects can all stream to one Amazon Kinesis stream. When setup correctly, event data will be tagged with the relevant tenant information so that an analytics solution can decompose the stream later on.
  • Athena and S3 buckets to analyze event data:
    • Amazon Pinpoint event data is stored in Amazon S3 and analyzed via Amazon Athena. The analytics solution, Amazon Athena in this case, will be responsible for filtering event data according to the tenant. Refer to this solution for more details.
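
Here is a minimal boto3 sketch of such a provisioning function; the registry table name and the tag key are assumptions for illustration:

import boto3

pinpoint = boto3.client("pinpoint")
dynamodb = boto3.resource("dynamodb")

# Placeholder table that maps tenants to Pinpoint project IDs.
registry = dynamodb.Table("tenant-project-registry")

def provision_tenant(tenant_id):
    """Create a Pinpoint project for a tenant and record its project ID."""
    response = pinpoint.create_app(
        CreateApplicationRequest={
            "Name": "project-{}".format(tenant_id),
            "tags": {"tenant": tenant_id},
        }
    )
    project_id = response["ApplicationResponse"]["Id"]
    registry.put_item(Item={"tenant_id": tenant_id, "project_id": project_id})
    return project_id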

Note that Pinpoint projects have a soft limit of 100 projects per AWS account, which can be increased by raising a support ticket; other quotas also apply at the project and account levels and should be taken into account.

From the above, it is necessary to note that there are restrictions on quotas per account when using SA/MP, and more initial configuration is required to automate project creation for individual tenants. However, compared to the SA/SP architecture, SA/MP completely separates endpoint information per tenant and makes per-tenant access control via IAM much simpler.

Multiple Accounts & Multiple Pinpoint Projects (MA/MP)

Overview

Before diving into the MA/MP approach, it’s crucial to understand the role of AWS Organizations in this configuration. AWS Organizations allows you to consolidate multiple AWS accounts into an organization to achieve centralized governance and billing. This feature is particularly useful in a MA/MP setup, as it enables streamlined management of multiple AWS accounts and Amazon Pinpoint projects from a single central management AWS account. For more information on AWS Organizations, you can visit the official AWS Organizations documentation.

In an MA/MP setup, we utilize separate AWS Accounts for each customer or tenant. A configuration example for this case is shown below.

In this example, we have created a management account and prepared multiple AWS accounts under it. The management account tracks the AWS account IDs and Pinpoint project IDs, and has a configuration created with Lambda. Customer data and event stream data are managed through the management account, and information on each project is aggregated there. A major benefit of this configuration is the ability to segregate the actions of individual tenants, preventing issues such as the noisy neighbor antipattern. It also frees you from quota restrictions that cannot be handled within a single AWS account. Additionally, Amazon Pinpoint has excellent CloudFormation coverage, so it is also possible to deploy highly reproducible architectures automatically.

The elements required to set up this configuration are shown below.

  • AWS Organizations:
    • Set up Organizations to manage multiple accounts. See Best Practices for setting up multiple accounts.
  • Management account:
    • Create an account to manage multiple account information. Here we will set the following elements. Use IAM roles and Service control policies (SCPs) when manipulating resources across accounts. This allows cross-account access. The required elements are the same as the SA/MP described above.
      • S3 buckets that hold customer data: With AWS, you can utilize S3 data across accounts. Set up cross-account settings and securely link customer data to each account.
      • DynamoDB table: Holds your AWS account IDs, Pinpoint project IDs, and the management information associated with them.
      • AWS Lambda: Create a Pinpoint project using Lambda.
      • Athena and S3 buckets to analyze event data: Event information from multiple accounts and Pinpoint projects is aggregated and analyzed.
  • AWS accounts and Pinpoint projects per tenant:
    • Depending on how tenants are separated, prepare an AWS account and Pinpoint Project. You can also consider automating account creation by using AWS CloudFormation.
    • There are cases where it is necessary to set the distribution channel email address, SMS number, etc. for each account. See the next section for details.
    • Amazon Kinesis is prepared for each account, but everything is stored in the same S3 bucket in the management account for easier bird’s-eye view reporting.

One thing to keep in mind is that since accounts are separated, it becomes necessary to manage each one separately. For example, a newly created account will be placed in the sandbox state, and a production access request via a support ticket is required for each account. Also, since sending reputation is tracked per account, it is necessary to monitor reputation for each account individually.

Navigating Channels in Amazon Pinpoint: Aligning Service Delivery with Architecture

Beyond choosing a Pinpoint architecture for multi-tenancy, it’s pivotal to decide which channels best deliver your services and how that decision is affected by your choice of multi-tenancy architecture. Below is a non-exhaustive list of capabilities in Amazon Pinpoint that will help with your multi-channel, multi-tenancy configuration, as well as potential blockers that you need to be aware of for each channel.

Email

Email is one of the most versatile channels; with its integration with Amazon SES’s configuration sets and email suppression list capability, it easily fits into any of the three multi-tenancy models.

  • Configurations Sets: Using configuration sets, you’d be able to segregate your email sending activities using different IP Pools, as well as different event destinations.
    • You can use configuration sets in both Amazon Pinpoint and Amazon SES. Configuration sets rules that you configure in Amazon SES are also applied to email messages that you send using Amazon Pinpoint.
    • SA/SP and SA/MP: Email templates and sending IP addresses need to be tagged using configuration sets for each tenant in the Pinpoint project.
    • MA/MP: Email templates and sending IP addresses can use the account defaults, or follow granular tagging using configuration sets.
  • Email Suppression List: Suppression list is managed automatically at the account level. Alternatively, you can specify whether a specific configuration can override the account-level suppression list.
    • SA/SP and SA/MP:
      • All tenants will also follow the same account suppression list:
        • If any tenant sends to an email address that hard bounces or generates a complaint, all other tenants will also be unable to send emails to that address.
        • You will have to manually override the account-level suppression list for each affected email address.
    • MA/MP:
      • If one of your tenants sends an email to an address that hard bounced or complained, only the AWS account that the tenant belongs to will apply the suppression; tenants in other AWS accounts can still send email to that address.
  • Noisy Neighbour Threat: Broadly, this occurs when one tenant’s performance is degraded because of the activities of another tenant. Applied to email, the anti-pattern needs to be addressed because you don’t want one bad actor tenant to affect the entire environment’s email sending activity.
    • SA/SP and SA/MP:
      • Because email bounce and complaint rates are tracked at the account level, it is possible for your entire account’s email sending to be impacted by high bounce/complaint incidences from one bad tenant.
      • To mitigate this, it’s best practice to set up dedicated configuration sets and alarms that alert when any individual tenant exhibits a high bounce/complaint rate.
    • MA/MP:
      • Offers the most segregation and ensures email identities/domains are only usable by one tenant/account.
  • Email Sending Quota:
    • The email daily sending quota and email sending rate are set at the account level.
    • SA/SP and SA/MP:
      • You will need to anticipate the total daily sending quota and sending rate for all tenants in your AWS account and raise the service limits accordingly; therefore, more planning is involved to estimate the correct service limit thresholds.
    • MA/MP:
      • You can raise service limits per individual tenant’s needs since each tenant will be on a separate AWS account.
      • It is best practice to have a business process in place for individual tenants to notify you of their email sending quota requests in advance so that limits can be raised for their AWS account accordingly.
  • For further discussion into sending emails in a multi-tenancy environment, refer to this AWS blog on Multi-Tenancy in SES.

SMS

  • Origination Identity procurement: When opting for an MA/MP setup, remember that OIDs (phone numbers) are bound to AWS accounts. Since OIDs do not carry across accounts, you will need to repeat the procurement process for every new AWS account.
  • Number Pooling: This feature groups phone numbers or sender IDs. It’s particularly useful in a Single Project model to segment communications per tenant.
  • Configuration Sets: With the release of the V2 SMS and Voice API, you can now use configuration sets to manage your SMS opt-out lists, OIDs, and event streaming destinations in a multi-tenant environment (see the sketch after this list).
  • Noisy Neighbour Threat:
    • SA/SP and SA/MP:
      • Take note that if you do not specify an OID in your API call, Amazon Pinpoint will attempt to use the most suitable OID (in terms of throughput and deliverability) to send your SMS. This means one tenant’s traffic may be sent over an OID that another tenant relies on.
      • Similar to email, you can leverage number pooling and configuration sets to segregate SMS sending activity within a single account. This helps protect your SMS OID reputation, because it can be costly and time-consuming to request new OIDs.
    • MA/MP:
      • Offers the most segregation and ensures numbers are only usable by one tenant/account.
  • SMS Opt Outs: Similar to the email channel’s suppression list, opt-outs are managed per account and configuration sets. Therefore, in a MA/MP setup, a customer that has opted out from communication in one account can still receive communications from other accounts.
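
Here is a minimal boto3 sketch of sending a tenant-scoped SMS with the V2 API; the pool ID and configuration set name are per-tenant assumptions:

import boto3

sms = boto3.client("pinpoint-sms-voice-v2")

# Placeholder pool and configuration set dedicated to one tenant.
response = sms.send_text_message(
    DestinationPhoneNumber="+14255550100",
    OriginationIdentity="pool-tenant-a",         # the tenant's number pool
    ConfigurationSetName="tenant-a-config-set",  # the tenant's event routing
    MessageBody="Your one-time passcode is 123456",
    MessageType="TRANSACTIONAL",
)
print(response["MessageId"])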

Push Notifications

Amazon Pinpoint integrates with various push services like FCM, APNS, Baidu Cloud Push, and ADM.

  • Project-level Authentication: Authentication information is set at the Pinpoint Project level, requiring separate management.
    • Therefore, you will not be able to use the SA/SP architecture for multiple tenants using different applications.
  • For more information, refer to the Mobile Push Guide

In-app Messages

  • Pinpoint Project Specific: Similar to push notifications, each Pinpoint Project can only house one in-app message application.
    • If you have multiple applications requiring in-app messages, you will not be able to employ the SA/SP architecture.
  • For more information, refer to the In-app Channel Documentation.

Custom Channels

  • Custom channels in Amazon Pinpoint allow you to send messages through any service that has an API, including third-party services. You can interact with APIs by using a webhook, or by calling an AWS Lambda function. If you are using custom channels extensively from Amazon Pinpoint, you’ll need to be aware of the service limits in AWS Lambda, especially if you’re considering the SA/SP or SA/MP architectures.

Conclusion

In this blog, we’ve untangled the intricacies of implementing multi-tenancy in Amazon Pinpoint. Our deep dive covered three architectural patterns:

  • Single Account/Single Project (SA/SP): A beginner-friendly approach offering simple management but requiring meticulous permissions handling to segregate sending activity between different tenants.
  • Single Account/Multiple Projects (SA/MP): Offers granular control over customer data with a slight increase in management complexity. However, this approach faces soft quotas and potential ‘Noisy Neighbor’ issues.
  • Multiple Accounts/Multiple Projects (MA/MP): Provides the most flexibility and isolation, albeit with increased management complexity.

Each approach comes with its own set of trade-offs related to ease of management/reporting, scalability, and control over customer data. Our discussion didn’t stop at architecture; we also examined how your multi-tenancy decisions will affect your channel configurations in Amazon Pinpoint. From email and SMS to push notifications, the architectural choices you make will have a direct impact on how efficiently you can manage these distribution channels. Armed with this information, you’re now better equipped to make informed decisions that align with your business objectives.

Call to Action

Your next step? Implement and architect your Amazon Pinpoint environment. Use the best practices and architectural guidelines outlined in this blog post as your north star. Going forward, the architectural blueprint you choose should be tailored to your specific needs—be it user count, company size, or distribution channels. Take into account not just the initial setup but also the long-term management aspects, including the respective service limits and quotas.

About the Authors

Tristan (Tri) Nguyen

Tristan (Tri) Nguyen is an Amazon Pinpoint and Amazon Simple Email Service Specialist Solutions Architect at AWS. At work, he specializes in technical implementation of communications services in enterprise systems and architecture/solutions design. In his spare time, he enjoys chess, rock climbing, hiking and triathlon.

Tatsuya Nakamura

Tatsuya Nakamura is a Solutions Architect in charge of enterprise companies at AWS. He mainly covers the trading company industry and the distribution/retail industry, and also supports the implementation of Amazon Pinpoint for Japanese customers. His career so far includes ERP implementation support and multiple new web service launches.

Amazon SES: Email Authentication and Getting Value out of Your DMARC Policy

Post Syndicated from Bruno Giorgini original https://aws.amazon.com/blogs/messaging-and-targeting/email-authenctication-dmarc-policy/

Introduction

For enterprises of all sizes, email is a critical piece of infrastructure that supports large volumes of communication. To enhance the security and trustworthiness of email communication, many organizations turn to email sending providers (ESPs) like Amazon Simple Email Service (Amazon SES). These ESPs allow users to send authenticated emails from their domains, employing industry-standard protocols such as the Sender Policy Framework (SPF) and DomainKeys Identified Mail (DKIM). Messages authenticated with SPF or DKIM will successfully pass your domain’s Domain-based Message Authentication, Reporting, and Conformance (DMARC) policy. This blog post will focus on the DMARC policy enforcement mechanism, exploring some of the reasons why email may fail DMARC policy evaluation and proposing solutions to fix any failures that you identify. For an introduction to DMARC and how to carefully choose your email sending domain identity, you can refer to Choosing the Right Domain for Optimal Deliverability with Amazon SES. The relationship between DMARC compliance and email deliverability rates is crucial for organizations aiming to maintain a positive sender reputation and ensure successful email delivery. There are many advantages when organizations have this set up correctly; these include:

  • Improved Email Deliverability
  • Reduction in Email Spoofing and Phishing
  • Positive Sender Reputation
  • Reduced Risk of Email Marked as Spam
  • Better Email Engagement Metrics
  • Enhanced Brand Reputation

With this foundation, let’s explore the intricacies of DMARC and how it can benefit your organization’s email communication.

What is DMARC?

DMARC is a mechanism for domain owners to advertise SPF and DKIM protection and to tell receivers how to act if those authentication methods fail. The domain’s DMARC policy protects your domain from third parties attempting to spoof the domain in the “From” header of emails. Malicious email messages that aim to send phishing attempts using your domain will be subject to DMARC policy evaluation, which may result in their quarantine or rejection by the email receiving organization. This stringent policy ensures that emails received by email recipients are genuinely from the claimed sending domain, thereby minimizing the risk of people falling victim to email-based scams. Domain owners publish DMARC policies as a TXT record in the domain’s _dmarc.<domain> DNS record. For example, if the domain used in the “From” header is example.com, then the domain’s DMARC policy would be located in a DNS TXT record named _dmarc.example.com. The DMARC policy can have one of three policy modes:

  • A typical DMARC deployment of an existing domain will start with publishing "p=none". A none policy means that the domain owner is in a monitoring phase; the domain owner is monitoring for messages that aren’t authenticated with SPF and DKIM and seeks to ensure all email is properly authenticated
  • When the domain owner is comfortable that all legitimate use cases are properly authenticated with SPF and/or DKIM, they may change the DMARC policy to "p=quarantine". A quarantine policy means that messages which fail to produce a domain-aligned authenticated identifier via SPF or DKIM will be quarantined by the mail receiving organization. The mail receiving organization may filter these messages into Junk folders, or take another action that they feel best protects their recipients.
  • Finally, domain owners who are confident that all of the legitimate messages using their domain are authenticated with SPF or DKIM, may change the DMARC policy to "p=reject". A reject policy means that messages which fail to produce a domain-aligned authenticated identifier via SPF or DKIM will be rejected by the mail receiving organization.

The following are examples of a TXT record that contains a DMARC policy, depending on the desired policy (the ‘p’ tag):

     Name                Type  Value
  1  _dmarc.example.com  TXT   “v=DMARC1;p=reject;rua=mailto:[email protected]”
  2  _dmarc.example.com  TXT   “v=DMARC1;p=quarantine;rua=mailto:[email protected]”
  3  _dmarc.example.com  TXT   “v=DMARC1;p=none;rua=mailto:[email protected]”
Table 1 – Example DMARC policy
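
To check which policy a domain currently advertises, you can query its DMARC TXT record. Below is a minimal sketch using the third-party dnspython package (an assumption; any DNS client works):

# Requires the third-party dnspython package (pip install dnspython).
import dns.resolver

def get_dmarc_policy(domain):
    """Return the raw DMARC record for a domain, if one is published."""
    try:
        answers = dns.resolver.resolve("_dmarc.{}".format(domain), "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return None
    for rdata in answers:
        record = b"".join(rdata.strings).decode()
        if record.startswith("v=DMARC1"):
            return record
    return None

print(get_dmarc_policy("example.com"))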

This policy tells email providers to apply the DMARC policy to messages that fail to produce a DKIM or SPF authenticated identifier that is aligned to the domain in the “From” header. Alignment means that one or both of the following occurs:

  • The messages pass the SPF policy for the MAIL FROM domain and the MAIL FROM domain is the same as the domain in the “From” header, or a subdomain. Reference Using a custom MAIL FROM domain to learn more about how to send SPF aligned messages with SES.
  • The messages have a DKIM signature signed by a public key in DNS at a location within the domain of the “From” header. Reference Authenticating Email with DKIM in Amazon SES to learn more about how to send DKIM aligned messages with SES.

DMARC reporting

The rua tag in the domain’s DMARC policy indicates the location to which mail receiving organizations should send aggregate reports about messages that pass or fail SPF and DKIM alignment. Domain owners analyze these reports to discover messages which are using the domain in the “From” header but are not properly authenticated with SPF or DKIM. The domain owner will attempt to ensure that all legitimate messages are authenticated through analysis of the DMARC aggregate reports over time. Mail receiving organizations which support sending DMARC reports typically send these aggregated reports once per day, although these practices differ from provider to provider.

What does a typical DMARC deployment look like?

A DMARC deployment is the process of:

  1. Ensuring that all emails using the domain in the “From” header are authenticated with DKIM and SPF domain-aligned identifiers. Focus on DKIM as the primary means of authentication.
  2. Publishing a DMARC policy (none, quarantine, or reject) for the domain that reflects how the domain owner would like mail receiving organizations to handle unauthenticated email claiming to be from their domain.

New domains and subdomains

Deploying a DMARC policy is easy for organizations that have created a new domain or subdomain for the purpose of a new email sending use case on SES; for example email marketing, transaction emails, or one-time pass codes (OTP). These domains can start with the "p=reject" DMARC enforcement policy because the policy will not affect existing email sending programs. This strict enforcement is to ensure that there is no unauthenticated use of the domain and its subdomains.

Existing domains

For existing domains, a DMARC deployment is an iterative process because the domain may have a history of email sending by one or multiple email sending programs. It is important to gain a complete understanding of how the domain and its subdomains are being used for email sending before publishing a restrictive DMARC policy (p=quarantine or p=reject) because doing so would affect any unauthenticated email sending programs using the domain in the “From” header of messages. To get started with the DMARC implementation, these are a few actions to take:

  • Publish a p=none DMARC policy (sometimes referred to as monitoring mode), and set the rua tag to the location in which you would like to receive aggregate reports.
  • Analyze the aggregate reports. Mail receiving organizations will send reports which contain information to determine if the domain, and its subdomains, are being used for sending email, and how the messages are (or are not) being authenticated with a DKIM or SPF domain-aligned identifier. An easy to use analysis tool is the Dmarcian XML to Human Converter.
  • Avoid prematurely publishing a “p=quarantine” or “p=reject” policy. Doing so may result in blocked or reduced delivery of legitimate messages of existing email sending programs.

The image below illustrates how DMARC will be applied to an email received by the email receiving server and actions taken based on the enforcement policy:

Figure 1 – DMARC Flow

How do SPF and DKIM cause DMARC policies to pass?

When you start sending emails using Amazon SES, messages that you send through Amazon SES automatically use a subdomain of amazonses.com as the default MAIL FROM domain. SPF evaluators will see that these messages pass the SPF policy evaluation because the default MAIL FROM domain has a SPF policy which includes the IP addresses of the SES infrastructure that sent the message. SPF authentication will result in an “SPF=PASS” and the authenticated identifier is the domain of the MAIL FROM address. The published SPF record applies to every message that is sent using SES regardless of whether you are using a shared or dedicated IP address. The amazonses.com SPF record lists all shared and dedicated IP addresses, so it is inclusive of all potential IP addresses that may be involved with sending email as the MAIL FROM domain. You can use ‘dig’ to look up the IP addresses that SES will use to send email:

dig txt amazonses.com | grep "v=spf1"

amazonses.com. 850 IN TXT "v=spf1 ip4:199.255.192.0/22 ip4:199.127.232.0/22 ip4:54.240.0.0/18 ip4:69.169.224.0/20 ip4:23.249.208.0/20 ip4:23.251.224.0/19 ip4:76.223.176.0/20 ip4:54.240.64.0/19 ip4:54.240.96.0/19 ip4:52.82.172.0/22 ip4:76.223.128.0/19 -all"

Custom MAIL FROM domains

It is best practice for customers to configure a custom MAIL FROM domain, and not use the default amazonses.com MAIL FROM domain. The custom MAIL FROM domain will always be a subdomain of the customer’s verified domain identity. Once you configure the MAIL FROM domain, messages sent using SES will continue to result in an “SPF=PASS” as it does with the default MAIL FROM domain. Additionally, DMARC authentication will result in “DMARC=PASS” because the MAIL FROM domain and the domain in the “From” header are in alignment. It’s important to understand that customers must use a custom MAIL FROM domain if they want “SPF=PASS” to result in a “DMARC=PASS”.

For example, an Amazon SES-verified example.com domain will have the custom MAIL FROM domain “bounce.example.com”. The configured SPF record will be:

dig txt bounce.example.com | grep "v=spf1"

"v=spf1 include:amazonses.com ~all"

Note: The chosen MAIL FROM domain can be any subdomain of your choice. If you have the same domain identity configured in multiple regions, create region-specific custom MAIL FROM domains for each region (e.g., bounce-us-east-1.example.com and bounce-eu-west-2.example.com) so that asynchronously bounced messages are delivered directly to the region from which the messages were sent.
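
For illustration, the DNS records for a custom MAIL FROM domain generally consist of an MX record pointing to the SES feedback endpoint for your sending region, plus the SPF TXT record shown above. The records below are a sketch assuming us-east-1; the endpoint differs per region:

bounce.example.com. IN MX  10 feedback-smtp.us-east-1.amazonses.com
bounce.example.com. IN TXT "v=spf1 include:amazonses.com ~all"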

DKIM results in DMARC pass

For customers that establish Amazon SES domain verification using DKIM signatures, DKIM authentication will result in a “DKIM=PASS”, and DMARC authentication will result in a “DMARC=PASS”, because the domain that publishes the DKIM signature is aligned to the domain in the “From” header (the SES domain identity).
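
With Easy DKIM, SES provides three CNAME records to publish in your DNS; these point to DKIM public keys that SES hosts and rotates for you. Illustratively (the selector tokens below are placeholders for the values SES generates for your identity):

token1._domainkey.example.com. IN CNAME token1.dkim.amazonses.com
token2._domainkey.example.com. IN CNAME token2.dkim.amazonses.com
token3._domainkey.example.com. IN CNAME token3.dkim.amazonses.com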

DKIM and SPF together

Email messages are fully authenticated when the messages pass both DKIM and SPF, and both DKIM and SPF authenticated identifiers are domain-aligned. If only DKIM is domain-aligned, the messages will still pass the DMARC policy, even if the SPF “pass” is unaligned. Mail receivers consider the full context of SPF and DKIM when determining how they will handle the disposition of the messages you send, so it is best to fully authenticate your messages whenever possible. Amazon SES takes the heavy lifting of the email authentication process off of our customers: establishing SPF, DKIM, and DMARC authentication is reduced to a few clicks, which allows SES customers to get started easily and scale fast.
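
For a fully authenticated message, the Authentication-Results header added by the receiving server might look similar to the following simplified sketch, where both authenticated identifiers align with example.com in the “From” header:

Authentication-Results: mx.recipient-example.com;
 spf=pass smtp.mailfrom=bounce.example.com;
 dkim=pass header.d=example.com;
 dmarc=pass header.from=example.com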

Why is DMARC failing?

There are scenarios in which you may notice that messages fail DMARC, whether your messages are fully or partially authenticated. The following are things that you should look out for:

Email Content Modification

Sometimes email content is modified during delivery to the recipients’ mail servers. This modification could be the result of a security device or anti-spam agent along the delivery path (for example, the message Subject may be modified with an “[EXTERNAL]” warning to recipients). The modified message invalidates the DKIM signature, which causes a DKIM failure. Remember, the purpose of DKIM is to ensure that the content of an email has not been tampered with during the delivery process. If this happens, DKIM authentication will fail with an authentication error similar to “DKIM-signature body hash not verified”.

Solutions:

  • If you control the full path that the email message will traverse from sender to recipient, ensure that no intermediary mail servers modify the email content in transit.
  • Ensure that you configure a custom MAIL FROM domain so that the messages have a domain-aligned SPF identifier.
  • Keep the DMARC policy in monitoring mode (p=none) until these issues are identified/solved.

Email Forwarding

There are multiple scenarios in which a message may be forwarded, and they may result in SPF, DKIM, or both failing to produce a domain-aligned authenticated identifier. For SPF, it means that the forwarding mail server is not listed in the MAIL FROM domain’s SPF policy. It is best practice for a forwarding mail server to avoid SPF failures and assume responsibility for the mail it forwards by rewriting the MAIL FROM address to be in a domain controlled by the forwarding server. Forwarding servers that do not rewrite the MAIL FROM address pose a risk of impersonation attacks and phishing. Do not add the IP addresses of forwarding servers to your MAIL FROM domain’s SPF policy unless you are in complete control of all sources of mail being forwarded through that infrastructure. For DKIM, it means that the messages are being modified in some way that causes DKIM signature validation to fail (see the Email Content Modification section above). A responsible forwarding server will rewrite the MAIL FROM domain so that the messages pass SPF with a non-aligned authenticated identifier, and will attempt to forward the message without alteration in order to preserve DKIM signatures, but that is sometimes challenging in practice. In this scenario, since the messages carry no domain-aligned authenticated identifier, the messages will fail the DMARC policy.

Solution:

  • Email forwarding is an expected type of failure that you will see in the DMARC aggregate reports. The domain owner must weigh the risk of causing forwarded messages to be rejected against the risk of not publishing a reject DMARC policy (see section 8.6, Interoperability Considerations, of the DMARC specification). Forwarding servers that wish to forward messages they know will result in a DMARC failure will commonly rewrite the “From” header address of the messages they forward, so that the messages pass a DMARC policy for a domain the forwarding server is responsible for. The way to identify forwarding servers that rewrite the “From” header in this situation is to publish “p=quarantine pct=0 t=y” in your domain’s DMARC policy before publishing “p=reject”, as in the example record below.
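
Such a testing-mode record could look like the following sketch (pct is defined in RFC 7489, while the t tag comes from the DMARCbis draft referenced later in this post; the report address is a placeholder):

_dmarc.example.com. IN TXT "v=DMARC1; p=quarantine; pct=0; t=y; rua=mailto:dmarc-reports@example.com"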

Multiple email sending providers are sending using the same domain

There are situations where an organization has multiple business units sending email using the same domain, and these business units may be using an email sending provider other than SES. If neither SPF nor DKIM is configured with domain alignment for these email sending providers, you will see DMARC failures in the DMARC aggregate reports.

Solution:

  • Analyze the DMARC aggregate reports to identify other email sending providers, track down the business units responsible for each email sending program, and follow the instructions offered by the email sending provider about how to configure SPF and DKIM to produce a domain-aligned authenticated identifier.

What does a DMARC aggregate report look like?

The following XML example shows the general format of a DMARC aggregate report that you will receive from participating email service providers.

<?xml version="1.0" encoding="UTF-8" ?> 
<feedback> 
  <report_metadata> 
    <org_name>email-service-provider-domain.com</org_name> 
    <email>[email protected]</email> 
    <extra_contact_info>https://email-service-provider-domain.com</extra_contact_info> 
    <report_id>620501112281841510</report_id> 
    <date_range> 
      <begin>1685404800</begin> 
      <end>1685491199</end> 
    </date_range> 
  </report_metadata> 
  <policy_published> 
    <domain>example.com</domain>
    <adkim>r</adkim> 
    <aspf>r</aspf> 
    <p>none</p> 
    <sp>none</sp> 
    <pct>100</pct> 
  </policy_published> 
  <record> 
    <row> 
      <source_ip>192.0.2.10</source_ip>
      <count>1</count> 
      <policy_evaluated> 
        <disposition>none</disposition> 
        <dkim>pass</dkim> 
        <spf>fail</spf> 
      </policy_evaluated> 
    </row> 
    <identifiers> 
      <header_from>example.com</header_from>
    </identifiers> 
    <auth_results> 
      <dkim> 
        <domain>example.com</domain> 
        <result>pass</result> 
        <selector>gm5h7da67oqhnr3ccji35fdskt</selector> 
      </dkim> 
      <dkim> 
        <domain>amazonses.com</domain> 
        <result>pass</result> 
        <selector>224i4yxa5dv7c2xz3womw6peua</selector> 
      </dkim> 
      <spf> 
        <domain>amazonses.com</domain> 
        <result>pass</result> 
      </spf> 
    </auth_results> 
  </record> 
</feedback> 


How to address DMARC deployment for domains confirmed to be unused for email (dangling or otherwise)

Deploying DMARC for unused or dangling domains is a proactive step to prevent abuse or unauthorized use of your domain. Once you have confirmed that all subdomains being used for sending email have the desired DMARC policies, you can publish a ‘p=reject’ policy on the organizational domain, which will prevent unauthorized usage of unused subdomains without the need to publish DMARC policies for every conceivable subdomain. For more advanced subdomain policy scenarios, read the “tree walk” definitions in the DMARCbis draft: https://datatracker.ietf.org/doc/draft-ietf-dmarc-dmarcbis/
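
As an illustration, the organizational domain’s record could look like the following; subdomains without their own DMARC record inherit this policy, because the sp tag defaults to the value of p:

_dmarc.example.com. IN TXT "v=DMARC1; p=reject; rua=mailto:dmarc-reports@example.com"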

Conclusion:

In conclusion, DMARC is not only a technology but also a commitment to email security, integrity, and trust. By embracing DMARC best practices, organizations can protect their users, maintain a positive brand reputation, and ensure seamless email deliverability. Every message from SES passes SPF and DKIM for “amazonses.com”, but the authenticated identifiers are not always in alignment with the domain in the “From” header, which carries the DMARC policy. If email authentication is not fully configured, your messages are susceptible to delivery issues like spam filtering, or being rejected or blocked by the recipient ESP. As a best practice, configure both DKIM and SPF to attain optimum deliverability while sending email with SES.


About the Authors

Bruno Giorgini is a Senior Solutions Architect specializing in Pinpoint and SES. With over two decades of experience in the IT industry, Bruno has been dedicated to assisting customers of all sizes in achieving their objectives. When he is not crafting innovative solutions for clients, Bruno enjoys spending quality time with his wife and son, exploring the scenic hiking trails around the SF Bay Area.
Jesse Thompson is an Email Deliverability Manager with the Amazon Simple Email Service team. His background is in enterprise IT development and operations, with a focus on email abuse mitigation and encouragement of authenticity practices with open standard protocols. Jesse’s favorite activity outside of technology is recreational curling.
Sesan Komaiya is a Solutions Architect at Amazon Web Services. He works with a variety of customers, helping them with cloud adoption, cost optimization and emerging technologies. Sesan has over 15 years’ experience in enterprise IT and has been at AWS for 5 years. In his free time, Sesan enjoys watching various sporting activities like soccer, tennis and motorsport. He has two kids that also keep him busy at home.
Mudassar Bashir is a Solutions Architect at Amazon Web Services. He has over ten years of experience in enterprise software engineering. His interests include web applications, containerization, and serverless technologies. He works with different customers, helping them with cloud adoption strategies.
Priya Singh is a Cloud Support Engineer at AWS and a subject matter expert in Amazon Simple Email Service. She has 6 years of diverse experience in supporting enterprise customers across different industries. Along with Amazon SES, she is a CloudFront enthusiast and loves helping customers solve issues related to CloudFront and SES in their environments.


Handling Bounces and Complaints

Post Syndicated from Tyler Holmes original https://aws.amazon.com/blogs/messaging-and-targeting/handling-bounces-and-complaints/

As you may have seen in Jeff Barr’s blog post or in an announcement, Amazon Simple Email Service (Amazon SES) now provides bounce and complaint notifications via Amazon Simple Notification Service (Amazon SNS). You can refer to the Amazon SES Developer Guide or Jeff’s post to learn how to set up this feature. In this post, we will show you how you might manage your email list using the information you get in the Amazon SNS notifications.

Background

Amazon SES assigns a unique message ID to each email that you successfully submit to send. When Amazon SES receives a bounce or complaint message from an ISP, we forward the feedback message to you. The format of bounce and complaint messages varies between ISPs, but Amazon SES interprets these messages and, if you choose to set up Amazon SNS topics for them, categorizes them into JSON objects.
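
As a simplified sketch, a bounce notification is a JSON object with fields similar to the following (abbreviated here; see the Developer Guide for the full schema):

{
  "notificationType": "Bounce",
  "bounce": {
    "bounceType": "Permanent",
    "bounceSubType": "General",
    "timestamp": "2012-06-19T01:05:45.000Z",
    "bouncedRecipients": [
      { "emailAddress": "recipient@example.com" }
    ]
  }
}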

Scenario

Let’s assume you use Amazon SES to send monthly product announcements to a list of email addresses. You store the list in a database and send one email per recipient through Amazon SES. You review bounces and complaints once each day, manually interpret the bounce messages in the incoming email, and update the list. You would like to automate this process using Amazon SNS notifications with a scheduled task.

Solution

To implement this solution, we will use separate Amazon SNS topics for bounces and complaints to isolate the notification channels from each other and manage them separately. Also, since the bounce and complaint handler will not run 24/7, we need these notifications to persist until the application processes them. Amazon SNS integrates with Amazon Simple Queue Service (Amazon SQS), which is a durable messaging technology that allows us to persist these notifications. We will configure each Amazon SNS topic to publish to separate SQS queues. When our application runs, it will process queued notifications and update the email list. We have provided sample C# code below.

Configuration

Set up the following AWS components to handle bounce notifications:

  1. Create an Amazon SQS queue named ses-bounces-queue.
  2. Create an Amazon SNS topic named ses-bounces-topic.
  3. Configure the Amazon SNS topic to publish to the SQS queue.
  4. Configure Amazon SES to publish bounce notifications using ses-bounces-topic to ses-bounces-queue.

Set up the following AWS components to handle complaint notifications:

  1. Create an Amazon SQS queue named ses-complaints-queue.
  2. Create an Amazon SNS topic named ses-complaints-topic.
  3. Configure the Amazon SNS topic to publish to the SQS queue.
  4. Configure Amazon SES to publish complaint notifications using ses-complaints-topic to ses-complaints-queue.

Ensure that IAM policies are in place so that Amazon SNS has access to publish to the appropriate SQS queues.
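
For reference, the SQS queue access policy that allows the bounces topic to publish to the bounces queue generally looks like the following sketch (the account ID and region are placeholders; the complaints queue needs an equivalent policy):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowSesBouncesTopic",
      "Effect": "Allow",
      "Principal": { "Service": "sns.amazonaws.com" },
      "Action": "sqs:SendMessage",
      "Resource": "arn:aws:sqs:us-east-1:123456789012:ses-bounces-queue",
      "Condition": {
        "ArnEquals": {
          "aws:SourceArn": "arn:aws:sns:us-east-1:123456789012:ses-bounces-topic"
        }
      }
    }
  ]
}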

Bounce Processing

Amazon SES will categorize your bounces into two types: permanent and transient. A permanent bounce indicates that you should never send to that recipient again. A transient bounce indicates that the recipient’s ISP is not accepting messages for that particular recipient at that time, and you can retry delivery in the future. The amount of time you should wait before resending to an address that generated a transient bounce depends on the transient bounce type. Certain transient bounces require manual intervention before the message can be delivered (e.g., message too large or content error). If the bounce type is undetermined, you should manually review the bounce and act accordingly.

You will need to define some classes to simplify bounce notification parsing from JSON into .NET objects. We will use the open-source JSON.NET library.

using System;
using System.Collections.Generic;

/// <summary>Represents the bounce or complaint notification stored in Amazon SQS.</summary>
class AmazonSqsNotification
{
    public string Type { get; set; }
    public string Message { get; set; }
}

/// <summary>Represents an Amazon SES bounce notification.</summary>
class AmazonSesBounceNotification
{
    public string NotificationType { get; set; }
    public AmazonSesBounce Bounce { get; set; }
}
/// <summary>Represents meta data for the bounce notification from Amazon SES.</summary>
class AmazonSesBounce
{
    public string BounceType { get; set; }
    public string BounceSubType { get; set; }
    public DateTime Timestamp { get; set; }
    public List<AmazonSesBouncedRecipient> BouncedRecipients { get; set; }
}
/// <summary>Represents the email address of recipients that bounced
/// when sending from Amazon SES.</summary>
class AmazonSesBouncedRecipient
{
    public string EmailAddress { get; set; }
}

Sample code to handle bounces:

/// <summary>Process bounces received from Amazon SES via Amazon SQS.</summary>
/// <param name="response">The response from the Amazon SQS bounces queue 
/// to a ReceiveMessage request. This object contains the Amazon SES  
/// bounce notification.</param> 
private static void ProcessQueuedBounce(ReceiveMessageResponse response)
{
    int messages = response.ReceiveMessageResult.Message.Count;
 
    if (messages > 0)
    {
        foreach (var m in response.ReceiveMessageResult.Message)
        {
            // First, convert the Amazon SNS message into a JSON object.
            var notification = Newtonsoft.Json.JsonConvert.DeserializeObject<AmazonSqsNotification>(m.Body);
 
            // Now access the Amazon SES bounce notification.
            var bounce = Newtonsoft.Json.JsonConvert.DeserializeObject<AmazonSesBounceNotification>(notification.Message);
 
            switch (bounce.Bounce.BounceType)
            {
                case "Transient":
                    // Per our sample organizational policy, we will remove all recipients 
                    // that generate an AttachmentRejected bounce from our mailing list.
                    // Other bounces will be reviewed manually.
                    switch (bounce.Bounce.BounceSubType)
                    {
                        case "AttachmentRejected":
                            foreach (var recipient in bounce.Bounce.BouncedRecipients)
                            {
                                RemoveFromMailingList(recipient.EmailAddress);
                            }
                            break;
                        default:
                            ManuallyReviewBounce(bounce);
                            break;
                    }
                    break;
                default:
                    // Remove all recipients that generated a permanent bounce 
                    // or an unknown bounce.
                    foreach (var recipient in bounce.Bounce.BouncedRecipients)
                    {
                        RemoveFromMailingList(recipient.EmailAddress);
                    }
                    break;
            }
        }
    }
}

Complaint Processing

A complaint indicates the recipient does not want the email that you sent them. When we receive a complaint, we want to remove the recipient addresses from our list. Again, define some objects to simplify parsing complaint notifications from JSON to .NET objects.

/// <summary>Represents an Amazon SES complaint notification.</summary>
class AmazonSesComplaintNotification
{
    public string NotificationType { get; set; }
    public AmazonSesComplaint Complaint { get; set; }
}
/// <summary>Represents the email address of individual recipients that complained 
/// to Amazon SES.</summary>
class AmazonSesComplainedRecipient
{
    public string EmailAddress { get; set; }
}
/// <summary>Represents meta data for the complaint notification from Amazon SES.</summary>
class AmazonSesComplaint
{
    public List<AmazonSesComplainedRecipient> ComplainedRecipients { get; set; }
    public DateTime Timestamp { get; set; }
    public string MessageId { get; set; }
}

Sample code to handle complaints:

/// <summary>Process complaints received from Amazon SES via Amazon SQS.</summary>
/// <param name="response">The response from the Amazon SQS complaint queue 
/// to a ReceiveMessage request. This object contains the Amazon SES 
/// complaint notification.</param>
private static void ProcessQueuedComplaint(ReceiveMessageResponse response)
{
    int messages = response.ReceiveMessageResult.Message.Count;
 
    if (messages > 0)
    {
        foreach (var message in response.ReceiveMessageResult.Message)
        {
            // First, convert the Amazon SNS message into a JSON object.
            var notification = Newtonsoft.Json.JsonConvert.DeserializeObject<AmazonSqsNotification>(message.Body);
 
            // Now access the Amazon SES complaint notification.
            var complaint = Newtonsoft.Json.JsonConvert.DeserializeObject<AmazonSesComplaintNotification>(notification.Message);
 
            foreach (var recipient in complaint.Complaint.ComplainedRecipients)
            {
                // Remove the email address that complained from our mailing list.
                RemoveFromMailingList(recipient.EmailAddress);
            }
        }
    }
}
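
To tie the pieces together, your scheduled task might poll each queue and hand the response to the handlers above. The following is a minimal sketch that assumes the same AWS SDK for .NET version as the samples above (the queue URL is a placeholder, and method names may differ in newer SDK versions). Note that messages should be deleted after successful processing so they are not delivered again:

// Minimal polling sketch; assumes the same AWS SDK for .NET version as the
// samples above (requires: using Amazon.SQS.Model;).
string queueUrl = "https://sqs.us-east-1.amazonaws.com/123456789012/ses-bounces-queue"; // placeholder

var sqs = Amazon.AWSClientFactory.CreateAmazonSQSClient();

// Receive up to 10 queued notifications from the bounces queue.
var response = sqs.ReceiveMessage(new ReceiveMessageRequest()
    .WithQueueUrl(queueUrl)
    .WithMaxNumberOfMessages(10));

// Process the bounce notifications using the handler defined above.
ProcessQueuedBounce(response);

// Delete each message after successful processing so it is not delivered again.
foreach (var m in response.ReceiveMessageResult.Message)
{
    sqs.DeleteMessage(new DeleteMessageRequest()
        .WithQueueUrl(queueUrl)
        .WithReceiptHandle(m.ReceiptHandle));
}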

Final Thoughts

We hope that you now have the basic information on how to use bounce and complaint notifications. For more information, please review our API reference and Developer Guide; they describe all actions, error codes, and restrictions that apply to Amazon SES.

If you have comments or feedback about this feature, please post them on the Amazon SES forums. We actively monitor the forum and frequently engage with customers. Happy sending with Amazon SES!

How to secure your email account and improve email sender reputation

Post Syndicated from bajavani original https://aws.amazon.com/blogs/messaging-and-targeting/how-to-secure-your-email-account-and-improve-email-sender-reputation/

How to secure your email account and improve email sender reputation

Introduction

Amazon Simple Email Service (Amazon SES) is a cost-effective, flexible, and scalable email service that enables customers to send email from within any application. You can send email using the SES SMTP interface or via HTTP requests to the SES API. All requests to send email must be authenticated using either SMTP or IAM credentials, and when these credentials end up in the hands of a malicious actor, customers need to act fast to secure their SES account.

Compromised credentials with permission to send email via SES allow a malicious actor to use SES to send spam and/or phishing emails, which can lead to high bounce and/or complaint rates for the SES account. High bounce and/or complaint rates can result in email sending for the SES account being paused.

How to identify if your SES email sending account is compromised

Start by checking the reputation metrics for the SES account from the Reputation metrics menu in the SES console. A sudden increase or spike in the bounce or complaint metrics should be investigated further. You can start by checking the feedback forwarding destination, to which SES sends bounce and/or complaint feedback. Feedback on bounces and complaints contains the From and To email addresses as well as the subject. Use these attributes to determine whether unintended emails are being sent; for example, if the bounce and/or complaint recipients are not known to you, that is an indication of compromise. To find out what your feedback forwarding destination is, please see Feedback forwarding mechanism.

If SNS notifications are already enabled, check the subscribed endpoint for the bounce and/or complaint notifications to review them for unintended email sending. SNS notifications provide additional information, such as the IAM identity being used to send the emails, as well as the source IP address from which the emails are being sent.

If the review of the bounces or complaints leads to the conclusion that the email sending is unintended, immediately follow the steps below to secure your account.

Steps to secure your account:

You can follow the below steps in order to secure your SES account:

  1. To avoid any more unintended emails from being sent, it is recommended to immediately pause email sending for the SES account until the root cause has been identified and steps have been taken to secure the account. You can use the command below to pause email sending:

    aws ses update-account-sending-enabled --no-enabled --region sending_region

    Note: Replace sending_region with the region you are using to send email.

  2. Rotate the credentials for the IAM identity being used to send the unintended emails. If the IAM identity was originally created from the SES Console as SMTP credentials, it is recommended to delete the IAM identity and create new SMTP credentials from the SES Console.
  3. Limit the scope of the SMTP/IAM identity to send email only from the specific IP addresses your email sending originates from.

See controlling access to Amazon SES.

Below is an example of an IAM policy that allows email sending from IP addresses 1.2.3.4 and 5.6.7.8 only.


{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "RestrictIP",
      "Effect": "Allow",
      "Action": "ses:SendRawEmail",
      "Resource": "*",
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": [
            "1.2.3.4/32",
            "5.6.7.8/32"
          ]
        }
      }
    }
  ]
}


When you send an email from an IP address other than those mentioned in the policy, the following error will be observed and the email sending request will fail:


554 Access denied: User 'arn:aws:iam::123456789012:user/iam-user-name' is not authorized to perform 'ses:SendRawEmail' on resource 'arn:aws:ses:eu-west-1:123456789012:identity/example.com'


  4. Once these steps have been taken, email sending for the account can be enabled again using the command below:

    aws ses update-account-sending-enabled --enabled --region sending_region

Conclusion

By taking the steps described above, you can secure your SES email sending account and prevent similar incidents from happening in the future.

Deploy Amazon QuickSight dashboard for Amazon Pinpoint engagement events.

Post Syndicated from Pavlos Ioannou Katidis original https://aws.amazon.com/blogs/messaging-and-targeting/deploy-amazon-quicksight-dashboard-for-amazon-pinpoint-engagement-events/

Abstract

Business intelligence (BI) dashboards provide a graphical representation of metrics and key performance indicators (KPIs) to monitor the health of your business. By leveraging BI dashboards to analyze the performance of your customer communications, you gain valuable insights into how they are engaging with your messages and can make data-driven decisions to improve your marketing and communication strategies.

In this blog post we introduce a solution that automates the deployment of an Amazon QuickSight dashboard that enables marketers to analyze their long-term Amazon Pinpoint customer engagement data. These dashboards can be customized further depending on the use case. This solution alleviates the need to create data pipelines for storage and analysis of Amazon Pinpoint’s engagement data, while offering a greater variety of widgets and views across email, SMS, campaigns, journeys, and transactional messages compared with Amazon Pinpoint’s native dashboards.

Amazon Pinpoint is a flexible, scalable marketing communications service that connects you with customers over email, SMS, push notifications, or voice. The service offers ready-to-use dashboards to view key performance indicators (KPIs) for the various messaging channels, and provides 90 days of events for analysis. However, the raw events used to populate Amazon Pinpoint’s dashboards can be streamed using Amazon Kinesis Data Firehose to a destination of your choice. This blog will walk you through leveraging this feature to create a data lake to store and analyze data beyond the initial 90 days.

Amazon QuickSight is a cloud-scale business intelligence (BI) service that you can use to deliver easy-to-understand insights in an interactive visual environment.

The solution leverages the AWS Cloud Development Kit (CDK) to deploy the needed infrastructure and dashboards.

Use Case(s)

The Amazon QuickSight dashboards deployed through this solution are designed to serve several use cases. Here are just a few examples:

  • View email and SMS costs per Campaign and Journey.
  • Deep dive into engagement insights and performance (e.g., SMS events, email events, campaign events, journey events).
  • Schedule reports to various business stakeholders.
  • Track individual email & SMS statuses to specific endpoints.
  • Analyze open and click rates based on the message send time.

These are some of the use cases these dashboards support, and with all the data points available in Amazon QuickSight, you can create your own views and widgets based on your specific requirements.

Solution Overview

This solution builds upon the Digital User Engagement (DUE) Event Database AWS Solution. It creates a long-term Amazon Pinpoint event data lake, and builds a QuickSight dashboard to visualize and analyze this data. It leverages several other AWS services to tie it all together: AWS Lambda for processing AWS CloudTrail data, Amazon Athena to build views using SQL for Amazon QuickSight, AWS CloudTrail to record any new campaign, journey, and segment updates, and Amazon DynamoDB to store the campaign, journey, and segment metadata. The solution can be segmented into three logical portions: 1) Pinpoint campaign/journey/segment lookup tables; 2) Amazon Athena views; 3) Amazon QuickSight resources.

The AWS Cloud Development Kit (CDK) is used to deploy this solution to your account. AWS CDK is an open-source software development framework for defining cloud infrastructure as code with modern programming languages and deploying it through AWS CloudFormation.

Pinpoint campaign/journey/segment lookup tables

Figure – Architecture diagram

  1. A CloudFormation AWS Lambda-backed custom resource function adds current Pinpoint campaign, journey and segment meta data to Amazon DynamoDB lookup tables. An AWS CloudFormation custom resource is managed by a Lambda function that runs only upon the deployment, update and deletion of the AWS CloudFormation stack.
  2. AWS CloudTrail logs record API actions to an S3 bucket every 5 minutes.
  3. When an AWS CloudTrail log is written to the S3 bucket an AWS Lambda function is invoked and checks for Amazon Pinpoint campaigns/journeys/segments management events such as create, update and delete.
  4. For every Amazon Pinpoint action the AWS Lambda function finds, it queries Amazon Pinpoint to get the respective resource details.
  5. The AWS Lambda function will create or update records in the Amazon DynamoDB table to reflect the changes.
  6. This solution also deploys an Amazon Athena DynamoDB connector. Amazon Athena uses this to query the Amazon DynamoDB lookup tables to enrich the data in the Amazon Pinpoint event data lake.
  7. The Amazon Athena to Amazon DynamoDB connector requires an Amazon S3 spill bucket for any data that exceeds the AWS Lambda function limits.

Amazon Athena views

Amazon Athena views are crucial for querying and organizing the data. These views allow QuickSight to interact with the Pinpoint event data lake through standard SQL queries and views. Here’s how they’re set up:

The application creates several named queries (called saved queries in the Amazon Athena console). Each named query uses a SQL statement to create a database view containing a subset of the data from the Pinpoint event data lake (or joins data from a previous view with the Amazon DynamoDB tables created above). The views are also created using an AWS Lambda-backed custom resource.

Amazon QuickSight resources

Figure – Amazon QuickSight resources

  1. This solution creates several Amazon QuickSight resources to support the deployed dashboard. These include data sources, datasets, refresh schedules, and an analysis. The refresh schedule determines the frequency that Amazon QuickSight queries the Amazon Athena views to update the datasets.
  2. Amazon Athena retrieves live data from the DUE event database data lake and the Athena DynamoDB Connector whenever the Amazon QuickSight refresh schedule runs.

Prerequisites

  • Deploy the Digital User Engagement (DUE) Event Database solution before continuing
    • After you have deployed this solution, gather the following data from the stack’s Resources section.
      • DUES3DataLake: You will need the bucket name
      • PinpointProject: You will need the project Id
      • PinpointEventDatabase: This is the name of the Glue Database. You will only need this if you used something other than the default of due_eventdb

Note: If you are installing the DUE event database for the first time as part of these instructions, your dashboard will not have any data to show until new events start to come in from your Amazon Pinpoint project.

Once you have the DUE event database installed, you are ready to begin your deployment.

Implementation steps

Step 1 – Ensure that Amazon Athena is setup to store query results

Amazon Athena uses workgroups to separate users, teams, applications, or workloads, to set limits on the amount of data each query or the entire workgroup can process, and to track costs. There is a default workgroup called “primary”. However, before you can use this workgroup, it needs to be configured with an Amazon S3 bucket for storing the query results.

  1. If you do not have an existing S3 bucket you can use for the output, create a new Amazon S3 bucket.
  2. Navigate to the Amazon Athena console and from the menu select workgroups > primary > Edit > Query result configuration
    1. Select the Amazon S3 bucket and any specific directory for the Athena query result location

Note: If you choose to use a workgroup other than the default “primary” workgroup, please take note of the workgroup name to be used later.
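
If you prefer the AWS CLI, the query result location from step 2 above can also be configured with a command along these lines (the bucket name is a placeholder):

aws athena update-work-group \
    --work-group primary \
    --configuration-updates "ResultConfigurationUpdates={OutputLocation=s3://your-athena-results-bucket/}"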

Step 2 – Enable Amazon QuickSight

Amazon QuickSight offers two types of data sets: Direct Query data sets, which provide real-time access to data sources, and SPICE (Super-fast, Parallel, In-memory Calculation Engine) data sets, which are pre-aggregated and cached for faster performance and scalability and can be refreshed on a schedule.

This solution uses SPICE datasets set to incrementally refresh on a cycle of your choice (Daily or Hourly). If you have already setup Amazon QuickSight, please navigate to Amazon QuickSight in the AWS Console and skip to step 3.

  1. Navigate to Amazon QuickSight on the AWS console
  2. Set up an Amazon QuickSight account by clicking the “Sign up for QuickSight” button.
    1. You will need to setup an Enterprise account for this solution.
    2. To complete the process for the Amazon QuickSight account setup follow the instructions at this link
  3. Ensure you have the Admin Role
    1. Choose the profile icon in the top right corner, select Manage QuickSight and click on Manage Users
    2. Subscription details should display on the screen.
  4. Ensure you have enough SPICE capacity for the datasets
    1. Choose the profile icon, and then select Manage QuickSight
    2. Click on SPICE Capacity
  5. Make sure you have enough SPICE capacity for all three datasets
    1. If you are still in the free tier, you should have enough for initial testing.
    2. You will need about 2GB of capacity for every 1,000,000 Pinpoint events that will be ingested into SPICE.
    3. Note: If you do not have enough SPICE capacity, deployment will fail.
  6. Please note the Amazon QuickSight username. You can find this by clicking the profile icon. Example username: Admin/user-name

Step 3 – Collect the Amazon QuickSight Service Role name in IAM

For Amazon Athena, Amazon S3, and Athena Query Federation connections, Amazon QuickSight uses the following IAM “consumer” role by default: aws-quicksight-s3-consumers-role-v0

If the “consumer” role is not present, then QuickSight uses the following “service” role instead: aws-quicksight-service-role-v0.

The version number at the end of the role could be different in your account. Please validate your role name with the following steps.

  1. Navigate to the Identity and Access Management (IAM) console
  2. Go to Roles and search QuickSight
  3. If the consumer role exists, please note its full name
  4. If you only find the service role, please note its full name

Note: For more details on these service roles, please see the QuickSight User Guide

Step 4 – Prepare the CDK Application

Deploying this solution requires no previous experience with the AWS CDK toolkit. If you would like to familiarize yourself with CDK, the AWS CDK Workshop is a great place to start.

  1. Setup your integrated development environment (IDE)
    1. Option 1 (recommended for first time CDK users): Use AWS Cloud9 – a cloud-based IDE that lets you write, run, and debug your code with just a browser
      1. Navigate to Cloud9 in the AWS console and click the Create Environment button
      2. Provide a descriptive name to your environment (e.g. PinpointAnalysis)
      3. Leave the rest of the values as their default values and click Create
      4. Open the Cloud9 IDE
        1. Node, TypeScript, and CDK should come pre-installed. Test this by running the following commands in your terminal.
          1. node --version
          2. tsc --version
          3. cdk --version
          4. If dependencies are not installed, follow the Step 1 instructions from this article
        2. Using AWS Cloud9 will incur a nominal charge if you are no longer Free Tier eligible. However, AWS Cloud9 will simplify setup if you do not already have a local environment with AWS CDK and the AWS CLI installed.
    2. Option 2: local IDE such as VS Code
      1. Setup CDK locally using this documentation
      2. Install Node, TypeScript and the AWS CLI
        1. Once the CLI is installed, configure your AWS credentials
          1. aws configure
  2. Clone the Pinpoint Dashboard Solution from your terminal by running the command below:
    1. git clone https://github.com/aws-samples/digital-user-engagement-events-dashboards.git
  3. Install the required npm packages from package.json by running the commands below:
    1. cd digital-user-engagement-events-dashboards
    2. npm install

Open the file at digital-user-engagement-events-dashboards/bin/pinpoint-bi-analysis.ts for editing in your IDE.

Edit the following code block in your solution with the information you gathered in the previous steps. Please reference Table 1 for a description of each editable field.

const resourcePrefix = "pinpoint_analytics_";

...

new MainApp(app, "PinpointAnalytics", {
  env: {
    region: "us-east-1",
  },
  
  //Attributes to change
  dueDbBucketName: "{bucket-name}",
  pinpointProjectId: "{pinpoint-project-id}",
  qsUserName: "{quicksight-username}",

  //Default settings
  athenaWorkGroupName: "primary",
  dataLakeDbName: "due_eventdb",
  dateRangeNumberOfMonths: 6,
  qsUserRegion: "us-east-1", 
  qsDefaultServiceRole: "aws-quicksight-service-role-v0", 
  spiceRefreshInterval: "HOURLY",

  //Constants
  athena_util: athena_util,
  qs_util: qs_util,
});
Table 1 – Attributes to configure

  • resourcePrefix – The prefix for all created Athena and QuickSight resources. Example: pinpoint_analytics_
  • region – Where new resources will be deployed. This must be the same region where the DUE event database solution was deployed. Example: us-east-1
  • dueDbBucketName – The name of the DUE event database S3 bucket. Example: due-database-xxxxxxxxxxus-east-1
  • qsUserName – The name of your QuickSight user. Example: Admin/my-user
  • athenaWorkGroupName – The Athena workgroup that was previously configured. Example: primary
  • dataLakeDbName – The Glue database created by the DUE event database solution. By default the database name is “due_eventdb”. Example: due_eventdb
  • dateRangeNumberOfMonths – The number of months of data the Athena views will contain. QuickSight SPICE datasets will contain this many months of data initially and on full refresh; the QuickSight dataset will add new data incrementally without deleting historical data. Example: 6
  • qsUserRegion – The region where your QuickSight user exists. By default, new users are created in us-east-1. You can check your user location with the AWS CLI (aws quicksight list-users --aws-account-id {account-id} --namespace default) and look for the region in the arn. Example: us-east-1
  • qsDefaultServiceRole – The service role collected during Step 3. Example: aws-quicksight-service-role-v0
  • spiceRefreshInterval – How often the SPICE 7-day incremental window will be refreshed; options include HOURLY and DAILY. Example: DAILY

Step 5 – Deploy

  1. CDK requires you to bootstrap in each region of an account. This creates an S3 bucket for deployment. You only need to bootstrap once per account/region.
    1. cdk bootstrap
  2. Deploy the application
    1. cdk deploy

Step 6 – Explore

Once your solution deploys, look for the outputs provided by the CDK CLI. You will find a link to your new Amazon QuickSight analysis, or dashboard, as well as a few other key resources. Also, explore the resources sections of the deployed stacks in AWS CloudFormation for a complete list of deployed resources. In AWS CloudFormation, you should see two stacks: the main stack, called PinpointAnalytics, and a nested stack.

Pricing

The total cost to run this solution will depend on several factors. To help explore what the costs might look like for you, please look at the following examples.

All costs outlined below will assume the following:

  • 1 Amazon QuickSight author
  • 100 Amazon QuickSight analysis reader sessions
  • 100k write API actions for all services in AWS account
  • A total of 1k Amazon Pinpoint campaigns, journeys, and segments resulting in 1k Amazon DynamoDB records
  • 5 million monthly Amazon Pinpoint events – email send, email delivered, etc.

Base Costs:

  • 1 Amazon QuickSight author – can edit all Amazon QuickSight resources
    • $24 – There is a 30-day trial for 4 authors in the free tier
  • 100 Amazon QuickSight analysis reader sessions OR 6 readers with unlimited access – max $5 per month per reader
    • $30
  • Total Monthly Costs: $54 / month

Variable Costs:

Even with the assumptions listed above, the costs will vary depending on the chosen data retention window as well as the refresh schedule.

  • SPICE data storage costs.
    • Total size of storage will depend on how many months you choose to display in the dashboard
    • For the above assumptions, the SPICE datasets will cost roughly $3.25 for each month stored in the datasets.
  • Amazon Athena data volume costs
    • With Athena you are charged for the total number of bytes scanned by a query. The solution implements incremental data refreshes in SPICE: Amazon QuickSight will only query and update the most recent 7 days of data during each refresh cycle. This can be adjusted as needed.

Scenario 1 – 6-month data analysis with daily refresh:

  • Fixed costs: $57
  • SPICE datasets: $19.50
  • Athena Scans: $1.25
  • Total Costs: $77.75 / Month

Scenario 2 – 12-month data analysis with daily refresh:

  • Fixed costs: $57
  • SPICE datasets: $39
  • Athena Scans: $1.25
  • Total Costs: $97.25 / Month

Scenario 3 – 12-month data analysis with hourly refresh:

  • Fixed costs: $57
  • SPICE datasets: $39
  • Athena Scans: $27.50
  • Total Costs: $123.5 / Month

Note: Several services were not mentioned in the above scenarios (e.g., DynamoDB, CloudTrail, Lambda). The limited usage of these services resulted in a combined cost of less than a few US dollars per month. Even at greater scale, the costs from these services will not increase in any significant way.

Clean up

  • Delete the CDK stack running the following from your command line
    • cdk destroy
  • Delete QuickSight account
  • Delete Athena views
    • Go to Glue > Data Catalog > Databases > Your Database Name
    • This should delete all Athena views no longer needed. Views created will start with the resourcePrefix specified in the bin/athena-quicksight-cdk.ts file
  • Delete S3 buckets
    • DynamoDB cloud watch log bucket
    • Dynamo Athena Connector Spill bucket
    • Athena workgroup output bucket
  • Delete DynamoDB tables
    • This solution creates two DynamoDB lookup tables prefixed with the Stack name

Conclusion

In this blog, you have deployed a solution that visualizes Amazon Pinpoint’s email and SMS engagement data using Amazon QuickSight. This solution provides you with an Amazon QuickSight functional dashboard as well as a foundation to design and build new Amazon QuickSight dashboards that meet your bespoke requirements. Parts of the solution, such as the Amazon Athena views, can be ingested with other business intelligence tools that your business might already be using.

Next steps

This solution can be expanded to include Amazon Pinpoint engagement events from other channels such as push notifications, Amazon Connect outbound calls, in-app and custom events. This will require certain updates on the Amazon Athena views and consequently on the Amazon QuickSight dashboards. Furthermore, the Amazon DynamoDB tables store only campaign, journey and segment meta-data. You can extend this part of the solution to include message template meta-data, which will help to analyze performance per message template.

Considerations / Troubleshooting

  • An Amazon QuickSight Standard account can be upgraded to an Enterprise account. Enterprise accounts cannot be downgraded to a Standard account.
  • SPICE capacity is allocated separately for each AWS Region. Default SPICE capacity is automatically allocated to your home AWS Region. For each AWS account, SPICE capacity is shared by all the people using QuickSight in a single AWS Region. The other AWS Regions have no SPICE capacity unless you choose to purchase some.
  • The QuickSight analysis event rates are calculated at the Pinpoint message_id and endpoint_id grain: the click rate will be the same whether a user clicks an email link once or multiple times.
  • All timestamps are in UTC. To display data in another timezone, edit the event_timestamp_timezone calculated field in every dataset.
  • Data inside Amazon QuickSight will refresh depending on the schedule set during deployment. Current options include hourly and daily refreshes.
  • AWS CloudTrail has a limit of five trails per AWS account.

About the Authors

Spencer Harrison

Spencer Harrison

Spencer was a 2023 WWPS Solution Architect intern at Amazon Web Services. He will graduate with his Masters of Information Systems Management from Brigham Young University in the spring of 2024. After graduation he is aspiring to find opportunities as a solution architect, cloud engineer, or DevOps engineer. Outside of work, Spencer loves going outdoors to wake surf, downhill ski, and play pickle ball.

Daniel Wells

Daniel Wells

With over 20 years of IT experience, Daniel has held many architecture and director positions supporting a wide variety of technologies. He currently works as an AWS Solutions Architect supporting Education Technology companies striving to make a difference for learners and educators worldwide. Daniel’s interests outside of work include music, family, health, education and anything that allows him to express himself creatively.

Pavlos Ioannou Katidis

Pavlos Ioannou Katidis

Pavlos Ioannou Katidis is an Amazon Pinpoint and Amazon Simple Email Service Senior Specialist Solutions Architect at AWS. He enjoys diving deep into customers’ technical issues and helping them design communication solutions. In his spare time, he enjoys playing tennis, watching crime TV series, playing FPS PC games, and coding personal projects.

Improve email delivery rates with email delivery and engagement history for every email

Post Syndicated from sakoppes original https://aws.amazon.com/blogs/messaging-and-targeting/improve-email-delivery-rates-with-email-delivery-and-engagement-history-for-every-email/

Email is a ubiquitous way to reach customers, whether to stay in touch, offer new services, or inform customers of product changes or transaction status. Amazon Simple Email Service helps customers send hundreds of billions of emails each month, and now offers more tools to improve email delivery rates and explore campaign success. SES’ Virtual Deliverability Manager now supports email delivery and engagement history, giving customers the ability to easily troubleshoot and investigate email delivery activities. Customers can verify the delivery of emails, identify the source of deliverability challenges, and find means to improve their email delivery rates. This reduces the effort needed to debug delivery problems, and lowers the mean time to resolution when responding to email delivery operational events.

How did Amazon SES’ deliverability features work before?

Previously, customers could get powerful insights into their delivery rates using SES Virtual Deliverability Manager, but they could not explore delivery activity at the level of individual messages. Customers could see overall send, delivery, bounce, and open/click rates for their accounts, as well as by mailbox provider (e.g. Gmail or Hotmail), sending email address, and configuration set. These insights were aggregated across multiple emails, making it difficult to find specific examples of failures to support troubleshooting efforts. It was also not possible to verify individual email delivery status and troubleshoot specific failures, which is a common customer support use case. Customers could implement custom solutions tracking delivery events emitted by SES, but this required custom coding and private data store maintenance. Often the associated effort was a barrier to customers, and they simply did not have an effective way to troubleshoot delivery problems at scale.

What new capabilities are available to enhance email deliverability?

Now, customers can see the delivery transaction status and details for every email they have sent in the last 30 days. As an integrated part of the SES Virtual Deliverability Manager experience, customers have a seamless search and drilldown experience built into the AWS SES console, to search for email transaction records and explore delivery status. It’s easy to pivot by multiple criteria, including searches by sending email address, or looking at the emails sent to a specific recipient. Customers can narrow down searches to specific timeframes and sample based on delivery status. The flexible query capabilities of Virtual Deliverability Manager help customers quickly solve a variety of use cases, from verifying whether an order shipment notification has been delivered, to sampling mailbox provider responses to failed email deliveries when investigating a drop in delivery rates.

Figure – Email search results with a list of emails retrieved based on search criteria.

What can I do with the new email delivery and engagement history capabilities?

Look up individual email recipient history:

Say you have a customer support team that gets complaints from customers about not getting specific transactional emails, such as a shipping confirmation email. Without a custom solution, it was difficult to confirm whether a specific email sent through SES had indeed reached the target recipient. Now you can look up the recent email history of a single recipient, and quickly see all the emails sent to that recipient with delivery and engagement status. It’s easy to see, for example, if an email bounced because a customer’s mailbox was full. It’s easy to get a fast and concise answer, streamlining investigations into single delivery events.

Understand drops in email delivery rates:

Another interesting case is when you have a drop in deliverability success rates and you need to find out why. Now you can easily search for emails that bounced within a specific timeframe, and look at the message received from the mailbox provider, which describes the bounce reason. These messages often contain useful information, such as whether a message was identified as spam or the sender is listed on a public blocklist. You can also see if send actions are failing because recipients are on your blocklist. This helps quickly narrow down the possible root causes and resolve delivery issues.

Figure – Email delivery transaction details showing the delivery message.

Find your most engaged customers:

It’s also helpful to be able to find lists of customers who might benefit from focused marketing efforts. When you send emails through SES, you can track whether recipients open the emails and whether they click on links in the content. In aggregate these metrics help show campaign success, but now you can find out who engaged with your emails to drive further outreach efforts. Searching for emails from a specific sender that have had a click event will give you a list of recipients who may be responsive to further actions. It takes just seconds with Virtual Deliverability Manager, and you can use the Export feature to easily pull your search results into a spreadsheet.

Figure – Email search results filtered by engagement status.

How to get started with email delivery and engagement history:

To get started using email delivery and engagement history, if you have Virtual Deliverability Manager enabled, just open the AWS SES console, navigate to the Virtual Deliverability Manager Dashboard in the left navigation, and click on the “Messages” tab. For customers who wish to enable Virtual Deliverability Manager, just open the AWS SES console and navigate to the Virtual Deliverability Manager page in the left navigation, and follow the instructions to turn on Virtual Deliverability Manager.

If you want to learn more, see the Virtual Deliverability Manager dashboard documentation.

How to enable deliverability features in Amazon SES

To get started with SES, visit https://aws.amazon.com/ses/.

Migrating to a cloud ESP: How to onboard to Amazon SES

Post Syndicated from Vinay Ujjini original https://aws.amazon.com/blogs/messaging-and-targeting/migrating-to-amazon-ses-a-comprehensive-guide/

Email remains a powerful tool for businesses, whether for marketing campaigns, transactional notifications, or other communications. Amazon Simple Email Service (Amazon SES) is a cloud email service provider that can integrate into any application for bulk email sending. Amazon SES supports a variety of deployments, such as transactional emails, system alerts, marketing/promotional/bulk emails, streamlined internal communications, and emails triggered by CRM systems. When you use Amazon SES to send transactional emails, marketing emails, or newsletter emails, you only pay for what you use. Analytics on sender statistics, along with managed services like Virtual Deliverability Manager, help businesses make every email count with Amazon SES. You get reliable, scalable email to communicate with customers at the best industry prices. If you are considering Amazon SES for its scalability, cost-effectiveness, and reliability, this guide will walk you through a systematic migration process.

Scenarios to consider:

When considering a migration to Amazon SES, start by assessing your specific scenario. These scenarios represent different contexts or situations that a business or individual may be in, and each has its unique challenges and considerations. By identifying the appropriate scenario for your situation, you can tailor your migration strategy, anticipate potential challenges, and streamline the transition process. A few common scenarios:

  • Migrating from on-Prem to SES

    • Advantages:

      • Scalability: SES automatically scales with your needs, thus ensuring you don’t face downtimes or need to regularly upgrade your infrastructure.
      • Maintenance/overhead: Maintaining an on-Prem email system can be complex and resource-intensive. Tasks include hardware maintenance and scalability, backup and disaster recovery, and security and compliance (relevant to email storage and transmission).
      • Cost-Effectiveness: You only pay for what you send, eliminating overhead costs associated with maintaining and upgrading on-Prem email infrastructure.
      • Security: SES offers built-in security features like email encryption in transit and at rest, and DKIM authentication with automated key rotation, allowing for sending DMARC compliant email.
    • Considerations:

      • Email Sending Limits: SES has sending limits to protect customers from deliverability events resulting from unexpected sending volumes. Customers monitor when they have reached or are approaching their anticipated sending volumes, and may request the limits to be increased.
      • Migration Time: Depending on the volume and complexity, migration has to be planned and executed to minimize downtime, maintain data and sending integrity, and maintain high deliverability. This blog goes into detail on the migration process.
      • Email authentication: Setting up email authentication records such as DKIM, SPF, DMARC and BIMI: Ensure you set up domain authentication to allow mailbox providers to build a trusted model based on the messages from your domain. Sending authenticated mail is the best path to deliverability. Additionally adding trust factors to your messages like BIMI (brand indicators for message identification) will help with brand recognition both by the mailbox provider and the end-recipient (ISPs & mailbox providers use DKIM as the authenticated identifier for the trust models to determine if to show the BIMI logo).
  • Migrating from another cloud solution to SES

    • Advantages:

      • Cost Savings: Amazon SES is cost-effective, especially at high volumes.
      • Integration with AWS Services: If you’re using other AWS services, integration is easier with Amazon SES.
      • Expert help: Amazon SES provides email expertise, from architectural advice to help with the technical aspects of migrating from one service to another, along with access to email industry experts, including deliverability-focused specialists.
    • Considerations:

      • Transition Period/migration: Follow the migration path to mitigate transition risks.
      • Update Integrations: Any software or applications integrated with your previous cloud service will need to be reconfigured to work with Amazon SES (ex: SMTP, events, capturing feedback, metrics, etc.).
      • Avoid downtime: You can avoid downtime by ramping up sending gradually by moving each use case into configuration sets and applying warm-up patterns to each campaign as you shift traffic from existing service to Amazon SES.
  • Migrating a portion of the load and running a hybrid solution

    • Advantages:

      • Flexibility: You can maintain operations on your existing platform while testing and transitioning to SES, ensuring there’s no disruption.
      • Risk Mitigation: You can monitor your migration progress in multiple steps rather than one single step.
      • Phased Implementation: You can migrate in stages, reducing the complexity of the move.
    • Considerations:

      • Complexity: Running two systems simultaneously will introduce operational & management complexities (For example, maintaining customer opt-out preferences and suppressed email addresses need to be synced into the source lists/database).
      • Cost Implications: While you’re transitioning, you will be paying for two services, which has a cost implication.
      • Consistent Branding: Ensure consistent branding and email design across both platforms to provide a uniform experience for recipients and leverage the same domain identities authenticated with DKIM so that their prior sending reputation is carried over.

Steps for migration:

1. Identify use cases: Before diving into the technicalities, understand and break down the types of emails you plan to migrate:

    1. Marketing Campaign emails (e.g., cross-sell, up-sell, new product released)
    2. Transactional Emails (e.g., order confirmations, password resets)
    3. Regular business communications
    4. Inbox use cases
    5. Others (ex: OTP, acquisition, etc.)

2. Architect the flow by splitting marketing and transactional traffic: Differentiate between marketing and transactional emails, ensuring they are distinctly separated. This improves email management and deliverability monitoring, and ensures high-priority transactional emails aren’t delayed by large marketing campaigns. It is highly recommended to split transactional and marketing email traffic across separate subdomains. Choose whether to use your primary domain (example.com) or a sub-domain (mail.example.com) for sending emails. Using sub-domains can help divide email traffic and manage domain reputations separately, for example marketing.example.com and transactional.example.com. You can also create configuration sets, which are sets of rules that are applied to the emails that you send. For example, you can use configuration sets to specify where notifications are sent when an email is delivered, when a recipient opens a message or clicks a link in it, when an email bounces, and when a recipient marks your email as spam. For more information, see Using configuration sets in Amazon SES.
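To illustrate this split, here is a minimal sketch using the AWS SDK for Python (boto3) and the SES v2 API. The configuration set names, addresses, and region are illustrative assumptions, not values prescribed by SES.

import boto3

# Region and names below are illustrative assumptions.
sesv2 = boto3.client("sesv2", region_name="us-east-1")

# One configuration set per traffic type keeps rules and event
# destinations for transactional and marketing mail separate.
for name in ("transactional-emails", "marketing-emails"):
    sesv2.create_configuration_set(ConfigurationSetName=name)

# Send a transactional message through its own configuration set and
# subdomain, keeping its reputation isolated from marketing traffic.
sesv2.send_email(
    FromEmailAddress="orders@transactional.example.com",
    Destination={"ToAddresses": ["recipient@example.com"]},
    Content={"Simple": {
        "Subject": {"Data": "Your order confirmation"},
        "Body": {"Text": {"Data": "Thanks for your order!"}},
    }},
    ConfigurationSetName="transactional-emails",
)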

3. Domain verification: Sending authorization policies act as the gatekeeper for authorizing use of a domain identity. Domain verification is the process by which Amazon SES confirms that the customer owns the domain; once verified, outbound messages are signed with a DKIM signature aligned to the domain in the “From” header address. It is a foundational step towards a secure, reputable, and efficient email-sending program. Here’s why domain verification is essential and how it benefits users:

Why is Domain Verification Needed?

  1. Ownership Assurance: Domain verification ensures that the customer is authorized to send emails from the specified domain. By confirming ownership, only customers who have verified a domain identity will have their messages authenticated with a DKIM signature belonging to the domain.
  2. Reduce Spam and Phishing: Ensuring that only verified domain owners can send emails contributes to a trustworthy email ecosystem. Using a verified domain identity ensures that the message is signed with a DKIM signature aligned to the domain in the from header, which means that the message will pass DMARC-style policy enforcement (DMARC describes how receivers should handle unauthenticated messages claiming to be from the domain).
  3. Maintain Domain Reputation: If anyone could send emails from any domain, senders could damage the reputation of domains they don’t own. Sending from a verified domain ensures that your domain’s reputation remains intact and is not misused by others.
  4. Compliance with SES Policies: Amazon has set policies to maintain the integrity and reputation of its SES service. Domain verification is in line with these policies, ensuring that all users follow best email practices.

How does domain verification help you?

  1. Enhanced Deliverability: Emails from verified domains are more likely to reach the recipient’s inbox rather than being flagged as spam. Internet Service Providers (ISPs), mailbox providers and email clients trust emails that come from verified sources.
  2. Builds Trust with Recipients: Because sending from a domain requires proving ownership of it, recipients can trust that messages actually come from who they purport to come from.
  3. Enables Additional Features: In Amazon SES, once your domain is verified, you can also set up domain authentication mechanisms like Sender Policy Framework (SPF), DomainKeys Identified Mail (DKIM), Domain-based Message Authentication Reporting and Conformance (DMARC), and Brand Indicators for Message Identification (BIMI). These further enhance email deliverability and security.
  4. Monitoring and Reporting: By verifying your domain, you can access granular metrics specific to your domain in the SES dashboard. You can use VDM and its out-of-the-box dashboards, which include metrics specific to verified identities. This helps in monitoring and improving your email sending practices.
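As a concrete illustration, the following sketch creates a domain identity with the SES v2 API and prints the Easy DKIM CNAME records to publish in DNS; example.com is a placeholder domain.

import boto3

sesv2 = boto3.client("sesv2")

# Creating the identity starts verification; with Easy DKIM, SES returns
# three tokens that must be published as CNAME records in the domain's DNS.
resp = sesv2.create_email_identity(EmailIdentity="example.com")
for token in resp["DkimAttributes"]["Tokens"]:
    print(f"{token}._domainkey.example.com CNAME {token}.dkim.amazonses.com")

# After DNS propagates, confirm the identity is verified for sending.
identity = sesv2.get_email_identity(EmailIdentity="example.com")
print("Verified:", identity["VerifiedForSendingStatus"])
print("DKIM status:", identity["DkimAttributes"]["Status"])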

4. Testing in sandbox: Amazon SES starts new accounts in a sandbox environment. Here, you can test sending to verified email addresses only, without affecting your production environment or domain reputation. The sandbox limits the number of emails you can send per day.

5. Request production access: Once ready, request production access by following the steps outlined here: https://docs.aws.amazon.com/ses/latest/dg/request-production-access.html
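If you prefer to request production access programmatically rather than through the console, the SES v2 API exposes PutAccountDetails; a sketch follows, in which the website URL and use-case description are placeholders you should replace with your own details.

import boto3

sesv2 = boto3.client("sesv2")

# Ask for the account to be moved out of the sandbox. AWS reviews the
# request, so describe your real sending use case.
sesv2.put_account_details(
    MailType="TRANSACTIONAL",
    WebsiteURL="https://www.example.com",
    UseCaseDescription="Order confirmations and password resets for our store.",
    ProductionAccessEnabled=True,
)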

6. Configure domain authentication: You can configure your domain to use authentication systems such as DKIM and SPF. This step is technically optional, but highly recommended. By setting up either DKIM or SPF (or both) for your domain, you can improve the deliverability of your emails and increase the amount of trust that your customers have in you.
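For SPF alignment specifically, you can configure a custom MAIL FROM domain on the verified identity. Below is a sketch assuming mail.example.com as the MAIL FROM subdomain; you must also publish the MX and SPF (TXT) records that SES documents for that subdomain.

import boto3

sesv2 = boto3.client("sesv2")

# Use a custom MAIL FROM domain so SPF aligns with your own domain
# instead of amazonses.com.
sesv2.put_email_identity_mail_from_attributes(
    EmailIdentity="example.com",
    MailFromDomain="mail.example.com",
    BehaviorOnMxFailure="USE_DEFAULT_VALUE",  # fall back to amazonses.com if MX lookup fails
)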

7. IP management: When you create a new Amazon SES account, by default your emails are sent from IP addresses that are shared with other SES users. You can use dedicated IP addresses that are reserved for your exclusive use by leasing them for an additional cost. This gives you complete control over your sender reputation and enables you to isolate your reputation for different segments within email programs. Amazon SES offers four ways of managing IPs, outlined below:

  1. Shared: Emails are sent through shared IPs.
  2. Dedicated: Emails are sent through dedicated IPs.
  3. Managed dedicated: Emails are sent through dedicated IPs and Amazon SES will determine how many dedicated IPs you require based on your sending patterns. Amazon SES will create them for you, and then manage how they scale based on your sending requirements.
  4. BYOIP: Amazon SES includes a feature called Bring Your Own IP (BYOIP), which makes it possible to use your own IP addresses to send email through Amazon SES. If you already use a range of IP addresses to send email, you can request that we make your IP range (minimum range allowed is /24) available for sending email through Amazon SES.

Based on your use case and needs, you can decide how to proceed with IPs after reviewing the comparison matrix.

8. IP Warm up: IP warm-up is a crucial process when introducing a new IP address for sending emails. The goal is to progressively increase email volume sent through the new IP address, allowing mailbox providers to gradually recognize and trust this IP as a legitimate email sender. Sending reputation is built from a combination of the sending domain and the IP addresses through which messages are delivered.

  • Why is IP warm-up necessary? When an IP address (or a set of them) is new or has been dormant for a while, it lacks a reputation with mailbox providers. If you suddenly start sending large volumes of email from a new IP, mailbox providers may perceive this behavior as suspicious, potentially categorizing the emails as spam or even blocking them. Warming up the IP helps establish a positive sending reputation over time, so that mailbox providers can build a positive profile of your sending that includes IP reputation.
  • IP warm-up process:
    • Start Small: Begin by sending a low volume of emails on the first day.
    • Gradually Increase Volume: Each subsequent day, increase the volume. A common strategy is to double the volume every other day, but this depends on your ultimate email volume needs.
    • Target Engaged Users First: In the initial stages, send emails to your top engaged users—those who are more likely to open, click, and not mark your emails as spam. Their positive engagement will bolster the IP’s reputation.
    • Monitor Deliverability Metrics: Keep a close eye on key metrics like delivery rates, open rates, bounce rates, and complaint rates. If you notice issues, you need to slow down the warm-up process.
    • Respond to Feedback: Some mailbox providers offer feedback loops where you can see if recipients marked your emails as spam. This feedback is invaluable during the warm-up phase to adjust your email practices.
    • Spread Sends Throughout the Day: Instead of sending all your emails at once, distribute them throughout the day. This creates a more consistent sending pattern that mailbox providers favor.
    • Continue Best Email Practices: While warming up your IP, it’s crucial to maintain best practices like segmenting your list, regularly cleaning your email list, and sending relevant content.
    • Understand your mailbox provider and domain distribution breakdown. For example, if you send to 65% gmail.com users, you will want to focus heavily on the Gmail postmaster page and also set up the tooling available for that specific mailbox provider. In the case of Gmail, that is Google Postmaster Tools.
    • Identify and track any available reputation tooling for Mailbox Providers you send to. Example: Google Postmaster Tools, Hotmail SNDS, Yahoo Performance Feeds.
    • During warm-up, monitor these daily to track reputation progress.

9. Additional considerations:

  • If you are planning on using a dedicated IP, warming up is crucial. For dedicated or managed dedicated IPs, you either need to warm them up manually or you can leverage Amazon SES’s auto warm-up feature. Shared IP pools (used by ESPs for smaller senders) don’t require individual warm-ups since they have an established reputation.
  • The warm-up duration varies. For some, it might take 3-4 weeks, while for others, it could stretch to a couple of months, depending on the final email volume you intend to reach.
  • Let’s use an example scenario:
    • Number of emails to be migrated – 10M emails/day.
    • Peak volume throughput – 2M/hour.
    • The below table shows a sample warm-up schedule.
Days | Emails sent
Day 1 | 5,000
Day 3 | 10,000
Day 5 | 20,000
Day 7 | 40,000
Day 9 | 80,000
Day 11 | 160,000
Day 13 | 320,000
Day 15 | 640,000
Day 17 | 1,280,000
Day 19 | 2,560,000
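The schedule above follows a simple doubling pattern. Here is a small Python sketch of that pattern, assuming a 5,000-email start, doubling every other day toward the 10M/day target from the example scenario:

def warmup_schedule(start=5_000, target=10_000_000, step_days=2):
    """Yield (day, volume) pairs, doubling each step until the target."""
    day, volume = 1, start
    while volume < target:
        yield day, volume
        day += step_days
        volume *= 2
    yield day, target  # final step caps at the target daily volume

for day, volume in warmup_schedule():
    print(f"Day {day}: {volume:,} emails")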

10. Generate SMTP credentials: If you plan to send email using an application that uses SMTP, you have to generate SMTP credentials. Your SMTP credentials are different from your regular AWS credentials. These credentials are also unique in each AWS Region. For more information on generating your SMTP credentials, see Obtaining Amazon SES SMTP credentials.

11. Connect to SMTP endpoint: If you use a message transfer agent such as postfix or sendmail, you have to update the configuration for that application to refer to an Amazon SES SMTP endpoint. For a complete list of SMTP endpoints, see Connecting to an Amazon SES SMTP endpoint. Note that the SMTP credentials that you created in the previous step are associated with a specific AWS Region. You have to connect to the SMTP endpoint in the region that you created the SMTP credentials in.
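As an illustration, here is a minimal Python sketch that sends a message through the SES SMTP interface. The endpoint assumes us-east-1, and the credentials and addresses are placeholders; use the region-specific endpoint and the SMTP credentials you generated in the previous step.

import smtplib
from email.message import EmailMessage

HOST = "email-smtp.us-east-1.amazonaws.com"  # region-specific SMTP endpoint
PORT = 587                                   # STARTTLS port
SMTP_USERNAME = "YOUR_SMTP_USERNAME"         # SMTP credentials, not AWS keys
SMTP_PASSWORD = "YOUR_SMTP_PASSWORD"

msg = EmailMessage()
msg["From"] = "sender@example.com"
msg["To"] = "recipient@example.com"
msg["Subject"] = "Amazon SES SMTP test"
msg.set_content("Hello from Amazon SES over SMTP.")

with smtplib.SMTP(HOST, PORT) as server:
    server.starttls()  # SES requires an encrypted connection
    server.login(SMTP_USERNAME, SMTP_PASSWORD)
    server.send_message(msg)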

12. Monitor email send: When you send email through Amazon SES, it’s important to monitor the bounces and complaints for your account. You can use one or more of the following to monitor your email sending:

  1. Reputation metrics: Amazon SES includes a reputation metrics console page that you can use to keep track of the bounces and complaints for your account. For more information, see Using reputation metrics to track bounce and complaint rates.
  2. CloudWatch alarms: You can also create CloudWatch alarms that alert you when these rates get too high. For more information about creating CloudWatch alarms, see Creating reputation monitoring alarms using CloudWatch. (A sample alarm sketch follows this list.)
  3. Virtual Deliverability Manager (VDM): Deliverability, or ensuring your emails reach recipient inboxes instead of spam or junk folders, is a core element of a successful email strategy. Virtual Deliverability Manager is an out-of-the-box Amazon SES feature that helps you enhance email deliverability. It can help increase inbox placement and email conversions by providing insights into your sending and delivery data, and by giving advice on how to fix the issues that are negatively affecting your delivery success rate and reputation. VDM has built-in dashboard and advisor features; visit this VDM blog to see how you can improve your email deliverability using VDM.
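As an example of option 2, here is a sketch of a CloudWatch alarm on the account-level bounce rate. The 5% threshold, SNS topic ARN, and region are assumptions; choose values that match your own risk tolerance and alerting setup.

import boto3

cloudwatch = boto3.client("cloudwatch")

# Alarm when the account bounce rate (reported as a fraction between
# 0 and 1) exceeds 5%, well below the level where SES may take action.
cloudwatch.put_metric_alarm(
    AlarmName="ses-bounce-rate-high",
    Namespace="AWS/SES",
    MetricName="Reputation.BounceRate",
    Statistic="Average",
    Period=3600,
    EvaluationPeriods=1,
    Threshold=0.05,
    ComparisonOperator="GreaterThanThreshold",
    TreatMissingData="notBreaching",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:email-ops-alerts"],  # placeholder ARN
)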

13. Ramp-up ramp-down strategy: Maintaining domain and sending reputation while sending email communications is key for any business. The ramp-up ramp-down strategy, in the context of email migration, especially to a new email sending platform or a new IP address, is a best practice to ensure that your emails maintain a high deliverability rate and don’t end up being flagged as spam. Let’s delve deeper into what this strategy entails and why it’s crucial:

  1. Gradual volume increase: Start by sending a small number of emails (refer to the sample warm-up schedule in the IP warm-up section above) and then gradually increase this number over days or weeks. This slow increase allows mailbox providers to recognize and trust your new sending source. Ramp up gradually by moving over one use case at a time and applying a warm-up pattern to each campaign as you shift traffic. Closely monitor deliverability metrics as you ramp up; if the metrics show any signs of issues, freeze the warm-up to assess the root cause. Stable, predictable sending patterns are key; avoid unexpected spikes.
  2. Prioritize engaged recipients: Begin your email sends by targeting recipients who are most likely to open and engage with your emails, like your top active subscribers or customers. Positive interactions, like email opens or link clicks, can boost your new IP’s reputation.
  3. Monitor Feedback loops: Utilize feedback loops offered by mailbox providers to understand if recipients are marking your emails as spam. This immediate feedback can help you tweak your sending practices.
  4. Maintain consistency: While you’re ramping up, maintain consistency in your sending patterns. Avoid erratic sending volumes, which can be red flags for mailbox providers.
  5. Maintain Domain/IP Reputation: Even if you’re sending fewer emails, ensure those emails still adhere to best practices to maintain your domain or IP reputation.

14. Final cut over: After rigorous testing, ramping up, and ensuring your emails are being delivered reliably, you can fully transition to Amazon SES. Monitor continuously, especially during the initial days, to catch and address any potential issues promptly.


Conclusion:

Migrating to Amazon SES offers a host of benefits, but like all IT endeavors, it requires careful thought and execution. By following this comprehensive guide, you can pave a path for a smooth transition, allowing your business to leverage the power of Amazon SES effectively.

About the author:

Vinay Ujjini

Vinay Ujjini is an Amazon Pinpoint and Amazon Simple Email Service Worldwide Principal Specialist Solutions Architect at AWS. He has been solving customer’s omni-channel challenges for over 15 years. He is an avid sports enthusiast and in his spare time, enjoys playing tennis & cricket.

10DLC Registration Best Practices to Send SMS with Amazon Pinpoint

Post Syndicated from Tyler Holmes original https://aws.amazon.com/blogs/messaging-and-targeting/10dlc-registration-best-practices-to-send-sms-with-amazon-pinpoint/

What is 10DLC?

Ten-Digit Long Code, more commonly shortened to 10DLC, is intended specifically for sending Application-to-Person (A2P) SMS in the United States only. If you don’t send text messages to recipients in the US, then 10DLC doesn’t apply to you. 10DLC was designed to cover the volume and throughput middle ground between toll-free numbers on the low end and short codes on the high end. All senders using 10DLC are required to register both their company and their campaign(s), which is managed by a third-party company called The Campaign Registry (TCR). TCR maintains an industry-wide database of companies and use cases that are authorized to send messages, to US registered handsets, using 10DLC phone numbers.

How to Register for 10DLC

Registration must be done within the AWS console as there is currently no programmatic registration possible.

  1. Navigate to Amazon Pinpoint
  2. Select “Phone Numbers” from the left hand rail underneath “SMS and Voice”
  3. Select the “10DLC Campaigns” tab
    1. If you have not already registered a company, click on “Register Company” and proceed with the best practices below.
    2. If you have already successfully registered a company and require additional vetting, proceed to “Additional Vetting” below.
    3. If you have already successfully registered a company and completed the additional vetting process, proceed to “Campaign Registration” below.

To help ensure your registration is approved during this vetting process, follow these best practices when registering.

Who Should Register for a 10DLC?

The information provided during registration should be for the company from which the SMS messages will be sent.

  • Examples:
    • Example 1: Company X wants to send their customers alerts via SMS should their account be compromised and there is a need to reset passwords.
      • In this example the company being registered is Company X.
    • Example 2: Company Y is an Independent Software Vendor (ISV) with hundreds of customers using its software platform. Company Y wants to give its customers the ability to send SMS from within the platform.
      • In this example each of Company Y’s customers who want to send SMS will need to provide their information. Each of these customers will need their own separate 10DLC for each use case that Company Y wants to enable for their customers.
      • Company Y should define very clearly for their customers the types of messages that can be sent as each of their customers will be expected to send only messages that align with the Campaign(Use-Case) that they register for.
    • Example 3: Company Z is an Independent Software Vendor (ISV) with hundreds of customers using its software platform. Company Z wants to provide One-Time Password (OTP) codes via SMS.
      • In this example the company being registered will be Company Z.

10DLC Registration Best Practices

As you progress through the steps of 10DLC registration, follow these best practices to ensure a smooth process. Begin here if you have not registered your company(ies) yet.

Company Registration Info and Additional Company and Contact Info

Best practices for Company Registration and Additional Company and Contact Info

  • Make sure to enter all information correctly.
  • Depending on the country in which you have a Tax ID, enter one of the following into the Tax ID field:
    • US: EIN
    • CA: BN
    • Other: VAT
  • Select the vertical that most closely aligns with your business
  • Make sure that your website is publicly accessible. Your registration will be denied if the reviewer cannot access the site.
  • It is a hard requirement to have both a support email and phone number
    • Make sure your support email and support phone number are both active
  • Make sure that your company name and email/website domains align
    • If you register the company Amazon Inc. but then list a support email of [email protected], your registration will likely be rejected if you are considered a large enough brand that should have a dedicated email domain.

Additional Company Vetting for Potential Increased Quotas

Once you have completed the initial company registration, you have the following quotas assigned to your business:

  • AT&T: 1.25 Messages Per Second (MPS), or 75 Transactions Per Minute (TPM)
  • T-Mobile: 2,000 messages per day

The quotas above do not mean that you cannot message recipients who use other carriers; these are just limits that these carriers have published. If the throughput above isn’t enough for your business’s needs, you can apply for external vetting, for a $40 fee, by clicking the “Apply for additional vetting” button. The Campaign Registry, a third-party provider, will then do a deeper vetting of the information you have already provided and will give your company a score that will determine the throughput and volume apportioned to you. Read here for a detailed breakdown of the possible scores and the quotas that are attached to them.
Note: Vetting doesn’t guarantee that your carrier throughput or daily volume will increase. It is possible for the vetting results to decrease carrier throughput and daily volume.

10DLC Campaign Registration

Once you have completed the registration process and the optional additional vetting, you will need to register your Campaigns, which should align with your use case(s). If you would like more detail on each of the 10DLC Campaign types that Amazon Pinpoint supports, you can read more here.

Best Practices for Campaign Info

  • Company Name
    • Make sure to select from the drop-down the company name for which you want to register your campaign. Any companies that have not completed registration or have not been approved will be greyed out in the drop-down.
  • 10DLC Campaign Name
    • Think of this as a “Campaign Description”
      • Provide a clear and comprehensive overview of the campaign’s objectives and interactions the end-user would experience after opting in. Make sure to identify who the sender is, who the recipient is, and why messages are being sent to the intended recipient.
        • Example: One-Time Password messages are sent by Company X to its customers for purposes of authentication to log into our application.
  • Opt-In Workflow
    • The primary purpose of the opt-in workflow is to demonstrate that the end user explicitly consents to receive text messages and understands the nature of the program. Your application is being reviewed by a 3rd-party reviewer, so make sure to provide clear and thorough information about how your end-users opt in to your SMS service and any associated fees or charges. If the reviewer cannot determine how your opt-in process works, then your application will be denied and returned.
    • Ideally, the opt-in workflow is accessible by a 3rd-party reviewer. If your opt-in process requires a log-in, is not yet published publicly, is a verbal opt-in, or occurs on printed sources such as fliers and paper forms, then make sure to thoroughly document how this process is completed by the end-user receiving messages. Provide a screenshot of the call to action in such cases. Host the screenshot on a publicly accessible website (like OneDrive or Google Drive) and provide the URL.
    • The description has to be a minimum of 40 characters
    • The Opt-in location must include the following:
      • Program (brand) name
      • Link to a publicly accessible Terms & Conditions page
      • Link to a publicly accessible Privacy Policy page
      • Message frequency disclosure.
      • Customer care contact information
      • Opt-out information
      • “Message and data rates may apply” disclosure.
  • Help Message
    • The “Help message” is the response that is required to be sent to end-users when they text the keyword “HELP” (or similar keywords). The purpose is to provide information to the end-user related to how they can get support or opt-out of the messaging program.
    • The message has to be a minimum of 20 characters and a maximum of 160 characters
    • The message must include:
      • Program (brand) name OR product description.
      • Additional customer care contact information.
        • It is mandatory to include a phone number and/or email for end-user support
    • The following is an example of a HELP response that complies with the requirements of the US mobile carriers:
      • ExampleCorp Account Alerts: For help call 1-888-555-0142 or go to example.com. Msg&data rates may apply. Text STOP to cancel.
  • Stop Message
    • The “Stop message” is the response that is required to be sent to end-users when they text the keyword “STOP” (or similar keywords). End-users must be opted out of further messages when they text STOP (or an equivalent keyword) to your number, and the response confirms to them that they will no longer receive messages for the program.
    • The message has to be a minimum of 20 characters and a maximum of 160 characters
    • The message must include:
      • Program (brand) name OR product description
      • Confirmation that no further messages will be delivered
    • The following is an example of a compliant STOP response:
      • You are unsubscribed from ExampleCorp Account Alerts. No more messages will be sent. Reply HELP for help or call 1-888-555-0142.

Number Capabilities

The number capability that you choose for your 10DLC campaign will determine whether or not the numbers you associate to an approved campaign can support voice outbound calling in addition to SMS. If you only require SMS you can leave the default selection of SMS-only. If you require voice calling, you should select voice as well. Selecting voice will increase the registration processing time.

Message Type

The content of your messages needs to align with the Campaign Type and Message Type that you select here; if it’s misaligned, your registration will be denied. You can’t change the message type on a campaign after it’s in an approved state.

Campaign Use Case

Pinpoint supports all of the standard use cases available to be sent via 10DLC and a single Special use case for communications from a non-religious registered 501(c)(3) charity aimed at providing help and raising money for those in need. For a more detailed listing of the campaign use cases supported visit this page.

Best Practices for Campaign Use Case

  • Select the Campaign that most closely aligns to your use case.
    • All of the information that you provide during this process needs to align with this selection or your registration will be rejected
  • Note: The “Low Volume” and “Mixed” campaigns have lower quotas, the same as those for a company that does not opt for the increased vetting detailed above:
    • AT&T: 1.25 Messages Per Second (MPS), or 75 Transactions Per Minute (TPM)
    • T-Mobile: 2,000 messages per day

Sample Messages

Sample messages should reflect actual messages to be sent under the campaign you are registering for. It is critical to ensure that there is consistency between the use case, your campaign description, and the content of the messages.

Best Practices for Sample Messages

  • Sample messages should reflect actual messages to be sent under campaign
  • Indicate any templated fields that are variable with brackets and make sure to be clear with what information may be replaced
    • Example: Hi [FirstName], this is Amazon Inc. letting you know that your delivery is ready
  • Each sample message has to be a minimum of 20 characters. If you plan to use multiple message templates for this 10DLC campaign, include them as well
  • Sample messages should identify who is sending the message (brand name)
    • Ensure that at least one sample message includes your business name
  • Include opt-out language in at least 1 sample message
    • Example: You are unsubscribed from ExampleCorp Account Alerts. No more messages will be sent. Reply HELP for help or call 1-888-555-0142.
  • Make sure your messaging does not involve prohibited content such as cannabis, hate speech, etc. and that your use case is compliant with AWS Messaging Policy

Campaign and Content Characteristics

Make sure that your selections here are consistent with the information you have provided so far in this process. If any selections do not align with prior information, your registration will be rejected.

Best Practices for 10DLC Campaign and Content Characteristics

  • Subscriber opt-in
    • Subscriber opt-in is automatically set to “Yes” on your behalf. Explicit opt-in is required of all end-users regardless of your use case.
  • Subscriber opt-out
  • Subscriber Help
    • Carriers require that your SMS numbers reply to the ‘HELP’ keyword or similar at all times, regardless of the number’s opt-in status. More information related to HELP auto-response requirements can be found in Pinpoint best practices documentation here
  • Number Pooling
    • Respond with “no” here; AWS does not currently support number pooling on 10DLC campaigns
  • Direct Lending or Loan Arrangement
    • If you are a first-party lender, you can get approval for transactional use cases (loan transaction receipts, OTPs, etc.). If your company is related to the lending business, then you must mark this as “yes”
  • Embedded Link
    • If you have supplied messaging examples with an embedded link you must mark this as a “yes.” If this is misaligned with your content then your registration will be rejected
      • Note: Generic link shorteners such as Bitly or TinyURL should not be used and may cause your registration to be rejected. Make sure that any links in your sample messages are branded and consistent with your domain
  • Embedded Phone Number
    • If you have supplied messaging examples with an embedded phone number you must mark this as a “yes.” If this is misaligned with your content then your registration will be rejected
  • Age-Gated Content
    • Your registration may be rejected, or the campaign suspended later, if your content includes age-gated material and you do not mark “yes” here

What to do if your 10DLC campaigns are rejected

If your company registration or campaign registration is rejected, please follow the steps here to create a case, and the AWS Support team will provide information in your AWS Support case about the reasons your 10DLC campaign registration was rejected.

Amazon Pinpoint 10DLC Campaign Types and Quotas for SMS

Post Syndicated from Tyler Holmes original https://aws.amazon.com/blogs/messaging-and-targeting/amazon-pinpoint-10dlc-campaign-types-and-quotas-for-sms/

The 10DLC Campaigns, or Use Cases, outlined in Table 2 are currently supported by Amazon Pinpoint. As part of the process to register to send SMS to US-based phone numbers, you must select at least one Campaign to associate with the 10DLC number you procure. If you require more than one Use Case, then you will need more than one 10DLC, or you can select the standard Mixed use case, which supports lower message volumes. Throughput is determined by the company vetting score of the registered sender and what is being sent, not by the number of phone numbers associated with the Campaign. For a breakdown of vetting scores and quotas, see below:

Throughput and Volume Quotas for 10DLC Vetted Companies

*Note that by default, each number associated with a 10DLC campaign supports 1 MPS. To increase your numbers to match what your campaign qualifies for with carriers, you will need to submit an MPS increase request.

Table 1

Vetting Score | Message Parts per Second (MPS) (AT&T limit) | Maximum daily messages (T-Mobile & Sprint)
High (75-100) | 75 | 200,000
Medium-High (50-74) | 40 | 40,000
Medium-Low (1-49) | 4 | 10,000
Basic (0, skipped vetting) | 0.2 | 2,000

Standard 10DLC Campaign Use Cases

Select the campaign that most closely aligns with your use case(s).

Table 2

Campaign/Use-Case Name | Intended Use Cases
Account Notifications | Status notifications about an account that the recipient is a part of or owns
Customer Care | Communications related to support or account management
Delivery Notifications | Notifications about the status of a delivery of a product or service
Fraud Alert Messaging | Notifications related to potential fraudulent activity
Higher Education Messaging | Messaging originating from colleges, universities, or other post-secondary education institutions
Low Volume | Small throughput; any combination of use cases. Examples include test and demo accounts
Marketing Messaging | Promotional content related to sales or other offers
Mixed Use Cases | Covers multiple use cases, such as Account Notifications and Delivery Notifications. Mixed campaigns have lower throughput than dedicated ones
Polling and Voting – Not for Political Use | Delivering messages containing customer surveys or other voting-related actions. Not for political use
Public Service Announcements (PSA) | Messages intended to raise awareness of a particular topic
Security Alerts | Notifications related to a compromised software or hardware system that requires recipients to take an action
Two-Factor Authentication (2FA) or One-Time Password (OTP) | Authentication, account verification, or one-time passcodes
Special Use Cases | Currently Pinpoint supports only the following special use cases. These may require different registration processes and/or fees than the Standard Use Cases above
Charity / 501(c)(3) Nonprofit | Communications from a registered company classified as a 501(c)(3). Does not include religious organizations

How to use AWS Verified Access logs to write and troubleshoot access policies

Post Syndicated from Ankush Goyal original https://aws.amazon.com/blogs/security/how-to-use-aws-verified-access-logs-to-write-and-troubleshoot-access-policies/

On June 19, 2023, AWS Verified Access introduced improved logging functionality; Verified Access now logs more extensive user context information received from the trust providers. This improved logging feature simplifies administration and troubleshooting of application access policies while adhering to zero-trust principles.

In this blog post, we will show you how to manage the Verified Access logging configuration and how to use Verified Access logs to write and troubleshoot access policies faster. We provide an example showing the user context information that was logged before and after the improved logging functionality and how you can use that information to transform a high-level policy into a fine-grained policy.

Overview of AWS Verified Access

AWS Verified Access helps enterprises to provide secure access to their corporate applications without using a virtual private network (VPN). Using Verified Access, you can configure fine-grained access policies to help limit application access only to users who meet the specified security requirements (for example, user identity and device security status). These policies are written in Cedar, a new policy language developed and open-sourced by AWS.

Verified Access validates each request based on access policies that you set. You can use user context—such as user, group, and device risk score—from your existing third-party identity and device security services to define access policies. In addition, Verified Access provides you an option to log every access attempt to help you respond quickly to security incidents and audit requests. These logs also contain user context sent from your identity and device security services and can help you to match the expected outcomes with the actual outcomes of your policies. To capture these logs, you need to enable logging from the Verified Access console.

Figure 1: Overview of AWS Verified Access architecture showing Verified Access connected to an application

After a Verified Access administrator attaches a trust provider to a Verified Access instance, they can write policies using the user context information from the trust provider. This user context information is custom to an organization, and you need to gather it from different sources when writing or troubleshooting policies that require more extensive user context.

Now, with the improved logging functionality, the Verified Access logs record more extensive user context information from the trust providers. This eliminates the need to gather information from different sources. With the detailed context available in the logs, you have more information to help validate and troubleshoot your policies.

Let’s walk through an example of how this detailed context can help you improve your Verified Access policies. For this example, we set up a Verified Access instance using AWS IAM Identity Center (successor to AWS Single Sign-On) and CrowdStrike as trust providers. To learn more about how to set up a Verified Access instance, see Getting started with Verified Access. To learn how to integrate Verified Access with CrowdStrike, see Integrating AWS Verified Access with device trust providers.

Then we wrote the following simple policy, where users are allowed only if their email matches the corporate domain.

permit(principal,action,resource)
when {
    context.sso.user.email.address like "*@example.com"
};

Before improved logging, Verified Access logged basic information only, as shown in the following example log.

    "identity": {
        "authorizations": [
            {
                "decision": "Allow",
                "policy": {
                    "name": "inline"
                }
            }
        ],
        "idp": {
            "name": "user",
            "uid": "vatp-09bc4cbce2EXAMPLE"
        },
        "user": {
            "email_addr": "[email protected]",
            "name": "Test User Display",
            "uid": "[email protected]",
            "uuid": "00u6wj48lbxTAEXAMPLE"
        }
    }

Modify an existing Verified Access instance

To improve the preceding policy and make it more granular, you can include checks for various user and device details. For example, you can check whether the user belongs to a particular group, has a verified email, is logging in from a device with an OS assessment score greater than 50, and has an overall device score greater than 15.

Modify the Verified Access instance logging configuration

You can modify the instance logging configuration of an existing Verified Access instance by using either the AWS Management Console or AWS Command Line Interface (AWS CLI).

  1. Open the Verified Access console and select Verified Access instances.
  2. Select the instance that you want to modify, and then, on the Verified Access instance logging configuration tab, select Modify Verified Access instance logging configuration.
    Figure 2: Modify Verified Access logging configuration

  3. Under Update log version, select ocsf-1.0.0-rc.2, turn on Include trust context, and select where the logs should be delivered.
    Figure 3: Verified Access log version and trust context

After you’ve completed the preceding steps, Verified Access will start logging more extensive user context information from the trust providers for every request that Verified Access receives. This context information can contain sensitive data. To learn more about how to protect this sensitive information, see Protect Sensitive Data with Amazon CloudWatch Logs.
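The same change can also be made programmatically. Here is a sketch using boto3 (Verified Access is managed through the EC2 API); the instance ID and log group name are placeholders.

import boto3

ec2 = boto3.client("ec2")  # Verified Access is part of the EC2 API surface

# Switch the instance to the OCSF log version and include the full
# trust context in every logged request.
ec2.modify_verified_access_instance_logging_configuration(
    VerifiedAccessInstanceId="vai-0123456789abcdef0",  # placeholder ID
    AccessLogs={
        "CloudWatchLogs": {
            "Enabled": True,
            "LogGroup": "verified-access-logs",  # placeholder log group
        },
        "LogVersion": "ocsf-1.0.0-rc.2",
        "IncludeTrustContext": True,
    },
)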

The following example log shows information received from the IAM Identity Center identity provider (IdP) and the device provider CrowdStrike.

"data": {
    "context": {
        "crowdstrike": {
            "assessment": {
                "overall": 21,
                "os": 53,
                "sensor_config": 4,
                "version": "3.6.1"
            },
            "cid": "7545bXXXXXXXXXXXXXXX93cf01a19b",
            "exp": 1692046783,
            "iat": 1690837183,
            "jwk_url": "https://assets-public.falcon.crowdstrike.com/zta/jwk.json",
            "platform": "Windows 11",
            "serial_number": "ec2dXXXXb-XXXX-XXXX-XXXX-XXXXXX059f05",
            "sub": "99c185e69XXXXXXXXXX4c34XXXXXX65a",
            "typ": "crowdstrike-zta+jwt"
        },
        "sso": {
            "user": {
                "user_id": "24a80468-XXXX-XXXX-XXXX-6db32c9f68fc",
                "user_name": "XXXX",
                "email": {
                    "address": "[email protected]",
                    "verified": false
                }
            },
            "groups": {
                "04c8d4d8-e0a1-XXXX-383543e07f11": {
                    "group_name": "XXXX"
                }
            }
        },
        "http_request": {
            "hostname": "sales.example.com",
            "http_method": "GET",
            "x_forwarded_for": "52.XX.XX.XXXX",
            "port": 80,
            "user_agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:109.0) Gecko/20100101 Firefox/115.0",
            "client_ip": "52.XX.XX.XXXX"
        }
    }
}

The following example log shows the user context information received from the OpenID Connect (OIDC) trust provider Okta. You can see the difference in the information provided by the two different trust providers: IAM Identity Center and Okta.

"data": {
    "context": {
        "http_request": {
            "hostname": "sales.example.com",
            "http_method": "GET",
            "x_forwarded_for": "99.X.XX.XXX",
            "port": 80,
            "user_agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/16.5 Safari/605.1.15",
            "client_ip": "99.X.XX.XXX"
        },
        "okta": {
            "sub": "00uXXXXXXXJNbWyRI5d7",
            "name": "XXXXXX",
            "locale": "en_US",
            "preferred_username": "[email protected]",
            "given_name": "XXXX",
            "family_name": "XXXX",
            "zoneinfo": "America/Los_Angeles",
            "groups": [
                "Everyone",
                "Sales",
                "Finance",
                "HR"
            ],
            "exp": 1690835175,
            "iss": "https://example.okta.com"
        }
    }
}

The following is a sample policy written using the information received from the trust providers.

permit(principal,action,resource)
when {
  context.idcpolicy.groups has "<hr-group-id>" &&
  context.idcpolicy.user.email.address like "*@example.com" &&
  context.idcpolicy.user.email.verified == true &&
  context has "crdstrikepolicy" &&
  context.crdstrikepolicy.assessment.os > 50 &&
  context.crdstrikepolicy.assessment.overall > 15
};

This policy only grants access to users who belong to a particular group, have a verified email address, and have a corporate email domain. Also, users can only access the application from a device with an OS that has an assessment score greater than 50, and has an overall device score greater than 15.

Conclusion

In this post, you learned how to manage Verified Access logging configuration from the Verified Access console and how to use improved logging information to write AWS Verified Access policies. To get started with Verified Access, see the Amazon VPC console.

 

Ankush Goyal

Ankush is an Enterprise Support Lead in AWS Enterprise Support who helps Enterprise Support customers streamline their cloud operations on AWS. He enjoys working with customers to help them design, implement, and support cloud infrastructure. He is a results-driven IT professional with over 18 years of experience.

Anbu Kumar Krishnamurthy

Anbu is a Technical Account Manager who specializes in helping clients integrate their business processes with the AWS Cloud to achieve operational excellence and efficient resource utilization. Anbu helps customers design and implement solutions, troubleshoot issues, and optimize their AWS environments. He works with customers to architect solutions aimed at achieving their desired business outcomes.

How to Send SMS Using Configuration Sets with Amazon Pinpoint

Post Syndicated from Tyler Holmes original https://aws.amazon.com/blogs/messaging-and-targeting/how-to-send-sms-using-configurations-sets-with-amazon-pinpoint/

In a previous blog post we walked through how to manage opt-outs for SMS in Amazon Pinpoint using the V2 SMS and Voice API. The post detailed a scenario where a user needed to manage multiple use-cases such as marketing and One-Time Password (OTP) or Multi-Factor Authentication (MFA). This works great if all of your data can be streamed to a single location, but what if you have multiple business units or you are an Independent Software Vendor (ISV) and need to manage SMS sending for multiple customers? You need a way to not only manage multiple use-cases and opt-out lists, but also sender details and separate event destinations for metrics. Read on to learn how to combine SMS Opt-Out Lists with Configuration Sets to simplify your sending and solve your multi-tenant challenge.

Prerequisites

  • In order to manage Configuration Sets you need to use the V2 API for SMS and voice
  • You must Purchase/Register an Origination Identity (OID) for each use-case in each country you plan to support.
    • Example: If you are sending marketing materials and OTP messages in the US and are using a short code then you will need to purchase at least two short codes (one for each use-case) and register each use-case.
    • If you need help determining what OID you need, use this guide.

The Scenario:
For the sake of simplicity, our scenario uses two different senders that need to manage two distinct use cases, but these steps can scale as you need.

SMS Sender 1 Details:

  • Sending only in the US
  • Sending OTP via a US Short Code
  • Sending Marketing messages via a 10DLC
    • Send text events to an Amazon Kinesis Data Firehose destination
    • Send text events to an Amazon CloudWatch destination

SMS Sender 2 Details:

  • Sending SMS Globally
  • Sending OTP via multiple country specific originators
    • Send events to an Amazon Kinesis Data Firehose destination
    • Send all events to an Amazon CloudWatch destination

The V2 SMS and Voice API has several helpful actions to configure the scenario above, some of which expand upon our previous blog post that covered managing SMS opt-outs, so make sure to read that one first and have it handy as you review.

What is a Configuration Set for SMS?

A Configuration Set is a container that is used to hold information about Event Destinations as well as rules that you apply to the SMS messages that you send. Configuration Sets are used when sending messages with the SendTextMessage Action in the V2 API for SMS and voice. When you use SendTextMessage you can specify a Configuration Set that determines how the messages are treated and where the events from that particular send are streamed. The image below explains the concepts we will walk through in this post.


How to Create Configuration Sets and Send SMS
Below we will walk through the steps needed to configure each of the above scenarios. Note that the default quota for Configuration Sets is 25 per account, but this can be increased if needed. (A condensed code sketch of the Scenario 1 setup follows the steps below.)

  • Scenario 1 –
    • Short Code Configuration
      • Create a Pool for the US Short Code delivering OTP messages
        • Associate the short code to that Pool by setting “OriginationIdentity” using the PhoneNumberArn of your US Short Code
          • You can use DescribePhoneNumbers to find the values for PhoneNumberArn
          • Note: You can have multiple OIDs per Pool if necessary
          • Note: Opt-Out Lists of OIDs and Pools must match. If you previously associated an Opt-Out List to any OIDs you may need to update those OIDs to match that of the Pool prior to associating it with the Pool
        • Set the IsoCountryCode to “US”
      • Use the “UpdatePool” action to ensure we only send to US phone numbers as well as to create an Opt-Out List specifically for the OTP use-case
        • Set “SharedRoutesEnabled” to False. This will ensure that only the OIDs in this pool will be used to send messages.
          • Since we will only have a US Shortcode in this pool then only US based phone numbers will be sent messages, other destination phone numbers will generate a ConflictException error
            • An error occurred (ConflictException) when calling the SendTextMessage operation: Conflict Occurred – Reason=”NO_ORIGINATION_IDENTITIES_FOUND”
        • Set an Opt-Out List for the Pool by specifying the “OptOutListName”
      • Use the “PutKeyword” action to create at least one Opt-In Keyword
        • This will allow destination numbers to opt back into your use-case
      • Create a Configuration Set
        • This is a container for your Event Destinations which you will set up next. Each configuration set can contain between 0 and 5 event destinations. Each event destination can contain a reference to a single destination, such as a CloudWatch or Kinesis Data Firehose destination
        • Give your Configuration a descriptive name by setting “ConfigurationSetName”
      • Create an SNS Topic that will receive all of the events. Depending on your needs, you can decide where you want to publish these events.
      • Create a CloudWatch Log Group that will receive all of the events you would like to log
      • Create Event Destinations – Each event destination can contain a reference to a single destination; since we are adding two destinations (SNS and CloudWatch), we need to make this call twice, once for each destination
        • Create the SNS destination.
          • Set the “ConfigurationSetName” to the Configuration Set you just created
          • Set “MatchingEventTypes” to the event types you are wanting to log
          • Set the SNS Event Destination
            • Set the “TopicArn”
        • Create the CloudWatch Destination
          • Set the “ConfigurationSetName” to the Configuration Set you just created
          • Set “MatchingEventTypes” to the event types you are wanting to log
          • Set “IamRoleArn” to the ARN of an Amazon Identity and Access Management (IAM) role that is able to write event data to an Amazon CloudWatch destination
          • Set the “LogGroupArn” to the Log Group in CloudWatch you want the events to stream to
    • 10DLC Configuration
        • Create a Pool for the 10DLC delivering Marketing messages
          • Associate the 10DLC to that Pool by setting “OriginationIdentity” using the PhoneNumberArn of your 10DLC
            • You can use DescribePhoneNumbers to find the values for PhoneNumberArn
            • Note: You can have multiple OIDs per Pool if necessary
            • Note: Opt-Out Lists of OIDs and Pools must match, so if you previously associated an Opt-Out List to any OIDs you may need to update those OIDs to match that of the Pool prior to associating it with the Pool
          • Set the IsoCountryCode to “US”
        • Use the “UpdatePool” action to ensure we only send to US phone numbers as well as to create an Opt-Out List specifically for the Marketing use-case
          • Set “SharedRoutesEnabled” to False. This will ensure that only the OIDs in this pool will be used to send messages.
            • Since we will only have a 10DLC in this pool then only US based phone numbers will be sent messages, other destination phone numbers will generate an error
          • Set an Opt-Out List for the Pool by specifying the “OptOutListName”
        • Use the “PutKeyword” action to create at least one Opt-In Keyword
        • Create a Configuration Set
          • Give your Configuration a descriptive name by setting “ConfigurationSetName”
        • Create a Kinesis Data Firehose Delivery Stream that will receive all of the events.
        • Create a CloudWatch Log Group that will receive all of the events you would like to log
        • Create Event Destinations – Each event destination can contain a reference to a single destination. We are adding two destinations (Kinesis Data Firehose and CloudWatch), so we need to make this call twice, once for each destination
          • Create the Kinesis destination.
            • Set the “ConfigurationSetName” to the Configuration Set you just created
            • Set “MatchingEventTypes” to the event types you are wanting to log
            • Set the Kinesis Event Destination
              • Set the “DeliveryStreamArn” to the Stream you created earlier
              • Set the “IamRoleArn” to the ARN of an IAM role that is able to write event data to an Amazon Firehose destination
          • Create the CloudWatch Destination
            • Set the “ConfigurationSetName” to the Configuration Set you just created
            • Set “MatchingEventTypes” to the event types you are wanting to log
            • Set “IamRoleArn” to an IAM role that is able to write event data to an Amazon CloudWatch destination
            • Set the “LogGroupArn” to the Log Group in CloudWatch you want the events to stream to
  • Scenario 2
    • Global OTP Configuration
      • Create a Pool for delivering the OTP messages
        • Associate all of your OIDs being used to that Pool
          • You can use DescribePhoneNumbers to find the values for PhoneNumberArn
          • Note: Opt-Out Lists of OIDs and Pools must match, so if you previously associated an Opt-Out List to any OIDs you may need to update those OIDs to match that of the Pool prior to associating it with the Pool
      • Use the “UpdatePool” action to create an Opt-Out List specifically for the OTP use-case.
        • Set an Opt-Out List for the Pool by specifying the “OptOutListName”
      • Use the “PutKeyword” action to create at least one Opt-In Keyword
        • This will allow destination numbers to opt back into your use-case, in this case OTP
      • Create a Configuration Set
        • Give your Configuration a descriptive name by setting “ConfigurationSetName”
      • Create a Kinesis Data Firehose Delivery Stream that will receive all of the events
      • Create a CloudWatch Log Group that will receive all of the events you would like to log
      • Create Event Destinations – Each event destination can contain a reference to a single destination. Since we are adding two destinations (Kinesis Data Firehose and CloudWatch), we will need to make this call twice, once for each destination; a CLI sketch of these calls follows this list
        • Create the Kinesis destination.
          • Set the “ConfigurationSetName” to the Configuration Set you just created
          • Set “MatchingEventTypes” to the event types you want to log
          • Set the Kinesis Event Destination
            • Set the “DeliveryStreamArn” to the Stream you created earlier
            • Set the “IamRoleArn” to the ARN of an IAM role that is able to write event data to an Amazon Firehose destination
        • Create the CloudWatch Destination
          • Set the “ConfigurationSetName” to the Configuration Set you just created
          • Set “MatchingEventTypes” to the event types you want to log
          • Set “IamRoleArn” to an IAM role that is able to write event data to an Amazon CloudWatch destination
          • Set the “LogGroupArn” to the Log Group in CloudWatch you want the events to stream to
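
For reference, the steps above map onto the pinpoint-sms-voice-v2 commands in the AWS CLI. Below is a minimal sketch of the Marketing configuration; all pool IDs, names, and ARNs are illustrative placeholders, so adjust them to your account before running anything.

# Restrict routing to the OIDs in the pool and attach the Opt-Out List (illustrative values)
aws pinpoint-sms-voice-v2 update-pool \
    --pool-id pool-example123 \
    --no-shared-routes-enabled \
    --opt-out-list-name MarketingOptOutList

# Create an Opt-In keyword so recipients can opt back in to the use-case
aws pinpoint-sms-voice-v2 put-keyword \
    --origination-identity pool-example123 \
    --keyword JOIN \
    --keyword-message "You have been opted back in to marketing messages." \
    --keyword-action OPT_IN

# Create the Configuration Set with a descriptive name
aws pinpoint-sms-voice-v2 create-configuration-set \
    --configuration-set-name MarketingConfigSet

# Add the Kinesis Data Firehose event destination
aws pinpoint-sms-voice-v2 create-event-destination \
    --configuration-set-name MarketingConfigSet \
    --event-destination-name MarketingFirehose \
    --matching-event-types ALL \
    --kinesis-firehose-destination IamRoleArn=arn:aws:iam::111122223333:role/FirehoseWriteRole,DeliveryStreamArn=arn:aws:firehose:us-east-1:111122223333:deliverystream/MarketingSmsEvents

# Add the CloudWatch Logs event destination
aws pinpoint-sms-voice-v2 create-event-destination \
    --configuration-set-name MarketingConfigSet \
    --event-destination-name MarketingLogs \
    --matching-event-types ALL \
    --cloud-watch-logs-destination IamRoleArn=arn:aws:iam::111122223333:role/CloudWatchWriteRole,LogGroupArn=arn:aws:logs:us-east-1:111122223333:log-group:MarketingSmsEvents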

Your configuration should look like this once you have completed the above steps:

How to Send Your Messages

  • Send your SMS with the “SendTextMessage” action
    • Set the “ConfigurationSetName” using either the ConfigurationSetName or ConfigurationSetArn
      • You can find these using the “DescribeConfigurationSets” action
    • If required, set “DestinationCountryParameters” – this field is used for any country-specific registration requirements. Currently, this setting is only used when you send messages to recipients in India using a sender ID.
    • Use either PoolId or PoolArn for “OriginationIdentity”; an example call is sketched below
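
As an illustration, a single send through the Marketing pool configured above might look like this (the phone number, pool ID, and set name are placeholders):

# Look up existing Configuration Set names and ARNs
aws pinpoint-sms-voice-v2 describe-configuration-sets

# Send an SMS through the pool, logging events via the Configuration Set
aws pinpoint-sms-voice-v2 send-text-message \
    --destination-phone-number "+12065550100" \
    --origination-identity pool-example123 \
    --configuration-set-name MarketingConfigSet \
    --message-body "Hello from the marketing team!"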

Conclusion

In this post you have learned how to create Configuration Sets that give you more control over how you send SMS. Using Configuration Sets allows you to simplify your sending while maintaining multiple sending configurations and event destinations. The V2 API for SMS and Voice includes many useful actions that are not available in the V1 API, so we encourage you to explore how it can further help you simplify and automate your applications.

Review the documentation for the V2 SMS and Voice API here
Confirm the origination IDs you will need here
Check out the support tiers comparison here

Resources
https://docs.aws.amazon.com/pinpoint/latest/apireference_smsvoicev2/Welcome.html
https://docs.aws.amazon.com/pinpoint/latest/userguide/channels-sms-originating-identities-choosing.html
https://docs.aws.amazon.com/pinpoint/latest/userguide/channels-sms-limitations-opt-out.html
https://docs.aws.amazon.com/pinpoint/latest/developerguide/sms-voice-v2-pools.html
https://docs.aws.amazon.com/pinpoint/latest/developerguide/sms-voice-v2-configuration-sets.html
https://docs.aws.amazon.com/pinpoint/latest/developerguide/sms-voice-v2-keywords.html

Amazon Simple Email Service adds email delivery features to revised free tier

Post Syndicated from sakoppes original https://aws.amazon.com/blogs/messaging-and-targeting/amazon-simple-email-service-adds-email-delivery-analysis-features-to-revised-free-tier/

On August 1st, 2023, Amazon Simple Email Service (SES) will launch a revised, more flexible free tier that allows AWS customers to try more SES features without commitment or cost. SES customers will be able to send or receive up to 3,000 messages each month for a year after they begin using SES, free of charge[1]. Customers can now try advanced SES capabilities, like deliverability analytics and optimization through Virtual Deliverability Manager (VDM), in the free tier. With access to these new features, customers can use the free tier to build full proof-of-concept workloads to experiment with SES’ powerful tools.

How did the SES free tier work previously?

Previously, the SES free tier only covered outbound messages sent from AWS compute services such as EC2 instances. Customers using other types of computing services for sending outbound messages had no SES free tier available. Customers could also receive up to 1,000 inbound email messages free each month. Customers evaluating SES had to pay to explore more advanced features like Virtual Deliverability Manager, a suite of tools customers use to improve delivery rates for outbound emails. This made it difficult to avoid charges when exploring advanced SES use cases, such as when building prototype email sending workloads to explore ways to monitor and optimize email delivery success and engagement rates.

New email deliverability features in the SES free tier

The revised SES free tier offers a more flexible model, introducing a shared limit that applies to pay-as-you-go message charges, including inbound email messages, outbound email messages sent from any source, and email charges for Virtual Deliverability Manager. This model makes it easier to choose the right combination of features to fit your use cases when exploring SES features end-to-end without commitment. The revised free tier includes up to 3,000 messages each month for 12 months after you start using SES, shared across the features included in the revised SES free tier (note that Virtual Deliverability Manager counts separately from outbound messages). Note that the 3,000-message free tier is applied first to the more expensive charges (e.g. outbound messages) when multiple products are in use (inbound, outbound, Virtual Deliverability Manager). For instance, if you send 2,000 outbound messages and receive 2,000 inbound messages in a month, the free tier covers all 2,000 outbound messages and 1,000 of the inbound messages; the remaining 1,000 inbound messages are billed at standard rates. Here are some further examples to illustrate the revised free tier (all numbers are messages per month):

A few examples of how the Simple Email Service (SES) revised free tier is applied.

What can you do with the revised free tier?

The revised SES free tier makes it easier to build proof-of-concept workflows to demonstrate SES’ advanced deliverability optimization capabilities without commitment. For example, you could set up a pilot workload to show how SES can help you interpret the results of A/B testing using configuration sets. Imagine creating a few versions of a marketing email, then sending each version to a sample set of recipients to test response rates. You could track each version of the email separately in Virtual Deliverability Manager using configuration sets (essentially treating each configuration set as a campaign), then use VDM to analyze the differences in deliverability metrics for each campaign. You can look at the bounce, open, and click rates of each campaign to determine which version performed best before sending to all your target customers. This helps you see what SES can do before deciding whether you want to build production workloads on SES.

What’s next?

The revised SES free tier will be active on August 1st, 2023 for all SES customers; no action is required. Customers who are using SES today will benefit from the revised free tier for one year (until August 2024). Customers who start using SES after August 1st, 2023 will benefit from the revised free tier for one year from the month they start using SES. The revised free tier replaces the current free tier, and we are not able to offer an opportunity to continue using the current free tier. To start using the SES free tier, just create and verify an email address to send outbound email messages, and/or set up a receipt rule for receiving inbound email messages. To see advanced analytics with deliverability recommendations and traffic shaping through Virtual Deliverability Manager, just click on “Virtual Deliverability Manager” in the SES console navigation and follow the steps to enable it.

Get started with SES free tier at https://aws.amazon.com/ses/.

[1] Data transfer charges for emails sent and attachment charges still apply.

How quirion created nested email templates using Amazon Simple Email Service (SES)

Post Syndicated from Dominik Richter original https://aws.amazon.com/blogs/messaging-and-targeting/how-quirion-created-nested-email-templates-using-amazon-simple-email-service-ses/

This is part two of the two-part guest series on extending Simple Email Service with advanced functionality. Find part one here.

quirion, founded in 2013, is an award-winning German robo-advisor with more than 1 billion euros under management. At quirion, we send out five thousand emails a day to more than 60,000 customers.

Managing many email templates can be challenging

We chose Amazon Simple Email Service (SES) because it is an easy-to-use and cost-effective email platform. In particular, we benefit from email templates in SES, which ensure a consistent look and feel of our communication. These templates come with a styled and personalized HTML email body, perfect for transactional emails. However, managing many email templates can be challenging. Several templates share common elements, such as the company’s logo, name or imprint. Over time, some of these elements may change. If they are not updated across all templates, the result is an inconsistent set of templates. To overcome this problem, we created an application to extend the SES template functionality with an interface for creating and managing nested templates.

This post shows how you can implement this solution using Amazon Simple Storage Service (Amazon S3), Amazon API Gateway, AWS Lambda and Amazon DynamoDB.

Solution: compose email from nested templates using AWS Lambda

The solution we built is fully serverless, which means we do not have to manage the underlying infrastructure. We use AWS Cloud Development Kit (AWS CDK) to deploy the architecture.

The figure below describes the architecture diagram for the proposed solution.

  1. The entry point to the application is an API Gateway that routes requests to a Lambda function. A request consists of an HTML file that represents a part of an email template and metadata that describes the structure of the template.
  2. The Lambda function is the key component of the application. It takes the HTML file and the metadata and stores them in an S3 bucket and a DynamoDB table.
  3. Depending on the metadata, it takes an existing template from storage, inserts the HTML from the request into it, and creates a SES email template; conceptually, this last step is equivalent to the CLI call sketched after this list.
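
The template that the Lambda function ultimately registers in SES is an ordinary SES template. Conceptually, this last step is equivalent to the following CLI sketch; the template name matches the walkthrough below, while the subject and body content are illustrative placeholders.

# Create the composed SES template (the subject and body here are placeholders)
aws ses create-template --template '{
    "TemplateName": "order-confirmation",
    "SubjectPart": "Your order confirmation",
    "HtmlPart": "<html>... m-full.html wrapper with the order-confirmation content inserted ...</html>",
    "TextPart": "Hello {{FIRSTNAME}} {{LASTNAME}}, ..."
}'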

Architecture diagram of the solution: new templates in Amazon SES are created by a Lambda function accessed through API Gateway. The Lambda function reads and writes HTML from S3 and reads and writes metadata from DynamoDB.

The solution is simplified for this blog post and is used to show the possibilities of SES. We will not discuss the code of the Lambda function as there are several ways to implement it depending on your preferred programming language.

Prerequisites

Walkthrough

Step 1: Use the AWS CDK to deploy the application
To download and deploy the application run the following commands:

$ git clone https://github.com/quirionit/aws-ses-examples.git
$ cd aws-ses-examples/projects/go-src
$ go mod tidy
$ cd ../../projects/template-api
$ npm install
$ cdk deploy

Step 2: Create nested email templates

To create a nested email template, complete the following steps:

  1. On the AWS Console, choose the API Gateway.
  2. You should see an API with a name that includes SesTemplateApi.
    Console screenshot displaying the SesTemplateApi
  3. Click on the name and note the Invoke URL from the details page.

    AWS console showing the invoke URL of the API

  4. In your terminal, navigate to aws-ses-examples/projects/template-api/files and run the following command. Note that you must use your gateway’s Invoke URL.
    curl -F [email protected] -F "isWrapper=true" -F "templateName=m-full" -F "child=content" -F "variables=FIRSTNAME" -F "variables=LASTNAME" -F "plain=Hello {{.FIRSTNAME}} {{.LASTNAME}},{{template \"content\" .}}" YOUR INVOKE URL/emails

    The request triggers the Lambda function, which creates a template in DynamoDB and S3. In addition, the Lambda function uses the properties of the request to decide when and how to create a template in SES. With “isWrapper=true” the template is marked as a template that wraps another template and therefore no template is created in SES. “child=content” specifies the entry point for the child template that is used within m-full.html. It also uses FIRSTNAME and LASTNAME as replacement tags for personalization.

  5. In your terminal, run the following command to create a SES email template that uses the template created in step 4 as a wrapper.

Step 3: Analyze the result

  1. On the AWS Console, choose DynamoDB.
  2. From the sidebar, choose Tables.
  3. Select the table with the name that includes SesTemplateTable.
  4. Choose Explore table items. It should now return two new items.
    Screenshot of the DynamoDB console, displaying two items: m-full and order-confirmation.
    The table stores the metadata that describes how to create a SES email template. Creating an email template in SES is initiated when an element’s Child attribute is empty or null. This is the case for the item with the name order-confirmation. It uses the BucketKey attribute to identify the required HTML stored in S3 and the Parent attribute to determine the metadata from the parent template. The Variables attribute describes the placeholders that are used in the template; a rough sketch of both items follows this list.
  5. On the AWS Console, choose S3.
  6. Select the bucket with the name that starts with ses-email-templates.
  7. Select the template/ folder. It should return two objects.
    Screenshot of the S3 console, displaying two items: m-full and order-confirmation.
    The m-full.html file contains the structure and design of an email template and is used with order-confirmation.html, which contains the content.
  8. On the AWS Console, choose the Amazon Simple Email Service.
  9. From the sidebar, choose Email templates. It should return the following template.
    Screenshot of the SES console, displaying the order confirmation template
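
Pieced together from the attributes described in step 4, the two metadata items might look roughly like the sketch below. The attribute names follow the description above; the exact item shapes in the repository may differ.

{ "Name": "m-full", "Child": "content", "BucketKey": "template/m-full.html", "Parent": null, "Variables": ["FIRSTNAME", "LASTNAME"] }

{ "Name": "order-confirmation", "Child": null, "BucketKey": "template/order-confirmation.html", "Parent": "m-full", "Variables": ["FIRSTNAME", "LASTNAME"] }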

Step 4: Send an email with the created template

  1. Open the send-order-confirmation.json file from aws-ses-examples/projects/template-api/files in a text editor.
  2. Set a verified email address as Source and ToAddresses and save the file (a minimal skeleton of the file’s structure is sketched after these steps).
  3. Navigate your terminal to aws-ses-examples/projects/template-api/files and run the following command:
    aws ses send-templated-email --cli-input-json file://send-order-confirmation.json
  4. As a result, you should get an email.
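
For orientation, the file follows the request shape of the SES SendTemplatedEmail API. A minimal skeleton might look like the sketch below; the addresses and template data are illustrative, and the file in the repository may contain additional fields.

{
    "Source": "sender@example.com",
    "Destination": {
        "ToAddresses": ["recipient@example.com"]
    },
    "Template": "order-confirmation",
    "TemplateData": "{\"FIRSTNAME\": \"Jane\", \"LASTNAME\": \"Doe\"}"
}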

Step 5: Cleaning up

  1. Navigate your terminal to aws-ses-examples/projects/template-api.
  2. Delete all resources with cdk destroy.
  3. Delete the created SES email template with:
    aws ses delete-template --template-name order-confirmation

Next Steps

There are several ways to extend this solution’s functionality, including the ones below:

  • If you send an email that contains invalid personalization content, Amazon SES might accept the message, but won’t be able to deliver it. For this reason, if you plan to send personalized email, you should configure Amazon SES to send Rendering Failure event notifications.
  • The Amazon SES template feature does not support sending attachments, but you can add the functionality yourself. See part one of this blog series for instructions.
  • When you create a new Amazon SES account, by default your emails are sent from IP addresses that are shared with other SES users. You can also use dedicated IP addresses that are reserved for your exclusive use. This gives you complete control over your sender reputation and enables you to isolate your reputation for different segments within email programs.

Conclusion

In this blog post, we explored how to use Amazon SES with email templates to easily create complex transactional emails. The AWS CLI was used to trigger SES to send an email, but that could easily be replaced by other AWS services like Step Functions. This solution as a whole is a fully serverless architecture where we don’t have to manage the underlying infrastructure. We used the AWS CDK to deploy a predefined architecture and analyzed the deployed resources.

About the authors

Mark Kirchner is a backend engineer at quirion AG. He uses AWS CDK and several AWS services to provide a cloud backend for a web application used for financial services. He follows a full serverless approach and enjoys resolving problems with AWS.
Dominik Richter is a Solutions Architect at Amazon Web Services. He primarily works with financial services customers in Germany and particularly enjoys Serverless technology, which he also uses for his own mobile apps.

The content and opinions in this post are those of the third-party author and AWS is not responsible for the content or accuracy of this post.

How quirion sends attachments using email templates with Amazon Simple Email Service (SES)

Post Syndicated from Dominik Richter original https://aws.amazon.com/blogs/messaging-and-targeting/how-quirion-sends-attachments-using-email-templates-with-amazon-simple-email-service-ses/

This is part one of the two-part guest series on extending Simple Email Service with advanced functionality. Find part two here.

quirion is an award-winning German robo-advisor, founded in 2013, and with more than 1 billion euros under management. At quirion, we send out five thousand emails a day to more than 60,000 customers.

We chose Amazon Simple Email Service (SES) because it is an easy-to-use and cost-effective email platform. In particular, we benefit from email templates in SES, which ensure a consistent look and feel of our communication. These templates come with a styled and personalized HTML email body, perfect for transactional emails. Sometimes it is necessary to add attachments to an email, which is currently not supported by the SES template feature. To overcome this problem, we created a solution to use the SES template functionality and add file attachments.

This post shows how you can implement this solution using Amazon Simple Storage Service (Amazon S3), Amazon EventBridge, AWS Lambda and AWS Step Functions.

Solution: orchestrate different email sending options using AWS Step Functions

The solution we built is fully serverless, which means we do not have to manage the underlying infrastructure. We use AWS Cloud Development Kit (AWS CDK) to deploy the architecture and analyze the resources.

The solution extends SES to send attachments using email templates. SES offers three possibilities for sending emails:

  • Simple — A standard email message. When you create this type of message, you specify the sender, the recipient, and the message body, and Amazon SES assembles the message for you.
  • Raw — A raw, MIME-formatted email message. When you send this type of email, you have to specify all of the message headers, as well as the message body. You can use this message type to send messages that contain attachments. The message that you specify has to be a valid MIME message.
  • Templated — A message that contains personalization tags. When you send this type of email, Amazon SES API v2 automatically replaces the tags with values that you specify.

In this post, we will combine the Raw and the Templated options; the sketch below illustrates the difference between them.
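
Below is an illustrative CLI sketch of a templated send next to a raw send using the SES v2 API. The addresses, template name, and template data are placeholders.

# Templated send: SES replaces the personalization tags for you
aws sesv2 send-email \
    --from-email-address sender@example.com \
    --destination '{"ToAddresses": ["recipient@example.com"]}' \
    --content '{"Template": {"TemplateName": "HelloDocument", "TemplateData": "{\"name\": \"Jane\"}"}}'

# Raw send: you supply the complete MIME message yourself (base64-encoded), including any attachments
aws sesv2 send-email \
    --from-email-address sender@example.com \
    --destination '{"ToAddresses": ["recipient@example.com"]}' \
    --content '{"Raw": {"Data": "<base64-encoded MIME message>"}}'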

The figure below describes the architecture diagram for the proposed solution.

  1. The entry point to the application is an EventBridge event bus that routes incoming events to a Step Function workflow.
  2. An event consists of the personalization parameters, the sender and recipient addresses, the template name and optionally the document-related properties such as a reference to the S3 bucket in which the document is stored. Depending on whether the event contains document-related properties, the Step Function workflow decides how the email is prepared and sent.
  3. In case the event does not contain document-related properties, it uses the SendEmail action to send a templated email. The action requires the template name and the data to replace the personalization tags.
  4. If the event contains document-related properties, the raw sending option of the SendEmail action must be used. If we also want to use an email template, we need to use it as a raw MIME message. So, we use the TestRenderEmailTemplate action to get the raw MIME message from the template and use a Lambda function to fetch and attach the document, as sketched below. The Lambda function then triggers SES to send the email.
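
The rendering step of the attachment path can also be tried in isolation from the CLI. A minimal sketch, assuming the HelloDocument template created in the walkthrough below; the template data is illustrative.

# Render the template without sending it; the Lambda function embeds the rendered
# output in a MIME message, attaches the document from S3, and sends it as a raw email
aws sesv2 test-render-email-template \
    --template-name HelloDocument \
    --template-data '{"name": "Jane"}'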

The solution is simplified for this blog post and is used to show the possibilities of SES. We will not discuss the code of the Lambda function as there are several ways to implement it depending on your preferred programming language.

Architecture diagram of the solution: an AWS Step Functions workflow is triggered by EventBridge. If the event contains no document, the workflow triggers Amazon SES SendEmail. Otherwise, it uses SES TestRenderEmailTemplate as input for a Lambda function, which gets the document from S3 and then sends the email.

Prerequisites

Walkthrough

Step 1: Use the AWS CDK to deploy the application

To download and deploy the application run the following commands:

$ git clone git@github.com:quirionit/aws-ses-examples.git
$ cd aws-ses-examples/projects/go-src
$ go mod tidy
$ cd ../../projects/email-sender
$ npm install
$ cdk deploy

Step 2: Create a SES email template

In your terminal, navigate to aws-ses-examples/projects/email-sender and run:

aws ses create-template --cli-input-json file://files/hello_doc.json
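
The contents of hello_doc.json are not shown here, but inputs to create-template follow the standard Template request shape. A minimal, illustrative skeleton might look like this, with the template name taken from the cleanup step below and placeholder subject and body content:

{
    "Template": {
        "TemplateName": "HelloDocument",
        "SubjectPart": "Hello {{name}}",
        "HtmlPart": "<p>Hello {{name}}, please find the document attached.</p>",
        "TextPart": "Hello {{name}}, please find the document attached."
    }
}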

Step 3: Upload a sample document to S3

To upload a document to S3, complete the following steps:

  1. On the AWS Console, choose S3.
  2. Select the bucket with the name that starts with ses-documents.
  3. Copy and save the bucket name for later.
  4. Create a new folder called test.
  5. Upload the hello.txt from aws-ses-examples/projects/email-sender/files into the folder.

Screenshot of Amazon S3 console, showing the ses-documents bucket containing the file test/hello.txt

Step 4: Trigger sending an email using Amazon EventBridge

To trigger sending an email, complete the following steps:

  1. On the AWS Console, choose the Amazon EventBridge.
  2. Select Event buses from the sidebar.
  3. Select Send events.
  4. Create an event as the following image shows. You can copy the Event detail from aws-ses-examples/projects/email-sender/files/event.json, or send the event from the CLI as sketched after this list. Don’t forget to replace the sender, recipient and bucket with your values.
    Screenshot of EventBridge console, showing how the sample event with attachment is sent.
  5. As a result of sending the event, you should receive an email with the document attached.
  6. To send an email without attachment, edit the event as follows:
    Screenshot of EventBridge console, showing how the sample event without attachment is sent.
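
If you prefer the CLI for this step, an equivalent put-events call might look like the sketch below. The bus name and the detail field names here are assumptions for illustration only; use the structure from files/event.json for the real values.

# Send a test event to the bus (bus name, source, and all detail fields are illustrative)
aws events put-events --entries '[{
    "EventBusName": "email-sender-bus",
    "Source": "example.email-sender",
    "DetailType": "SendEmail",
    "Detail": "{\"template\": \"HelloDocument\", \"sender\": \"sender@example.com\", \"recipient\": \"recipient@example.com\", \"bucket\": \"ses-documents-example\", \"key\": \"test/hello.txt\"}"
}]'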

Step 5: Analyze the result

  1. On the AWS Console, choose Step Functions.
  2. Select the state machine with the name that includes EmailSender.
  3. You should see two Succeeded executions. If you select them the dataflows should look like this:
    Screenshot of Step Functions console, showing the two successful invocations.
  4. You can select each step of the dataflows and analyze the inputs and outputs.

Step 6: Cleaning up

  1. Navigate your terminal to aws-ses-examples/projects/email-sender.
  2. Delete all resources with cdk destroy.
  3. Delete the created SES email template with:

aws ses delete-template --template-name HelloDocument

Next Steps

There are several ways to extend this solution’s functionality, see some of them below:

  • If you send an email that contains invalid personalization content, Amazon SES might accept the message, but won’t be able to deliver it. For this reason, if you plan to send personalized email, you should configure Amazon SES to send Rendering Failure event notifications.
  • You can create nested templates to share common elements, such as the company’s logo, name or imprint. See part two of this blog series for instructions.
  • When you create a new Amazon SES account, by default your emails are sent from IP addresses that are shared with other SES users. You can also use dedicated IP addresses that are reserved for your exclusive use. This gives you complete control over your sender reputation and enables you to isolate your reputation for different segments within email programs.

Conclusion

In this blog post, we explored how to use Amazon SES to send attachments using email templates. We used Amazon EventBridge to trigger a Step Functions workflow that chooses between sending a raw or templated SES email. The solution is a fully serverless architecture, so we do not have to manage the underlying infrastructure. We used the AWS CDK to deploy a predefined architecture and analyzed the deployed resources.

About the authors

Mark Kirchner is a backend engineer at quirion AG. He uses AWS CDK and several AWS services to provide a cloud backend for a web application used for financial services. He follows a full serverless approach and enjoys resolving problems with AWS.
Dominik Richter is a Solutions Architect at Amazon Web Services. He primarily works with financial services customers in Germany and particularly enjoys Serverless technology, which he also uses for his own mobile apps.

The content and opinions in this post are those of the third-party author and AWS is not responsible for the content or accuracy of this post.