Amazon Simple Email Service (Amazon SES) turns 10 years old today. Back on January 25, 2011, Amazon Web Services (AWS) had only 15 services; today, AWS has grown to over 180. Jeff Barr announced the launch of Amazon SES on his AWS evangelist blog, and much of what he wrote then is still true today. Even 10 years later, email remains an important channel for customer communications, and developers still want to rely on a trusted global partner to deliver email at scale. However, mailbox providers are even more protective of their end users' security: they actively work to ensure that any perceived, unwanted email doesn't make it to the inbox.
Mailbox providers use several factors to determine the legitimacy of email traffic. Over the last decade, we have worked diligently to measure many of those factors in Amazon SES to help our customers achieve great deliverability. Much of that work has combined investments in reputation, engagement, and trust. I want to outline what we've accomplished to improve your email sending over the last 10 years.
Reputation
Reputation is the measurement mailbox providers use to determine how closely you follow their sending standards. Amazon SES measures perceived reputation through metrics such as bounce rate and complaint rate in the reputation dashboard. The reputation dashboard also shares your overall Amazon SES account sending status, such as "Healthy" or "Under Review." Some mailbox providers, or ISPs, also provide feedback to help us measure how effectively a specific IP or domain sends trustworthy traffic.
You can influence reputation in Amazon SES through:
Setting up dedicated IPs: Set up IPs in Amazon SES for your own specific sending with appropriate warm-up plans. Split IPs out by use case, such as separating password resets from marketing messages.
Customer-owned IPs (New in 2020): You can now transition IPs you've invested in through your own data center or with another ESP to Amazon SES without interruption.
Following sending volume best practices: Nothing flags your IP addresses faster than unpredictable sending patterns. We help you manage this through sending quotas; the sketch below shows one way to check yours programmatically.
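As an illustration (not part of the original post), here is a minimal boto3 sketch that reads your account's current sending limits so a batch job can stay inside them:

import boto3

ses = boto3.client('ses')

# Retrieve the account-level sending limits enforced by Amazon SES.
quota = ses.get_send_quota()
print('Max send rate (messages/second):', quota['MaxSendRate'])
print('Sent in the last 24 hours:', quota['SentLast24Hours'], 'of', quota['Max24HourSend'])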
Engagement
Engagement is the rate at which customers interact with your content. Amazon SES helps you measure engagement through conversion rates (such as opens or click-throughs) and unsubscribe rates. These are measured in the event publishing click stream. This area is more of an art in our deliverability calculus because success varies by industry and use case.
You can influence engagement in Amazon SES through:
Customizing content as much as possible, while following content best practices to avoid setting off content filters. Mailbox providers often utilize behavioral content filtering, using AI to determine whether your content is relevant based on engagement behavior.
Using consent and list management (New in 2020) with customized topics and opt-out pages. It's important to offer recipients a way to select which emails they want to receive from you and to give them an option to opt out. This is a great new feature that we've added based on customer feedback.
Trust
Earning trust in email sending is done through adherence to proper sending behavior, as measured by both individual ISPs and industry watch-groups. Trust is closely related to reputation. We measure trust through messages in the reputation dashboard based on feedback loops, Real-time Blocklists (RBLs), and spam traps. You can also see the complaint rate associated with your sending in the complaint area of the reputation dashboard, which has statuses like healthy or under review.
You can influence trust in Amazon SES through:
Proving you are who you say you are, through proper configuration of authentication records such as Sender Policy Framework (SPF) and DomainKeys Identified Mail (DKIM) in DNS, as sketched below.
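As a sketch of what DKIM setup looks like programmatically (an illustration, not from the original post; example.com is a placeholder), boto3 can generate Easy DKIM tokens, each of which becomes a CNAME record in your DNS:

import boto3

ses = boto3.client('ses')

# Request Easy DKIM tokens for the sending domain.
response = ses.verify_domain_dkim(Domain='example.com')

# Each token maps to one CNAME record:
#   <token>._domainkey.example.com -> <token>.dkim.amazonses.com
for token in response['DkimTokens']:
    print('%s._domainkey.example.com CNAME %s.dkim.amazonses.com' % (token, token))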
Deliverability is a multi-dimensional part of email sending, going well beyond setting up an SMTP (Simple Mail Transfer Protocol) endpoint, and its complexities are constantly evolving. But we're here to help. In addition to these investments in deliverability, we've also expanded Amazon SES to 18 Regions, including the government cloud. It's been an exciting time at AWS, and we look forward to supporting all of our customers in the years to come with Amazon SES.
Data is a crucial part of every business and is used for strategic decision making at all levels of an organization. To extract value from their data more quickly, Amazon Web Services (AWS) customers are building automated data pipelines—from data ingestion to transformation and analytics. As part of this process, my customers often ask how to prevent sensitive data, such as personally identifiable information, from being ingested into data lakes when it’s not needed. They highlight that this challenge is compounded when ingesting unstructured data—such as files from process reporting, text files from chat transcripts, and emails. They also mention that identifying sensitive data inadvertently stored in structured data fields—such as in a comment field stored in a database—is also a challenge.
In this post, I show you how to integrate Amazon Macie as part of the data ingestion step in your data pipeline. This solution provides an additional checkpoint that sensitive data has been appropriately redacted or tokenized prior to ingestion. Macie is a fully managed data security and privacy service that uses machine learning and pattern matching to discover sensitive data in AWS.
When Macie discovers sensitive data, the solution notifies an administrator to review the data and decide whether to allow the data pipeline to continue ingesting the objects. If allowed, the objects will be tagged with an Amazon Simple Storage Service (Amazon S3) object tag to identify that sensitive data was found in the object before progressing to the next stage of the pipeline.
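As a sketch of what that tagging step can look like (assumed for illustration, not taken from the solution's source; the bucket and key names are placeholders), a function could apply the tag with boto3 as follows:

import boto3

s3 = boto3.client('s3')

# Record that Macie found sensitive data in this object.
s3.put_object_tagging(
    Bucket='example-data-pipeline-manual-review',
    Key='transcripts/chat-2021-01-15.txt',
    Tagging={'TagSet': [{'Key': 'SensitiveDataFound', 'Value': 'true'}]}
)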
This combination of automation and manual review helps reduce the risk that sensitive data—such as personally identifiable information—will be ingested into a data lake. This solution can be extended to fit your use case and workflows. For example, you can define custom data identifiers as part of your scans, add additional validation steps, create Macie suppression rules to archive findings automatically, or only request manual approvals for findings that meet certain criteria (such as high severity findings).
Solution overview
Many of my customers are building serverless data lakes with Amazon S3 as the primary data store. Their data pipelines commonly use different S3 buckets at each stage of the pipeline. I refer to the S3 bucket for the first stage of ingestion as the raw data bucket. A typical pipeline might have separate buckets for raw, curated, and processed data representing different stages as part of their data analytics pipeline.
Typically, customers will perform validation and clean their data before moving it to a raw data zone. This solution adds validation steps to that pipeline after preliminary quality checks and data cleaning are performed, noted in blue (in layer 3) of Figure 1. The layers outlined in the pipeline are:
Ingestion – Brings data into the data lake.
Storage – Provides durable, scalable, and secure components to store the data—typically using S3 buckets.
Processing – Transforms data into a consumable state through data validation, cleanup, normalization, transformation, and enrichment. This processing layer is where the additional validation steps are added to identify instances of sensitive data that haven’t been appropriately redacted or tokenized prior to consumption.
Consumption – Provides tools to gain insights from the data in the data lake.
Figure 1: Data pipeline with sensitive data scan
The application runs on a scheduled basis (every 6 hours, or four times a day, by default) to process data that is added to the raw data S3 bucket. You can customize the application to perform a sensitive data discovery scan during any stage of the pipeline. Because most customers do their extract, transform, and load (ETL) daily, the application scans for sensitive data on a scheduled basis before any crawler jobs run to catalog the data and after typical validation and data redaction or tokenization processes complete.
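For reference, a schedule like the default one could be created with a sketch such as the following (the rule name and ARNs are placeholders; the application's template creates the actual rule for you):

import boto3

events = boto3.client('events')

# Fire every 6 hours.
events.put_rule(
    Name='sensitive-data-scan-schedule',
    ScheduleExpression='rate(6 hours)',
    State='ENABLED'
)

# Point the rule at the Step Functions state machine.
events.put_targets(
    Rule='sensitive-data-scan-schedule',
    Targets=[{
        'Id': 'scan-workflow',
        'Arn': 'arn:aws:states:us-west-2:123456789012:stateMachine:maciepipelinescanstatemachine',
        'RoleArn': 'arn:aws:iam::123456789012:role/EventBridgeInvokeStepFunctions'
    }]
)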
You can expect that this additional validation will add 5–10 minutes to your pipeline execution at a minimum. The validation processing time will scale linearly based on object size, but there is a start-up time per job that is constant.
If sensitive data is found in the objects, an email is sent to the designated administrator requesting an approval decision, which they indicate by selecting the link corresponding to approve or deny. In most cases, the reviewer will choose to adjust the sensitive data cleanup processes to remove the sensitive data, deny the progression of the files, and re-ingest the files into the pipeline.
Additional considerations for deploying this application for regular use are discussed at the end of the blog post.
Application components
The following resources are created as part of the application:
S3 buckets store data in various stages of processing: A raw data bucket for uploading objects for the data pipeline, a scanning bucket where objects are scanned for sensitive data, a manual review bucket holding objects where sensitive data was discovered, and a scanned data bucket for starting the next ingestion step of the data pipeline.
Lambda functions execute the logic to run the sensitive data scans and workflow.
A Step Functions state machine orchestrates the scanning workflow.
A scheduled Amazon EventBridge rule starts the workflow.
An Amazon SNS topic (ApprovalRequestNotification) notifies the reviewer when manual review is required.
An Amazon API Gateway REST API receives the reviewer's approve or deny decision.
Note: the application uses various AWS services, and there are costs associated with these resources after the Free Tier usage. See AWS Pricing for details. The primary drivers of the solution cost will be the amount of data ingested through the pipeline, both for Amazon S3 storage and data processed for sensitive data discovery with Macie.
The architecture of the application is shown in Figure 2 and described in the text that follows.
Figure 2: Application architecture and logic
Application logic
Objects are uploaded to the raw data S3 bucket as part of the data ingestion process.
A scheduled EventBridge rule runs the sensitive data scan Step Functions workflow.
triggerMacieScan Lambda function moves objects from the raw data S3 bucket to the scan stage S3 bucket.
triggerMacieScan Lambda function creates a Macie sensitive data discovery job on the scan stage S3 bucket (a sketch of this call follows this list).
checkMacieStatus Lambda function checks the status of the Macie sensitive data discovery job.
isMacieStatusCompleteChoice Step Functions Choice state checks whether the Macie sensitive data discovery job is complete.
If yes, the getMacieFindingsCount Lambda function runs.
If no, the workflow waits in the pollForCompletionWait state and checks the status again.
getMacieFindingsCount Lambda function counts all of the findings from the Macie sensitive data discovery job.
isSensitiveDataFound Step Functions Choice state checks whether sensitive data was found in the Macie sensitive data discovery job.
If there was sensitive data discovered, run the triggerManualApproval Lambda function.
If there was no sensitive data discovered, run the moveAllScanStageS3Files Lambda function.
moveAllScanStageS3Files Lambda function moves all of the objects from the scan stage S3 bucket to the scanned data S3 bucket.
triggerManualApproval Lambda function tags and moves objects with sensitive data discovered to the manual review S3 bucket, and moves objects with no sensitive data discovered to the scanned data S3 bucket. The function then sends a notification to the ApprovalRequestNotification Amazon SNS topic as a notification that manual review is required.
Email is sent to the email address that’s subscribed to the ApprovalRequestNotification Amazon SNS topic (from the application deployment template) for the manual review user with the option to Approve or Deny pipeline ingestion for these objects.
Manual review user assesses the objects with sensitive data in the manual review S3 bucket and selects the Approve or Deny links in the email.
The decision request is sent from the Amazon API Gateway to the receiveApprovalDecision Lambda function.
manualApprovalChoice Step Functions Choice state checks the decision from the manual review user.
If denied, run the deleteManualReviewS3Files Lambda function.
If approved, run the moveToScannedDataS3Files Lambda function.
deleteManualReviewS3Files Lambda function deletes the objects from the manual review S3 bucket.
moveToScannedDataS3Files Lambda function moves the objects from the manual review S3 bucket to the scanned data S3 bucket.
The next step of the automated data pipeline will begin with the objects in the scanned data S3 bucket.
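To make the triggerMacieScan job-creation step concrete, here is a sketch (an assumption for illustration, not the application's actual source) of how such a function might create the one-time discovery job with boto3; the account ID and bucket name are placeholders:

import uuid
import boto3

macie = boto3.client('macie2')

# Start a one-time sensitive data discovery job on the scan stage bucket.
response = macie.create_classification_job(
    clientToken=str(uuid.uuid4()),  # idempotency token
    jobType='ONE_TIME',
    name='data-pipeline-scan-' + str(uuid.uuid4())[:8],
    s3JobDefinition={
        'bucketDefinitions': [{
            'accountId': '123456789012',
            'buckets': ['example-data-pipeline-scan-stage']
        }]
    }
)
print('Started Macie job:', response['jobId'])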
Prerequisites
For this application, you need the following prerequisites:
You can use AWS Cloud9 to deploy the application. AWS Cloud9 includes the AWS CLI and AWS SAM CLI to simplify setting up your development environment.
Deploy the application with AWS SAM CLI
You can deploy this application using the AWS SAM CLI. AWS SAM uses AWS CloudFormation as the underlying deployment mechanism. AWS SAM is an open-source framework that you can use to build serverless applications on AWS.
To deploy the application
Initialize the serverless application using the AWS SAM CLI from the GitHub project in the aws-samples repository. This will clone the project locally, including the source code for the Lambda functions, the Step Functions state machine definition file, and the AWS SAM template. On the command line, run the following:
sam init --location gh:aws-samples/amazonmacie-datapipeline-scan
Alternatively, you can clone the GitHub project directly.
Deploy your application to your AWS account. On the command line, run the following:
sam deploy --guided
Complete the prompts during the guided interactive deployment. The first deployment prompt is shown in the following example.
Configuring SAM deploy
======================
Looking for config file [samconfig.toml] : Found
Reading default arguments : Success
Setting default arguments for 'sam deploy'
=========================================
Stack Name [maciepipelinescan]:
Settings:
Stack Name – Name of the CloudFormation stack to be created.
AWS Region – Region—for example, us-west-2, eu-west-1, ap-southeast-1—to deploy the application to. This application was tested in the us-west-2 and ap-southeast-1 Regions. Before selecting a Region, verify that the services you need are available in those Regions (for example, Macie and Step Functions).
Parameter StepFunctionName – Name of the Step Functions state machine to be created—for example, maciepipelinescanstatemachine.
Parameter BucketNamePrefix – Prefix to apply to the S3 buckets to be created (S3 bucket names are globally unique, so choosing a random prefix helps ensure uniqueness).
Parameter ApprovalEmailDestination – Email address to receive the manual review notification.
Parameter EnableMacie – Whether you need Macie enabled in your account or Region. Select yes if you need Macie to be enabled for you as part of this template; select no if you already have Macie enabled.
Confirm changes and provide approval for AWS SAM CLI to deploy the resources to your AWS account by responding y to prompts, as shown in the following example. You can accept the defaults for the SAM configuration file and SAM configuration environment prompts.
#Shows you resources changes to be deployed and require a 'Y' to initiate deploy
Confirm changes before deploy [y/N]: y
#SAM needs permission to be able to create roles to connect to the resources in your template
Allow SAM CLI IAM role creation [Y/n]: y
ReceiveApprovalDecisionAPI may not have authorization defined, Is this okay? [y/N]: y
ReceiveApprovalDecisionAPI may not have authorization defined, Is this okay? [y/N]: y
Save arguments to configuration file [Y/n]: y
SAM configuration file [samconfig.toml]:
SAM configuration environment [default]:
Note: This application deploys an Amazon API Gateway with two REST API resources without authorization defined to receive the decision from the manual review step. You will be prompted to accept each resource without authorization. A token (Step Functions taskToken) is used to authenticate the requests.
This creates an AWS CloudFormation changeset. Once the changeset creation is complete, you must provide a final confirmation of y to Deploy the changeset? [y/N] when prompted as shown in the following example.
Changeset created successfully. arn:aws:cloudformation:ap-southeast-1:XXXXXXXXXXXX:changeSet/samcli-deploy1605213119/db681961-3635-4305-b1c7-dcc754c7XXXX
Previewing CloudFormation changeset before deployment
======================================================
Deploy this changeset? [y/N]:
Your application is deployed to your account using AWS CloudFormation. You can track the deployment events in the command prompt or via the AWS CloudFormation console.
After the application deployment is complete, you must confirm the subscription to the Amazon SNS topic. An email will be sent to the address you entered during deployment, with a link that you need to select to confirm the subscription. This confirmation provides opt-in consent for AWS to send emails to you via the specified Amazon SNS topic. The emails will be notifications about potentially sensitive data that needs to be reviewed and approved. If you don't see the verification email, be sure to check your spam folder.
Test the application
The application uses an EventBridge scheduled rule to start the sensitive data scan workflow, which runs every 6 hours. You can manually start an execution of the workflow to verify that it’s working. To test the function, you will need a file that contains data that matches your rules for sensitive data. For example, it is easy to create a spreadsheet, document, or text file that contains names, addresses, and numbers formatted like credit card numbers. You can also use this generated sample data to test Macie.
We will test by uploading a file to our S3 bucket via the AWS web console. If you know how to copy objects from the command line, that also works.
Upload test objects to the S3 bucket
Navigate to the Amazon S3 console and upload one or more test objects to the <BucketNamePrefix>-data-pipeline-raw bucket. <BucketNamePrefix> is the prefix you entered when deploying the application in the AWS SAM CLI prompts. You can use any objects as long as they’re a supported file type for Amazon Macie. I suggest uploading multiple objects, some with and some without sensitive data, in order to see how the workflow processes each.
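If you'd rather script the upload, a boto3 sketch like the following does the same thing as the console (the file names and key prefix are placeholders; replace the bucket name with yours):

import boto3

s3 = boto3.client('s3')

bucket = 'example-data-pipeline-raw'  # your <BucketNamePrefix>-data-pipeline-raw bucket

# Upload one object containing sensitive test data and one without.
s3.upload_file('test-with-pii.csv', bucket, 'test/test-with-pii.csv')
s3.upload_file('test-clean.csv', bucket, 'test/test-clean.csv')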
Start the Scan State Machine
Navigate to the Step Functions state machines console. If you don't see your state machine, make sure you're in the same Region that you deployed your application to.
Choose the state machine you created using the AWS SAM CLI as seen in Figure 3. The example state machine is maciepipelinescanstatemachine, but you might have used a different name in your deployment.
Figure 3: AWS Step Functions state machines console
Select the Start execution button and copy the value from the Enter an execution name – optional box. Change the Input – optional value, replacing <execution id> with the value you just copied, as follows:
{
  "id": "<execution id>"
}
In my example, the <execution id> is fa985a4f-866b-b58b-d91b-8a47d068aa0c from the Enter an execution name – optional box as shown in Figure 4. You can choose a different ID value if you prefer. This ID is used by the workflow to tag the objects being processed to ensure that only objects that are scanned continue through the pipeline. When the EventBridge scheduled event starts the workflow as scheduled, an ID is included in the input to the Step Functions workflow. Then select Start execution again.
Figure 4: New execution dialog box
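You can also start the same execution from code. The following sketch (for illustration; the state machine ARN is a placeholder) passes the required id input:

import json
import uuid
import boto3

sfn = boto3.client('stepfunctions')

execution_id = str(uuid.uuid4())

# The id is used by the workflow to tag the objects being processed.
sfn.start_execution(
    stateMachineArn='arn:aws:states:us-west-2:123456789012:stateMachine:maciepipelinescanstatemachine',
    name=execution_id,
    input=json.dumps({'id': execution_id})
)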
You can see the status of your workflow execution in the Graph inspector as shown in Figure 5. In the figure, the workflow is at the pollForCompletionWait step.
Figure 5: AWS Step Functions graph inspector
The sensitive data discovery job should run for about five to ten minutes. The jobs scale linearly based on object size, but there is a constant start-up time per job. If sensitive data is found in the objects uploaded to the <BucketNamePrefix>-data-pipeline-raw S3 bucket, an email is sent to the address provided during the AWS SAM deployment step, notifying the recipient that an approval decision is needed. The recipient indicates their decision by selecting the link corresponding to approve or deny, as shown in Figure 6.
Figure 6: Sensitive data identified email
When you receive this notification, you can investigate the findings by reviewing the objects in the <BucketNamePrefix>-data-pipeline-manual-review S3 bucket. Based on your review, you can either apply remediation steps to remove any sensitive data or allow the data to proceed to the next step of the data ingestion pipeline. You should define a standard response process to address discovery of sensitive data in the data pipeline. Common remediation steps include review of the files for sensitive data, deleting the files that you do not want to progress, and updating the ETL process to redact or tokenize sensitive data when re-ingesting into the pipeline. When you re-ingest the files into the pipeline without sensitive data, the files will not be flagged by Macie.
The workflow performs the following:
If you select Approve, the files are moved to the <BucketNamePrefix>-data-pipeline-scanned-data S3 bucket with an Amazon S3 SensitiveDataFound object tag with a value of true.
If you select Deny, the files are deleted from the <BucketNamePrefix>-data-pipeline-manual-review S3 bucket.
If no action is taken, the Step Functions workflow execution times out after five days and the file will automatically be deleted from the <BucketNamePrefix>-data-pipeline-manual-review S3 bucket after 10 days.
Clean up the application
You’ve successfully deployed and tested the sensitive data pipeline scan workflow. To avoid ongoing charges for resources you created, you should delete all associated resources by deleting the CloudFormation stack. In order to delete the CloudFormation stack, you must first delete all objects that are stored in the S3 buckets that you created for the application.
To delete the application
Empty the S3 buckets created in this application (<BucketNamePrefix>-data-pipeline-raw S3 bucket, <BucketNamePrefix>-data-pipeline-scan-stage, <BucketNamePrefix>-data-pipeline-manual-review, and <BucketNamePrefix>-data-pipeline-scanned-data).
Delete the CloudFormation stack.
Considerations for deploying this application for regular use
Before using this application in a production data pipeline, you will need to stop and consider some practical matters. First, the notification mechanism used when sensitive data is identified is email, and email doesn't scale: you should expand this solution to integrate with your ticketing or workflow management system. If you choose to use email, subscribe a mailing list so that the work of reviewing and responding to alerts is shared across a team.
Second, the application runs on a scheduled basis (every 6 hours by default). You should consider starting the application when your preliminary validations have completed and you are ready to perform a sensitive data scan as part of your pipeline. You can modify the EventBridge rule to run in response to an Amazon EventBridge event instead of on a schedule.
Third, the application currently uses a 60-second Step Functions Wait state when polling for the Macie discovery job completion. In real-world scenarios, the discovery scan will take 10 minutes at a minimum, and likely much longer. You should evaluate the typical execution times for your discovery jobs and tune the polling period accordingly. This will help reduce costs related to running Lambda functions and log storage within CloudWatch Logs. The polling period is defined in the Step Functions state machine definition file (macie_pipeline_scan.asl.json) under the pollForCompletionWait state.
Fourth, the application currently doesn't account for false positives in the sensitive data discovery job results. Also, the application will progress or delete all identified objects based on the decision by the reviewer. You should consider expanding the application to handle false positives through automation rather than manual review or intervention (such as deleting the files from the manual review bucket or removing the sensitive data tags applied).
Last, the solution will stop the ingestion of a subset of objects into your pipeline. This behavior is similar to other validation and data quality checks that most customers perform as part of the data pipeline. However, you should test to ensure that this will not cause unexpected outcomes and address them in your downstream application logic accordingly.
Conclusion
In this post, I showed you how to integrate sensitive data discovery using Macie as an additional validation step in an automated data pipeline. You’ve reviewed the components of the application, deployed it using the AWS SAM CLI, tested to validate that the application functions as expected, and cleaned up by removing deployed resources.
You now know how to integrate sensitive data scanning into your ETL pipeline. You can use automation and—where required—manual review to help reduce the risk of sensitive data, such as personally identifiable information, being inadvertently ingested into a data lake. You can take this application and customize it to fit your use case and workflows, such as using custom data identifiers as part of your scans, adding additional validation steps, creating Macie suppression rules to define cases to archive findings automatically, or requesting manual approvals only for findings that meet certain criteria (such as high severity findings).
If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the Amazon Macie forum.
Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.
Over the last 12 months, organizations of all types have increasingly needed to stay connected to their customers. With the move to virtual interactions accelerating across industries, email has remained a trusted channel for customer communications. Amazon Simple Email Service (SES) has seen record outbound email traffic in 2020, supporting critical customer communications during COVID and commercial moments like Black Friday and Cyber Monday.
The importance of email during COVID
Unlike real-time communications like voice or live chat, email is asynchronous. It can be read and consumed at the customer's leisure. In some geographies like North America, email also represents an individual's unique identity, persisting longer than mobile phone numbers or social networking accounts. Even though email's importance was established well before 2020, most organizations took care to send only the right messages during the COVID crisis.
Many organizations voluntarily chose to decrease promotional or marketing emails during the pandemic, in recognition of the increased stress most individuals were facing in their personal lives. However, even with the drop in marketing emails across organizational types, there was an increased need to communicate and maintain customer engagement. Most organizations went through three distinct customer communication phases with email in 2020: React, Respond, and Reimagine.
React – These were the initial emails sent to acknowledge the COVID crisis, occurring early in 2020. These emails included messages reinforcing commitment to customer health, employee safety, or communicating new cleaning protocols.
Respond – These messages often included communication on the status of the business or event. Most businesses needed to communicate their transition to remote work, temporary closures, and the cancellation of many in-person events.
Reimagine – Throughout the crisis, organizations were reimagining how to do business. Healthcare providers began offering video consultations, and restaurants shifted to pick-up and take-out only. Email communication was vital for taking customers on the journey into this "new normal," even as some businesses started to reopen.
To send these customer communications at scale, many organizations worked with Amazon SES.
How Amazon SES scaled and supported customers in 2020
Amazon SES saw several sending spikes that aligned with organizations working to communicate with their customers during COVID. Nine times in 2020, transactions per second (TPS) in Amazon SES exceeded 150% of the previous record, set on Black Friday 2019. TPS spikes of over 150% of that record also occurred on Black Friday and Cyber Monday 2020.
In addition to supporting those upsurges in throughput, the Amazon SES team also responded to customer feedback by increasing the global footprint of Amazon SES. Since January, Amazon SES increased the total number of supported Regions from 7 to 14, including the US government cloud. These additional Regions were deployed during 2020 while the team worked remotely. This regional expansion enabled customers to adhere to local data sovereignty requirements for email sending while also improving performance.
Customers also told us they needed tools to help them manage compliance with important governance laws like CAN-SPAM and GDPR. Amazon SES released list management to help organizations manage their customers' contact information and preferences.
Looking forward
As we move into 2021, email will remain at the forefront of customer communication channels. Enterprise customers like Netflix and Duolingo rely on Amazon SES to deliver their email at scale. For more information on how you can use Amazon SES, visit our website.
Email is a popular channel for applications, used in marketing campaigns and other outbound customer communications. The challenge with email is that it can become increasingly complex to manage for companies that must send large quantities of messages per month. This complexity is especially true when companies need to measure detailed email engagement metrics to track campaign success.
As a marketer, you want to monitor several metrics, including open rates, click-through rates, bounce rates, and delivery rates. If you do not track your email results, you could potentially be wasting your campaign resources. Monitoring and interpreting your sending results can help you deliver the best content possible to your subscribers’ inboxes, and it can also ensure that your IP reputation stays high. Mailbox providers prioritize inbox placement for senders that deliver relevant content. As a business professional, tracking your emails can also help you stay on top of hot leads and important clients. For example, if someone has opened your email multiple times in one day, it might be a good idea to send out another follow-up email to touch base.
Building a large-scale email solution is a complex and expensive challenge for any business: you would need to build infrastructure, assemble your network, and warm up your IP addresses. Alternatively, working with some third-party email solutions requires contract negotiations and upfront costs.
Fortunately, Amazon Simple Email Service (SES) has a highly scalable and reliable backend infrastructure that reduces the preceding challenges. It has improved content filtering techniques, reputation management features, and a vast array of analytics and reporting functions. These features help email senders reach their audiences and make it easier to manage email channels across applications. Amazon SES also provides API operations to monitor your sending activities through simple API calls. You can publish these events to Amazon CloudWatch, Amazon Kinesis Data Firehose, or Amazon Simple Notification Service (SNS).
In this post, you learn how to build and automate a serverless architecture that analyzes email events. We explore how to track important metrics, such as the open and click rates of your emails.
Solution overview
The metrics that you can measure using Amazon SES are referred to as email sending events. You can use Amazon CloudWatch to retrieve Amazon SES event data. You can also use Amazon SNS to interpret Amazon SES event data. However, in this post, we are going to use Amazon Kinesis Data Firehose to monitor our user sending activity.
We enable an Amazon SES configuration set with open and click metrics and publish the email sending events to Amazon Kinesis Data Firehose as JSON records. A Lambda function parses the JSON records and publishes the content to an Amazon S3 bucket.
Ingested data lands in an Amazon S3 bucket that we refer to as the raw zone. To make that data available, you have to catalog its schema in the AWS Glue Data Catalog. You create and run an AWS Glue crawler that crawls your data sources and constructs your Data Catalog. The Data Catalog uses pre-built classifiers for many popular source formats and data types, including JSON, CSV, and Parquet.
When the crawler is finished creating the table definition and schema, you analyze the data using Amazon Athena. It is an interactive query service that makes it easy to analyze data in Amazon S3 using SQL. Point to your data in Amazon S3, define the schema, and start querying using standard SQL, with most results delivered in seconds.
Now you can build visualizations, perform ad hoc analysis, and quickly get business insights from the Amazon SES event data using Amazon QuickSight. You can easily run SQL queries using Amazon Athena on data stored in Amazon S3, and build business dashboards within Amazon QuickSight.
Deploying the architecture:
Configuring Amazon Kinesis Data Firehose to write to Amazon S3:
Navigate to Amazon Kinesis in the AWS Management Console. Choose Kinesis Data Firehose and create a delivery stream.
Enter the delivery stream name as "SES_Firehose_Demo".
Under the source category, select "Direct Put or other sources".
On the next page, make sure to enable Data Transformation of source records with AWS Lambda. We use AWS Lambda to parse the notification contents so that we process only the information required for the use case.
Click "Create New" to create a new Lambda function.
Click on the "General Kinesis Data Firehose Processing" Lambda blueprint; this opens the Lambda console. Enter the following values in Lambda:
Name: SES-Firehose-Json-Parser
Execution role: Create a new role with basic Lambda permissions.
Click "Create Function". Now replace the Lambda code with a parsing function like the sketch below and save the function.
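A minimal sketch of such a Kinesis Data Firehose transformation handler, assuming the standard Amazon SES event JSON layout and keeping only the three fields described next, could look like this; adjust the field mapping to your own needs:

import base64
import json

def lambda_handler(event, context):
    output = []
    for record in event['records']:
        payload = json.loads(base64.b64decode(record['data']))
        event_type = payload.get('eventType', '')
        # Keep only the fields needed for this use case. The IP address for
        # open and click events lives under the 'open'/'click' sub-object.
        filtered = {
            'Eventname': event_type,
            'destination_Email': (payload.get('mail', {}).get('destination') or [None])[0],
            'SourceIP': payload.get(event_type.lower(), {}).get('ipAddress')
        }
        # Newline-delimited JSON keeps the S3 output easy to crawl and query.
        data = (json.dumps(filtered) + '\n').encode('utf-8')
        output.append({
            'recordId': record['recordId'],
            'result': 'Ok',
            'data': base64.b64encode(data).decode('utf-8')
        })
    return {'records': output}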
For this blog, we extract only three fields: Eventname, destination_Email, and SourceIP. If you want to store other parameters, you can modify the code accordingly. For the full list of information included in the notifications, refer to the Amazon SES event publishing documentation.
You can set an Amazon S3 prefix and error prefix for the delivery stream. If you use your own prefixes, make sure to update the corresponding target values in AWS Glue accordingly, as you will see later in the process.
Keep the Amazon S3 backup option disabled and click “Next”.
On the next page, under the Permissions section, select create a new role. This opens a new tab; click "Allow" to create the role.
Navigate back to the Amazon Kinesis Data Firehose console and click “Next”.
Review the changes and click on “Create delivery stream”.
Configure Amazon SES to publish event data to Kinesis Data Firehose:
Navigate to the Amazon SES console and select "Email Addresses" from the left side.
Click on "Verify a New Email Address" at the top. Enter the email address to which you will send a test email.
Go to your email inbox and click on the verify link. Navigate back to the Amazon SES console and you will see a verified status on the email address you provided.
Open the Amazon SES console and select Configuration set from the left side.
Create a new configuration set. Enter “SES_Firehose_Demo” as the configuration set name and click “Create”.
Choose Kinesis Data Firehose as the destination and provide the following details.
Name: OpenClick
Event Types: Open and Click
In the IAM Role field, select ‘Let SES make a new role’. This allows SES to create a new role and add sufficient permissions for this use case in that role.
Click “Save”.
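If you prefer to script this step, the same event destination can be created through the API. This sketch assumes the delivery stream created earlier and a pre-existing IAM role (both ARNs are placeholders):

import boto3

ses = boto3.client('ses')

# Attach a Kinesis Data Firehose event destination to the configuration set.
ses.create_configuration_set_event_destination(
    ConfigurationSetName='SES_Firehose_Demo',
    EventDestination={
        'Name': 'OpenClick',
        'Enabled': True,
        'MatchingEventTypes': ['open', 'click'],
        'KinesisFirehoseDestination': {
            'IAMRoleARN': 'arn:aws:iam::123456789012:role/ses-firehose-role',
            'DeliveryStreamARN': 'arn:aws:firehose:us-east-1:123456789012:deliverystream/SES_Firehose_Demo'
        }
    }
)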
Sending a Test email:
Navigate to the Amazon SES console and click on "Email Addresses" on the left side.
Select your verified email address and click on “Send a Test email”.
Make sure you select the raw email format. You may use the following format to send out a test email from the console. Make sure you send this email to a recipient inbox to which you have access.
X-SES-CONFIGURATION-SET: SES_Firehose_Demo
X-SES-MESSAGE-TAGS: Email=NULL
From: [email protected]
To: [email protected]
Subject: Test email
Content-Type: multipart/alternative;
boundary="----=_boundary"
------=_boundary
Content-Type: text/html; charset=UTF-8
Content-Transfer-Encoding: 7bit
This is a test email.
<a href="https://aws.amazon.com/">Amazon Web Services</a>
------=_boundary
Once the email is received in the recipient's inbox, open the email and click the link in it. This generates open and click events and sends the responses back to SES.
Creating the AWS Glue crawler:
Navigate to the AWS Glue console, select "Crawlers" from the left side, and then click on "Add crawler" at the top.
Enter the crawler name as “SES_Firehose_Crawler” and click “Next”.
Under Crawler source type, select “Data stores” and click “Next”.
Select Amazon S3 as the data source and provide the required path. Include the path up to the "fhbase" folder.
Select “no” under Add another data source section.
In the IAM role, select the option to ‘Create an IAM role’. Enter the name as “SES_Firehose-Crawler”. This provides the necessary permissions automatically to the newly created role.
In the frequency section, select run on demand and click “Next”. You may choose this value as per your use case.
Click on add Database and provide the name as “ses_firehose_glue_db”. Click on create and then click “Next”.
Review your Glue crawler setting and click on “Finish”.
Run the crawler you just created. It crawls the data from the specified Amazon S3 bucket and creates a catalog and table definition.
Now navigate to "tables" on the left, and verify that a "fhbase" table was created after you ran the crawler.
If you want to analyze the data stored so far, you can use Amazon Athena and test the queries. If not, you can move directly to Amazon QuickSight.
Analyzing the data using Amazon Athena:
Open the Athena console and select the database that was created using AWS Glue.
Click on “setup a query result location in Amazon S3” as shown in the following screenshot.
Navigate to the Amazon S3 bucket created in earlier steps and create a folder called "AthenaQueryResult". We store our Athena query results in this folder.
Now navigate back to Amazon Athena and select the Amazon S3 bucket with the folder location as shown in the following screenshot and click “Save”.
Run the following query to test the sample output and accordingly modify your SQL query to get the desired output.
SELECT * FROM "ses_firehose_glue_db"."fhbase"
Note: If you want to track opened emails by unique IP addresses, you can modify your SQL query accordingly (see the sketch below). This is because every time an email is opened, you receive a notification, even if the same email was previously opened.
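For example, a query like the one in the following sketch counts distinct source IPs per recipient for open events (an illustration only; the lowercase column names assume the crawler's default casing, so adjust them to match your table):

import boto3

athena = boto3.client('athena')

# Count opens by unique IP address so repeat opens from the same IP
# are not double-counted.
response = athena.start_query_execution(
    QueryString=(
        'SELECT destination_email, COUNT(DISTINCT sourceip) AS unique_opens '
        'FROM "ses_firehose_glue_db"."fhbase" '
        "WHERE eventname = 'Open' "
        'GROUP BY destination_email'
    ),
    QueryExecutionContext={'Database': 'ses_firehose_glue_db'},
    ResultConfiguration={'OutputLocation': 's3://<your-bucket>/AthenaQueryResult/'}
)
print(response['QueryExecutionId'])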
Visualizing the data in Amazon QuickSight dashboards:
Now, let's analyze this data using Amazon Athena via Amazon QuickSight.
Log in to Amazon QuickSight and choose Manage data, then New dataset. Choose Amazon Athena as the new data source.
Enter the data source name as “SES-Demo” and click on “Create the data source”.
Select your database "ses_firehose_glue_db" from the drop-down, along with the "fhbase" table that you created in AWS Glue.
Add a custom SQL query based on your use case and click on "Confirm query".
You can perform ad hoc analysis and modify your query according to your business needs as shown in the following image. Click “Save & Visualize”.
You can now visualize your event data on an Amazon QuickSight dashboard. You can use various graphs to represent your data. For this demo, the default graph is used, with two fields selected to populate the graph, as shown below.
Conclusion:
This architecture shows how to track your email sending activity at a granular level. You set up Amazon SES to publish event data to Amazon Kinesis Data Firehose based on fine-grained email characteristics that you define. You can also track several types of email sending events, including sends, deliveries, bounces, complaints, rejections, rendering failures, and delivery delays. This information can be useful for operational and analytical purposes.
Chirag Oswal is a solutions architect and AR/VR specialist working with the public sector India. He works with AWS customers to help them adopt the cloud operating model on a large scale.
Apoorv Gakhar is a Cloud Support Engineer and an Amazon SES expert. He works with AWS to help customers integrate their applications with various AWS services.
If you’re an AWS Directory Service administrator, you can reset your directory users’ passwords from the AWS console or the CLI when their passwords expire. However, you can improve your efficiency by reducing the number of requests for password resets. You can also help improve the security of your organization by having your users proactively reset their directory passwords before they expire. In this post, I describe the steps you can take to set up a solution to send regular reminders to your AWS Directory Service for Microsoft Active Directory (AWS Managed Microsoft AD) users to prompt them to change their password before it expires. This will help prevent users from being locked out when their passwords expire and also reduce the number of reset requests sent to administrators.
Solution Overview
When users’ passwords expire, they typically contact their directory service administrator to help them reset their password. For security reasons, they then need to reset their password again on their computer so that the administrator has no knowledge of the new password. This process is time-consuming and impacts productivity. In this post, I present a solution to remind users automatically to reset AWS Managed Microsoft AD passwords. The following diagram and description explains how the solution works.
Figure 1: Solution architecture
A script running on an AWS Managed Microsoft AD domain-joined Amazon Elastic Compute Cloud (Amazon EC2) instance (Notification Server) searches the AWS Managed Microsoft AD for all enabled user accounts and retrieves their names, email addresses, and password expiry dates.
Using the permissions of the IAM role attached to the Notification Server, the script obtains the SES SMTP credentials stored in AWS Secrets Manager.
With the SMTP credentials obtained in Step 2, the script then securely connects to Amazon Simple Email Service (Amazon SES).
Based on your preferences, Amazon SES sends domain password expiry notifications to the users’ mailboxes.
A separate process for updating the SES credentials stored in AWS Secrets Manager occurs as follows:
A CloudWatch rule triggers a Lambda function.
The Lambda function generates new SES SMTP credentials from the SES IAM Username.
The Lambda function then updates AWS Secrets Manager with the new SES credentials.
The Lambda function then deletes the previous IAM access key.
Prerequisites
The instructions in this post assume that you’re familiar with how to create Amazon EC2 for Windows Server instances, use Remote Desktop Protocol (RDP) to log in to the instances, and have completed the following tasks:
Remove Amazon EC2 throttling on port 25 for your EC2 instance.
Remove your Amazon SES account from the Amazon SES sandbox so you can also send email to unverified recipients.
Note: You can use your AWS Microsoft Directory management instance as the Notification Server. For the steps below, use any account that is a member of the AWS delegated Administrators’ group.
Summary of the steps
Verify an Amazon SES email address.
Create Amazon SES SMTP credentials.
Store the Amazon SES SMTP credentials in AWS Secrets Manager.
Create an IAM role with read permissions to the secret in AWS Secrets Manager.
Set up and test the notification script.
Set up Windows Task Scheduler.
Configure automatic rotation of the SES Credentials stored in Secrets Manager.
STEP 1: Verify an Amazon SES email address
To prevent unauthorized use, Amazon SES requires that you verify the email address that you use as a “From,” “Source,” “Sender,” or “Return-Path”.
To verify the email address you will use as the sending address, complete the following steps:
In the navigation pane, under Identity Management, select Email Addresses.
Select Verify a New Email Address, and then enter the email address.
Select Verify This Email Address.
An email will be sent to the specified email address with a link to verify the email address. Once you verify the email, you’ll see the Verification Status as verified in the SES console.
In my example, I have four verified email addresses.
STEP 2: Create Amazon SES SMTP credentials
In the navigation pane, select SMTP Settings. In the content pane, make a note of the Server Name, as you will use this when sending the email in Step 5. Select Create My SMTP Credentials.
Figure 3: Make a note of the SES SMTP Server Name
Specify a value for the IAM User Name field. Make a note of this IAM user name, as you will need it in Step 7 later. In this post, I use the placeholder ses-smtp-user-eu-west-1 as the user name (as shown below):
Figure 4: Make a note of SES IAM User Name
Select Create.
Make a note of the SMTP Username and SMTP Password you created, because you'll use these in later steps, as shown in my example below.
Figure 5: Make a note of the SES SMTP Username and SMTP Password
STEP 3: Store the Amazon SES SMTP credentials in AWS Secrets Manager
In this step, use AWS Secrets Manager to store the Amazon SES SMTP credentials created in Step 2. You will reference this credential when you execute the script on the Notification Server.
Complete the following steps to store the Amazon SES SMTP credentials in AWS Secrets Manager:
Select Store a new secret, and then select Other types of secrets.
Under Secret Key/value, enter the Amazon SES SMTP Username in the left box and the Amazon SES SMTP Password in the right box, and then select Next.
Figure 6: Enter the Amazon SES SMTP user name and password
In the next screen, enter the string AWS-SES as the name of the secret. Enter an optional description for the secret, add an optional tag, and select Next.
Note: I recommend using AWS-SES as the name of your secret. If you choose to use some other name, you will have to update the PowerShell script in Step 5. I also recommend creating the secret in the same Region as the Notification Server. If you create your secret in a different Region, you will also have to update the PowerShell script in Step 5.
Figure 7: Enter “AWS-SES” as the secret name
On the next screen, leave the default setting as Disable automatic rotation and select Next. You will come back to this in Step 7, where you will use a Lambda function to rotate the secret at specified intervals.
To store the secret, in the last screen, select Store. Now select the secret and make a note of its ARN, as shown in Figure 8.
Figure 8: Make a note of the Secret ARN
Step 4: Create IAM role with permissions to read the secret
Create an IAM role that grants permissions to read the secret created in Step 3. Then, attach this role to the Notification Server to enable your script to read this secret. Complete the following steps:
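In the IAM console, select Policies, select Create policy, and then select JSON. A minimal policy sketch that grants read access to the secret looks like the following (replace the Resource value with the ARN of the secret you noted in Step 3):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "secretsmanager:GetSecretValue",
                "secretsmanager:DescribeSecret"
            ],
            "Resource": "<ARN of the secret from Step 3>"
        }
    ]
}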
Here is how it looks in my example after I replace the placeholder with the ARN of my Secrets Manager secret:
Figure 9: Example policy
Select Review policy.
On the next screen, specify a name for the policy. In my example, I have specified Access-Ses-Secret as the name of the policy. Also specify a description for the policy, and then select Create policy.
In the navigation pane, select Roles.
In the content pane, select Create role.
On the next page, select EC2, and then select Next: Permissions.
Select the policy you created, and then select Next: Tags.
Select Next: Review, provide a name for the role. In my example, I have specified SecretsManagerReadAccessRole as the name. Select Create Role.
Now, complete the following steps to attach the role to the Notification Server:
Select Actions, select Instance Settings, and then select Attach/Replace IAM Role.
Figure 10: Select “Attach/Replace IAM Role”
On the Attach/Replace IAM Role page, choose the role to attach from the drop-down list. For this post, I choose SecretsManagerReadAccessRole and select Apply.
Here is how it looks in my example:
Figure 11: Example “Attach/Replace IAM Role”
STEP 5: Set up and test the notification script
In this section, you're going to test the script by sending a sample notification email to an end user, reminding the user to change their password. To test the script, log in to your Notification Server using your AWS Managed Microsoft AD default Admin account. Then, complete the following steps:
Install the PowerShell module for Active Directory by opening PowerShell as Administrator and running the following command:
Install-WindowsFeature -Name RSAT-AD-PowerShell
Download the script to the Notification Server. In my example, I downloaded the script and stored it locally on the server.
For this example, I created a test user named RandomUser in Active Directory.
Note: Make sure to clear the User must change password at next logon check box when creating the user; otherwise, you will get an invalid output from the command in the next step.
In the PowerShell Window, execute the following command to determine the number of days remaining before the password for the user expires. In this example, I run the following to determine the number of days remaining before the RandomUser account password expires:
A new email will be sent to the user's mailbox. Verify that the user has received the email.
Note: You can update the script to send multiple email reminders to users. For example, to notify users on three occasions (a first notification 15 days before password expiration, another at 7 days, and one more when there is only 1 day left), I would execute the following:
STEP 6: Set up Windows Task Scheduler
Now that you have tested the script and confirmed that the solution is working as expected, you can set up a Windows Scheduled Task to execute the script daily. To do this:
Open Task Scheduler.
Right-click Task Scheduler Library, and then select Create Task.
Specify a name for the task.
On the Triggers tab, select New.
Select Daily, and then select OK.
On the Actions tab, select New.
Inside Program/Script, type PowerShell.exe
In the Add arguments (optional) box, type the following command, including the full path to the script.
Select OK twice, and then enter your password when prompted to complete the steps.
The script will now run daily at the specified time and will send password expiration email notifications to your AWS Managed Microsoft AD users. In my example, a password expiration reminder email is sent to my AWS Managed Microsoft AD users 15 days before expiration, 7 days before expiration, and then 1 day before expiration.
Here is a sample email:
Figure 12: Sample password expiration email
Note: You can edit the script to change the notification message to suit your requirements.
Step 7: Configure automatic update of the SES credentials
In this final section, you're going to set up the configuration to automatically update the secret (that is, the SES credentials stored in AWS Secrets Manager) at regular intervals. To achieve this, you will use an AWS Lambda function that will do the following:
Create a new access key using the IAM user you used to create the SES SMTP Credentials in Step 2 (ses-smtp-user-eu-west-1 in my example).
Update the SES credentials stored in AWS Secrets Manager.
Delete the old IAM access key.
Complete the following steps to enable automatic update of the SES credentials:
First, you will create the IAM policy that you will attach to a role assumed by the Lambda function. This policy provides the permissions to create new access keys for the SES IAM user and to update the SES credentials stored in AWS Secrets Manager.
Log in to the IAM Console, and in the navigation bar, select Policies.
In the content pane, select Create Policy, and then select JSON.
Replace the content with a policy like the following sketch, specifying the ARN of the IAM user you used to create the SES SMTP credentials in Step 2 and the ARN of the secret stored in Secrets Manager that you noted in Step 3.
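Based on the API calls the Lambda function makes (listing, creating, and deleting access keys, and updating the secret), a minimal policy sketch looks like this; replace both Resource values with your own ARNs:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "iam:ListAccessKeys",
                "iam:CreateAccessKey",
                "iam:DeleteAccessKey"
            ],
            "Resource": "<ARN of the IAM user from Step 2>"
        },
        {
            "Effect": "Allow",
            "Action": "secretsmanager:UpdateSecret",
            "Resource": "<ARN of the secret from Step 3>"
        }
    ]
}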
Select Review Policy, and then specify a name and a description for the policy. In my example, I have specified the name of the policy as iam-secretsmanager-access-for-lambda.
Here is how it looks in my example:
Figure 14: Specify a name and description for the policy
Select Create Policy
Now, create an IAM role and attach this policy.
In the navigation bar, select Roles and select Create Role.
Under Choose the service that will use this role, select Lambda, and then select Next: Permissions.
On the next page, select the policy you just created and select Next: Tags. Add an optional tag and select Next: Review.
Specify a name for the role and description, and then select Create role. In my example, I have named the role: LambdaRoleRotatateSesSecret.
Now, you will create a Lambda function that will assume the created role:
Specify a name for the function, and then, under Runtime, select Python 3.7.
Under Execution role, select Use an existing role, and then select the role you created earlier.
Here are the settings I used in my example:
Figure 15: Settings on the “Create function” page
Select Create function, copy the following Python code, and then paste it in the Function Code section.
import boto3
import os       # required to fetch environment variables
import hmac     # required to compute the HMAC key
import hashlib  # required to create a SHA256 hash
import base64   # required to encode the computed key
import sys      # required for system functions

iam = boto3.client('iam')
sm = boto3.client('secretsmanager')

SES_IAM_USERNAME = os.environ['SES_IAM_USERNAME']
SECRET_ID = os.environ['SECRET_ID']

def lambda_handler(event, context):
    print("Getting current credentials...")
    old_key = iam.list_access_keys(UserName=SES_IAM_USERNAME)['AccessKeyMetadata'][0]['AccessKeyId']
    print("Creating new credentials...")
    new_key = iam.create_access_key(UserName=SES_IAM_USERNAME)
    print("New credentials created...")
    smtp_username = '%s' % (new_key['AccessKey']['AccessKeyId'])
    iam_sec_access_key = '%s' % (new_key['AccessKey']['SecretAccessKey'])

    # These variables are used when calculating the SMTP password.
    message = 'SendRawEmail'
    version = '\x02'

    # Compute an HMAC-SHA256 key from the AWS secret access key.
    signatureInBytes = hmac.new(iam_sec_access_key.encode('utf-8'), message.encode('utf-8'), hashlib.sha256).digest()
    # Prepend the version number to the signature.
    signatureAndVersion = version.encode('utf-8') + signatureInBytes
    # Base64-encode the string that contains the version number and signature.
    smtpPassword = base64.b64encode(signatureAndVersion)
    # Decode the string and print it to the console.
    ses_smtp_pass = smtpPassword.decode('utf-8')

    secret_string = '{"%s": "%s"}' % (new_key['AccessKey']['AccessKeyId'], ses_smtp_pass)
    print("Updating credentials in SecretsManager...")
    sm_res = sm.update_secret(
        SecretId=SECRET_ID,
        SecretString=secret_string
    )
    print(sm_res)

    print("Deleting old key")
    del_res = iam.delete_access_key(
        UserName=SES_IAM_USERNAME,
        AccessKeyId=old_key
    )
    print(del_res)
Here is what it will look like:
Figure 16: The Python code pasted in the “Function Code” section
In the Environment variables section, specify the two environment variables required by the Lambda Python code: SES_IAM_USERNAME, the name of the IAM user you created in Step 2 (ses-smtp-user-eu-west-1 in my example), and SECRET_ID, the name of the secret you created in Step 3 (AWS-SES in my example).
Next, set up the CloudWatch rule that triggers the Lambda function on a schedule. Open the CloudWatch console, select Rules in the navigation pane, and then, in the content pane, select Create Rule.
Under Event Source, select Schedule, and then select Fixed rate of. Specify how often you would like CloudWatch to trigger the Lambda function. In my example, I have chosen to update the SES credentials every 30 days.
Under Targets, select Add Target, and then select Lambda Function.
In Function, select the Lambda function you just created, and then select Configure details.
Figure 18: Create new CloudWatch rule
Specify a name for the rule, enter a description, make sure the State check box is selected, and then select Create rule.
The SES credentials stored in AWS Secrets Manager will now be updated based on the scheduled intervals you specified in CloudWatch.
Conclusion
In this post, I showed how you can set up a solution to remind your AWS Directory Service for Microsoft Active Directory users to change their passwords before expiration. I demonstrated how you can achieve this using a combination of a script and Amazon SES. I also showed you how you can configure rotation of the Amazon SES credentials on your preferred schedule.
If you have comments about this post, submit them in the “Comments” section below. If you have questions or suggestions, please start a new thread on the Amazon SES forum.
Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.
Connect with the AWS Customer Engagement team at events around the world to learn how our technology can help you better engage with your customers. Get demos on recent feature releases, discover how you can use Pinpoint for your specific use case, and attend informative sessions to hear how companies around the world are using AWS Customer Engagement solutions to deliver better experiences for their customers. Plus, read below to find out how Amazon Pinpoint and Amazon SES both enable you to create innovative email experiences with the recent AMP Project launch.
AWS Customer Engagement in the news: Amazon SES and Amazon Pinpoint support build the future of email with AMP
The AMP Project’s mission is to enable more user-first experiences on the web, including web-based technology like email. On March 26, the AMP Project announced that they are bringing AMP technology to email in order to give users an interactive, real-time experience that also keeps inboxes safe.
Amazon Pinpoint and Amazon SES both provide out-of-the-box support for AMP for email with no additional configuration. This allows you to easily create experiences for your customers such as submitting RSVPs to events, filling out questionnaires, browsing catalogs, or responding to comments right within the email.
Amazon Pinpoint has been busy building. You can now:
Learn how to set up an email preference management web page that enables customers to manage their email subscription preferences. Read now.
Learn how to set up a web form that collects information from new customers, and then sends them an SMS message to confirm that they want to receive content from you. Read now.
Use Amazon Pinpoint in the US West (Oregon), EU (Frankfurt), and EU (Ireland) regions in addition to the US East (Virginia) region. Learn more.
Deliver voice messages to your users with Amazon Pinpoint Voice. Learn more.
Set up campaigns that auto-send messages to your customers when they take specific actions. Learn more.
Detect and understand issues impacting your email deliverability with the Amazon Pinpoint Deliverability Dashboard. Learn more.
Meet an Amazon Pinpoint expert at these upcoming events. We will teach you how to take advantage of recent updates so that you can create better engagement experiences for your customers. Plus, we can give you an inside look on what’s on our roadmap, and we’ll be giving out custom Pinpoint swag!
April 10, 2019 Singapore Expo Convention & Exhibition Centre Amazon Pinpoint will host an informative session about our Customer Engagement solutions at the AWS Singapore Summit. In this session, we will describe how AWS enables companies to better understand and engage their customers with personalized, timely, and relevant communications on multiple channels. You will also learn how Disney Streaming Services is using Amazon Pinpoint to engage their users. Register for the Summit here.
April 24, 2019 AWS San Francisco Loft Join us for an engaging day of discussion and education. Amazon Pinpoint experts will host the following sessions:
2:30pm – 3:30pm: How Do You Measure Customer Success? Featuring Amazon Pinpoint.
3:30pm – 4:30pm: Using ML to Enhance Your Marketing. Featuring Amazon Pinpoint and Amazon Personalize.
May 1-2, 2019 International Convention Centre (ICC), Darling Harbour, Sydney Don’t miss the customer engagement session on April 30th. This session, part of Amazon’s Innovation Day event, features a keynote address by Neil Lindsay, Vice President of Global Marketing at Amazon. The session explores how AWS technologies power organizations that deliver customer-centric innovations. Learn about how Australia’s largest brands and digital agencies use AWS technologies to engage customers, build new business models, and transform customer experiences. Register for the Summit here.
May 15, 2019 Bombay Exhibition Center, Mumbai The Amazon Pinpoint team will be at the “Ask an Expert” booth. Stop by to meet the team, ask questions, and pick up Amazon Pinpoint swag! Register for the Summit here.
On the Messaging and Targeting team, we’re constantly inspired by the new and novel ways that customers use our services. For example, last year we took an in-depth look at a customer who built a fully featured email marketing platform based on Amazon SES and other AWS Services.
This week, our friends on the AWS Machine Learning team published a blog post that brings together the worlds of data science and customer engagement. Their solution uses Amazon SageMaker (a platform for building and deploying machine learning models) to create a system that makes purchasing predictions based on customers’ past behaviors. It then uses Amazon Pinpoint to send campaigns to customers based on these predictions.
The blog post is an interesting read that includes a primer on the process of creating a useful Machine Learning solution. It then goes in-depth, discussing the real-world considerations that are involved in implementing the solution.
In December 2016, we launched the Outbound Network Access functionality for Amazon RDS for Oracle, enabling customers to use their RDS for Oracle database instances to communicate with external web endpoints using the utl_http and utl_tcp packages, and to send emails through utl_smtp. We extended the functionality by adding the option of using custom DNS servers, allowing such outbound network access to make use of any DNS server a customer chooses. These releases enabled HTTP, TCP, and SMTP communication originating out of RDS for Oracle instances, limited to non-secure (non-SSL) mediums.
To overcome the limitation on SSL connections, we recently published a whitepaper that guides you through the process of creating customized Oracle wallet bundles on your RDS for Oracle instances. By making use of such wallets, you can now extend the Outbound Network Access capability so that external communications happen over secure (SSL/TLS) connections. This opens up new use cases for your RDS for Oracle instances.
With the right set of certificates imported into your RDS for Oracle instances (through Oracle wallets), your database instances can now:
Communicate with an HTTPS endpoint: Using utl_http, access a resource such as https://status.aws.amazon.com/robots.txt
Download files from Amazon S3 securely: Using a presigned URL from Amazon S3, you can now download any file over SSL
Extend Oracle Database links to use SSL: Database links between RDS for Oracle instances can now use SSL as long as the instances have the SSL option installed
Send email over SMTPS: You can now integrate with Amazon SES to send emails from your database instances, or integrate with any other generic SMTPS provider
These are just a few high-level examples of the new use cases that the whitepaper opens up. As a reminder, always ensure that security best practices are in place when making use of Outbound Network Access (detailed in the whitepaper).
About the Author
Surya Nallu is a Software Development Engineer on the Amazon RDS for Oracle team.
Regular visitors to this blog may have noticed that its name has changed from the Amazon SES Blog to the AWS Messaging and Targeting Blog. The Amazon SES team has been working closely with the Amazon Pinpoint team in recent months, so we decided to create a single source of information for both products.
If you’re a dedicated Amazon SES user, don’t worry—Amazon SES isn’t going anywhere. However, as the goals of our two teams started to overlap more and more, we realized that we had lots of ideas for blog posts that would be relevant to users of both products.
If you’re not familiar with Amazon Pinpoint yet, allow us to make a brief introduction. Amazon Pinpoint was originally created to help mobile app developers analyze the ways that their customers used their apps, and to send mobile push messages to those users. Over time, the capabilities of Amazon Pinpoint grew to include the ability to send transactional messages (such as order confirmations, welcome messages, and one-time passwords), segment your audience, schedule campaign execution, and send messages using other channels (including SMS and email). In short, Amazon Pinpoint helps you deliver the right message to the right customers at the right time using the right channel.
In the past, this blog focused mainly on providing information about new features and common issues. Our new blog will include that same information, as well as practical tips, industry best practices, and the exciting things our customers have done using Amazon SES and Amazon Pinpoint. We hope you enjoy it!
If you have any questions, or if there’s anything you’d like to see us cover in the blog, please let us know in the comments section.
Amazon Cognito lets you easily add user sign-up, sign-in, and access control to your mobile and web apps. You can use fully managed user directories, called Amazon Cognito user pools, to create accounts for your users, allow them to sign in, and update their profiles. Your users also can sign in by using external identity providers (IdPs) by federating with Amazon, Google, Facebook, SAML, or OpenID Connect (OIDC)–based IdPs. If your app is backed by resources, Amazon Cognito also gives you tools to manage permissions for accessing resources through AWS Identity and Access Management (IAM) roles and policies, and through integration with Amazon API Gateway.
In this post, I explain some new advanced security features (in beta) that were launched at AWS re:Invent 2017 for Amazon Cognito user pools and how to use them. Note that separate prices apply to these advanced security features, as described on our pricing page.
The new advanced security features of Amazon Cognito
Security is the top priority for Amazon Cognito. We handle user authentication and authorization to control access to your web and mobile apps, so security is vital. The new advanced security features add additional protections for your users that you manage in Amazon Cognito user pools. In particular, we have added protection against compromised credentials and risk-based adaptive authentication.
Compromised credentials protection
Our compromised credentials feature protects your users’ accounts by preventing your users from reusing credentials (a user name and password pair) that have been exposed elsewhere. This new feature addresses the issue of users reusing the same credentials for multiple websites and apps. For example, a user might use the same email address and password to sign in to multiple websites.
A security best practice is to never use the same user name and password in different systems. If an attacker is able to obtain user credentials through a breach of one system, they could use those user credentials to access other systems. AWS has been able to form partnerships and programs so that Amazon Cognito is informed when a set of credentials has been compromised elsewhere. When you use compromised credentials protection in Amazon Cognito, you can prevent users of your application from signing up, signing in, and changing their password with credentials that are recognized as having been compromised. If a user attempts to use credentials that we detect have been compromised, that user is required to choose a different password.
Risk-based adaptive authentication
The other major advanced security feature we launched at AWS re:Invent 2017 is risk-based adaptive authentication. Adaptive authentication protects your users from attempts to compromise their accounts—and it does so intelligently to minimize any inconvenience for your customers. With adaptive authentication, Amazon Cognito examines each user pool sign-in attempt and generates a risk score for how likely the sign-in request is to be from a malicious attacker.
Amazon Cognito examines a number of factors, including whether the user has used the same device before, or has signed in from the same location or IP address. A detected risk is rated as low, medium, or high, and you can determine what actions should be taken at each risk level. You can choose to block the request if the risk level is high, or you can choose to require a second factor of authentication, in addition to the password, for the user to sign in using multi-factor authentication (MFA). With adaptive authentication, users continue to sign in with just their password when the request has characteristics of successful sign-ins in the past. Users are prompted for a second factor only when some risk is detected with a sign-in request.
Now that I’ve described the new advanced security features, I will show how to configure them for your mobile or web app. You have to create an Amazon Cognito user pool in the console and save it before you can see the advanced security settings.
First you must create and configure an Amazon Cognito user pool:
Go to the Amazon Cognito console, and choose Manage your User Pools to get started. If you already have a user pool that you can work with, choose that user pool. Otherwise, choose Create a user pool to create a new one.
On the MFA and verifications tab (see the following screenshot), enable MFA as Optional so that your individual users can choose to configure second factors of authentication, which are needed for adaptive authentication. (If you were to choose Required as the MFA setting for your user pool instead, all sign-ins would require a second factor of authentication. This would effectively disable adaptive authentication because a second factor of authentication would always be required.)
You should also enable at least one second factor of authentication. As shown in the following screenshot, I have enabled both SMS text messages and Time-based One-Time Passwords (TOTP).
On the App clients tab, create an app client by choosing add an app client, entering a name, and choosing Create app client.
Second, configure the advanced security features:
After you’ve configured and saved your user pool, you will see the Advanced security tab, as shown in the following screenshot. You can choose one of three modes for enabling the advanced security features: Yes, Audit only, and No:
If you choose No, the advanced features are all turned off.
If you choose Audit only, Amazon Cognito logs all related events to CloudWatch metrics so that you can see what risks are detected, but Amazon Cognito doesn’t take any explicit actions to protect your users. Use the Audit only mode to understand what events are happening before you fully turn on the advanced security features.
If you choose Yes, you turn on the advanced security features. We recommend that you initially run the advanced security features in Audit only mode for two weeks before choosing Yes.
When you choose Yes to turn on the advanced security features, configuration options appear, as shown in the following screenshot:
First, choose if you want to configure default settings for all of your app clients, or if you want to configure settings for a specific app client. As shown in the following screenshot, you can see that I’ve chosen global default settings for all my app clients.
Next, choose the action you want to take when compromised credentials are detected. You can either Allow compromised credentials, or you can Block use of them. If you want to protect your users, you should choose Block use. However, you first can watch the metrics in CloudWatch without taking action by choosing Allow. You also can choose Customize when compromised credentials are blocked, which allows you to choose for which operations—sign up, sign in, and forgotten password—Amazon Cognito will detect and block use of compromised credentials.
The next section on the Advanced security tab includes the configuration for adaptive authentication. For each risk level (Low, Medium, and High), you can require a second factor for MFA or you can block the request, and you can notify users about the events through email. You have two MFA choices for each risk level:
Optional MFA – Requires a second factor at that risk level for all users who have configured either SMS or TOTP as a second factor of authentication. Users who haven’t configured a second factor are allowed to sign in without a second factor. For optional MFA, you should encourage your users to configure a second factor of authentication for added security, but users who haven’t configured a second factor aren’t blocked from signing in.
Require MFA – Requires a second factor of authentication from all users when a risk is detected, so any users who haven’t configured a second factor are blocked from signing in at any risk level that requires MFA.
Block – Blocks the sign-in attempt.
Notify users – Sends an email to the users to notify them about the sign-in attempt. You can customize the emails as described below.
In the next section on the Advanced security tab, you can customize the email notifications that Amazon Cognito sends to your users if you have selected Notify users. Amazon Cognito sends these notification emails through Amazon Simple Email Service (Amazon SES). If you haven’t already, you should go to the Amazon SES console to configure and verify an email address or domain so that you can use it as the FROM email address for the notification emails that Amazon Cognito sends.
You can customize the email subject and body for the email notifications with both HTML and plain text versions, as shown in the following screenshot.
Optionally, you can enter IP addresses that you either want to Always allow by bypassing the compromised credentials and adaptive authentication features, or to Always block. For example, if you have a site where you do testing and development, you might want to include the IP address range from that site in the Always allow list so that sign-ins from it aren’t mistaken for risky attempts.
That’s all it takes to configure the advanced security features in the Amazon Cognito console.
Enabling the advanced security features from your app
After you have configured the advanced security features for your user pool, you need to enable them in your mobile or web app. First you need to include a version of our SDK that is recent enough to support the features, and second in some cases, you need to set some values for iOS, Android, and JavaScript.
iOS: If you’re building your own user interface to sign in users and integrating the Amazon Cognito Identity Provider SDK, use at least version 2.6.7 of the SDK. If you’re using the Amazon Cognito Auth SDK to incorporate the customizable, hosted user interface to sign in users, also use at least version 2.6.7. If you’re configuring the Auth SDK by using Info.plist, add the PoolIdForEnablingASF key to your Amazon Cognito user pool configuration, and set it to your user pool ID. If you’re configuring the Auth SDK by using AWSCognitoAuthConfiguration, use this initializer and specify your user pool ID as userPoolIdForEnablingASF. For more details, see the CognitoAuth sample app.
JavaScript: If you’re using the Amazon Cognito Auth JS SDK to incorporate the customizable, hosted UI to sign in users, use at least version 1.1.0 of the SDK. To configure the advanced security features, add the AdvancedSecurityDataCollectionFlag parameter and set it as true. Also add the UserPoolId parameter and set it to your user pool ID. In your application, you need to include "https://amazon-cognito-assets.<region>.amazoncognito.com/amazon-cognito-advanced-security-data.min.js" to collect data about requests. For more details, see the README.md of the Auth JavaScript SDK and the SAMPLEREADME.md of the web app sample. If you’re using the Amazon Cognito Identity SDK to build your own UI, use at least version 1.28.0 of the SDK.
Some examples of the advanced security features in action
Now that I have configured these advanced security features, let’s look at them in action. I’m using the customizable, hosted sign-up and sign-in screens that are built into Amazon Cognito user pools. I’ve done some minimal customization, and my sign-up page is shown in the following screenshot.
With the compromised credentials feature, if a user tries to sign up with credentials that have been exposed at another site, the user is told they cannot use that password for security reasons.
If a user signs in, Amazon Cognito detects a risk, and you have configured adaptive authentication, the user is asked for a second factor of authentication. The following screenshot shows an example of an SMS text message used for MFA. After the user enters a valid code from their phone, they’re successfully signed in.
As I mentioned earlier in this post, Amazon Cognito also can notify your users whenever there’s a sign-in attempt that’s determined to have some risk. The following screenshot shows a basic example of a notification message, and you can customize these messages, as described previously.
The advanced security features also provide aggregate metrics and event histories for individual users. You can view the aggregate metrics in the CloudWatch console. Navigate to the Metrics section under Cognito. When you’re graphing, choose the Graphed metrics tab and choose Sum as the Statistic.
You can view the event histories for users in the Amazon Cognito console on the Users and groups tab. When you choose an individual user, you see that user’s event history listed under their profile information. As the following screenshot shows, you can see information about users’ events, including the date and time, the event type, the risk detected, and location. The event history includes the Risk level that indicates the Low, Medium, or High ratings described earlier and the Risk decision that indicates if a risk was detected and what type.
When you choose an entry, you see the event details and the option to Mark event as valid if it was from the user, or Mark event as invalid if it wasn’t.
Summary
You can use these advanced security features of Amazon Cognito user pools to protect your users from compromised credentials and attempts to compromise their user pool–based accounts in your app. You also can customize the actions taken in response to different risks, or you can use audit mode to gather metrics on detected risks without taking action. For more information about using these features, see the Amazon Cognito Developer Guide.
If you have comments about this post, submit them in the “Comments” section below. If you have questions about how to configure or use these features, start a new thread on the Amazon Cognito forum or contact AWS Support.
Over the past year, we’ve released several features that make it easier to track the metrics that are associated with your Amazon SES account. The first of these features, launched in November of last year, was event publishing.
Initially, event publishing let you capture basic metrics related to your email sending and publish them to other AWS services, such as Amazon CloudWatch and Amazon Kinesis Data Firehose. Some examples of these basic metrics include the number of emails that were sent and delivered, as well as the number that bounced or received complaints. A few months ago, we expanded this feature by adding engagement metrics—specifically, information about the number of emails that your customers opened or engaged with by clicking links.
As a former Cloud Support Engineer, I’ve seen Amazon SES customers do some amazing things with event publishing, but I’ve also seen some common issues. In this article, we look at some of these issues, and discuss the steps you can take to resolve them.
Before we begin
This post assumes that your Amazon SES account is already out of the sandbox, that you’ve verified an identity (such as an email address or domain), and that you have the necessary permissions to use Amazon SES and the service that you’ll publish event data to (such as Amazon SNS, CloudWatch, or Kinesis Data Firehose).
We also assume that you’re familiar with the process of creating configuration sets and specifying event destinations for those configuration sets. For more information, see Using Amazon SES Configuration Sets in the Amazon SES Developer Guide.
Amazon SNS event destinations
If you want to receive notifications when events occur—such as when recipients click a link in an email, or when they report an email as spam—you can use Amazon SNS as an event destination.
Occasionally, customers ask us why they’re not receiving notifications when they use an Amazon SNS topic as an event destination. One of the most common reasons for this issue is that they haven’t configured subscriptions for their Amazon SNS topic yet.
A single topic in Amazon SNS can have one or more subscriptions. When you subscribe to a topic, you tell that topic which endpoints (such as email addresses or mobile phone numbers) to contact when it receives a notification. If you haven’t set up any subscriptions, nothing will happen when an email event occurs.
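If you suspect missing subscriptions are the problem, you can check and fix this programmatically. The following boto3 sketch (the topic ARN and email address are hypothetical) lists a topic’s subscriptions and adds an email subscription if there are none:

import boto3

sns = boto3.client('sns')
topic_arn = 'arn:aws:sns:us-east-1:123456789012:ses-events'  # hypothetical topic ARN

# List the current subscriptions on the topic.
subs = sns.list_subscriptions_by_topic(TopicArn=topic_arn)['Subscriptions']

if not subs:
    # No subscriptions exist, so notifications go nowhere. Add an email
    # endpoint; the recipient must confirm the subscription before
    # notifications are delivered.
    sns.subscribe(TopicArn=topic_arn, Protocol='email', Endpoint='you@example.com')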
Kinesis Data Firehose event destinations
If you want to store your Amazon SES event data for the long term, choose Amazon Kinesis Data Firehose as a destination for Amazon SES events. With Kinesis Data Firehose, you can stream data to Amazon S3 or Amazon Redshift for storage and analysis.
The process of setting up Kinesis Data Firehose as an event destination is similar to the process for setting up Amazon SNS: you choose the types of events (such as deliveries, opens, clicks, or bounces) that you want to export, and the name of the Kinesis Data Firehose stream that you want to export to. However, there’s one important difference. When you set up a Kinesis Data Firehose event destination, you must also choose the IAM role that Amazon SES uses to send event data to Kinesis Data Firehose.
When you set up the Kinesis Data Firehose event destination, you can choose to have Amazon SES create the IAM role for you automatically. For many users, this is the best solution—it ensures that the IAM role has the appropriate permissions to move event data from Amazon SES to Kinesis Data Firehose.
Customers occasionally run into issues with the Kinesis Data Firehose event destination when they use an existing IAM role. If you use an existing IAM role, or create a new role for this purpose, make sure that the role includes the firehose:PutRecord and firehose:PutRecordBatch permissions. If the role doesn’t include these permissions, then the Amazon SES event data isn’t published to Kinesis Data Firehose. For more information, see Controlling Access with Amazon Kinesis Data Firehose in the Amazon Kinesis Data Firehose Developer Guide.
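For reference, a minimal sketch of an IAM policy statement granting these permissions might look like the following (the delivery stream ARN is hypothetical; scope it to your own stream):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "firehose:PutRecord",
        "firehose:PutRecordBatch"
      ],
      "Resource": "arn:aws:firehose:us-east-1:123456789012:deliverystream/ses-event-stream"
    }
  ]
}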
CloudWatch event destinations
By publishing your Amazon SES event data to Amazon CloudWatch, you can create dashboards that track your sending statistics in real time, as well as alarms that notify you when your event metrics reach certain thresholds.
The amount that you’re charged for using CloudWatch is based on several factors, including the number of metrics you use. In order to give you more control over the specific metrics you send to CloudWatch—and to help you avoid unexpected charges—you can limit the email sending events that are sent to CloudWatch.
When you choose CloudWatch as an event destination, you must choose a value source. The value source can be one of three options: a message tag, a link tag, or an email header. After you choose a value source, you then specify a name and a value. When you send an email using a configuration set that refers to a CloudWatch event destination, it only sends the metrics for that email to CloudWatch if the email contains the name and value that you specified as the value source. This requirement is commonly overlooked.
For example, assume that you chose Message Tag as the value source, and specified “CategoryId” as the dimension name and “31415” as the dimension value. When you want to send events for an email to CloudWatch, you must specify the name of the configuration set that uses the CloudWatch destination. You must also include a tag in your message. The name of the tag must be “CategoryId” and the value must be “31415”.
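In boto3, that requirement translates to supplying both the configuration set and the tag at send time. This is a sketch; the addresses and configuration set name are placeholders:

import boto3

ses = boto3.client('ses')

ses.send_email(
    Source='sender@example.com',
    Destination={'ToAddresses': ['recipient@example.com']},
    Message={
        'Subject': {'Data': 'Hello'},
        'Body': {'Text': {'Data': 'Hello from Amazon SES.'}}
    },
    # The configuration set that refers to the CloudWatch event destination.
    ConfigurationSetName='my-configuration-set',
    # Must match the dimension name and value chosen as the value source.
    Tags=[{'Name': 'CategoryId', 'Value': '31415'}]
)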
Troubleshooting event publishing for open and click data
Occasionally, customers ask why they’re not seeing open and click data for their emails. This issue most often occurs when the customer only sends text versions of their emails. Because of the way Amazon SES tracks open and click events, you can only see open and click data for emails that are sent as HTML. For more information about how Amazon SES modifies your emails when you enable open and click tracking, see Amazon SES Email Sending Metrics FAQs in the Amazon SES Developer Guide.
The process that you use to send HTML emails varies based on the email sending method you use. The Code Examples section of the Amazon SES Developer Guide contains examples of several methods of sending email by using the Amazon SES SMTP interface or an AWS SDK. All of the examples in this section include methods for sending HTML (as well as text-only) emails.
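As a quick illustration, here is a minimal boto3 sketch (addresses and configuration set name are placeholders) that includes both a text and an HTML body; only the HTML version supports open and click tracking:

import boto3

ses = boto3.client('ses')

ses.send_email(
    Source='sender@example.com',
    Destination={'ToAddresses': ['recipient@example.com']},
    Message={
        'Subject': {'Data': 'Monthly newsletter'},
        'Body': {
            # Fallback for clients that display plain text only.
            'Text': {'Data': 'Read our newsletter: https://example.com/news'},
            # The HTML part is what enables open and click tracking.
            'Html': {'Data': '<p>Read our <a href="https://example.com/news">newsletter</a>.</p>'}
        }
    },
    ConfigurationSetName='my-configuration-set'
)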
If you encounter any issues that weren’t covered in this post, please open a case in the Support Center and we’d be more than happy to assist.
Amazon Web Services is moving the certificates for our services—including Amazon SES—to use our own certificate authority, Amazon Trust Services. We have carefully planned this change to minimize the impact it will have on your workflow. Most customers will not have to take any action during this migration.
About the Certificates
The Amazon Trust Services Certificate Authority (CA) uses the Starfield Services CA, which has been valid since 2005. The Amazon Trust Services certificates are available in most major operating systems released in the past 10 years, and are also trusted by all modern web browsers.
If you send email through the Amazon SES SMTP interface using a mail server that you operate, we recommend that you confirm that the appropriate certificates are installed. You can test whether your server trusts the Amazon Trust Services CAs by visiting the following URLs (for example, by using cURL):
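For example, one of the publicly documented Amazon Trust Services test endpoints can be checked as follows (confirm the current list of test URLs at https://www.amazontrust.com/repository before relying on this one):

curl -I https://good.sca1a.amazontrust.com/
# If the command returns an HTTP response without a certificate error,
# your server trusts the corresponding Amazon Trust Services root CA.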
If you see a message stating that the certificate issuer is not recognized, then you should install the appropriate root certificate. You can download individual certificates from https://www.amazontrust.com/repository. The process of adding a trusted certificate to your server varies depending on the operating system you use. For more information, see “Adding New Root Certificates,” below.
AWS SDKs and CLI
Recent versions of the AWS SDKs and the AWS CLI are not impacted by this change. If you use an AWS SDK or a version of the AWS CLI released prior to February 5, 2015, you should upgrade to the latest version.
Potential Issues
If your system is configured to use a very restricted list of root CAs (for example, if you use certificate pinning), you may be impacted by this migration. In this situation, you must update your pinned certificates to include the Amazon Trust Services CAs.
Adding New Root Certificates
The following sections list the steps you can take to install the Amazon Root CA certificates on your systems if they are not already present.
macOS
To install a new certificate on a macOS server
Change the file extension for the file you downloaded from .pem to .crt.
At the command prompt, type the following command to install the certificate: sudo security add-trusted-cert -d -r trustRoot -k /Library/Keychains/System.keychain /path/to/certificatename.crt, replacing /path/to/certificatename.crt with the full path to the certificate file.
Windows
To install a new certificate on a Windows server
Change the file extension for the file you downloaded from .pem to .crt.
At the command prompt, type the following command to install the certificate: certutil -addstore -f "ROOT" c:\path\to\certificatename.crt, replacing c:\path\to\certificatename.crt with the full path to the certificate file.
Ubuntu
To install a new certificate on an Ubuntu (or similar) server
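On a Debian-based system such as Ubuntu, a typical approach (assuming the downloaded certificate has been renamed to certificatename.crt) is:

sudo cp certificatename.crt /usr/local/share/ca-certificates/
sudo update-ca-certificates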
In August, we launched the reputation dashboard, which helps you track important metrics that could impact your ability to send emails. By monitoring the metrics in this dashboard, you can protect your sender reputation, which can increase the likelihood that the emails you send will reach your customers’ inboxes.
Today, we’re launching two features that build upon the capabilities of the reputation dashboard. The first is the ability to temporarily pause email sending, either at the configuration set level, or across your entire Amazon SES account. The second is the ability to export reputation metrics for individual configuration sets.
Email Pausing
Today’s update includes new API operations that can temporarily pause your ability to send email using Amazon SES. To disable email sending across your entire Amazon SES account, you can use the UpdateAccountSendingEnabled operation. To pause sending only for emails sent using a specific configuration set, you can use the UpdateConfigurationSetSendingEnabled operation.
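In boto3, for example, these operations look like the following sketch (the configuration set name is hypothetical):

import boto3

ses = boto3.client('ses')

# Pause email sending for the entire Amazon SES account.
ses.update_account_sending_enabled(Enabled=False)

# Pause email sending only for messages sent with one configuration set.
ses.update_configuration_set_sending_enabled(
    ConfigurationSetName='marketing-emails',
    Enabled=False
)

Setting Enabled=True reverses either operation when you’re ready to resume sending.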
Email pausing is helpful because Amazon SES uses automatic enforcement policies. If the bounce or complaint rates for your account are too high, your account is automatically placed on probation. If the bounce or complaint issues continue after the probation period has ended, your account may be suspended.
With email pausing, you can temporarily halt your ability to send email before your account is placed on probation. While your ability to send email is paused, you can identify the issues that were causing your account to register high bounce or complaint rates. You can then resume sending after the issues are resolved.
Email pausing helps ensure that your ability to send email using Amazon SES is not interrupted because of enforcement issues. It helps ensure that your sender reputation won’t be damaged by mistakes or unforeseen issues.
Amazon SES automatically publishes the bounce and complaint rates for your account to Amazon CloudWatch. In CloudWatch, you can monitor these metrics over time, and create alarms that notify you when your reputation metrics cross certain thresholds.
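As a sketch, an alarm on the account-level bounce rate could be created with boto3 as follows (the threshold and SNS topic ARN are illustrative):

import boto3

cw = boto3.client('cloudwatch')

cw.put_metric_alarm(
    AlarmName='ses-bounce-rate-high',
    Namespace='AWS/SES',
    MetricName='Reputation.BounceRate',
    Statistic='Average',
    Period=3600,                 # evaluate hourly
    EvaluationPeriods=1,
    Threshold=0.05,              # alarm at a 5% bounce rate
    ComparisonOperator='GreaterThanOrEqualToThreshold',
    AlarmActions=['arn:aws:sns:us-east-1:123456789012:ses-alerts']  # hypothetical topic
)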
With today’s update, you can also publish reputation metrics for individual configuration sets to CloudWatch. This feature gives you additional information about the messages you send using Amazon SES. For example, if you send all of your marketing emails using one configuration set, and your transactional emails using a different configuration set, you can view distinct reputation metrics for each type of email.
Because we anticipate that this feature will lead to the creation of many new configuration sets, we’re increasing the maximum number of configuration sets you can create from 50 to 10,000.
You can use AWS services—including Amazon SNS, AWS Lambda, and Amazon CloudWatch—to create a solution that automatically pauses email sending for your account when your overall reputation metrics cross a certain threshold. Or, to minimize disruption to your email sending program, you can pause email sending for a specific configuration set when the metrics for that configuration set cross a threshold. The following image illustrates the processes that occur when you implement these solutions.
For more information on both of these solutions, see Automatically Pausing Email Sending in the Amazon Simple Email Service Developer Guide.
We’re always looking for ways to help safeguard the reputation you’ve worked hard to build. If you have suggestions, questions, or comments, we’d love to hear from you in the comments below, or in the Amazon SES Forum.
These features are now available in the following AWS Regions: US West (Oregon), US East (N. Virginia), and EU (Ireland).
There have been so many launches and cloud innovations that you simply may not believe it. To catch up on some service launches and features, this post is a round-up of some cool releases that happened this summer and through the end of September.
The launches and features I want to share with you today are:
AWS IAM for Authenticating Database Users for RDS MySQL and Amazon Aurora
Amazon SES Reputation Dashboard
Amazon SES Open and Click Tracking Metrics
Serverless Image Handler by the Solutions Builder Team
AWS Ops Automator by the Solutions Builder Team
Let’s dive in, shall we!
AWS IAM for Authenticating Database Users for RDS MySQL and Amazon Aurora
Wished you could manage access to your Amazon RDS database instances and clusters using AWS IAM? Well, wish no longer. Amazon RDS has launched the ability for you to use IAM to manage database access for Amazon RDS for MySQL and Amazon Aurora DB.
What I like most about this new service feature is that it’s very easy to get started. To enable database user authentication using IAM, you select the Enable IAM DB Authentication check box when creating, modifying, or restoring your DB instance or cluster. You can enable IAM access using the RDS console, the AWS CLI, and/or the Amazon RDS API.
After configuring the database for IAM authentication, client applications authenticate to the database engine by providing temporary security credentials generated by the IAM Security Token Service. These credentials can be used instead of providing a password to the database engine.
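As an illustration, here is a minimal boto3 sketch of generating such a token (the hostname, port, and user name are placeholders):

import boto3

rds = boto3.client('rds')

# Generate a short-lived authentication token to use in place of a password.
token = rds.generate_db_auth_token(
    DBHostname='mydb.123456789012.us-east-1.rds.amazonaws.com',
    Port=3306,
    DBUsername='db_user'
)

# Pass `token` as the password when connecting with your MySQL client over SSL.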
You can learn more about using IAM to provide targeted permissions and authentication to MySQL and Aurora by reviewing the Amazon RDS user guide.
Amazon SES Reputation Dashboard
To help Amazon Simple Email Service customers utilize best-practice guidelines for sending email, I am thrilled to announce that we have launched the Reputation Dashboard to provide comprehensive reporting on email sending health. To aid in proactively managing emails being sent, customers now have visibility into overall account health, sending metrics, and compliance or enforcement status.
The Reputation Dashboard will provide the following information:
Account status: A description of your account health status.
Healthy – No issues currently impacting your account.
Probation – Your account is on probation. The issues that caused the probation must be resolved to prevent suspension.
Pending end of probation decision – Your account is on probation. An Amazon SES team member must review your account before further action is taken.
Shutdown – Your account has been shut down. You can no longer send email using Amazon SES.
Pending shutdown – Your account is on probation, and the issues causing the probation are unresolved.
Bounce Rate: Percentage of emails sent that have bounced and bounce rate status messages.
Complaint Rate: Percentage of emails sent that recipients have reported as spam and complaint rate status messages.
Notifications: Messages about other account reputation issues.
Amazon SES Open and Click Tracking Metrics
Another exciting feature recently added to Amazon SES is support for Email Open and Click Tracking Metrics. With the Email Open and Click Tracking Metrics feature, SES customers can now track when the email they’ve sent has been opened and when links within the email have been clicked. Using this SES feature allows you to better track email campaign engagement and effectiveness.
How does this work?
When you use the email open tracking feature, SES adds a transparent, miniature image to the emails that you choose to track. When the email is opened, the mail client loads the tracking image, which triggers an open event in Amazon SES. For email click (link) tracking, links in the email and/or email templates are replaced with a custom link. When the custom link is clicked, a click event is recorded in SES, and the custom link redirects the recipient to the link destination of the original email.
You can take advantage of the new open tracking and click tracking features by creating a new configuration set, or altering an existing one, within SES. After choosing Amazon SNS, Amazon CloudWatch, or Amazon Kinesis Firehose as the AWS service to receive the open and click metrics, you only need to select that configuration set when sending to enable these new features for any emails you want to track.
The AWS Solution Builder team has been hard at work making it easier for you to find answers to common architectural questions about building and running applications on AWS. You can find these solutions on the AWS Answers page. Two new solutions released earlier this fall on AWS Answers are the Serverless Image Handler and the AWS Ops Automator. The Serverless Image Handler was developed to help customers dynamically process, manipulate, and optimize the handling of images on the AWS Cloud. The solution combines Amazon CloudFront for caching, AWS Lambda to dynamically retrieve and modify images, and an Amazon S3 bucket to store images. Additionally, the Serverless Image Handler leverages the open source image-processing suite, Thumbor, for additional image manipulation, processing, and optimization.
The AWS Ops Automator solution helps you automate manual tasks, such as snapshot scheduling, using time-based or event-based triggers. It provides a framework for automated tasks and includes task audit trails, logging, resource selection, scaling, concurrency handling, task completion handling, and API request retries. The solution includes the following AWS services:
AWS CloudFormation: a template that launches the core framework of microservices and the solution-generated task configurations
Amazon DynamoDB: a table that stores the task configuration data defining the event triggers and resources, and that saves the results of actions and any errors
Amazon CloudWatch Logs: provides logging to track warning and error messages
Amazon SNS: a topic that sends the solution’s logging information to a subscribed email address
Leaves are crunching under my boots, Halloween is tomorrow, and pumpkin is having its annual moment in the sun – it’s fall everybody! And just in time to celebrate, we have whipped up a fresh batch of pumpkin spice Tech Talks. Grab your planner (Outlook calendar) and pencil these puppies in. This month we are covering re:Invent, serverless, and everything in between.
November 2017 – Schedule
Noted below are the upcoming scheduled live, online technical sessions being held during the month of November. Make sure to register ahead of time so you won’t miss out on these free talks conducted by AWS subject matter experts.
Amazon SES now adds DMARC verdicts to incoming emails, and publishes aggregate DMARC reports to domain owners. These two new features will help combat email spoofing and phishing, making the email ecosystem a safer and more secure place.
What is DMARC?
DMARC stands for Domain-based Message Authentication, Reporting, and Conformance. The DMARC standard was designed to prevent malicious actors from sending messages that appear to be from legitimate senders. Domain owners can tell email receivers how to handle unauthenticated messages that appear to be from their domains. The DMARC standard also specifies certain reports that email senders and receivers send to each other. The cooperative nature of this reporting process helps improve the email authentication infrastructure.
How does Amazon SES Implement DMARC?
When you receive an email message through Amazon SES, the headers of that message will include a DMARC policy verdict alongside the DKIM and SPF verdicts (both of which are already present). This additional information helps you verify the authenticity of all email messages you receive.
Messages you receive through Amazon SES will contain one of the following DMARC verdicts:
PASS – The message passed DMARC authentication.
FAIL – The message failed DMARC authentication.
GRAY – The sending domain does not have a DMARC policy.
PROCESSING_FAILED – An issue occurred that prevented Amazon SES from providing a DMARC verdict.
If the DMARC verdict is FAIL, Amazon SES will also provide information about the sending domain’s DMARC settings. In this situation, you will see one of the following verdicts:
NONE – The owner of the sending domain requests that no specific action be taken on messages that fail DMARC authentication.
QUARANTINE – The owner of the sending domain requests that messages that fail DMARC authentication be treated by receivers as suspicious.
REJECT – The owner of the sending domain requests that messages that fail DMARC authentication be rejected.
In addition to publishing the DMARC verdict on each incoming message, Amazon SES now sends DMARC aggregate reports to domain owners. These reports help domain owners identify systemic authentication failures, and avoid potential domain spoofing attacks.
Note: Domain owners only receive aggregate information about emails that do not pass DMARC authentication. These reports, known as RUA reports, only include information about the IP addresses that send unauthenticated emails to you. These reports do not include information about legitimate email senders.
How do I configure DMARC?
As is the case with SPF and DKIM, domain owners must publish their DMARC policies as DNS records for their domains. For more information about setting up DMARC, see Complying with DMARC Using Amazon SES in the Amazon SES Developer Guide.
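For reference, a DMARC policy is published as a TXT record on the _dmarc subdomain of your domain. A minimal example (the policy and reporting address are illustrative) looks like this:

_dmarc.example.com. TXT "v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@example.com"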
DMARC reporting is now available in the following AWS Regions: US West (Oregon), US East (N. Virginia), and EU (Ireland). You can find more information about the dmarcVerdict and dmarcPolicy objects in the Amazon SES Developer Guide. The Developer Guide also includes a sample Lambda function that you can use to bounce incoming emails that fail DMARC authentication.
The Amazon SES team is excited to announce our latest update, which includes two related features that help you send personalized emails to large groups of customers. This post discusses these features, and provides examples that you can follow to start using these features right away.
Email templates
You can use email templates to create the structure of an email that you plan to send to multiple recipients, or that you will use again in the future. Each template contains a subject line, a text part, and an HTML part. Both the subject and the email body can contain variables that are automatically replaced with values specific to each recipient. For example, you can include a {{name}} variable in the body of your email. When you send the email, you specify the value of {{name}} for each recipient. Amazon SES then automatically replaces the {{name}} variable with the recipient’s first name.
Creating a template
To create a template, you use the CreateTemplate API operation. To use this operation, pass a JSON object with four properties: a template name (TemplateName), a subject line (SubjectPart), a plain text version of the email body (TextPart), and an HTML version of the email body (HtmlPart). You can include variables in the subject line or message body by enclosing the variable names in two sets of curly braces. The following example shows the structure of this JSON object.
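As a minimal sketch (the template name, subject, and variables are illustrative; note that the AWS CLI input file nests the four properties inside a Template object):

{
  "Template": {
    "TemplateName": "MyTemplate",
    "SubjectPart": "Greetings, {{name}}!",
    "TextPart": "Dear {{name}},\r\nYour favorite animal is {{favoriteanimal}}.",
    "HtmlPart": "<h1>Hello {{name}},</h1><p>Your favorite animal is {{favoriteanimal}}.</p>"
  }
}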
Use this example to create your own template, and save the resulting file as mytemplate.json. You can then use the AWS Command Line Interface (AWS CLI) to create your template by running the following command: aws ses create-template --cli-input-json file://mytemplate.json
Sending an email created with a template
Now that you have created a template, you’re ready to send email that uses the template. You can use the SendTemplatedEmail API operation to send email to a single destination using a template. Like the CreateTemplate operation, this operation accepts a JSON object with four properties. For this operation, the properties are the sender’s email address (Source), the name of an existing template (Template), an object called Destination that contains the recipient addresses (and, optionally, any CC or BCC addresses) that will receive the email, and a property that refers to the values that will be replaced in the email (TemplateData). The following example shows the structure of the JSON object used by the SendTemplatedEmail operation.
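A minimal sketch of this object (the addresses and template data are placeholders):

{
  "Source": "sender@example.com",
  "Template": "MyTemplate",
  "Destination": {
    "ToAddresses": ["recipient@example.com"]
  },
  "TemplateData": "{ \"name\":\"Anaya\", \"favoriteanimal\":\"otter\" }"
}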
Customize this example to fit your needs, and then save the resulting file as myemail.json. One important note: in the TemplateData property, you must use a backslash (\) character to escape the quotes within this object, as shown in the preceding example.
When you’re ready to send the email, run the following command: aws ses send-templated-email --cli-input-json file://myemail.json
Bulk email sending
In most cases, you should use email templates to send personalized emails to several customers at the same time. The SendBulkTemplatedEmail API operation helps you do that. This operation also accepts a JSON object. At a minimum, you must supply a sender email address (Source), a reference to an existing template (Template), a list of recipients in an array called Destinations (within which you specify the recipient’s email address, and the variable values for that recipient), and a list of fallback values for the variables in the template (DefaultTemplateData). The following example shows the structure of this JSON object.
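A sketch of this object, matching the description that follows (the addresses and favorite-animal values are illustrative):

{
  "Source": "sender@example.com",
  "Template": "MyTemplate",
  "Destinations": [
    {
      "Destination": { "ToAddresses": ["anaya@example.com"] },
      "ReplacementTemplateData": "{ \"name\":\"Anaya\", \"favoriteanimal\":\"otter\" }"
    },
    {
      "Destination": { "ToAddresses": ["liu@example.com"] },
      "ReplacementTemplateData": "{ \"name\":\"Liu\", \"favoriteanimal\":\"lion\" }"
    },
    {
      "Destination": { "ToAddresses": ["shirley@example.com"] },
      "ReplacementTemplateData": "{ \"name\":\"Shirley\", \"favoriteanimal\":\"rabbit\" }"
    },
    {
      "Destination": { "ToAddresses": ["fourth@example.com"] },
      "ReplacementTemplateData": "{}"
    }
  ],
  "DefaultTemplateData": "{ \"name\":\"friend\", \"favoriteanimal\":\"unknown\" }"
}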
This example sends unique emails to Anaya, Liu, Shirley, and a fourth recipient whose name and favorite animal we didn’t specify. Anaya, Liu, and Shirley will see their names in place of the {{name}} tag in the template (which, in this example, is present in both the subject line and message body), as well as their favorite animals in place of the {{favoriteanimal}} tag in the message body. The DefaultTemplateData property determines what happens if you do not specify the ReplacementTemplateData property for a recipient. In this case, the fourth recipient will see the word “friend” in place of the {{name}} tag, and “unknown” in place of the {{favoriteanimal}} tag.
Use the example to create your own list of recipients, and save the resulting file as mybulkemail.json. When you’re ready to send the email, run the following command: aws ses send-bulk-templated-email --cli-input-json file://mybulkemail.json
Other considerations
There are a few limits and other considerations when using these features:
You can create up to 10,000 email templates per Amazon SES account.
Each template can be up to 10 MB in size.
You can include an unlimited number of replacement variables in each template.
You can send email to up to 50 destinations in each call to the SendBulkTemplatedEmail operation. A destination includes a list of recipients, as well as CC and BCC recipients. Note that the number of destinations you can contact in a single call to the API may be limited by your account’s maximum sending rate. For more information, see Managing Your Amazon SES Sending Limits in the Amazon SES Developer Guide.
We look forward to seeing the amazing things you create with these new features. If you have any questions, please leave a comment on this post, or let us know in the Amazon SES forum.
The Amazon SES team is pleased to announce the addition of a reputation dashboard to the Amazon SES console. This new feature helps you track issues that could impact the sender reputation of your Amazon SES account.
What information does the reputation dashboard provide?
Amazon SES users must maintain bounce and complaint rates below a certain threshold. We put these rules in place to protect the sender reputations of all Amazon SES users, and to prevent Amazon SES from being used to deliver spam or malicious content. Users with very high rates of bounces or complaints may be put on probation. If the bounce or complaint rates are not within acceptable limits by the end of the probation period, these accounts may be shut down completely.
Previous versions of Amazon SES provided basic sending metrics, including information about bounces and complaints. However, the bounce and complaint metrics in this dashboard only included information for the past few days of email sent from your account, as opposed to an overall rate.
The new reputation dashboard provides overall bounce and complaint rates for your entire account. This enables you to more closely monitor the health of your account and adjust your email sending practices as needed.
Can’t I just calculate these values myself?
Because each Amazon SES account sends different volumes of email at different rates, we do not calculate bounce and complaint rates based on a fixed time period. Instead, we use a representative volume of email. This representative volume is the basis for the bounce and complaint rate calculations.
Why do we use representative volume in our calculations? Let’s imagine that you sent 1,000 emails one week, and 5 of them bounced. If we only considered a week of email sending, your metrics look good. Now imagine that the next week you only sent 5 emails, and one of them bounced. Suddenly, your bounce rate jumps from half a percent to 20%, and your account is automatically placed on probation. This example may be an extreme case, but it illustrates the reason that we don’t use fixed time periods when calculating bounce and complaint rates.
When you open the new reputation dashboard, you will see bounce and complaint rates calculated using the representative volume for your account. We automatically recalculate these rates every time you send email through Amazon SES.
What else can I do with these metrics?
The Bounce and Complaint Rate metrics in the reputation dashboard are automatically sent to Amazon CloudWatch. You can use CloudWatch to create dashboards that track your bounce and complaint rates over time, and to create alarms that send you notifications when these metrics cross certain thresholds. To learn more, see Creating Reputation Monitoring Alarms Using CloudWatch in the Amazon SES Developer Guide.
How can I see the reputation dashboard?
The reputation dashboard is now available to all Amazon SES users. To view the reputation dashboard, sign in to the Amazon SES console. On the left navigation menu, choose Reputation Dashboard. For more information, see Monitoring Your Sender Reputation in the Amazon SES Developer Guide.
We hope you find the information in the reputation dashboard to be useful in managing your email sending programs and campaigns. If you have any questions or comments, please leave a comment on this post, or let us know in the Amazon SES forum.
Today we released Dedicated IP Pools for Amazon Simple Email Service (SES). With dedicated IP pools, you can specify which dedicated IP addresses to use for sending different types of email. Dedicated IP pools let you use your SES dedicated IPs for different tasks. For instance, you can send transactional emails from one set of IPs and marketing emails from another set of IPs.
If you’re not familiar with Amazon SES these concepts may not make much sense. We haven’t had the chance to cover SES on this blog since 2016, which is a shame, so I want to take a few steps back and talk about the service as a whole and some of the enhancements the team has made over the past year. If you just want the details on this new feature I strongly recommend reading the Amazon Simple Email Service Blog.
What is SES?
So, what is SES? If you’re a customer of Amazon.com you know that we send a lot of emails. Bought something? You get an email. Order shipped? You get an email. Over time, as both email volumes and types increased Amazon.com needed to build an email platform that was flexible, scalable, reliable, and cost-effective. SES is the result of years of Amazon’s own work in dealing with email and maximizing deliverability.
In short: SES gives you the ability to send and receive many types of email with the monitoring and tools to ensure high deliverability.
Receiving and reacting to emails is easy too. You can set up rulesets that forward received emails to Amazon Simple Storage Service (S3), Amazon Simple Notification Service (SNS), or AWS Lambda – you could even trigger an Amazon Lex bot through Lambda to communicate with your customers over email. SES is a powerful tool for building applications. The image below shows just a fraction of the capabilities:
Deliverability 101
Deliverability is the percentage of your emails that arrive in your recipients’ inboxes. Maintaining deliverability is a shared responsibility between AWS and the customer. AWS takes the fight against spam very seriously and works hard to make sure services aren’t abused. To learn more about deliverability I recommend the deliverability docs. For now, understand that deliverability is an important aspect of email campaigns and SES has many tools that enable a customer to manage their deliverability.
Dedicated IPs and Dedicated IP pools
When you’re starting out with SES, your emails are sent through a shared IP. That IP is responsible for sending mail on behalf of many customers, and AWS works to maintain appropriate volume and deliverability on each of those IPs. However, when you reach a sufficient volume, shared IPs may not be the right solution.
By creating dedicated IPs, you’re able to fully control the reputation of those IPs. This makes it vastly easier to troubleshoot any deliverability or reputation issues. It’s also useful for many email certification programs, which require a dedicated IP as a commitment to maintaining your email reputation. Using the shared IPs of the Amazon SES service is still the right move for many customers, but if you have a sustained sending volume in the hundreds of thousands of emails per day, you might want to consider a dedicated IP. One caveat to be aware of: if you’re not sending a sufficient volume of email with a consistent pattern, a dedicated IP can actually hurt your reputation. Dedicated IPs are $24.95 per address per month at the time of this writing – but you can find out more at the pricing page.
Before you can use a dedicated IP, you need to “warm” it. You do this by gradually increasing the volume of emails you send through the new address, giving the IP time to build a positive reputation. In March of this year, SES released the ability to automatically warm your IPs over the course of 45 days. This feature is on by default for all new dedicated IPs.
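SES hasn’t published the exact shape of its automatic warm-up curve, but the idea is easy to sketch: ramp from a small initial volume up to your target over the 45 days. The schedule below is purely illustrative, and both volume figures are hypothetical.

    # Purely illustrative ramp; the actual SES warm-up schedule is not public.
    TARGET_DAILY_VOLUME = 500_000   # hypothetical steady-state daily volume
    INITIAL_DAILY_VOLUME = 1_000    # hypothetical day-one volume
    WARMUP_DAYS = 45

    # Exponential growth factor that reaches the target on the final day.
    growth = (TARGET_DAILY_VOLUME / INITIAL_DAILY_VOLUME) ** (1 / (WARMUP_DAYS - 1))
    schedule = [round(INITIAL_DAILY_VOLUME * growth ** day) for day in range(WARMUP_DAYS)]

    print(schedule[0], schedule[-1])   # 1000 500000

With these numbers, an exponential ramp like this roughly doubles the daily volume every five days, which is the general shape most warm-up guidance recommends.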
Customers who send high volumes of email will typically have multiple dedicated IPs. Today the SES team released dedicated IP pools to make managing those IPs easier. Now, when you send an email, you can specify a configuration set, and SES routes the email through an IP in the pool associated with that configuration set.
One of the other major benefits of this feature is that it allows customers who previously split their email sending across several AWS accounts (to manage their reputation for different types of email) to consolidate into a single account.
The Amazon SES team is pleased to announce that you can now create groups of dedicated IP addresses, called dedicated IP pools, for your email sending activities.
Prior to the availability of this feature, if you leased several dedicated IP addresses to use with Amazon SES, there was no way to specify which dedicated IP address to use for a specific email. Dedicated IP pools solve this problem by allowing you to send emails from specific IP addresses.
This post includes information and procedures related to dedicated IP pools.
What are dedicated IP pools?
In order to understand dedicated IP pools, you should first be familiar with the concept of dedicated IP addresses. Customers who send large volumes of email will typically lease one or more dedicated IP addresses to use when sending mail from Amazon SES. To learn more, see our blog post about dedicated IP addresses.
If you lease several dedicated IP addresses for use with Amazon SES, you can organize these addresses into groups, called pools. You can then associate each pool with a configuration set. When you send an email that specifies a configuration set, that email will be sent from the IP addresses in the associated pool.
When should I use dedicated IP pools?
Dedicated IP pools are especially useful for customers who send several different types of email using Amazon SES. For example, if you use Amazon SES to send both marketing emails and transactional emails, you can create a pool for marketing emails and another for transactional emails.
By using dedicated IP pools, you can isolate the sender reputations for each of these types of communications. Using dedicated IP pools gives you complete control over the sender reputations of the dedicated IP addresses you lease from Amazon SES.
How do I create and use dedicated IP pools?
There are two basic steps for creating and using dedicated IP pools. First, create a dedicated IP pool in the Amazon SES console and associate it with a configuration set. Next, when you send email, be sure to specify the configuration set associated with the IP pool you want to use.
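At launch this setup is done in the console, but the same two steps can also be scripted. Here is a minimal sketch using the newer SESv2 API (which post-dates this post); the pool and configuration set names are assumptions.

    import boto3

    sesv2 = boto3.client("sesv2")

    # Step 1: create a pool, then a configuration set that routes through it.
    sesv2.create_dedicated_ip_pool(PoolName="marketing-pool")   # hypothetical pool name
    sesv2.create_configuration_set(
        ConfigurationSetName="marketing",                       # hypothetical set name
        DeliveryOptions={"SendingPoolName": "marketing-pool"},
    )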
If you do not use dedicated IP addresses with Amazon SES, then your email sending process will not change.
If you use dedicated IP pools, your email sending process may change slightly. In most cases, you will need to specify a configuration set in the emails you send. To learn more about using configuration sets, see Specifying a Configuration Set When You Send Email in the Amazon SES Developer Guide.
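Continuing the sketch above, specifying the configuration set on a send is a single extra parameter in boto3; the addresses and message content below are placeholders.

    import boto3

    ses = boto3.client("ses")

    # Step 2: name the configuration set on the send call.
    ses.send_email(
        Source="sender@example.com",
        Destination={"ToAddresses": ["recipient@example.com"]},
        Message={
            "Subject": {"Data": "Monthly newsletter"},
            "Body": {"Text": {"Data": "Hello from the marketing team!"}},
        },
        ConfigurationSetName="marketing",   # routes through the associated pool
    )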
Any dedicated IP addresses that you lease that are not part of a dedicated IP pool will automatically be added to a default pool. If you send email without specifying a configuration set that is associated with a pool, then that email will be sent from one of the addresses in the default pool.
Dedicated IP pools are now available in the following AWS Regions: us-west-2 (Oregon), us-east-1 (N. Virginia), and eu-west-1 (Ireland).
We hope you enjoy this feature. If you have any questions or comments, please leave a comment on this post, or let us know in the Amazon SES Forum.