Tag Archives: AWS

Deploy tCell More Easily With the New AWS AMI Agent

Post Syndicated from Tom Caiazza original https://blog.rapid7.com/2022/07/18/deploy-tcell-more-easily-with-the-new-aws-ami-agent/

Rapid7’s tCell is a powerful tool that allows you to monitor risk and protect web applications and APIs in real time. Great! It’s a fundamental part of our push to make web application security as strong and comprehensive as it needs to be in an age when web application attacks account for roughly 70% of cybersecurity incidents.

But with that power comes complexity, and we know that not every customer has the same resources available, whether in-house or external, to leverage tCell in all its glory right out of the box. With our newest agent addition, we’re hoping to make that experience a little bit easier.

AWS AMI Agent for tCell

We’ve introduced the AWS AMI Agent for tCell, which makes it easier to deploy tCell into your software development life cycle (SDLC) without the need to manually configure tCell. If you aren’t as familiar with deploying web apps and need help getting tCell up and running, you can now deploy tCell with ease and get runtime protection on your apps within minutes.

If you use Amazon Web Services (AWS), you can now quickly launch a tCell agent with NGINX as a reverse proxy. It sits in front of your existing web app, without requiring any development or code changes. To make things even easier, the new AWS AMI Agent comes with the NGINX agent pre-installed and includes a helper utility that lets you configure your tCell agent with a single command.

Shift left seamlessly

So why is this such an important new deployment method for tCell customers? Simply put, it’s a way to better utilize and understand tCell before making a case to your team of developers. To get the most out of tCell, it’s best to get buy-in from your developers, as deployment efforts traditionally can require bringing the dev team into the fold in a significant way.

With the AWS AMI Agent, your security team can utilize tCell right away, with limited technical knowledge, and use those learnings (and security improvements) to make the case that a full deployment of the tCell agent is in your dev team’s best interest. We’ve seen this barrier with some existing customers and with the overall shift-left approach within the web application community at large.

This new deployment offering is a way for your security team to get comfortable with the benefits (and there are many) of securing your web applications with tCell. They will better understand how to secure AWS-hosted web apps and how tCell and AWS work together seamlessly.

If you’d like to give it a spin, we recommend heading over to the docs to find out more.

The AWS AMI Agent is available to all existing tCell customers right now.


Zabbix 6.2 is out now!

Post Syndicated from Arturs Lontons original https://blog.zabbix.com/zabbix-6-2-is-out-now/21602/

The Zabbix team is pleased to announce the release of the latest Zabbix major version – Zabbix 6.2! The latest version delivers features aimed at improving configuration management and performance on large Zabbix instances as well as extending the flexibility of the existing Zabbix functionality.

New features

A brief overview of the major new features available with the release of Zabbix 6.2:

  • Ability to suppress individual problems
    • Suppress problems indefinitely or until a specific point in time
  • Support of CyberArk vault for secret storage
  • Official AWS EC2 template
    • Discover and monitor AWS EC2 performance statistics, alarms, and AWS EBS volumes
  • Ability to synchronize Zabbix proxy configuration directly from Zabbix frontend
    • Configuration synchronization is supported by active and passive proxies
  • Improved flexibility for hosts discovered from host prototypes
    • Link additional templates
    • Create and modify user macros
    • Populate the host with new tags
  • New items for VMware monitoring
  • The ability to further customize the hosts discovered by VMware discovery
  • Active agent check status can now be tracked from Zabbix frontend
  • Incremental configuration synchronization
    • Faster configuration synchronization
    • Reduced configuration synchronization performance impact
  • Newly created items are now checked within a minute after their creation
  • Execute now functionality is now available from the Latest data section
  • A warning message is now displayed when performing Execute now on items that do not support it
  • Templates are now grouped in template groups, instead of host groups
    • Improved host and template filtering
  • Multiple LDAP servers can now be defined and saved under Authentication – LDAP settings
  • Ability to collect Windows registry key values with the new registry monitoring items
  • New item for OS process discovery and collecting individual process statistics
  • New digital clock widget
  • The default Global view dashboard has been updated with the latest Zabbix widgets
  • The Graph widget has been further improved
    • Added stacked graph support
    • Legend now provides additional information
    • Added support of simple trigger display
  • UI forms now provide direct links to the relevant documentation sections
  • Many other improvements and features

New templates and integrations

Zabbix 6.2 comes pre-packaged with many new templates for the most popular vendors:

  • Envoy proxy
  • HashiCorp Consul
  • AWS EC2 Template
  • CockroachDB
  • TrueNAS
  • HPE MSA 2060 & 2040
  • HPE Primera
  • The S.M.A.R.T. monitoring template has received improvements

Zabbix 6.2 introduces a webhook integration for the GLPI IT Asset Management solution. This webhook can be used to forward problems created in Zabbix to the GLPI Assistance section.

Zabbix 6.2 packages and images

The official Zabbix packages and images are available for:

  • Linux distributions for different hardware platforms: RHEL, CentOS, Oracle Linux, Debian, SUSE, Ubuntu, Raspbian, Alma Linux, and Rocky Linux
  • Virtualization platforms based on VMware, VirtualBox, Hyper-V, XEN
  • Docker
  • Packages and precompiled agents for most popular platforms, including macOS and MSI packages for Windows

You can find the download instructions and download the new version on the Download page: https://www.zabbix.com/download

One-click deployments for the following cloud platforms are coming soon:

  • AWS, Azure, Google Cloud, Digital Ocean, Linode, Oracle Cloud, Red Hat OpenShift

Upgrading to Zabbix 6.2

To upgrade to Zabbix 6.2, upgrade your repository package, then download and install the new Zabbix component packages (Zabbix server, proxy, frontend, and other Zabbix components). When you start the Zabbix server, an automatic database schema upgrade is performed. Zabbix agents are backward compatible, so installing the new agent versions is not required; you can do it at a later time if needed.

If you’re using the official Docker container images – simply deploy a new set of containers for your Zabbix components. Once the Zabbix server container connects to the backend database, the database upgrade will be performed automatically.

You can find step-by-step instructions for the upgrade process to Zabbix 6.2 in the Zabbix documentation.

Join the webinar

If you wish to learn more about the Zabbix 6.2 features and improvements, we invite you to join our What’s new in Zabbix 6.2 public webinar.

During the webinar, you will get the opportunity to:

  • Learn about the Zabbix 6.2 features and improvements
  • See the latest Zabbix templates and integrations
  • Participate in a Q&A session with Zabbix founder and CEO Alexei Vladishev
  • Discuss the latest Zabbix version with Zabbix community and Zabbix team members
Anyone can sign up and attend the webinar at absolutely no cost.

Don’t hesitate and sign up for the webinar now!


Analyzing Amazon SES event data with AWS Analytics Services

Post Syndicated from Oscar Mendoza original https://aws.amazon.com/blogs/messaging-and-targeting/analyzing-amazon-ses-event-data-with-aws-analytics-services/

In this post, we will walk through using AWS services such as Amazon Kinesis Data Firehose, Amazon Athena, and Amazon QuickSight to monitor Amazon SES email sending events with the granularity and level of detail required to get insights into how your customers engage with the emails you send.

Nowadays, email marketers rely on internal applications to create their campaigns and other communications, such as newsletters or promotional content. From those activities, they need to collect as much information as possible to analyze and improve their pipeline and get better interaction with customers. Data such as bounces, rejections, successful deliveries, delivery delays, complaints, or open rates can be a powerful tool for understanding customers. Applications usually work with high-level data points, without the detailed logging or granular information that could further improve the effectiveness of their campaigns.

Amazon Simple Email Service (SES) is a smart tool for companies that want a cost-effective, flexible, and scalable email service solution that integrates easily with their own products. Amazon SES provides methods to control your sending activity with built-in integration with Amazon CloudWatch metrics, and also provides a mechanism to collect email sending event data.

In this post, we propose an architecture and step-by-step guide to track your email sending activities at a granular level, where you can configure several types of email sending events, including sends, deliveries, opens, clicks, bounces, complaints, rejections, rendering failures, and delivery delays. We will use the configuration set feature of Amazon SES to send detailed logs to our analytics services, where we can store and query them and create dashboards for a detailed view.

Overview of solution

This architecture uses Amazon SES built-in features and AWS analytics services to provide a quick and cost-effective solution to address your mail tracking requirements. The following services will be implemented or configured:

  • Amazon Simple Email Service (SES)
  • Amazon Kinesis Data Firehose
  • Amazon S3
  • AWS Glue Data Catalog
  • Amazon Athena
  • Amazon QuickSight

The following diagram shows the architecture of the solution:


Figure 1. Serverless Architecture to Analyze Amazon SES events

The flow of the events starts when a customer uses Amazon SES to send an email. Each of those send events is captured by the configuration set feature and forwarded to a Kinesis Data Firehose delivery stream, which buffers and stores the events in an Amazon S3 bucket.

After the events are stored, we need to create a database and table schema in the AWS Glue Data Catalog so that Amazon Athena can properly query those events in S3. Finally, we will use Amazon QuickSight to create an interactive dashboard to search and visualize all your sending activity at an email level of detail.

Prerequisites

For this walkthrough, you should have the following prerequisites:

Walkthrough

Step 1: Use AWS CloudFormation to deploy some additional prerequisites

You can get started with our sample AWS CloudFormation template that includes some prerequisites. This template creates an Amazon S3 bucket, a Kinesis Data Firehose delivery stream, and the IAM role that Amazon SES needs to publish events to Kinesis Data Firehose.

To download the CloudFormation template, run one of the following commands, depending on your operating system:

In Windows:

curl https://raw.githubusercontent.com/aws-samples/amazon-ses-analytics-blog/main/SES-Blog-PreRequisites.yml -o SES-Blog-PreRequisites.yml

In macOS:

wget https://raw.githubusercontent.com/aws-samples/amazon-ses-analytics-blog/main/SES-Blog-PreRequisites.yml

To deploy the template, use the following AWS CLI command:

aws cloudformation deploy --template-file ./SES-Blog-PreRequisites.yml --stack-name ses-dashboard-prerequisites --capabilities CAPABILITY_NAMED_IAM

After the template finishes creating resources, you see the IAM Service role and the Delivery Stream on the stack Outputs tab. You are going to use these resources in the following steps.

IAM Service role and Delivery Stream created by CloudFormation template

Figure 2. CloudFormation template outputs

Step 2: Creating a configuration set in SES and setting the default configuration set for a verified identity

SES can track the number of send, delivery, open, click, bounce, and complaint events for each email you send. You can use event publishing to send information about these events to other AWS services. In this case, we are going to send the events to Kinesis Data Firehose. To do this, a configuration set is required. (If you prefer to configure this programmatically instead of through the console, a boto3 sketch follows the console steps below.)

To create a configuration set, complete the following steps:

  1. In the AWS Management Console, choose Amazon Simple Email Service.
  2. Choose Configuration sets.
  3. Click on Create set.


    Figure 3. Amazon SES Create Configuration Set

  4. Set a Configuration set name.
  5. Leave the other settings at their default values.


    Figure 4. Configuration Set Name

  6. Once the configuration set is created, select Event destinations.


    Figure 5. Configuration set created successfully

  7. Click on Add destination.
  8. Select the event types you would like to analyze, and then click on Next.


    Figure 6. Sending Events to analyze

  9. Select Amazon Kinesis Data Firehose as the destination, choose the delivery stream and the IAM role created previously, click on Next, and on the review page, click on Add destination.


    Figure 7. Destination for Amazon SES sending events

  10. Once you have created the configuration set and added the event destination, you can define the Default configuration set for the verified identity (domain or email address). In the SES console, choose Verified identities.


    Figure 8 Amazon SES Verified Identity

  11. Choose the verified identity from which you want to collect events and select Configuration set. Click on Edit.


    Figure 9. Edit Configuration Set for Verified Identity

  12. Click on the checkbox Assign a default configuration set and choose the configuration set created previously.


    Figure 10. Assign default configuration set

  13. Once you have completed the previous steps, your events will be sent to Amazon S3. Due to the buffer configuration on the Kinesis Data Firehose delivery stream, the data is loaded into Amazon S3 every 5 minutes or every 5 MiB, whichever comes first. You can check the structure created in the bucket and see JSON logs with SES event data.


    Figure 11. Amazon S3 bucket structure
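
If you prefer to script this configuration rather than click through the console, the following boto3 sketch shows roughly equivalent calls using the SES v2 API. The configuration set name, event destination name, identity, and ARNs are placeholders; substitute the delivery stream and IAM role from the Step 1 CloudFormation outputs.

# Hedged sketch: programmatic equivalent of the console steps above (SES v2 API).
# All names and ARNs below are placeholders.
import boto3

sesv2 = boto3.client("sesv2")

config_set_name = "ses-events-config-set"  # placeholder configuration set name

# 1. Create the configuration set
sesv2.create_configuration_set(ConfigurationSetName=config_set_name)

# 2. Add a Kinesis Data Firehose event destination for the event types to analyze
sesv2.create_configuration_set_event_destination(
    ConfigurationSetName=config_set_name,
    EventDestinationName="firehose-destination",
    EventDestination={
        "Enabled": True,
        "MatchingEventTypes": [
            "SEND", "DELIVERY", "OPEN", "CLICK", "BOUNCE",
            "COMPLAINT", "REJECT", "RENDERING_FAILURE", "DELIVERY_DELAY",
        ],
        "KinesisFirehoseDestination": {
            "IamRoleArn": "arn:aws:iam::111122223333:role/ses-firehose-role",  # from Step 1 outputs
            "DeliveryStreamArn": "arn:aws:firehose:us-east-1:111122223333:deliverystream/ses-events",  # from Step 1 outputs
        },
    },
)

# 3. Assign the configuration set as the default for a verified identity
sesv2.put_email_identity_configuration_set_attributes(
    EmailIdentity="example.com",  # placeholder verified identity
    ConfigurationSetName=config_set_name,
)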

Step 3: Using Amazon Athena to query the SES event logs

Amazon SES publishes email sending event records to Amazon Kinesis Data Firehose in JSON format. The top-level JSON object contains an eventType string, a mail object, and either a Bounce, Complaint, Delivery, Send, Reject, Open, Click, Rendering Failure, or DeliveryDelay object, depending on the type of event.

  1. In order to simplify the analysis of email sending events, create the sesmaster table by running the following script in Amazon Athena. Don’t forget to change the LOCATION in the script to the bucket containing your email sending event data.
    CREATE EXTERNAL TABLE sesmaster (
    eventType string,
    complaint struct < arrivaldate: string,
    complainedrecipients: array < struct < emailaddress: string >>,
    complaintfeedbacktype: string,
    feedbackid: string,
    `timestamp`: string,
    useragent: string >,
    bounce struct < bouncedrecipients: array < struct < action: string,
    diagnosticcode: string,
    emailaddress: string,
    status: string >>,
    bouncesubtype: string,
    bouncetype: string,
    feedbackid: string,
    reportingmta: string,
    `timestamp`: string >,
    mail struct < timestamp: string,
    source: string,
    sourcearn: string,
    sendingaccountid: string,
    messageid: string,
    destination: string,
    headerstruncated: boolean,
    headers: array < struct < name: string,
    value: string >>,
    commonheaders: struct < `from`: array < string >,
    to: array < string >,
    messageid: string,
    subject: string >,
    tags: struct < ses_source_tls_version: string,
    ses_operation: string,
    ses_configurationset: string,
    ses_source_ip: string,
    ses_outgoing_ip: string,
    ses_from_domain: string,
    ses_caller_identity: string >>,
    send string,
    delivery struct < processingtimemillis: int,
    recipients: array < string >,
    reportingmta: string,
    smtpresponse: string,
    `timestamp`: string >,
    open struct < ipaddress: string,
    `timestamp`: string,
    userAgent: string >,
    reject struct < reason: string >,
    click struct < ipAddress: string,
    `timestamp`: string,
    userAgent: string,
    link: string >
    )
    ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'
    WITH SERDEPROPERTIES (
    "mapping.ses_caller_identity" = "ses:caller-identity",
    "mapping.ses_configurationset" = "ses:configuration-set",
    "mapping.ses_from_domain" = "ses:from-domain",
    "mapping.ses_operation" = "ses:opeation",
    "mapping.ses_outgoing_ip" = "ses:outgoing-ip",
    "mapping.ses_source_ip" = "ses:source-ip",
    "mapping.ses_source_tls_version" = "ses:source-tls-version"
    )
    LOCATION 's3://aws-s3-ses-analytics-<aws-account-number>/'
    

    The sesmaster table uses the org.openx.data.jsonserde.JsonSerDe SerDe library to deserialize the JSON data.

    We have leveraged the support for JSON arrays and maps and the support for nested data structures. Those features ease the process of preparation and visualization of data.

    In the sesmaster table, the following mappings were applied to avoid errors due to the names of JSON fields containing colons.

    • "mapping.ses_configurationset" = "ses:configuration-set"
    • "mapping.ses_source_ip" = "ses:source-ip"
    • "mapping.ses_from_domain" = "ses:from-domain"
    • "mapping.ses_caller_identity" = "ses:caller-identity"
    • "mapping.ses_outgoing_ip" = "ses:outgoing-ip"
  2. Once the sesmaster table is ready, it is a good strategy to create curated views of its data. The first view, called vwSESMaster, contains all the records of email sending events and all the fields which are unique to each event. Create the vwSESMaster view by running the following script in Amazon Athena.
    CREATE OR REPLACE VIEW vwSESMaster AS
    SELECT
    eventtype as eventtype
    , mail.messageId as mailmessageid
    , mail.timestamp as mailtimestamp
    , mail.source as mailsource
    , mail.sendingAccountId as mailsendingAccountId
    , mail.commonHeaders.subject as mailsubject
    , mail.tags.ses_configurationset as mailses_configurationset
    , mail.tags.ses_source_ip as mailses_source_ip
    , mail.tags.ses_from_domain as mailses_from_domain
    , mail.tags.ses_outgoing_ip as mailses_outgoing_ip
    , delivery.processingtimemillis as deliveryprocessingtimemillis
    , delivery.reportingmta as deliveryreportingmta
    , delivery.smtpresponse as deliverysmtpresponse
    , delivery.timestamp as deliverytimestamp
    , delivery.recipients[1] as deliveryrecipient
    , open.ipaddress as openipaddress
    , open.timestamp as opentimestamp
    , open.userAgent as openuseragent
    , bounce.bounceType as bouncebounceType
    , bounce.bouncesubtype as bouncebouncesubtype
    , bounce.feedbackid as bouncefeedbackid
    , bounce.timestamp as bouncetimestamp
    , bounce.reportingMTA as bouncereportingmta
    , click.ipAddress as clickipaddress
    , click.timestamp as clicktimestamp
    , click.userAgent as clickuseragent
    , click.link as clicklink
    , complaint.timestamp as complainttimestamp
    , complaint.userAgent as complaintuseragent
    , complaint.complaintFeedbackType as complaintcomplaintfeedbacktype
    , complaint.arrivalDate as complaintarrivaldate
    , reject.reason as rejectreason
    FROM
    sesmaster

    The sesmaster table contains some fields which are represented by nested arrays, so it is necessary to flatten them into multiple rows. Below you can see the event types and the fields which need to be flattened.

    • Event type SEND: field mail.commonHeaders
    • Event type BOUNCE: field bounce.bouncedrecipients
    • Event type COMPLAINT: field complaint.complainedrecipients

    To flatten those arrays into multiple rows, we use CROSS JOIN in conjunction with the UNNEST operator, applying the following strategy for all three events:

    • Create a temporary view with the mail.messageID and the field to be flattened.
    • Create another temporary view with the array flattened into multiple rows.
    • Create the final view joining the sesmaster table with the second temporal view by event type and mail.messageID.

    To create those views, follow these steps.

  3. Run the following scripts in Amazon Athena to flatten the mail.commonHeaders array in the SEND event type
    CREATE OR REPLACE VIEW vwSendMailTmpSendTo AS 
    SELECT
    mail.messageId as messageid
    , mail.commonHeaders.to as recipients
    FROM
    sesmaster
    WHERE 
    eventtype='Send'
    
    CREATE OR REPLACE VIEW vwsendmailrecipients AS 
    SELECT
    messageid
    , recipient
    FROM
    ("vwSendMailTmpSendTo"
    CROSS JOIN UNNEST(recipients) t (recipient))
    
    CREATE OR REPLACE VIEW vwSentMails AS
    SELECT 
    eventtype as eventtype
    , mail.messageId as mailmessageid
    , mail.timestamp as mailtimestamp
    , mail.source as mailsource
    , mail.sendingAccountId as mailsendingAccountId
    , mail.commonHeaders.subject as mailsubject
    , mail.tags.ses_configurationset as mailses_configurationset
    , mail.tags.ses_source_ip as mailses_source_ip
    , mail.tags.ses_from_domain as mailses_from_domain
    , mail.tags.ses_outgoing_ip as mailses_outgoing_ip
    , dest.recipient as mailto
    FROM
    sesmaster as sm
    ,vwsendmailrecipients as dest
    WHERE
    sm.eventtype = 'Send'
    and sm.mail.messageid = dest.messageid
  4. Run the following scripts in Amazon Athena to flatten the bounce.bouncedrecipients array in the BOUNCE event type
    CREATE OR REPLACE VIEW vwbouncemailtmprecipients AS 
    SELECT
    mail.messageId as messageid
    , bounce.bouncedrecipients
    FROM
    sesmaster
    WHERE (eventtype = 'Bounce')
    
    CREATE OR REPLACE VIEW vwbouncemailrecipients AS 
    SELECT
    messageid
    , recipient.action
    , recipient.diagnosticcode
    , recipient.emailaddress
    FROM
    (vwbouncemailtmprecipients
    CROSS JOIN UNNEST(bouncedrecipients) t (recipient))
    
    CREATE OR REPLACE VIEW vwBouncedMails AS
    SELECT
    eventtype as eventtype
    , mail.messageId as mailmessageid
    , mail.timestamp as mailtimestamp
    , mail.source as mailsource
    , mail.sendingAccountId as mailsendingAccountId
    , mail.commonHeaders.subject as mailsubject
    , mail.tags.ses_configurationset as mailses_configurationset
    , mail.tags.ses_source_ip as mailses_source_ip
    , mail.tags.ses_from_domain as mailses_from_domain
    , mail.tags.ses_outgoing_ip as mailses_outgoing_ip
    , bounce.bounceType as bouncebounceType
    , bounce.bouncesubtype as bouncebouncesubtype
    , bounce.feedbackid as bouncefeedbackid
    , bounce.timestamp as bouncetimestamp
    , bounce.reportingMTA as bouncereportingmta
    , bd.action as bounceaction
    , bd.diagnosticcode as bouncediagnosticcode
    , bd.emailaddress as bounceemailaddress
    FROM
    sesmaster as sm
    ,vwbouncemailrecipients as bd
    WHERE
    sm.eventtype = 'Bounce'
    and sm.mail.messageid = bd.messageid
    
  5. Run the following scripts in Amazon Athena to flatten the complaint.complainedrecipients array in the COMPLAINT event type
    CREATE OR REPLACE VIEW vwcomplainttmprecipients AS 
    SELECT
    mail.messageId as messageid
    , complaint.complainedrecipients
    FROM
    sesmaster
    WHERE (eventtype = 'Complaint')
    
    CREATE OR REPLACE VIEW vwcomplainedrecipients AS 
    SELECT
    messageid
    , recipient.emailaddress
    FROM
    (vwcomplainttmprecipients 
    CROSS JOIN UNNEST(complainedrecipients) t (recipient))
    

    The final vwComplainedemails view is built the same way as vwBouncedMails: join the sesmaster table with vwcomplainedrecipients on the event type and mail.messageid. In the end, we have one table and four views which can be used in Amazon QuickSight to analyze email sending events (a quick programmatic sanity check is sketched after this list):

    • Table sesmaster
    • View vwSESMaster
    • View vwSentMails
    • View vwBouncedMails
    • View vwComplainedemails
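
Before moving on to QuickSight, you can sanity-check the new table and views with a simple aggregation, either in the Athena console or programmatically. The boto3 sketch below runs one such query; the database name and results location are assumptions, so replace them with your own values.

# Hedged sketch: count events by type using the Athena API.
# "ses_events_db" and the results bucket are placeholders.
import time
import boto3

athena = boto3.client("athena")

execution = athena.start_query_execution(
    QueryString="SELECT eventtype, count(*) AS events FROM vwsesmaster GROUP BY eventtype",
    QueryExecutionContext={"Database": "ses_events_db"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-query-results-bucket/"},
)
query_id = execution["QueryExecutionId"]

# Poll until the query finishes, then print the result rows
while True:
    state = athena.get_query_execution(QueryExecutionId=query_id)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
    for row in rows[1:]:  # skip the header row
        print([col.get("VarCharValue") for col in row["Data"]])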

Step 4: Analyze and visualize data with Amazon QuickSight

In this blog post, we use Amazon QuickSight to analyze and visualize email sending events from the sesmaster table and the four curated views created previously. Amazon QuickSight can directly access data through Athena. Its pay-per-session pricing enables you to put analytical insights into the hands of everyone in your organization.

Let’s set this up together. We first need to select our table and views to create new Athena-backed data sources, and then we use these data sources to populate the visualizations. We are creating just one example visualization; feel free to create your own based on your information needs.

Before we can use the data in Amazon QuickSight, we need to first grant access to the underlying S3 bucket. If you haven’t done so already for other analyses, see our documentation on how to do so.

  1. On the Amazon QuickSight home page, choose Datasets from the menu on the left side, then choose New dataset from the upper-right corner, and pick Athena as the data source. In the following dialog box, give the data source a descriptive name and choose Create data source.


    Figure 12. Create New Athena Data Source

  2. In the following dialog box, select the Catalog and the Database containing your sesmaster table and curated views. Let’s select the sesmaster table in order to create some basic Key Performance Indicators. Select the table sesmaster and click on the Select button.


    Figure 13. Select Sesmaster Table

  3. Our sesmaster table is now a data source for Amazon QuickSight, and we can turn to visualizing the data.


    Figure 14. QuickSight Visualize Data

  4. You can see the list of fields on the left. The canvas on the right is still empty. Before we populate it with data, let’s select Key Performance Indicator from the available visual types.


    Figure 15. QuickSight Visual Types

  5. To populate the graph, drag and drop the fields from the field list on the left onto their respective destinations. In our case, we put the field send onto the value well and use count as aggregation.


    Figure 16. Add Send field to visualization

  6. Add another visual from the upper-left side and select Key Performance Indicator as the visual type.

    Figure 17. Add a new visual


    Figure 18. Key Performance Indicator Visual Type

  7. Put the field Delivery onto the value well and use count as aggregation.


    Figure 19. Add Delivery Field to visualization

  8. Repeat the same procedure (steps 1 to 4) to count the number of Open, Click, Bounce, Complaint, and Reject events. At the end, you should see something similar to the following visualization. After resizing and rearranging the visuals, you should get an analysis like the one shown in the image below.


    Figure 20. Preview of Key Performance Indicators

  9. Let’s add another dataset by clicking the pencil icon to the right of the current Dataset.


    Figure 21. Add a New Dataset

  10. On the following dialog box, select Add Dataset.


    Figure 22. Add a New Dataset

  11. Select the view called vwsesmaster and click Select.

    Figure 23. Add vwsesmaster dataset

    Now you can see all the available fields of the vwsesmaster view.


    Figure 24. New fields from vwsesmaster dataset

  12. Let’s create a new visual and select the Table visual type.


    Figure 25. QuickSight Visual Types

  13. Drag and drop the fields from the field list on the left onto their respective destinations. In our case, we put the fields eventtype, mailmessageid, and mailsubject onto the Group By well, but you can add as many fields as you need.


    Figure 26. Add eventtype, mailmessageid and mailsubject fields

  14. Now let’s create a filter for this visual in order to filter by type of event. Be sure you select the table and then click on Filter on the left menu.


    Figure 27. Add a Filter

  15. Click on Create One and select the field eventtype on the popup window. Now select the eventtype filter to see the following options.


    Figure 28. Create eventtype filter

  16. Click on the dots on the right of the eventtype filter and select Add to Sheet.


    Figure 29. Add filter to sheet

  17. Leave all the default values, scroll down, and select Apply.


    Figure 30. Apply filters with default values

  18. Now you can filter the vwsesmaster view by eventtype.


    Figure 31. Filter vwsesmasterview by eventtype

  19. You can continue customizing your visualization with all the available data in the sesmaster table, the vwsesmaster view and even add more datasets to include data from the vwSentMails, vwBouncedMails, and vwComplainedemails views. Below, you can see some other visualizations created from those views.

    Figure 32. Final visualization 1


    Figure 33. Final visualization 2


    Figure 34. Final visualization 3

Clean up

To avoid ongoing charges, clean up the resources you created as part of this post:

  1. Delete the visualizations created in Amazon QuickSight.
  2. Unsubscribe from Amazon QuickSight if you are not using it for other projects.
  3. Delete the views and tables created in Amazon Athena.
  4. Delete the Amazon SES configuration set.
  5. Delete the Amazon SES events stored in S3.
  6. Delete the CloudFormation stack in order to delete the Amazon Kinesis Delivery Stream.

Conclusion

In this blog we showed how you can use AWS native services and features to quickly create an email tracking solution based on Amazon SES events, giving you a more detailed view of your sending activities. The solution uses a fully serverless architecture, so you don’t have to manage any underlying infrastructure, and it has the flexibility to handle small, medium, or intense Amazon SES usage without having to take care of any servers.

We showed you some samples of dashboards and analyses that can be built for most customer requirements, but of course you can evolve this solution and customize it according to your needs, adding or removing charts, filters, or events from the dashboard. Please refer to the documentation for the available Amazon SES events and their structure, and for how to create analyses and dashboards in Amazon QuickSight.

From a performance and cost-efficiency perspective, there are still several configurations that can be made to improve the solution, for example using a columnar file format like Parquet, compressing with Snappy, or setting your S3 partition strategy according to your email sending usage. Another improvement could be importing data into SPICE to read data in Amazon QuickSight. Using SPICE results in the data being loaded from Athena only once, until it is either manually refreshed or automatically refreshed using a schedule.

You can use this walkthrough to configure your first SES dashboard and start visualizing event details. You can adjust the services described in this blog according to your company’s requirements.

About the authors

Oscar Mendoza is a Solutions Architect at AWS based in Bogotá, Colombia. Oscar works with our customers to provide guidance in architectural best practices and to build Well-Architected solutions on the AWS platform. He enjoys spending time with his family and his dog and playing music.

Luis Eduardo Torres is a Solutions Architect at AWS based in Bogotá, Colombia. He helps companies to build their business using the AWS cloud platform. He has a great interest in analytics and has been leading the Analytics track of the AWS Podcast in Spanish.

Santiago Benavídez is a Solutions Architect at AWS based in Buenos Aires, Argentina, with more than 13 years of experience in IT, currently helping DNB/ISV customers to achieve their business goals using the breadth and depth of AWS services, designing highly available, resilient, and cost-effective architectures.

Correlate IAM Access Analyzer findings with Amazon Macie

Post Syndicated from Nihar Das original https://aws.amazon.com/blogs/security/correlate-iam-access-analyzer-findings-with-amazon-macie/

In this blog post, you’ll learn how to detect when unintended access has been granted to sensitive data in Amazon Simple Storage Service (Amazon S3) buckets in your Amazon Web Services (AWS) accounts.

It’s critical for your enterprise to understand where sensitive data is stored in your organization and how and why it is shared. The ability to efficiently find data that is shared with entities outside your account and the contents of that data is paramount. You need a process to quickly detect and report which accounts have access to sensitive data. Amazon Macie is an AWS service that can detect many sensitive data types. Macie is a fully managed data security and data privacy service that uses machine learning and pattern matching to discover and help protect your sensitive data in AWS.

AWS Identity and Access Management (IAM) Access Analyzer helps to identify resources in your organization and accounts, such as S3 buckets or IAM roles, that are shared with an external entity. When you enable IAM Access Analyzer, you create an analyzer for your entire organization or your account. The organization or account you choose is known as the zone of trust for the analyzer. The analyzer monitors the supported resources within your zone of trust. This analyzer enables IAM Access Analyzer to detect each instance of a resource shared outside the zone of trust and generates a finding about the resource and the external principals that have access to it.

Currently, you can use IAM Access Analyzer and Macie to detect external access and discover sensitive data as separate processes. You can join the findings from both to best evaluate the risk. The solution in this post integrates IAM Access Analyzer, Macie, and AWS Security Hub to automate the process of correlating findings between the services and presenting them in Security Hub.

How does the solution work?

First, IAM Access Analyzer discovers S3 buckets that are shared outside the zone of trust. Next, the solution schedules a Macie sensitive data discovery job for each of these buckets to determine if the bucket contains sensitive data. Upon discovery of shared sensitive data in S3, a custom high severity finding is created in Security Hub for review and incident response.
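
As a rough illustration of the Macie side of this flow, the boto3 sketch below starts a one-time classification job scoped to a single bucket, which is approximately what the solution's sda-macie-submit-scan function does for each bucket that IAM Access Analyzer flags. The account ID, bucket name, and job name are placeholders, and the real function adds its own naming, tagging, and skip logic.

# Hedged sketch: a one-time Macie sensitive data discovery job for one bucket.
import boto3

macie = boto3.client("macie2")

response = macie.create_classification_job(
    jobType="ONE_TIME",
    name="sda-scan-example-shared-bucket",  # placeholder job name
    s3JobDefinition={
        "bucketDefinitions": [
            {
                "accountId": "111122223333",           # placeholder account ID
                "buckets": ["example-shared-bucket"],  # bucket flagged by IAM Access Analyzer
            }
        ]
    },
)
print("Started Macie classification job:", response["jobId"])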

Solution architecture

This solution is based on a serverless architecture and uses the following services: IAM Access Analyzer, Amazon Macie, AWS Security Hub, Amazon EventBridge, AWS Lambda, Amazon DynamoDB, and AWS Step Functions.


Figure 1: Architecture diagram

Figure 1 depicts the following process flow:

  1. IAM Access Analyzer detects shared S3 buckets outside of the zone of trust—the organization or account you choose is known as a zone of trust for the analyzer—and creates the event Access Analyzer Finding in EventBridge.
  2. EventBridge triggers the Lambda function sda-aa-save-findings (a sketch of the matching event rule appears after this list).
  3. The sda-aa-save-findings function records each finding in DynamoDB.
  4. An EventBridge scheduled event periodically starts a new cycle of the Step Functions state machine, which immediately runs the Lambda function sda-macie-submit-scan. The template sets a 15-minute interval, but this is configurable.
  5. The sda-macie-submit-scan function reads the IAM Access Analyzer findings that were created by sda-aa-save-findings from DynamoDB.
  6. sda-macie-submit-scan launches a Macie classification job for each distinct S3 bucket that is related to one or more recent IAM Access Analyzer findings.
  7. Macie performs a sensitive data discovery scan on each requested S3 bucket.
  8. The sda-macie-submit-scan function initiates the Lambda function sda-macie-check-status.
  9. sda-macie-check-status periodically checks the status of each Macie classification job, waiting for all the Macie jobs initiated by this solution to complete.
  10. Upon completion of the sda-macie-check-status function, the step function runs the Lambda function sda-sh-create-findings.
  11. sda-sh-create-findings joins the resulting IAM Access Analyzer and Macie datasets for each S3 bucket.
  12. sda-sh-create-findings publishes a finding to Security Hub for each bucket that has both external access and sensitive data.

    Note: The Macie scan is skipped if the S3 bucket is tagged to be excluded or if it was recently scanned by Macie. See the Cost considerations section for more information on custom configurations.

  13. Information security can review and act on the findings shown in Security Hub.
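
For illustration, the boto3 sketch below shows the kind of EventBridge rule and target described in steps 1 and 2 of the flow. The rule name and Lambda ARN are placeholders; the solution's CloudFormation template creates the real rule, target, and Lambda invoke permission for you.

# Hedged sketch: route IAM Access Analyzer findings to the sda-aa-save-findings function.
import json
import boto3

events = boto3.client("events")

events.put_rule(
    Name="sda-access-analyzer-findings",  # placeholder rule name
    State="ENABLED",
    EventPattern=json.dumps({
        "source": ["aws.access-analyzer"],
        "detail-type": ["Access Analyzer Finding"],
    }),
)

events.put_targets(
    Rule="sda-access-analyzer-findings",
    Targets=[{
        "Id": "save-findings-lambda",
        # Placeholder ARN; the target Lambda also needs permission to be invoked by EventBridge.
        "Arn": "arn:aws:lambda:us-east-1:111122223333:function:sda-aa-save-findings",
    }],
)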

Sample Security Hub output

Figure 2 shows the sample findings that Security Hub will present. Each finding includes:

  • Severity
  • Workflow status
  • Record state
  • Company
  • Product
  • Title
  • Resource

Figure 2: Sample Security Hub findings

The output to Security Hub will display a severity of HIGH with workflow NEW, because this is the first time the event has been observed. The record state is ACTIVE because the workflow state is NEW. The title explains the reason for the event.

For example, if potentially sensitive data is discovered in a bucket that is shared outside a zone of trust, selecting an event will display the resources involved in the finding so you can investigate. For more information, see the Security Hub User Guide.

Notes:

  • Detection of public S3 buckets by IAM Access Analyzer will still occur through Security Hub and will be marked as critical severity. This solution does not add to or augment this finding in Security Hub.
  • If a finding in IAM Access Analyzer is archived, the solution does not update the related finding in Security Hub.

Prerequisites

To use this solution, you need the following:

  • Permission to run AWS CloudFormation
  • Permission to create Lambda functions
  • Permission to create DynamoDB tables
  • Permission to create Step Function state machines
  • Permission to create EventBridge event rules
  • Permission to enable IAM Access Analyzer on the account where sensitive discovery is required
  • Permission to enable Macie on the account
  • Permission to enable Security Hub on the account

Deploy the solution

The solution is deployed through AWS CloudFormation, and you can review the template for options to best suit your specific needs.

  1. Sign in to the AWS Management Console at https://aws.amazon.com/console/.
  2. In the AWS Management Console, navigate to the AWS CloudFormation service, and then choose Create stack.
  3. Under Prerequisite – Prepare template, choose Template is ready.
  4. Under Specify template, choose Amazon S3 URL and provide the following URL:
    https://awsiammedia.s3.amazonaws.com/public/sample/936-correlating-aa-findings-macie/sda-cfn.yml
  5. Choose Next.
  6. Enter the stack name.
  7. The Application code location, S3 Bucket and S3 Key fields will be pre-filled.
  8. Under Service Activations, modify the activations based on the services you presently have running in your account.
  9. Modify the Logging and Monitoring settings if required.
  10. (Optional) Set an alert email address for errors.
  11. Choose Next, then choose Next again.
  12. Under Capabilities, select the check box.
  13. Choose Create Stack. The solution will begin deploying; watch for the CREATE_COMPLETE message.

Figure 3: Sample CloudFormation deployment status

The solution is now deployed and will start monitoring for sensitive data that is being shared. It will send the findings to Security Hub for your teams to investigate.
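
If you prefer to deploy from the AWS CLI or an SDK instead of the console, the boto3 sketch below launches the same template. The stack name is a placeholder, and the template's parameters (service activations, logging settings, alert email) are not reproduced here, so review the template and pass them through the Parameters argument as needed.

# Hedged sketch: deploy the solution stack programmatically.
import boto3

cfn = boto3.client("cloudformation")

stack_name = "sda-access-analyzer-macie"  # placeholder stack name

cfn.create_stack(
    StackName=stack_name,
    TemplateURL=(
        "https://awsiammedia.s3.amazonaws.com/public/sample/"
        "936-correlating-aa-findings-macie/sda-cfn.yml"
    ),
    Capabilities=["CAPABILITY_IAM", "CAPABILITY_NAMED_IAM"],
)

# Wait for CREATE_COMPLETE before relying on the solution
cfn.get_waiter("stack_create_complete").wait(StackName=stack_name)
print("Stack created")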

Cost considerations

When you scan large S3 buckets with sensitive data, remember that Macie cost is based on the amount of data scanned. For more information on Macie costs, see Amazon Macie pricing.

This solution allows the following options, which you can use to help manage costs:

  • Use environment variables in Lambda to skip specific tagged buckets
  • Skip recently scanned S3 buckets and reuse prior findings

Figure 4: Screen shot of configurable environment variable

Conclusion

In this post, we discussed how the solution uses Lambda, Step Functions and EventBridge to integrate IAM Access Analyzer with Macie discovery jobs. We reviewed the components of the application, deployed it by using CloudFormation, and reviewed the output a security team would use to take the appropriate actions. We also provided two ways that you can manage the costs associated with the solution.

After you deploy this project, you can modify it to meet your organization’s needs. For example, you can modify the tags to skip specific S3 buckets your organization has already classified to hold sensitive data. Customers who use multiple AWS accounts can designate a centralized Security Hub administrator account to receive the solution alerts from each member account. For more information on this option, see Designating a Security Hub administrator account.

If you have feedback about this post, please submit it in the Comments section below. If you have questions about this post, please start a new thread on the AWS Identity and Access Management forum.

Other resources

For more information on correlating security findings with AWS Security Hub and Amazon EventBridge, refer to this blog post.

Want more AWS Security news? Follow us on Twitter.


Nihar Das

Nihar has over 20 years of experience in various business domains, including financial services. As an AWS Senior Solutions Architect, he is passionate about solving challenges in the cloud and helps financial services customers migrate to AWS and support their continued innovation.


Joe Dunn

Joe is an AWS Senior Solutions Architect in Financial Services with over 20 years of experience in infrastructure architecture and migration of business-critical workloads to AWS. He helps financial services customers to innovate on the AWS Cloud by providing solutions using AWS products and services.


Armand Aquino

Armand is a solutions architect helping financial services organizations design their critical workloads on AWS. In his spare time, he enjoys exploring outdoors and learning Korean.

Registering SMS Sender IDs in Singapore

Post Syndicated from Brent Meyer original https://aws.amazon.com/blogs/messaging-and-targeting/registering-sms-sender-ids-in-singapore/

A few weeks ago, we published a blog post about the process of registering alphanumeric Sender IDs. Today, we’re announcing support for registering Sender IDs in Singapore.

About Sender ID registration in Singapore

Singapore’s Infocomm Media Development Authority (IMDA) has created a Sender ID registry to protect consumers from fraudulent and malicious SMS messages. This registry is called the Singapore SMS Sender ID Registry (SSIR).

The government of Singapore encourages all government agencies and financial institutions to register with SSIR. Organizations and businesses outside of these industries can also register with SSIR.

Currently, there is no requirement to register your Sender ID. However, when you register with the SSIR, your Sender ID becomes a “Protected Sender ID.” Protected Sender IDs help to protect you and your customers by preventing other senders from using your Sender ID.

Note that in order to complete this registration process, your business or organization must have a Unique Entity Number (UEN). Businesses and other organizations receive a UEN when they register with Singapore’s Accounting and Corporate Regulatory Authority.

Registering your Sender ID

The first step in the registration process is to create a Protected Sender ID through the Singapore Network Information Centre (SGNIC). To initiate the registration process, send an email to [email protected]. In your message, include the name of your business, the Sender IDs that you want to register, and a description of your use case. SGNIC may contact you for additional information.

After you register with SGNIC, open a ticket in the AWS Support Center. You can find the procedure for opening a case in the Amazon Pinpoint User Guide. The AWS Support team will respond to your case within 24 hours. Their response includes a template for a letter that shows your intent to register a Sender ID.

The next step is to modify the contents of this letter. The regulatory groups in Singapore require a copy of this letter in order to allow AWS to send messages using your Sender ID. Begin by placing the contents of the letter on your company’s letterhead. Next, modify the fields that are highlighted in yellow. These fields include the following:

  • <Place>: The address of your company or organization.
  • <Brand Owner Company Name>: The name of your company or organization.
  • <Number>: Your Unique Entity Number.
  • <Signature>, <Name>, <Title>: The personal signature, name, and job title of the person who is submitting the request on behalf of your company or organization.
  • <ExampleSenderId1>, <ExampleSenderId2>: The Sender IDs that you intend to register with SGNIC. You can add or remove lines here depending on how many Sender IDs you plan to register.

Once you finish modifying the letter, submit it by attaching it to your existing case in the AWS Support Center.

What happens next?

IMDA regularly sends us lists of new Sender ID registrations. When we receive confirmation that your Sender ID has been registered, we update your account to allow it to send SMS messages through your Sender ID. We will also comment on your Support case to indicate that the process is complete.

Wrapping up

We continue to monitor changes to Sender ID registration requirements around the world. We’re working closely with carriers and organizations around the world to make the registration processes as straightforward as possible for our customers. Check in on this blog regularly to learn more about future regulatory changes.

For more information about registering Sender IDs in Singapore, see Special requirements for Singapore in the Amazon Pinpoint User Guide.

Registering Sender IDs for Sending SMS Messages

Post Syndicated from Brent Meyer original https://aws.amazon.com/blogs/messaging-and-targeting/registering-sender-ids-for-sending-sms-messages/

With Amazon Pinpoint, you can use Sender IDs to send text messages to recipients in various countries around the world. A Sender ID is a short, alphanumeric identifier (such as “AMAZON”) that appears on a recipient’s device when they receive a message from you. A Sender ID is one type of origination identity—that is, an identity that’s used to send text messages. Other types of origination identities include short codes and long codes. Sender IDs are great for branding purposes, because recipients can easily determine who the sender of the message is.
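
To make this concrete, the boto3 sketch below shows where a Sender ID is specified when sending an SMS message through Amazon Pinpoint. The project (application) ID, recipient number, and Sender ID are placeholders.

# Hedged sketch: send an SMS message through Amazon Pinpoint using a Sender ID.
import boto3

pinpoint = boto3.client("pinpoint", region_name="us-east-1")  # placeholder Region

response = pinpoint.send_messages(
    ApplicationId="exampleProjectId",  # placeholder Pinpoint project ID
    MessageRequest={
        "Addresses": {"+6512345678": {"ChannelType": "SMS"}},  # placeholder recipient
        "MessageConfiguration": {
            "SMSMessage": {
                "Body": "Your example verification code is 123456",
                "MessageType": "TRANSACTIONAL",
                "SenderId": "EXAMPLECO",  # the Sender ID shown on the recipient's device
            }
        },
    },
)
print(response["MessageResponse"]["Result"])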

SMS senders who send messages to some countries (such as India or the Philippines) are required to register their SMS use cases and message templates before they can send messages to those countries using a Sender ID. On the Amazon Pinpoint team, we listen to our customers when they tell us which countries they need to send messages to. We regularly add support for registration processes to help our customers reach their end users. In this post, I’ll discuss the purpose of Sender ID registration and provide information about registering Sender IDs.

Why is Sender ID registration required?

The rise of fraudulent and malicious SMS activity around the world means that it’s more important than ever for recipients of SMS messages to trust the Sender ID that is contacting them. To reduce the volume of fraudulent SMS messages reaching their customers, mobile carriers have systems in place to identify and prevent abuse.

Registering Sender IDs helps mobile carriers trace abuse and other issues back to a specific SMS sender. When you register a Sender ID, your messages bypass filters that can throttle or block unregistered traffic. This not only improves deliverability rates, but also helps earn trust, because the sender’s name is consistent and identifiable. AWS has processes for registering your dedicated Sender ID with regulatory agencies and industry groups in several countries.

The future of Sender ID registration

In the months and years ahead, we expect more countries to add Sender ID registration requirements. AWS will continue to work with local network operators to expand the services that we offer to our customers. We carefully monitor the global SMS industry and create new processes when needs arise. Regardless of changes to the regulatory landscape, we strive to offer consistently high, reliable SMS message deliverability rates.

How can I register a Sender ID?

You can find a list of countries that support Sender IDs in Supported countries and regions in the Amazon Pinpoint User Guide. That document also lists the countries that require pre-registration of Sender IDs.

If you plan to send messages to a country that requires Sender ID registration, you must complete the registration process. The registration process can be complicated, with many specific requirements and with different processes in each country. The AWS Support team can work with you to complete your registration. The first step in registering your Sender ID is to create a case with AWS Support. You can find more information about creating a case in Requesting Sender IDs for SMS messaging in the Amazon Pinpoint User Guide.

When you request a Sender ID, we provide you with an estimate of how long the request will take to complete. This estimate is based on the completion times that we’ve seen from other customers. Because each country has its own process, completion times for registration vary by destination country. For example, Sender ID registration in India can be complete in one week or less, whereas it can take six weeks or more in Vietnam. These requests can’t be expedited, because they involve the carriers themselves making changes to the ways that their networks are configured. We suggest that you start your registration process early so that you can start sending messages as soon as you launch your product or service.

When you create a case, it’s important that you check on it regularly. The AWS Support team will provide you with registration materials, such as the forms and cover letters that you must submit to begin the registration process. We recommend that you provide all of the requested information with as much detail as you can. Too much information is better than too little information. We also recommend that you don’t skip any fields in the registration forms that we send you. The carriers require that you provide responses in all of the fields on these forms. This is true even if you believe that a field doesn’t apply to your use case. For example, if you’re registering a One-Time Password (OTP) use case, the carriers still require you to provide a response to the keyword “STOP.” Although it doesn’t seem logical that customers would want to opt out of receiving one-time passwords, the carriers in most countries require you to provide recipients with a way to completely opt out of receiving messages from you.

After you submit your application, it’s also possible that the mobile carriers will have feedback about your application. In this situation, you have to address their concerns before the registration process can continue. Addressing these concerns quickly can help reduce delays in completing your request.

Sender IDs are a great tool for reaching your customers by SMS. You can learn more about sender IDs and the other types of origination identities that Amazon Pinpoint supports in Originating identities for SMS messaging in the Amazon Pinpoint User Guide. Happy sending!

Automate the Creation of On-Demand Capacity Reservations for running EC2 instances

Post Syndicated from sbbusser original https://aws.amazon.com/blogs/compute/automate-the-creation-of-on-demand-capacity-reservations-for-running-ec2-instances/

This post is written by Ballu Singh, a Principal Solutions Architect at AWS; Neha Joshi, a Senior Solutions Architect at AWS; and Naveen Jagathesan, a Technical Account Manager at AWS.

Customers have asked how they can “create On-Demand Capacity Reservations (ODCRs) for their existing instances during events, such as the holiday season, Black Friday, marketing campaigns, or others?”

ODCRs let you reserve compute capacity for your Amazon Elastic Compute Cloud (Amazon EC2) instances. ODCRs further make sure that you always have access to EC2 capacity when required, and for as long as you need it. Customers who want to make sure that any instances that are stopped and started during a critical event are available when needed should cover those instances with ODCRs.

ODCRs let you reserve compute capacity for your Amazon EC2 instances in a specific availability zone for any duration. This means that you can create and manage capacity reservations independently from the billing discounts offered by Savings Plans or Regional Reserved Instances. You can create ODCR at any time, without entering into a one-year or three-year term commitment, and the capacity is available immediately. Billing starts as soon as the ODCR enters the active state. When you no longer need it, cancel the ODCR to stop incurring charges.
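
For reference, a single reservation is one EC2 API call. The boto3 sketch below shows the shape of that call; the instance attributes, Region, and end date are placeholders, and the registerODCR.py script described below derives matching attributes from your running instances automatically.

# Hedged sketch: create one On-Demand Capacity Reservation with placeholder attributes.
from datetime import datetime, timezone

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # placeholder Region

reservation = ec2.create_capacity_reservation(
    InstanceType="m5.large",        # placeholder instance type
    InstancePlatform="Linux/UNIX",
    AvailabilityZone="us-east-1a",  # placeholder Availability Zone
    InstanceCount=2,                # number of instances to reserve
    InstanceMatchCriteria="open",   # matching running instances count against the reservation
    EndDateType="limited",          # or "unlimited" (omit EndDate in that case)
    EndDate=datetime(2022, 1, 31, 14, 30, tzinfo=timezone.utc),
)
print(reservation["CapacityReservation"]["CapacityReservationId"])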

At the time of this blog publication, if you need to create ODCRs for existing running instances, you must manually identify your running instances’ configurations and their matching attributes, such as instance type, platform, and Availability Zone. This is a time- and resource-consuming process.

In this post, we provide an automated way to manage ODCR operations. This includes creating, modifying, and cancelling ODCRs for running instances across Regions in an account, all without requiring you to manually specify instance configuration attributes. Additionally, it creates an Amazon CloudWatch alarm for InstanceUtilization and an Amazon Simple Notification Service (Amazon SNS) topic named ODCRAlarmNotificationTopic to notify you when the threshold is breached.

Note: This will not create cluster placement group ODCRs. For details on capacity reservations in cluster placement groups, refer here.

Getting started

Before you create Capacity Reservations, note the limitations and restrictions here.

To get started, download the scripts for registering, modifying, and canceling ODCRs, the associated requirements.txt, and the AWS Identity and Access Management (IAM) policy from the GitHub link here.

Pre-requisites

To implement these scripts, you need the following prerequisites:

  1. Access to the AWS Management Console, AWS Command Line Interface (CLI), or AWS SDK for ODCR.
  2. The IAM permissions required for the IAM users running the solution, as provided in ODCR_IAM.json.
  3. An Amazon EC2 instance running a platform that is supported for Capacity Reservations. The supported Linux and Windows platforms are listed here.
  4. Refer to the above GitHub link for the code, and save the requirements.txt file in the same directory with other python scripts. You may want to run the requirements.txt file if you don’t have appropriate dependency to run the rest of the python scripts. You can run this using the following command:
pip3 install -r requirements.txt

Implementation Details

To create ODCR capacity reservation

The following instructions will guide you through creating capacity reservations for running instances across all of the Regions within an AWS account.
Input variables needed from users:

  • EndDateType (String) – Indicates how the Capacity Reservation ends. A Capacity Reservation can have one of the following end types:
      • unlimited – The Capacity Reservation remains active until you explicitly cancel it. Don’t provide an EndDate if the EndDateType is unlimited.
      • limited – The Capacity Reservation expires automatically at a specified date and time. You must provide an EndDate value if the EndDateType value is limited.
  • EndDate (datetime) – The date and time when the Capacity Reservation expires. When a Capacity Reservation expires, the reserved capacity is released and you can no longer launch instances into it. The Capacity Reservation’s state changes to expired when it reaches its end date and time.

You must provide EndDateType as ‘limited’ and the EndDate in standard UTC format to secure instances for a limited period. Command to execute the register ODCR script for a limited period:

registerODCR.py '<EndDateType>' '<EndDate>'
    Example- registerODCR.py 'limited' '2022-01-31 14:30:00'

You must provide EndDateType as ‘unlimited’ to secure instances for an unlimited period. Command to execute the register ODCR script for an unlimited period:

registerODCR.py '<EndDateType>'
    Example- registerODCR.py 'unlimited'

The registerODCR.py script does the following four things:

1. Describes instances across Regions in an account. It checks for instances that have:

    • No Capacity Reservation
    • An instance state of running
    • Default tenancy
    • InstanceLifecycle set to None, which indicates that the instance is neither a Spot Instance nor a Scheduled Instance

Note: The DescribeInstances API call counts toward your account's API limits. Therefore, it is advisable to run the script during non-peak hours or before the short-term scaling event begins. Work with the AWS Support team if you run into API throttling.

2. Aggregates instances with similar attributes, such as InstanceType, AvailabilityZone, Tenancy, and Platform (a simplified sketch of this aggregation step appears at the end of this section).

3. Describes Reserved Instances across Regions in an account. It checks for Zonal Reserved Instances (ZRIs) and compares them with the aggregated instances that have matching attributes.

4. Finally,

    • Reserves ODCR(s) for existing running instances with matching attributes for which ZRIs do not exist.

Note: If you have one or more ZRIs in an account, then the script compares them with the existing instances with matching characteristics – Instance Type, AZ, and Platform – and does NOT create ODCR for the ZRIs to avoid incurring redundant charges. If there are more running instances than ZRIs, then the script creates an ODCR for just the delta.

    • Creates an SNS topic with the topic name – ODCRAlarmNotificationTopic in the region where you’re registering ODCR, if it doesn’t already exist.
    • Creates CloudWatch alarm for InstanceUtilization using the best practices, which can be found here.

Note: You must subscribe and confirm to the SNS topic, if you haven’t already, to receive notifications.

The CloudWatch alarm is also created on your behalf in the Region for each ODCR. This alarm monitors your ODCR metric, InstanceUtilization. Whenever the metric breaches the threshold (50% in this case), the alarm enters the ALARM state and sends an SNS notification using the topic that was created for you, provided you have subscribed to it.

Note: You can change the alarm threshold based on your specific needs.

  • You will receive an email notification when the CloudWatch alarm state changes to ALARM, with:
    • SNS Subject (assuming the CloudWatch alarm triggers in the US East Region):
ALARM: "ODCRAlarm-cr-009969c7abf4daxxx" in US East (N. Virginia)
    • SNS Body containing the details:
      • CloudWatch alarm, Region, link to view the alarm, alarm details, and state change actions.

With this, if your ODCR InstanceUtilization drops, then you will be notified in near-real time to help you optimize the capacity and stop unnecessary payments for unused capacity.
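For illustration only, the following Python/boto3 sketch shows the kind of cross-Region aggregation and reservation the register script performs. It is not the actual registerODCR.py code: ZRI reconciliation, the CloudWatch alarm, and the SNS topic setup are omitted, and the function name is an assumption.

# Simplified sketch of the aggregation step performed by the register script.
# Not the shipped code: ZRI reconciliation, alarms, and SNS setup are omitted.
from collections import Counter
import boto3

def aggregate_and_reserve(end_date_type="unlimited"):
    regions = [r["RegionName"] for r in boto3.client("ec2").describe_regions()["Regions"]]
    for region in regions:
        ec2 = boto3.client("ec2", region_name=region)
        groups = Counter()
        paginator = ec2.get_paginator("describe_instances")
        for page in paginator.paginate(
            Filters=[{"Name": "instance-state-name", "Values": ["running"]},
                     {"Name": "tenancy", "Values": ["default"]}]
        ):
            for reservation in page["Reservations"]:
                for inst in reservation["Instances"]:
                    # Skip Spot/Scheduled instances and instances already covered by an ODCR
                    if inst.get("InstanceLifecycle") or inst.get("CapacityReservationId"):
                        continue
                    platform = "Windows" if inst.get("Platform") == "windows" else "Linux/UNIX"
                    key = (inst["InstanceType"], inst["Placement"]["AvailabilityZone"], platform)
                    groups[key] += 1
        # One reservation per (instance type, AZ, platform) group
        for (instance_type, az, platform), count in groups.items():
            ec2.create_capacity_reservation(
                InstanceType=instance_type,
                InstancePlatform=platform,
                AvailabilityZone=az,
                InstanceCount=count,
                EndDateType=end_date_type,  # 'unlimited', or 'limited' together with an EndDate
            )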

To modify ODCR capacity reservation

To modify the attributes of an active capacity reservation after you have created it, adhere to the following instructions.

Note: When modifying a Capacity Reservation, you can only increase or decrease the quantity and change how it is released. You can’t change the instance type, EBS optimization, instance store settings, platform, Availability Zone, or instance eligibility of a Capacity Reservation. If you must modify any of these attributes, then we recommend that you cancel the reservation, and then create a new one with the required attributes. You can’t modify a Capacity Reservation after it has expired or after you have explicitly canceled it.

  • Input variables needed from users:
    • CapacityReservationID – The ID of the Capacity Reservation that you want to modify.
    • InstanceCount (integer) – The number of instances for which to reserve capacity. The number of instances can’t be increased or decreased by more than 1000 in a single request.
    • EndDateType (String) – Indicates how the Capacity Reservation ends. A Capacity Reservation can have one of the following end types:
      • unlimited – The Capacity Reservation remains active until you explicitly cancel it. Don’t provide an EndDate if the EndDateType is unlimited.
      • limited – The Capacity Reservation expires automatically at a specified date and time. You must provide an EndDate value if the EndDateType value is limited.
    • EndDate (datetime) – The date and time when the Capacity Reservation expires. When a Capacity Reservation expires, the reserved capacity is released, and you can no longer launch instances into it. The Capacity Reservation’s state changes to expired when it reaches its end date and time.
  • Command to execute the modify ODCR script for a limited period:
    modifyODCR.py <CapacityReservationId> <InstanceCount> <EndDateType> <EndDate>
  • Example to execute the modify ODCR script for a limited period:
modifyODCR.py 'cr-05e6a94b99915xxxx' '1' 'limited' '2022-01-31 14:30:00'

Note: EndDate is in the standard UTC time.

  • You must provide EndDateType as ‘unlimited’ to modify instances for an unlimited period. Command to execute the modify ODCR script for an unlimited period:
modifyODCR.py <CapacityReservationId> <InstanceCount> <EndDateType>
  • Example to execute the modify ODCR script for unlimited period:
modifyODCR.py 'cr-05e6a94b99915xxxx' '1' 'unlimited'
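Under the hood, a modification maps to a single EC2 API call. As a minimal sketch (assuming the helper script essentially wraps this call; the reservation ID below is a placeholder):

import boto3

ec2 = boto3.client("ec2")

# Limited period: provide an EndDate in UTC together with EndDateType='limited'
ec2.modify_capacity_reservation(
    CapacityReservationId="cr-05e6a94b99915xxxx",
    InstanceCount=1,
    EndDateType="limited",
    EndDate="2022-01-31T14:30:00Z",
)

# Unlimited period: omit EndDate
ec2.modify_capacity_reservation(
    CapacityReservationId="cr-05e6a94b99915xxxx",
    InstanceCount=1,
    EndDateType="unlimited",
)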

To cancel ODCR capacity reservation

To cancel ODCRs that are in the “active” state, follow these instructions:

Note: Once the cancellation request succeeds, the reservation status will be marked as “cancelled”.

  • Input variables needed from users:
    • CapacityReservationID – The ID of the Capacity Reservation to cancel.
  • You must provide one parameter while executing the cancellation script.
  • Command to execute cancel ODCR script:
cancelODCR.py <CapacityReservationId> 
  • Example to execute the cancel ODCR script:
Example - cancelODCR.py 'cr-05e6a94b99915xxxx'
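If you prefer to call the API directly rather than use the helper script, cancellation is a one-line EC2 API call; a minimal boto3 sketch (the reservation ID is a placeholder):

import boto3

boto3.client("ec2").cancel_capacity_reservation(
    CapacityReservationId="cr-05e6a94b99915xxxx"
)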

Monitoring

CloudWatch metrics let you monitor the unused capacity in your Capacity Reservations so that you can optimize your ODCRs. ODCRs send metric data to CloudWatch every five minutes. The available Capacity Reservation usage metrics are UsedInstanceCount, AvailableInstanceCount, TotalInstanceCount, and InstanceUtilization; for this solution we use the InstanceUtilization metric, which shows the percentage of reserved capacity instances that are currently in use. This is useful for monitoring and optimizing ODCR consumption.

For example, if your On-Demand Capacity Reservation is for four instances and only one matching EC2 instance is currently running, then the InstanceUtilization metric for that capacity reservation will be 25%.

Let’s look at the steps to create the CloudWatch monitoring dashboard for your On-Demand Capacity Reservation solution:

  1. Open the CloudWatch console at https://console.aws.amazon.com/cloudwatch/.
  2. If necessary, change the Region. From the navigation bar, select the Region where your Capacity Reservation resides. For more information, see Regions and Endpoints.
  3. In the navigation pane, choose Metrics.

Amazon CloudWatch Dashboard

For All metrics, choose EC2 Capacity Reservations.

Amazon CloudWatch Dashboard: Metrics

4. Choose the metric dimension By Capacity Reservation. Metrics will be grouped by Capacity Reservation ID.

Amazon CloudWatch Metrics: Capacity Reservation Ids

5. Select the dropdown arrow for InstanceUtilization, and select Search for this only.

Amazon CloudWatch Metrics Filter

Once we see the InstanceUtilization metric in the filter list, select Graph Search.

Amazon CloudWatch Metrics: Graph Search

This displays the InstanceUtilization metrics for the selected period.

Amazon CloudWatch Metrics Duration

OPTIONAL: To display the Capacity Reservation IDs for active metrics only:

    • Navigate to Graphed metrics.

Amazon CloudWatch: Graphed Metrics

    • Under Details column, select Edit math expression.

Amazon CloudWatch Metrics: Math Expression

    • Edit the math expression with the following, and select Apply:
REMOVE_EMPTY(SEARCH('{AWS/EC2CapacityReservations,CapacityReservationId} MetricName="InstanceUtilization"', 'Average', 300))

Amazon CloudWatch Graphed Metrics: Math Expression Apply

This displays the Capacity Reservation IDs for active metrics only.

Amazon CloudWatch Metrics: Active Capacity Reservation Ids

With this configuration, whenever new Capacity Reservations are created, the InstanceUtilization metric for respective Capacity Reservation IDs will be populated.

6. From the Actions drop-down menu, select Add to dashboard.

Amazon CloudWatch Metrics: Add to Dashboard

Select Create new to create a new dashboard for monitoring your ODCR metrics.

Amazon CloudWatch: Create New Dashboard

Specify the new dashboard name, and select Add to dashboard.

Amazon CloudWatch: Create New Dashboard

7. These configuration steps will navigate you to your newly created CloudWatch dashboard under Dashboards.

Amazon CloudWatch Dashboard: ODCR Metrics

Once this is created, if you create new Capacity Reservations, or new instances get added to existing reservations, then those metrics will automatically be added to your CloudWatch dashboard.

Note: You may see a delay of approximately 5-10 minutes from the point when changes are made to your environment (ODCR operations or instance launch/termination activities) to those changes being reflected in your CloudWatch dashboard metrics.

Conclusion

In this post, we discussed a solution for automating ODCR operations for existing EC2 instances. This included create, modify, and cancel capacity reservation operations that inherit the attribute details of your existing EC2 instances. We also discussed monitoring ODCR metrics using CloudWatch. This solution allows you to automate some of the ODCR operations for existing instances, thereby optimizing and speeding up the entire process.

For more information, see Target a group of Amazon EC2 On-Demand Capacity Reservations blog and Capacity Reservations documentation.

If you have feedback or questions about this post, please submit your comments in the comments section or contact AWS Support.

Incident notification mechanism using Amazon Pinpoint two-way SMS

Post Syndicated from Pavlos Ioannou Katidis original https://aws.amazon.com/blogs/messaging-and-targeting/incident-notification-mechanism-using-amazon-pinpoint-two-way-sms/

Unexpected situations that require immediate attention can occur in any industry. Part of resolving these incidents is the notifications’ delivery. For example, utility companies that have installed gas sensors need to notify immediately the available engineer if a leak occurs.

The goal of an incident management process is to restore a normal service operation as quickly as possible and to minimize the impact on business operations, thus ensuring that the best possible levels of service quality and availability are maintained. A key element of incident management is sending timely notifications to the assigned or available resource(s) who can rectify the issue.

An incident can take place at any time, the resource(s) assigned to it might not have internet access, and even if they receive the message they might not be equipped to work on it. This creates five key requirements for an incident notification mechanism:

  1. Notify the resources via a communication channel that ensures message delivery even without internet access
  2. Enable assigned resources to respond to a request via a communication channel that doesn’t require internet access
  3. Send reminder(s) in case there is no response from the assigned resource(s)
  4. Escalate to another resource in case the first one doesn’t reply or declines the incident
  5. Store the incident details & status for reporting and data analysis

In this blog post, I share a solution on how you can automate the delivery of incident notifications. This solution utilizes Amazon Pinpoint SMS channel to contact the designated resources who might not have access to the internet. Furthermore, the recipient of the SMS is able to reply with an acknowledgement. AWS Step Functions orchestrates the user journey using AWS Lambda functions to evaluate the recipients’ response and trigger the next best action. You will use AWS CloudFormation to deploy this solution.

Use Cases

An incident notification mechanism can vary depending on the organization's requirements and third-party system integrations. The solution in this blog covers all five points listed above, but it might require further modifications depending on your use case.

With minor modifications this solution can also be used in the following use cases:

  1. Medicine intake notification: Notify the patient via SMS that it is time to take their medicine. If the patient doesn't acknowledge the SMS by replying, this can be escalated to their assigned doctor
  2. Assignment submission: Notify the student that their assignment is due. If the student doesn't acknowledge the SMS by replying, this can be escalated to their teacher

High-level Architecture

The solution requires that the country of your SMS recipients supports two-way SMS. To check which countries support two-way SMS, visit this page. If two-way SMS is supported, you will need to request a dedicated originating identity. You can also use a toll-free number or 10DLC number if your recipients are in the US.

Note: Sender ID doesn’t support two-way SMS.

A new incident is represented as an item in an Amazon DynamoDB table containing information such as description, URL, incident_id as well as the contact numbers for two resources. A resource is someone who has been assigned to work on this incident. The second resource is for escalation purposes in case the first one doesn’t acknowledge or decline the incident notification.

The Amazon DynamoDB table covers three functions for this solution:

  1. A way to add new incidents using either the AWS console or programmatically
  2. As a storage for variables that indicate the incident’s status and can be used from the solution to determine the next action(s)
  3. As a historical data storage for all incidents that have been created for data analysis purposes

The solution utilizes Amazon DynamoDB Streams to invoke an AWS Lambda function every time a new incident is created. The AWS Lambda function triggers an AWS Step Function State machine, which orchestrates three AWS Lambda functions:

  1. Send_First_SMS: Sends the first SMS
  2. Reminder_SMS: Sends a reminder SMS if the resource does not acknowledge the first SMS
  3. Incident_State_Review: Assesses the status of the incident and either goes back to the first AWS Lambda function or finishes the AWS Step Function State machine execution

The AWS Step Functions state machine uses the Choice state, which evaluates the response of the previous AWS Lambda function and decides on the next state. This is a very useful feature that can reduce custom code and, potentially, AWS Lambda invocations, resulting in cost savings.

Additionally, the waiting between steps is also managed by the AWS Step Functions state machine using the Wait state. This can be configured to wait for seconds, for days, or until a specific point in time.
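Going back to the trigger path described above: the Lambda function invoked by the DynamoDB stream only needs to start one state machine execution per new incident. A rough sketch is shown below; the environment variable name and the selection of fields are assumptions, not the solution's exact code.

import json
import os
import boto3

sfn = boto3.client("stepfunctions")

def lambda_handler(event, context):
    """Triggered by the DynamoDB stream; starts one state machine execution per new incident."""
    for record in event["Records"]:
        if record["eventName"] != "INSERT":
            continue
        incident = record["dynamodb"]["NewImage"]
        sfn.start_execution(
            stateMachineArn=os.environ["STATE_MACHINE_ARN"],  # assumed environment variable
            input=json.dumps({
                "incident_id": incident["incident_id"]["S"],
                "first_contact": incident["first_contact"]["S"],
                "second_contact": incident["second_contact"]["S"],
            }),
        )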

To be able to receive SMS, this solution uses Amazon Pinpoint's two-way SMS feature. When it receives an SMS, Amazon Pinpoint sends a payload to an Amazon SNS topic, which needs to be created separately. An AWS Lambda function that is subscribed to the Amazon SNS topic processes the SMS content and performs one or both of the following actions:

  1. Update the incident status in the DynamoDB table
  2. Create a new Step Function State machine execution

In this solution SMS recipients can reply by typing either yes or no. The SMS response is not case sensitive.

An inbound SMS payload contains the originationNumber, destinationNumber, messageKeyword, messageBody, inboundMessageId, and previousPublishedMessageId. Notably, there isn't a direct way to associate an inbound SMS with an incident. To overcome this challenge, the solution uses a second DynamoDB table, which stores the message_id and incident_id every time an SMS is sent to either of the two resources. This allows the solution to use the previousPublishedMessageId from the inbound SMS payload to fetch the respective incident_id from the second DynamoDB table.

The code in this solution uses AWS SDK for Python (Boto3).
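As an illustration of the correlation described above, the inbound-SMS handler could look roughly like the following sketch. The table name, key schema, and the final handling of the reply are assumptions; the real solution updates the incident status and escalates as needed.

import json
import boto3

dynamodb = boto3.resource("dynamodb")
lookup_table = dynamodb.Table("sms-message-to-incident")  # assumed table name

def lambda_handler(event, context):
    """Invoked by the SNS topic that receives two-way SMS payloads from Amazon Pinpoint."""
    for record in event["Records"]:
        sms = json.loads(record["Sns"]["Message"])
        reply = sms["messageBody"].strip().lower()        # 'yes' or 'no', case-insensitive
        previous_id = sms.get("previousPublishedMessageId")
        if not previous_id:
            continue
        # previousPublishedMessageId maps back to the incident the outbound SMS belonged to
        item = lookup_table.get_item(Key={"message_id": previous_id}).get("Item")
        if item:
            incident_id = item["incident_id"]
            # From here the real solution updates the incident status in DynamoDB and,
            # for a 'no' reply, starts a new escalation execution (omitted here).
            print(f"Incident {incident_id} response: {reply}")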

Prerequisites

  1. An Amazon Pinpoint project with the SMS channel enabled – Guide on how to enable Amazon Pinpoint SMS channel
  2. Check if the country you want to send SMS to, supports two-way SMS – List with countries that support two-way SMS
  3. An originating identity that supports two-way SMS – Guide on how to request a phone number
  4. Increase your monthly SMS spending quota for Amazon Pinpoint – Guide on how to increase the monthly SMS spending quota

Deploy the solution

Step 1: Create an S3 bucket

  1. Navigate to the Amazon S3 console
  2. Select Create bucket
  3. Enter a unique name for Bucket name
  4. Select the AWS Region to be the same as the one of your Amazon Pinpoint project
  5. Scroll to the bottom of the page and select Create bucket
  6. Follow this link to download the GitHub repository. Once the repository is downloaded, unzip it and navigate to  \amazon-pinpoint-incident-notifications-mechanism-main\src
  7. Access the S3 bucket created above and upload the five .zip files

Step 2: Create a stack

  1. The application is deployed using an AWS CloudFormation template.
  2. Navigate to the AWS CloudFormation console select Create stack > With new resources (standard)
  3. Select Template is ready as Prerequisite – Prepare template and choose Upload a template file as Template source
  4. Select Choose file and from the GitHub repository downloaded in step 1.6 navigate to amazon-pinpoint-incident-notifications-mechanism-main\cfn upload CloudFormation_template.yaml and select Next
  5. Type Pinpoint-Incident-Notifications-Mechanism as Stack name, paste the S3 bucket name created in step 1.5 as the LambdaCodeS3BucketName, type the Amazon Pinpoint Originating Number in E.164 format as OriginatingIdenity, paste the Amazon Pinpoint project ID as PinpointProjectId and type 40 for WaitingBetweenSteps
  6. Select Next, till you reach to Step 4 Review where you will need to check the box I acknowledge that AWS CloudFormation might create IAM resources and then select Create Stack
  7. The stack creation process takes approximately 2 minutes. Click on the refresh button to get the latest event regarding the deployment status. Once the stack has been deployed successfully you should see the last Event with Logical ID Pinpoint-Incident-Notifications-Mechanism and with Status CREATE_COMPLETE

Step 3: Configure two-way SMS SNS topic

  1. Navigate to the Amazon Pinpoint console > SMS and voice > Phone numbers. Select the originating identity that supports two-way SMS. Scroll to the bottom of the page, expand the two-way SMS section, and check the box to enable it.

    For SNS topic, select Choose an existing SNS topic, then use the dropdown to choose the topic that contains both the name of the AWS CloudFormation stack from Step 2.4 and the name TwoWaySMSSNSTopic, and click Save.

Step 4: Create a new incident

To create a new incident, navigate to Amazon DynamoDB console > Tables and select the table containing the name of the AWS CloudFormation stack from Step 2.4 as well as the name IncidentInfoDynamoDB. Select View items and then Create item.

On the Create item page choose JSON, copy and paste the JSON below into the text box and replace the values for the first_contact and second_contact with a valid mobile number that you have access to.

Note: If you don’t have two different mobile numbers, enter the same for both first_contact and second_contact fields. The mobile numbers must follow E.164 format +<country code><number>.

{
   "incident_id":{
      "S":"123"
   },
   "incident_stat":{
      "S":"not_acknowledged"
   },
   "double_escalation":{
      "S":"no"
   },
   "description":{
      "S":"Error 111, unit 1 malfunctioned. Urgent assistance is required."
   },
   "url":{
      "S":"https://example.com/incident/111/overview"
   },
   "first_contact":{
      "S":"+4479083---"
   },
   "second_contact":{
      "S":"+4479083---"
   }
}

Incident fields description:

  • incident_id: Needs to be unique
  • incident_stat: This is used from the application to store the incident status. When creating the incident, this value should always be not_acknowledged
  • double_escalation: This is used from the application as a flag for recipients who try to escalate an incident that is already escalated. When creating the incident, this value should always be no
  • description: You can type a description that best describes the incident. Be aware that depending on the number of characters, the number of SMS parts will increase. For more information on SMS character limits, visit this page
  • url: You can add a URL that resources can access to resolve the issue. If this field is not pertinent to your use case then type no url
  • first_contact: This should contain the mobile number in E.164 format for the first resource
  • second_contact: This should contain the mobile number in E.164 format for the second resource. The second resource will be contacted only if the first one does not acknowledge the SMS or declines the incident

Once the above is ready, you can select Create item. This will execute the AWS Step Functions state machine and you should receive an SMS. You can reply with yes to acknowledge the incident or with no to decline it. Depending on your response, the incident status in the DynamoDB table will be updated, and if you reply no, the incident will be escalated by sending an SMS to the second_contact.

Note: The SMS response is not case sensitive.

Clean-up

To remove the solution:

  1. Delete the AWS CloudFormation stack by following the steps listed in this guide
  2. Delete the dedicated originating identity that you used to send the SMS by following the steps listed in this guide
  3. Delete the Amazon Pinpoint project by navigating the Amazon Pinpoint console, select your Amazon Pinpoint Project, choose Settings > General settings > Delete Project

Next Steps

This solution currently works only if your SMS recipients are in one country. If your use case requires sending SMS to multiple countries, you will need to:

  • Check this page to ensure that these countries support two-way SMS
  • Follow the instructions in this page to obtain a number that supports two-way SMS for each country
  • Expand the solution to identify the country of the SMS recipient and to choose the correct number accordingly. To identify the country of the SMS recipient you can use Amazon Pinpoint’s phone number validate service via Amazon Pinpoint API or SDKs. The phone validate service returns a list of data points per mobile number with one of them being the Country

Incidents that are not acknowledged by any of the assigned resources have their status updated to unacknowledged, but they don't escalate further. Depending on your requirements, you can expand the solution to send an email using Amazon Pinpoint APIs or to perform an outbound call using Amazon Connect APIs.

Conclusion

In this blog post, I have demonstrated how your organization can use Amazon Pinpoint two-way SMS and Step Functions to automate incident notifications. Furthermore, the solution highlights the synergy of AWS services and how you can build a custom solution with little effort that meets your requirements.

About the Author

Pavlos Ioannou Katidis

Pavlos Ioannou Katidis

Pavlos Ioannou Katidis is an Amazon Pinpoint and Amazon Simple Email Service Specialist Solutions Architect at AWS. He loves to dive deep into his customer’s technical issues and help them design communication solutions. In his spare time, he enjoys playing tennis, watching crime TV series, playing FPS PC games, and coding personal projects.

Queueing Amazon Pinpoint API calls to distribute SMS spikes

Post Syndicated from satyaso original https://aws.amazon.com/blogs/messaging-and-targeting/queueing-amazon-pinpoint-api-calls-to-distribute-sms-spikes/

Organizations across industries and verticals have user bases spread around the globe. Amazon Pinpoint enables them to send SMS messages to a global audience through a single API endpoint, and the messages are routed to destination countries by the service. Amazon Pinpoint utilizes downstream SMS providers to deliver the messages and these SMS providers offer a limited country specific threshold for sending SMS (referred to as Transactions Per Second or TPS). These thresholds are imposed by telecom regulators in each country to prohibit spamming. If customer applications send more messages than the threshold for a country, downstream SMS providers may reject the delivery.

Such scenarios can be avoided by ensuring that upstream systems do not send more than the permitted number of messages per second. This can be achieved using one of the following mechanisms:

  • Implement rate-limiting on upstream systems which call Amazon Pinpoint APIs.
  • Implement queueing and consume jobs at a pre-configured rate.

While rate-limiting and exponential backoff are regarded as best practices for many use cases, they can cause significant delays in message delivery, particularly when message throughput is very high. Furthermore, relying solely on rate-limiting eliminates the potential to maximize throughput per country and prioritize communications accordingly. In this blog post, we evaluate a solution based on Amazon SQS queues and how they can be leveraged to ensure that messages are sent with predictable delays.

Solution Overview

The solution consists of an Amazon SNS topic that filters and fans out incoming messages to a set of Amazon SQS queues based on a country parameter in the incoming JSON payload. The messages from the queues are then processed by AWS Lambda functions that in turn invoke the Amazon Pinpoint APIs across one or more Amazon Pinpoint projects or accounts. The following diagram illustrates the architecture:

Step 1: Ingesting message requests

Upstream applications post messages to a pre-configured SNS topic instead of calling the Amazon Pinpoint APIs directly. This allows applications to post messages at a rate that is higher than Amazon Pinpoint’s TPS limits per country. For applications that are hosted externally, an Amazon API Gateway can also be configured to receive the requests and publish them to the SNS topic – allowing features such as routing and authentication.

Step 2: Queueing and prioritization

The SNS topic implements message filtering based on the country parameter and sends incoming JSON messages to separate SQS queues. This allows configuring downstream consumers based on the priority of individual queues and processing these messages at different rates.

The algorithm and attribute used for implementing message filtering can vary based on requirements. Similarly, filtering can be enabled based on business use cases such as “REMINDERS”, “VERIFICATION”, “OFFERS”, “EVENT NOTIFICATIONS”, and so on. In this example, the messages are filtered based on a country attribute, as shown below:
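The original post shows the filter policy as a screenshot. As an illustration only, a country-based filter policy could be attached to each queue's subscription with a call like the following; the subscription ARN, attribute name, and country value are assumptions. Note that, by default, an SNS filter policy matches message attributes rather than the message body.

import json
import boto3

sns = boto3.client("sns")

sns.set_subscription_attributes(
    SubscriptionArn="arn:aws:sns:us-east-1:123456789012:sms-requests:example-subscription-id",
    AttributeName="FilterPolicy",
    # Only messages whose 'country' message attribute matches are delivered to this queue
    AttributeValue=json.dumps({"country": ["IN"]}),
)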

Based on the filtering logic implemented, the messages are delivered to the corresponding SQS queues for further processing. Delivery failures are handled through a Dead Letter Queue (DLQ), enabling messages to be retried and pushed back into the queues.

Step 3: Consuming queue messages at fixed-rate

The messages from the SQS queues are consumed by AWS Lambda functions that are configured per queue. These are lightweight functions that read messages in pre-configured batch sizes and call the Amazon Pinpoint Send Messages API. API call failures are handled through (1) exponential backoff within the AWS SDK calls and (2) DLQs set up as destination configurations on the Lambda functions. The Amazon Pinpoint Send Messages API is a batch API that allows sending messages to 100 recipients at a time. As such, it is possible for a request to succeed partially – messages within a single API call that fail or are throttled should also be sent to the DLQ and retried.

The Lambda functions are configured to run at a fixed reserve concurrency value. This ensures that a fixed rate of messages is fetched from the queue and processed at any point of time. For example, a Lambda function receives messages from an SQS queue and calls the Amazon Pinpoint APIs. It has a reserved concurrency of 10 with a batch size of 10 items. The SQS queue rapidly receives 1,000 messages. The Lambda function scales up to 10 concurrent instances, each processing 10 messages from the queue. While it takes longer to process the entire queue, this results in a consistent rate of API invocations for Amazon Pinpoint.
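A minimal sketch of such a consumer function is shown below. The environment variable, the payload field names, and the omission of DLQ handling and partial-failure retries are assumptions made for brevity.

import json
import os
import boto3

pinpoint = boto3.client("pinpoint")

def lambda_handler(event, context):
    """Consumes a batch of SQS records and sends each SMS through Amazon Pinpoint."""
    for record in event["Records"]:
        msg = json.loads(record["body"])
        pinpoint.send_messages(
            ApplicationId=os.environ["PINPOINT_APP_ID"],  # assumed environment variable
            MessageRequest={
                "Addresses": {msg["destination_number"]: {"ChannelType": "SMS"}},
                "MessageConfiguration": {
                    "SMSMessage": {
                        "Body": msg["body"],
                        "MessageType": "TRANSACTIONAL",
                    }
                },
            },
        )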

Step 4: Monitoring and observability

Monitoring tools record performance statistics over time so that usage patterns can be identified. Timely detection of a problem (ideally before it affects end users) is the first step in observability. Detection should be proactive and multi-faceted, including alarms when performance thresholds are breached. For the architecture proposed in this blog, observability is enabled by using Amazon Cloudwatch and AWS X-Ray.

Some of the key metrics that are monitored using Amazon Cloudwatch are as follows:

  • Amazon Pinpoint
    • DirectSendMessagePermanentFailure
    • DirectSendMessageTemporaryFailure
    • DirectSendMessageThrottled
  • AWS Lambda
    • Invocations
    • Errors
    • Throttles
    • Duration
    • ConcurrentExecutions
  • Amazon SQS
    • ApproximateAgeOfOldestMessage
    • NumberOfMessagesSent
    • NumberOfMessagesReceived
  • Amazon SNS
    • NumberOfMessagesPublished
    • NumberOfNotificationsDelivered
    • NumberOfNotificationsFailed
    • NumberOfNotificationsRedrivenToDlq

AWS X-Ray helps developers analyze and debug production, distributed applications, such as those built using a microservices architecture. With X-Ray, you can understand how the application and its underlying services are performing, to identify and troubleshoot the root cause of performance issues and errors. X-Ray provides an end-to-end view of requests as they travel through your application, and shows a map of your application’s underlying components.

Notes:

  1. If you are using Amazon Pinpoint’s Campaign or Journey feature to deliver SMS to recipients in various countries, you do not need to implement this solution. Amazon Pinpoint will drain messages depending on the MessagesPerSecond configuration pre-defined in the campaign/journey settings.
  2. If you need to send transactional SMS to a small number of countries (one or two), you should work with AWS support to define your SMS sending throughput for those countries to accommodate spikey SMS message traffic instead.

Conclusion

This post shows how customers can leverage Amazon Pinpoint along with Amazon SQS and AWS Lambda to build, regulate and prioritize SMS deliveries across multiple countries or business use-cases. This leads to predictable delays in message deliveries and provides customers with the ability to control the rate and priority of messages sent using Amazon Pinpoint.


About the Authors

Satyasovan Tripathy works as a Senior Specialist Solution Architect at AWS. He is situated in Bengaluru, India, and focuses on the AWS Digital User Engagement product portfolio. He enjoys reading and travelling outside of work.

Rajdeep Tarat is a Senior Solutions Architect at AWS. He lives in Bengaluru, India and helps customers architect and optimize applications on AWS. In his spare time, he enjoys music, programming, and reading.

Handy Tips #22: Deploying Zabbix in the AWS cloud platform

Post Syndicated from Arturs Lontons original https://blog.zabbix.com/handy-tips-22-deploying-zabbix-in-the-aws-cloud-platform/19343/

Deploy a production-ready Zabbix instance in the AWS cloud platform with just a few clicks.

With a major paradigm shift to cloud IT infrastructures, many organizations opt to migrate their on-premises systems to the cloud. Zabbix provides official cloud images for the most popular cloud vendors, including the AWS cloud platform.

Deploy the complete Zabbix infrastructure in AWS:

  • Deploying a fully functional environment takes less than 5 minutes
  • Select between multiple geographical regions

  • Select the EC2 Instance type best fit for your Zabbix workloads
  • Perfect for both Q/A and Production environments

Check out the video to learn how to deploy Zabbix in AWS.

How to deploy a Zabbix instance in AWS:
 
  1. Open the Zabbix Cloud Images page and select the AWS Zabbix server image
  2. Click Continue to Subscribe and subscribe to use the image
  3. Read the terms and conditions and click Continue to Configuration
  4. Select the Region in which you wish to deploy a Zabbix instance
  5. Select the launch options and the EC2 instance Type
  6. Select a VPC, a subnet, a Security group, and a key pair
  7. Make sure that the selected security group allows traffic through ports 10051, 22 and 443
  8. Press Launch to launch the instance
  9. Check the instance address and connect to the instance 
  10. Copy the initial frontend username and password
  11. Sign in to the frontend with your credentials

Tips and best practices:
  • The initial frontend password can be obtained by connecting to the instance terminal
  • By default, the Zabbix frontend uses the UTC timezone
  • The frontend timezone can be changed by editing the php_value[date.timezone] variable in /etc/php-fpm.d/zabbix.conf and restarting the php-fpm process
  • The MySQL root password is stored in /root/.my.cnf configuration file

The post Handy Tips #22: Deploying Zabbix in the AWS cloud platform appeared first on Zabbix Blog.

Dynamically personalize your in-product user experience using Amazon Pinpoint in-app messaging

Post Syndicated from Pavlos Ioannou Katidis original https://aws.amazon.com/blogs/messaging-and-targeting/dynamically-personalize-your-in-product-user-experience-using-amazon-pinpoint-in-app-messaging/

Many businesses today struggle to align out-of-product messaging, through channels such as email and SMS, with in-product messaging shown when a user is within a mobile or web application. A business might present one message to a user through a targeted email, but once the user visits the application they are presented with different messaging. This creates confusion for the user and reduces the chances of them performing a high-value action, such as purchasing a discounted product. Customers can get around this by hard-coding certain messages into their application; however, this is time-consuming for development teams and slower to implement, as it requires a new release of the mobile or web client.

Amazon Pinpoint in-app messaging allows customers to create, target, and display in-product messages to users dynamically, without the need to update client-side code after the initial implementation. This allows a non-technical persona, such as a marketer, to modify the application experience and target user messaging independently of a development team. It also allows the in-product messaging to share the same user targeting as the out-of-app messaging. This creates consistent user messaging and increases the chance that a user performs a high-value action.

This blog outlines how to create in-app endpoints, segments, and campaigns, and then how to fetch in-app messages, implement simple logic to control message prioritization and message caps, and listen for events in order to show the message at the desired moment.

Solution Overview

Assume you are a retailer and want to display a banner with a promotion to all customers with a recent purchase over $500 when they launch the application. To deliver the above experience using the in-app messaging channel, you will need to create a dynamic customer segment where User.UserAttribute.LastPurchaseValue > $500, design an in-app message template with a call-to-action to claim the promotion, and create an in-app campaign. The in-app campaign will be triggered based on the customer event app_launch and only for customers who belong to the dynamic segment created above. To render the message and send in-app message engagement events back to Amazon Pinpoint, you will need to go through a one-time setup that is explained in a later section of this blog. You can then monitor your in-app campaign performance across different metrics using the Amazon Pinpoint campaign analytics dashboard.

In-app channel implementation can differ depending on the use case and requirements. The creation of customer segments, message templates, and campaigns can be done either via the Amazon Pinpoint console or programmatically using the Amazon Pinpoint APIs. The retrieval and rendering of in-app messages and the recording of engagement events can either be built and managed by you or handled by AWS Amplify.

In the following sections you will be introduced to the seven components of the in-app channel and how they operate together:

  • Step 1: Creating in-app endpoints & segment
  • Step 2: Creating an in-app message template
  • Step 3: Creating an in-app campaign
  • Step 4: Querying available in-app messages for an Amazon Pinpoint customer
  • Step 5: Rendering in-app messages
  • Step 6: Recording in-app events
  • Step 7: In-app message display logic using SessionCap, DailyCap, TotalCap

Prerequisites

For this blog post, you should have the following prerequisites:

Step 1: Creating an Amazon Pinpoint customer segment

In Amazon Pinpoint, users can have one or more endpoints. An endpoint describes a unique address, such as an email or mobile number. Similar to other Amazon Pinpoint channels, you need to create or import in-app endpoints with Channel = IN_APP. To retrieve in-app messages for a user, you have to use their IN_APP endpoint id. Note that the Address is not a required field for in-app and can be left blank.

  1. Copy the text below and save it as CSV in your computer
    Id,ChannelType,Address,User.UserId,OptOut
    111,IN_APP,123,userid1,NONE
  2. Navigate to the Amazon Pinpoint console
  3. Select the Amazon Pinpoint project that you want to set up the in-app channel
  4. Navigate to the Segments’ section
  5. Choose Import a segment
  6. Select Upload files from your computer as Import method
  7. Select Choose files and find the CSV file you created in step 1
  8. Choose Create segment
  9. Navigate to AWS Cloudshell console and wait till the terminal loads
  10. Replace <Application id> with your Amazon Pinpoint application id in the following command aws pinpoint get-endpoint --application-id <Application id> --endpoint-id 111
  11. Execute the command in step 10 by pasting it in the AWS CloudShell terminal and press Enter. You should be able to see a response similar to the one below


Step 2: Creating an in-app Message Template

In-app message templates contain a variety of fields, some of which offer a choice from pre-defined values, such as Header alignment, and others free text, such as Message. The end result is a banner that includes a Header, Message, Image, Button(s), and Custom data, all of which are fully customizable. While building an in-app template, you can preview the banner across Phone, Tablet, and Browser. This preview is for reference purposes only, as the rendering can vary according to the end user's device as well as your preference on how to render it.

Note: The message template for in-app currently doesn’t support message helpers for personalization but it is a feature the Amazon Pinpoint product team is exploring.

  1. Navigate to Message templates
  2. Select Create template and choose In-app messaging as Channel
  3. Type my_first_in-app_message_template as Template name
  4. Complete the remaining template sections as per your message requirements
  5. Select Create

Step 3: Creating an in-app Campaign

A campaign is a messaging initiative that engages a specific audience segment. A campaign sends tailored messages according to a schedule or customer event that you define.

  1. Navigate to your Amazon Pinpoint project and select Campaigns and Create a campaign
  2. Type my_first_in-app_campaign as Campaign name
  3. Select Standard campaign as Campaign type and In-app messaging as Channel
  4. Select Very important for Set prioritization. This configuration is specific to the in-app channel and it helps you identify the most important in-app message for an endpoint
  5. Select Next and choose the segment in-app-segment from the dropdown. This should be an imported segment that you created in Step 1: Creating an Amazon Pinpoint customer segment. The Segment estimate should show 1 endpoints
  6. Select Next and choose the in-app message template with the name my_first_in-app_message_template, then select Next
  7. An in-app campaign needs to have a Trigger event, which will determine when the in-app message will be displayed. You can add event Attributes and/or Metrics to make it more specific. To learn how to record events with Amazon Pinpoint visit Reporting events in your application. If you currently do not record any events in your Amazon Pinpoint project type test_event as Trigger events
  8. Select Start and End date and time for Campaign dates. Note that in-app campaigns need to start at least 15 minutes later from the time of publishing
  9. In the Edit campaign settings section you will find the fields that specify the Maximum number of session messages viewed per endpoint (SessionCap), Maximum number of daily messages viewed per endpoint (DailyCap), and Maximum number of messages viewed per endpoint (TotalCap). These values indicate how many times the in-app message can be displayed to the customer for that in-app campaign within a session, per day, and in total, respectively. In all three campaign setting fields enter the number 10 and select Override project-level setting where applicable. Set prioritization, Trigger events, and Caps are part of the in-app message payload that you receive when calling Amazon Pinpoint's in-app messages REST API operation. You will use this information to decide whether or not to render that in-app message.
  10. Select Next, scroll down, and select Launch campaign

Step 4: Querying available in-app messages for an Amazon Pinpoint customer

To retrieve in-app messages for an Amazon Pinpoint customer, you will need to have their IN_APP endpoint id and either use the In-app messages REST API operation, one of the AWS SDKs that support Amazon Pinpoint, AWS Command Line Interface or AWS Amplify.

Note: AWS Amplify manages on your behalf the in-app messages request, rendering and tracking, thus if you are using AWS Amplify for Amazon Pinpoint in-app channel the steps below are not required.

In the request body you need to specify the IN_APP endpoint id. If there are any available in-app messages for that endpoint id, the response will contain a JSON object with the top ten active in-app messages based on their priority (ten in-app messages per response is a hard limit). Loop through the in-app messages and identify the one that meets the criteria based on the Trigger event and Prioritization.
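A simplified sketch of that selection logic is shown below. The response field names follow my reading of the GetInAppMessages API response and should be verified against the API reference; treating a lower Priority value as more important is an assumption.

import boto3

pinpoint = boto3.client("pinpoint")

def best_message_for_event(application_id, endpoint_id, event_type):
    """Fetch in-app messages for an endpoint and pick the highest-priority one for an event."""
    response = pinpoint.get_in_app_messages(
        ApplicationId=application_id,
        EndpointId=endpoint_id,
    )
    campaigns = response["InAppMessagesResponse"].get("InAppMessageCampaigns", [])
    # Keep only campaigns whose trigger event matches the event that just occurred
    matching = [
        c for c in campaigns
        if event_type in c.get("Schedule", {})
                         .get("EventFilter", {})
                         .get("Dimensions", {})
                         .get("EventType", {})
                         .get("Values", [])
    ]
    # Assumption: a lower Priority value means a more important message
    return min(matching, key=lambda c: c.get("Priority", 5), default=None)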

  1. Navigate to the AWS CloudShell console
  2. Replace <Application id> with your Amazon Pinpoint application id in the following command aws pinpoint get-in-app-messages --application-id <Application id> --endpoint-id 111
  3. Execute the command in step 2 by pasting it in the AWS CloudShell terminal and press Enter. You should be able to see a response similar to the one below

The response should contain only one in-app campaign. You can see all the in-app message template data and campaign configuration are present in the response.

Note: Campaigns that have passed their end date, or have reached their daily or total cap limit, won't show in the response. If the response contains more than one in-app message with the same priority and none of them has exceeded its caps, you can use the in-app campaign start date to evaluate which one to display.

It is recommended to retrieve the in-app messages once per session and store them locally. That way, for every event the customer triggers in your mobile/web app, you check against the in-app messages stored locally instead of performing additional calls to Amazon Pinpoint. This approach decreases the in-app channel cost, as you pay per request.

You can perform the operation of retrieving in-app messages for an Amazon Pinpoint customer either client side or server side. The server side can be implemented using the architecture illustrated below, which utilizes Amazon API Gateway and AWS Lambda to create a development-framework-agnostic approach. Furthermore, Amazon API Gateway offers a wide variety of authentication and authorization mechanisms.

The server-side architecture depicted below doesn't cover the use case of offline customers. If this is a requirement, it is recommended to store in-app messages locally and fetch them from local storage when the device doesn't have internet connectivity. Once the device is connected back to the internet, you can retrospectively send any in-app related events.

Note: If you are using AWS Amplify, it will retry to publish customer offline events that occurred once the device gets back online.

Step 5: Rendering in-app messages

Render the in-app messages yourself based on the in-app message API response or use AWS Amplify which will render it on your behalf. AWS Amplify allows you to provide your own In-App Messaging UI component to override the default Amplify provided UI.

Step 6: Recording in-app events

Measuring in-app campaigns’ performance is based on four metrics:

  • Message displayed: a message has been displayed to an end user
  • Message dismissed: a user has dismissed a message
  • Message clicked: a user has clicked through a message
  • Any event type: Any event that a user can trigger on the mobile or web app

Fire the above events either from client or server side as Amazon Pinpoint custom events. Amazon Pinpoint custom events can be recorded using put_events REST API operation or AWS SDKs that support Amazon Pinpoint.

Note: If you are using AWS Amplify, the in-app events will be recorded automatically

To have these events recorded under Amazon Pinpoint Campaign deliver metrics dashboard, you have to use the following EventType names:

  • Message displayed: _inapp.message_displayed
  • Message dismissed: _inapp.message_dismissed
  • Message clicked: _inapp.message_clicked
  • Any event type: No specific name is required

In addition to the EventType, a few other fields are required in order to attribute these events to the correct in-app campaign. Within the event attributes’ object of the request payload, the fields campaign_id and delivery_type must be provided. Campaign_id should match the InApp campaign_id, while the delivery_type should be IN_APP_MESSAGE. Additionally, the treatment_id is necessary if you are running an A/B test.

Note: If you do not use the above event names and attributes, you won’t see any events under Campaign delivery metrics and Campaign engagement rates on the Amazon Pinpoint console.
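For reference, a minimal boto3 sketch that records a "message displayed" event with the required attributes might look like this; the function name is an assumption, and the application, endpoint, and campaign IDs are placeholders you would supply.

import datetime
import uuid
import boto3

pinpoint = boto3.client("pinpoint")

def record_message_displayed(application_id, endpoint_id, campaign_id):
    """Record an in-app 'message displayed' engagement event for a campaign."""
    pinpoint.put_events(
        ApplicationId=application_id,
        EventsRequest={
            "BatchItem": {
                endpoint_id: {
                    "Endpoint": {},  # existing endpoint; no attribute updates needed here
                    "Events": {
                        str(uuid.uuid4()): {
                            "EventType": "_inapp.message_displayed",
                            "Timestamp": datetime.datetime.utcnow().isoformat(),
                            "Attributes": {
                                "campaign_id": campaign_id,
                                "delivery_type": "IN_APP_MESSAGE",
                            },
                        }
                    },
                }
            }
        },
    )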

Step 7: In-app message display logic using SessionCap, DailyCap and TotalCap

Message display logic refers to the logic that stores and assesses the number of times a user has seen or interacted with the in-app message. Amazon Pinpoint calculates the DailyCap and TotalCap as long as you record the _inapp.message_displayed event or use AWS Amplify. For the SessionCap, you need to count _inapp.message_displayed events locally in your mobile/web application, unless you are using AWS Amplify.

Note: When retrieving the in-app messages from Amazon Pinpoint, the payload contains the remaining number of times you can display the in-app message daily & total.
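If you are not using AWS Amplify, a simple local cap check before rendering might look like the sketch below. The in-memory dictionary is an assumption for brevity (a real app would persist counters), and it assumes non-zero caps were configured, as in the campaign created earlier; per the note above, DailyCap and TotalCap in the retrieved payload are treated as remaining counts.

# In-memory display counters for the current session; a real app would persist
# daily/total counts (Amazon Pinpoint also returns the remaining daily/total caps).
session_display_counts = {}

def can_display(campaign):
    campaign_id = campaign["CampaignId"]
    shown_this_session = session_display_counts.get(campaign_id, 0)
    return (
        shown_this_session < campaign.get("SessionCap", 0)
        and campaign.get("DailyCap", 0) > 0
        and campaign.get("TotalCap", 0) > 0
    )

def mark_displayed(campaign):
    campaign_id = campaign["CampaignId"]
    session_display_counts[campaign_id] = session_display_counts.get(campaign_id, 0) + 1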

Conclusion

This post walks you through how to configure Amazon Pinpoint to send in-app messages to your customers when browsing your mobile / web application. Using this Amazon Pinpoint channel, you can now:

  • Create in-app segments, message templates and campaigns
  • Retrieve in-app messages per user
  • Render in-app messages
  • Record customer engagement data with the in-app message

Related links

To learn more about the technologies or features used to create this solution, explore the following pages:

Deliverability Sessions: Managing Large Volume Spikes in Email

Post Syndicated from Matt Strzelecki original https://aws.amazon.com/blogs/messaging-and-targeting/deliverability-sessions-managing-large-volume-spikes-in-email/

Introduction:
In an ideal world of email deliverability, email is sent on a regular cadence to a normalized list of subscribers and recipient email addresses with no major changes in pattern. Typically the volume, list members, and content are relatively the same, and mailbox providers (such as Gmail) begin to expect that schedule and those volumes. Oftentimes, however, marketers are tasked with sending out campaigns (both marketing and transactional) with little time to prepare and even less time to ramp up to a normalized schedule. This can create not only a short-term deliverability problem but potentially a long-term deliverability problem, as your sender reputation may suffer as a result of big changes to volume and cadence. This blog provides some recommendations and points to consider that will give your messages a better chance at inbox placement and thus engagement.

What Internet Service Providers (ISP)/Mailbox Providers (MP) Expect:
As email senders, we are responsible for understanding and adhering to the requirements of the recipient domains we are attempting to send messages to. For example, if you are sending a good portion of your emails to Gmail or Yahoo, you should understand what each mailbox provider expects in terms of warming up, sending throughput, and general deliverability advice. Examples of these resources can be found here for Gmail and Yahoo. The important thing here is that while general email practices are similar, each mailbox provider may have specific requirements or recommendations for delivering to their users. The mailbox providers' top priorities are to (1) deliver wanted messages to their users and (2) block unwanted messages from getting to their users. So one of the keys to developing a good approach, even with spikes in sending, is to understand your destination ISPs/MPs and make sure you're following the recommended best practices from those ISPs/MPs.

Ultimately you need to build trust with the ISPs/MPs in order to successfully deliver to them. A big part of it is understanding what they expect but the following key areas will also provide valuable recommendations for approaching an email program with variant timing and volumes. These topics include: List hygiene, bounce/complaint management, list segmentation/stacking & scheduling, and IP/Domain environment.

List Hygiene and Management:
The next area of focus is your list and how you manage it. Building a list is hard and takes a lot of time and effort, but it is important to build your list(s) organically. This means that you only send to people who have explicitly signed up for whatever it is you're planning on sending them. The goal here is to honor your users' preferences and, at times, limit the volume of messages if they are unresponsive.

When a recipient becomes unresponsive over a longer period of time (say, over one year), a few things happen if you continue to send email to those addresses. The first is that your user engagement goes down, as you are not getting opens for any of those messages. This can be problematic, especially as mailbox providers such as Gmail shift to more machine learning and AI-driven filtering decisions. The second, which often happens if recipients are purposefully ignoring your messages and you keep sending, is that at some point they may select all the messages and flag them as spam, inflating your spam feedback numbers. The third is that ISPs/MPs start to see lower overall user engagement, which reduces your sender reputation score with them, and if your spam rate spikes as well, you'll be certain to have deliverability issues.

The best way to manage your list is to be as targeted as possible in terms of your brands, offerings, and what the user initially signed up to receive (or implicitly confirmed through a purchase or transaction). Understand that if a user is not engaging with your message it is best to stop sending that specific series and look at putting them into a win-back style campaign in which you make one to a few more attempts to connect with the recipient and confirm their preferences and opt-in status to those mailing lists.

On large-volume sending days, you still need to honor previous unsubscribes and spam complaints by removing those addresses from your active mailing lists and not sending to addresses that have explicitly opted out. Additionally, large spikes in bounced email addresses (invalid addresses) will also negatively impact your sender reputation, so be sure to keep your suppression list(s) and bounce management current.

More information on strategies for list management are available in this SES Blog post:
https://aws.amazon.com/blogs/messaging-and-targeting/strategies-for-list-management-with-amazon-pinpoint-and-amazon-simple-email-service/

IP/Domain Reputation:
Building and maintaining IP and domain reputation is extremely important for consistent deliverability, and for having a good enough sender reputation to absorb a spike in traffic without immediately running into deliverability issues. The best way to maintain good sender reputation (both IP and domain) is high user engagement (unique open rate) and low complaints. High user engagement means users are interacting positively with your messages at a high rate, primarily identified by opens but also supported by clicks. The rate can vary depending on industry, frequency of sends, types of messages, and content, but if you're getting around a 20% unique open rate, you have high user engagement and are doing well with your list. Complaints can hurt deliverability quickly because they are instant feedback to ISPs/MPs, and if the complaint rate is high enough, it is a major trigger for the ISP/MP to react negatively, which typically results in messages being placed directly into the spam folder, throttled (deferred), and/or blocked outright.

List Segmenting and Scheduling:
When it comes to a large volume spike in messaging for your email program, list segmenting and scheduling are extremely important. Typically you want to avoid a large spike in volume, but at times sending one is mandatory. To do so, you need to split out your segments by likely best performance. You want to send first to the subscribers most likely to engage positively with the message – for instance, new signups, recipients who recently engaged with a message, and long-term engagers (multiple opens within the past 30 days, for example). This does two things. First, it gives those most likely to engage positively the best opportunity to get the message in their inbox. Second, as you get better initial engagement on your first few segments, your sender reputation will continue to improve, and the next segments will have a much better chance of also hitting the inbox as a result of the good performance from the first segments.

When you need to send a large volume spike, utilize as much scheduling flexibility as you have available. If you have two days to send the spike, use the full two days and spread the segments out. This helps reduce the size of your message blasts to any single ISP/MP. In addition, you can monitor the performance of your segments, which will give you a better idea of where in your list the ROI might not be worth the risk. For example, toward the end of your list it may not be worth sending to people who have never opened a message in the past year, because the risk of a complaint, bounce, or unsubscribe may outweigh the benefit of a potential open or click.
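As a rough illustration of this approach (not a prescriptive implementation), the sketch below splits a recipient list into engagement-based segments and spreads the sends evenly across a two-day window; the segment names, addresses, and send step are all assumptions for the example.

from datetime import datetime, timedelta

# Hypothetical segments, ordered from most to least likely to engage
segments = {
    "new_signups": ["a@example.com", "b@example.com"],
    "engaged_last_30_days": ["c@example.com"],
    "engaged_last_90_days": ["d@example.com", "e@example.com"],
}

def schedule_sends(segments, start, window_hours=48):
    # Spread the ordered segments evenly across the available window
    slot = timedelta(hours=window_hours / max(len(segments), 1))
    schedule = []
    for i, (name, recipients) in enumerate(segments.items()):
        schedule.append((start + i * slot, name, recipients))
    return schedule

for send_at, name, recipients in schedule_sends(segments, datetime.utcnow()):
    # Replace this print with your actual campaign/send call
    print(f"{send_at.isoformat()} send segment '{name}' to {len(recipients)} recipients")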

Authentication:
There are two authentication mechanisms for email: SPF and DKIM. SPF (Sender Policy Framework) is a simple TXT record within the DNS of the sending domain that lists the IP addresses that messages should always come from, along with a policy indicating what to do with messages that do not come from those sources: reject the message, accept all messages, or accept the message but place it in the spam folder. DKIM (DomainKeys Identified Mail) is a cryptographic signature within the message header used to validate that the message came from the purported source. Most mailbox providers require both authentication mechanisms to exist before passing the message on to their users.

In addition to these two authentication mechanisms there is a reporting mechanism called DMARC (Domain-based Message Authentication, Reporting and Conformance). DMARC uses the SPF and DKIM protocols to indicate to recipient mail servers that messages are protected by SPF and DKIM and how to handle messages based on the alignment of these two protocols. In addition to defining a delivery policy, DMARC allows the recipient to send reports back to the sender indicating a pass or fail of the DMARC evaluation. This is a good mechanism for brands to see whether their brand is being spoofed by bad actors and/or whether they have authentication issues for various sources of their messages.

Authentication is not just suggested, it is required. Passing SPF and DKIM is critical for message delivery. DMARC allows senders to additionally impose policies based on these two heavily used email authentication protocols, and it provides insight into other sources that may be purporting to send on behalf of your brand.

More information on these protocols can be found here:
SPF: https://docs.aws.amazon.com/ses/latest/DeveloperGuide/send-email-authentication-spf.html
DKIM: https://docs.aws.amazon.com/ses/latest/DeveloperGuide/send-email-authentication-dkim.html
DMARC: https://docs.aws.amazon.com/ses/latest/DeveloperGuide/send-email-authentication-dmarc.html
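To make these records concrete, the following is a minimal sketch (using boto3 and Amazon Route 53) that publishes example SPF and DMARC TXT records; the domain, hosted zone ID, policy, and reporting address are placeholders, and if you use a custom MAIL FROM domain with Amazon SES the SPF record belongs on that subdomain rather than your root domain.

import boto3

route53 = boto3.client("route53")

HOSTED_ZONE_ID = "Z0000000000000EXAMPLE"  # placeholder hosted zone ID

def upsert_txt(name, value):
    # TXT record values must be wrapped in double quotes
    route53.change_resource_record_sets(
        HostedZoneId=HOSTED_ZONE_ID,
        ChangeBatch={
            "Changes": [{
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": name,
                    "Type": "TXT",
                    "TTL": 300,
                    "ResourceRecords": [{"Value": f'"{value}"'}],
                },
            }]
        },
    )

# Example SPF record for a subdomain that sends through Amazon SES
upsert_txt("mail.example.com.", "v=spf1 include:amazonses.com ~all")

# Example DMARC record with a quarantine policy and aggregate reports
upsert_txt("_dmarc.example.com.", "v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@example.com")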

Final Thoughts:
Even though you will sometimes be forced to go off schedule (or a non-normalized schedule may be the norm for you), you must still try to align with ISP/MP best practices when possible. The goal is to build and maintain trust not only with the ISPs and mailbox providers, but more importantly with your recipients. Your recipients are the key to your email deliverability success: send them what they want and honor their opt-outs and preference center updates, and you will be on the right track for good email deliverability.

Snaring the Bad Folks

Post Syndicated from Netflix Technology Blog original https://netflixtechblog.com/snaring-the-bad-folks-66726a1f4c80

Project by Netflix’s Cloud Infrastructure Security team (Alex Bainbridge, Mike Grima, Nick Siow)

Cloud security is a hard problem, but an even harder one is cloud security at scale. In recent years we’ve seen several cloud focused data breaches and evidence shows that threat actors are becoming more advanced with their techniques, goals, and tooling. With 2021 set to be a new high for the number of data breaches, it was plainly evident that we needed to evolve how we approach our cloud infrastructure security strategy.

In 2020, we decided to reinvent how we handle cloud security findings by redefining how we write and respond to cloud detections. We knew that given our scale, we needed to rely heavily on automations and that we needed to build our solutions using battle tested scalable infrastructure.

Introducing Snare

Snare Logo

Snare is our Detection, Enrichment, and Response platform for handling cloud security related findings at Netflix. Snare is responsible for receiving millions of records a minute, analyzing, alerting, and responding to them. Snare also provides a space for our security engineers to track what’s going on, drill down into various findings, follow their investigation flow, and ensure that findings are reaching their proper resolution. Snare can be broken down into the following parts: Detection, Enrichment, Reporting & Management, and Remediation.

Snare Finding Lifecycle

Overview

Snare was built from the ground up to handle Netflix's massive scale. We currently process tens of millions of log records every minute and analyze these events to perform in-house custom detections. We collect findings from a number of sources, which include AWS Security Hub, AWS Config Rules, and our own in-house custom detections. Once ingested, findings are enriched and processed with additional metadata collected from Netflix's internal data sources. Finally, findings are checked against suppression rules and routed to our control plane for triaging and remediation.

Where We Are Today

We've developed, deployed, and operated Snare for almost a year, and in that time we've seen tremendous improvements in how we handle our cloud security findings. A number of findings are auto-remediated; others trigger Slack alerts to loop in the on-call engineer, who triages via the Snare UI. One major improvement was a direct time savings for our detection squad. Using Snare, we were able to perform more granular tuning and aggregation of findings, leading to an average 73.5% reduction in false positive finding volume across our ingestion streams. With this additional time, we were able to focus on new detections and new features for Snare.

Speaking of new detections, we’ve more than doubled the number of our in-house detections, and onboarded several detection solutions from security vendors. The Snare framework enables us to write detections quickly and efficiently with all of the plumbing and configurations abstracted away from us. Detection authors only need to be concerned with their actual detection logic, and everything else is handled for them.

Simple Snare Root User Detection

As for security vendors, we’ve most notably worked with AWS to ensure that services like GuardDuty and Security Hub are first class citizens when it comes to detection sources. Integration with Security Hub was a critical design decision from the start due to the high amount of leverage we get from receiving all of the AWS Security findings in a normalized format and in a centralized location. Security Hub has played an integral role in our platform, and made evaluations of AWS security services and new features easy to try out and adopt. Our plumbing between Security Hub and Snare is managed through AWS Organizations as well as EventBridge rules deployed in every region and account to aid in aggregating all findings into our centralized Snare platform.

High Level Security Service Plumbing
Example AWS Security Finding from our testing/sandbox account In Snare UI
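As a simplified, hypothetical sketch of this kind of plumbing (not Netflix's actual configuration), an EventBridge rule in a member account and region can match imported Security Hub findings and forward them to a central event bus; the bus ARN, role ARN, and rule name below are placeholders.

import json
import boto3

events = boto3.client("events")

# Placeholder ARNs for a central event bus and the role allowed to put events on it
CENTRAL_BUS_ARN = "arn:aws:events:us-east-1:111122223333:event-bus/central-findings"
FORWARDING_ROLE_ARN = "arn:aws:iam::444455556666:role/forward-findings-role"

# Match all findings imported into Security Hub in this account/region
events.put_rule(
    Name="forward-securityhub-findings",
    State="ENABLED",
    EventPattern=json.dumps({
        "source": ["aws.securityhub"],
        "detail-type": ["Security Hub Findings - Imported"],
    }),
)

# Forward matching events to the centralized bus for downstream processing
events.put_targets(
    Rule="forward-securityhub-findings",
    Targets=[{
        "Id": "central-bus",
        "Arn": CENTRAL_BUS_ARN,
        "RoleArn": FORWARDING_ROLE_ARN,
    }],
)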

One area where we are investing heavily is our automated remediation potential. We've explored a few different options, ranging from fully automated remediations to manually triggered remediations, as well as automated playbooks for additional data gathering during incident triage. We decided to employ AWS Step Functions as our execution environment due to the unique DAGs we could build and the simple "wait"/"task token" functionality, which allows us to involve humans when necessary for approval or input.

Building on top of Step Functions, we created a four-step remediation process: pre-processing, decision, remediation, and post-processing. Pre/post-processing can be used for managing out-of-band resource checks, or any work that needs to be done to ensure a successful remediation. The decision step is used to perform a final pre-flight check before remediation; this can involve reaching out to a human, verifying the resource is still around, and so on. The remediation step is where we perform the actual remediation. We've used this with a great deal of success, with infrastructure-wide misconfigured resources being automatically fixed in near real time, and it has enabled the creation of new fully automated incident response playbooks. We're still exploring new ways we might use this, and we are excited for how our approach might evolve in the near future.

Step Function DAG for S3 Public Access Block Remediation

Diagram from a remediation to enable S3’s public access block on a non-compliant bucket. Each choice stage allows for dynamic routing to a variety of different stages based on the output of the previous function. Wait stages are used when human intervention/approval is needed.
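To illustrate the pattern (this is a sketch, not Netflix's actual state machine), the following builds a minimal four-state Amazon States Language definition in which the decision step uses the wait-for-task-token integration so a human or external system can approve before remediation proceeds; the Lambda function ARNs and IAM role are placeholders.

import json
import boto3

sfn = boto3.client("stepfunctions")

# Placeholder Lambda ARNs for each stage of the remediation
PRE = "arn:aws:lambda:us-east-1:111122223333:function:remediation-preprocess"
DECIDE = "arn:aws:lambda:us-east-1:111122223333:function:remediation-decision"
FIX = "arn:aws:lambda:us-east-1:111122223333:function:remediation-fix"
POST = "arn:aws:lambda:us-east-1:111122223333:function:remediation-postprocess"

definition = {
    "StartAt": "PreProcessing",
    "States": {
        "PreProcessing": {"Type": "Task", "Resource": PRE, "Next": "Decision"},
        "Decision": {
            # Wait-for-task-token integration: the decision Lambda (or a human
            # workflow it kicks off) must call SendTaskSuccess/SendTaskFailure
            "Type": "Task",
            "Resource": "arn:aws:states:::lambda:invoke.waitForTaskToken",
            "Parameters": {
                "FunctionName": DECIDE,
                "Payload": {"taskToken.$": "$$.Task.Token", "finding.$": "$"},
            },
            "Next": "Remediate",
        },
        "Remediate": {"Type": "Task", "Resource": FIX, "Next": "PostProcessing"},
        "PostProcessing": {"Type": "Task", "Resource": POST, "End": True},
    },
}

sfn.create_state_machine(
    name="example-remediation",
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::111122223333:role/example-sfn-role",  # placeholder role
)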

Extensible Learnings

We've come a long way in our journey, and we've had numerous learning opportunities that we wanted to collect and share. Hopefully, we've already made the mistakes and learned from those experiences so that you don't have to.

Information is Key

Home grown context and metadata streams are invaluable for a detection and response program. By uniting detections and context, you’re able to unlock a new world of possibilities for reducing false positives, creating new detections that rely on business specific context, and help better tailor your severities and automated remediation decisions based on your desired risk appetite. A common theme we’ve often encountered is the need to bring additional context throughout various stages of our pipeline, so make sure to plan for that from the get-go.

Step Functions for Remediations

Step Functions provides a highly extensible and unique platform for creating remediations. Utilizing the AWS CDK, we were able to build a platform that lets us easily roll out new remediations. While creating our remediation platform, we explored SSM Automation Runbooks. While SSM Automation Runbooks have great potential for remediating simple issues, we found they weren't flexible enough to cover a wide spread of our needs, nor did they offer some of the more advanced features we were looking for, such as reaching out to humans. Step Functions gave us the right amount of flexibility, control, and ease of use to be a great asset for the Snare platform.

Closing Thoughts

We’ve come a long way in a year, and we still have a number of interesting things on the horizon. We’re looking at continuing to create new, more advanced features and detections for Snare to reduce cloud security risks in order to keep up with all of the exciting things happening here at Netflix. Make sure to check out some of our other recent blog posts!

Special Thanks

Special thanks to everyone who helped to contribute and provide feedback during the design and implementation of Snare. Notably Shannon Morrison, Sapna Solanki, Jason Schroth from our partner team Detection Engineering, as well as some of the folks from AWS — Prateek Sharma & Ely Kahn. Additional thanks to the rest of our Cloud Infrastructure Security team (Hee Won Kim, Joseph Kjar, Steven Reiling, Patrick Sanders, Srinath Kuruvadi) for their support and help with Snare features, processes, and design decisions!


Snaring the Bad Folks was originally published in Netflix TechBlog on Medium, where people are continuing the conversation by highlighting and responding to this story.

How to set up Amazon Quicksight dashboard for Amazon Pinpoint and Amazon SES engagement events

Post Syndicated from satyaso original https://aws.amazon.com/blogs/messaging-and-targeting/how-to-set-up-amazon-quicksight-dashboard-for-amazon-pinpoint-and-amazon-ses-events/

In this post, we will walk through using Amazon Pinpoint and Amazon Quicksight to create customizable messaging campaign reports. Amazon Pinpoint is a flexible and scalable outbound and inbound marketing communications service that allows customers to connect with users over channels like email, SMS, push, or voice. Amazon QuickSight is a scalable, serverless, embeddable, machine learning-powered business intelligence (BI) service built for the cloud. This solution allows event and user data from Amazon Pinpoint to flow into Amazon Quicksight. Once in Quicksight, customers can build their own reports that show campaign performance at a more granular level.

Engagement Event Dashboard

Customers want to view the results of their messaging campaigns in ever-increasing levels of granularity and ensure their users see value from the email, SMS, or push notifications they receive. Customers also want to analyze how different user segments respond to different messages, and how to optimize subsequent user communication. Previously, customers could only view this data in Amazon Pinpoint analytics, which offers robust reporting on events, funnels, and campaigns, but does not allow analysis across these different parameters or the building of custom reports, for example showing campaign revenue across different user segments, or showing what events were generated after a user viewed a campaign in a funnel analysis. Customers would need to extract this data themselves and do the analysis in Excel.

Prerequisites

  • The Digital user engagement event database solution must be set up first.
  • Customers should be prepared to purchase Amazon Quicksight, which has its own costs that are not covered by Amazon Pinpoint pricing.

Solution Overview

This solution uses the Athena tables created by the Digital user engagement events database solution. The AWS CloudFormation template given in this post automatically sets up the different architecture components to capture detailed notifications about Amazon Pinpoint engagement events and expose them in Amazon Athena in the form of Athena views. You still need to manually configure the Amazon Quicksight dashboards to link to these newly generated Athena views. Please follow the steps below for further information.
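If you prefer to script the stack deployment described in the steps below rather than using the console, a minimal sketch with boto3 might look like the following; the stack name, bucket, and database values are placeholders, and the parameter keys correspond to those called out in Step 2.

import boto3

cfn = boto3.client("cloudformation")

# Values from the Event database solution outputs (placeholders here)
params = {
    "EventAthenaDatabaseName": "due_eventdb",
    "S3DataLogBucket": "my-event-log-bucket",
}

with open("Event-dashboard.yaml") as f:
    template_body = f.read()

cfn.create_stack(
    StackName="pinpoint-event-dashboard",
    TemplateBody=template_body,
    Parameters=[{"ParameterKey": k, "ParameterValue": v} for k, v in params.items()],
    Capabilities=["CAPABILITY_IAM"],  # assumption: the template creates IAM resources
)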

Use case(s)

The event dashboard solution supports the following use cases:

  • Deep dive into engagement insights (e.g. SMS events, email events, campaign events, journey events).
  • The ability to view engagement events at the individual user level.
  • Data/process mining to turn raw event data into useful marketing insights.
  • User engagement benchmarking and end-user event funneling.
  • Computing campaign conversions (post-campaign user analysis to show campaign effectiveness).
  • Building funnels that show user progression.

Getting started with solution deployment

Prerequisite tasks to be completed before deploying the logging solution

Step 1 – Create an AWS account and Amazon Pinpoint project, and implement the Event Database solution.
As part of this step, customers need to implement the DUE event database solution, because the current solution (the DUE event dashboard) is an extension of it. The basic assumption here is that the customer has already configured an Amazon Pinpoint project or Amazon SES within the required AWS region before implementing this step.

The steps required to implement an event dashboard solution are as follows.

a/ Follow the steps mentioned in the Event database solution to implement the complete stack. Before installing the complete stack, copy and save the Athena events database name as shown in the diagram. In my case it is due_eventdb. The database name is required as an input parameter for the current Event Dashboard solution.

b/ Once the solution is deployed, navigate to the output page of the CloudFormation stack, then copy and save the following information, which will be required as input parameters in step 2 of the current Event Dashboard solution.

Step 2 – Deploy the CloudFormation template for the Event dashboard solution
This step generates a number of new Amazon Athena views that will serve as data sources for Amazon Quicksight. Continue with the following actions.

  • Download the CloudFormation template ("Event-dashboard.yaml") from AWS samples.
  • Navigate to the CloudFormation page in the AWS console, click "Create stack" in the upper right, and select the option "With new resources (standard)".
  • Leave the "Prerequisite – Prepare template" setting at "Template is ready", and for the "Specify template" option, select "Upload a template file". On the same page, click "Choose file", browse to the "Event-dashboard.yaml" file and select it. Once the file is uploaded, click "Next" and deploy the stack.

  • Enter following information under the section “Specify stack details”:
    • EventAthenaDatabaseName – As mentioned in Step 1-a.
    • S3DataLogBucket- As mentioned in Step 1-b
    • This solution will create 5 additional Athena views:
      • All_email_events
      • All_SMS_events
      • All_custom_events (Custom events can be Mobile app/WebApp/Push Events)
      • All_campaign_events
      • All_journey_events

Step 3 – Create the Amazon Quicksight engagement dashboard
This step walks you through the process of creating an Amazon Quicksight dashboard for Amazon Pinpoint engagement events using the Athena views you created in step 2.

  1. To set up Amazon Quicksight for the first time, please follow this link (this step is not needed if you have already set up Amazon Quicksight). Please make sure you are an Amazon Quicksight administrator.
  2. Go to or search for Amazon Quicksight in the AWS console.
  3. Create a new analysis and then select "New dataset".
  4. Select Athena as the data source.
  5. Next, select which analyses you need for the respective events. This solution provides the option to create 5 different sets of analyses, as mentioned in Step 2: a/ all email events, b/ all SMS events, c/ all custom events (mobile/web app, web push, etc.), d/ all campaign events, and e/ all journey events. Dashboards can be created from a Quicksight analysis and shared among the organization's stakeholders. The following are the steps to create analyses and dashboards for the different types of events.
  6. Email Events –
    • For all email events, name the analysis "All-emails-events" (this can be any customer-preferred name), select the primary Athena workgroup, and then create the data source.
    • Once you create the data source, Quicksight lists all the views and tables available under the specified database (in our case, due_eventdb). Select the email_all_events view as the data source.
    • Select the event data location for the analysis. There are two options: a/ Import to SPICE for quicker analysis, or b/ Directly query your data. Select the preferred option and then click on "Visualize the data".
    • Import to SPICE for quicker analysis – SPICE is the Amazon QuickSight Super-fast, Parallel, In-memory Calculation Engine. It is engineered to rapidly perform advanced calculations and serve data. In Enterprise edition, data stored in SPICE is encrypted at rest. (1 GB of storage is available for free; extra storage is charged separately, please refer to the cost section in this document.)
    • Directly query your data – This option lets Quicksight query the Athena or source database directly (in the current case it is Athena), and Quicksight will not store any data.
    • Now that you have selected a data source, you will be taken to a blank Quicksight canvas (blank analysis page) as shown in the following image. Drag and drop the visualization type you need onto the auto-graph pane. Note that Amazon QuickSight is a business intelligence platform, so customers are free to choose the desired visualization types to observe the individual engagement events.
    • As part of this blog, we show how to create some simple analysis graphs to visualize the engagement events.
    • As an initial step, select the tabular visualization as shown in the image.
    • Select all the event dimensions that you want to include as part of the table on the X axis. An Amazon Quicksight table can be extended to show as many columns as needed; how much data to visualize depends entirely on the business requirements of the marketers.
    • Further filtering on the table can be done using Quicksight filters; you can apply a filter on specific granular values. For example, to filter on the destination email ID: 1/ select the filter from the left-hand menu, 2/ add the destination field as the filtering criterion, 3/ tick the destination value you are filtering on, or search for the destination email ID, and 4/ all the results in the table are further filtered according to that criterion.
    • As a next step, add another visual from the top left corner ("Add -> Add visual"), then select the Donut chart from the visual types pane. Donut charts are well suited for displaying aggregations.
    • Then select "event_type" as the Group to visualize the aggregated events. This helps marketers and business users figure out how many email events occurred and what the aggregated success ratio, click ratio, complaint ratio, bounce ratio, etc. are for the emails/campaigns sent to end users.
    • To create a Quicksight dashboard from the Quicksight analysis, click the Share menu option at the top right corner and select "Publish dashboard". Provide the required dashboard name while publishing the dashboard. The same dashboard can be shared with multiple audiences in the organization.
    • The following is the final version of the dashboard. As mentioned above, Quicksight dashboards can be shared with other stakeholders, and the complete dashboard can also be exported as an Excel sheet.
  7. SMS Events-
    • As shown above, SMS events can be analyzed using Quicksight and dashboards can be created from the analysis. Please repeat all of the sub-steps listed in step 6. The following is a sample SMS dashboard.
  8. Custom Events-
    • After you integrate your application (app) with Amazon Pinpoint, Amazon Pinpoint can stream event data about user activity, different types of custom events, and message deliveries for the app, e.g. Session.start, Product_page_view, _session.stop, etc. Repeat all of the sub-steps listed in step 6 to create a custom events dashboard.
  9. Campaign events
    • As shown before, campaign events can also be included in the same dashboard, or you can create a new dashboard just for campaign events.

Cost for Event dashboard solution
You are responsible for the cost of the AWS services used while running this solution. As of the date of publication, the cost of running this solution with default settings in the US West (Oregon) Region is approximately $65 a month. The cost estimate includes the cost of AWS Lambda, Amazon Athena, and Amazon Quicksight. The estimate assumes querying 1 TB of data in a month, two authors managing Amazon Quicksight every month, four Amazon Quicksight readers viewing the events dashboard an unlimited number of times in a month, and a Quicksight SPICE capacity of 50 GB per month. Prices are subject to change. For full details, see the pricing webpage for each AWS service you will be using in this solution.

Clean up

When you’re done with this exercise, complete the following steps to delete your resources and stop incurring costs:

  1. On the CloudFormation console, select your stack and choose Delete. This cleans up all the resources created by the stack.
  2. Delete the Amazon Quicksight Dashboards and data sets that you have created.

Conclusion

In this blog post, I have demonstrated how marketers, business users, and business analysts can utilize Amazon Quicksight dashboards to evaluate and exploit user engagement data from Amazon SES and Pinpoint event streams. Customers can also utilize this solution to understand how Amazon Pinpoint campaigns lead to business conversions, in addition to analyzing multi-channel communication metrics at the individual user level.

Next steps

The personas for this blog are both the tech team and the marketing analyst team, as it involves a code deployment to create very simple Athena views, as well as the steps to create an Amazon Quicksight dashboard to analyse Amazon SES and Amazon Pinpoint engagement events at the individual user level. Customers may then create their own Amazon Quicksight dashboards to illustrate the conversion ratio and propensity trends in real time by integrating campaign events with app-level events such as purchase conversions, order placement, and so on.

Extending the solution

You can download the AWS CloudFormation templates and code for this solution from our public GitHub repository and modify them to fit your needs.


About the Author


Satyasovan Tripathy works at Amazon Web Services as a Senior Specialist Solution Architect. He is based in Bengaluru, India, and specialises on the AWS Digital User Engagement product portfolio. He likes reading and travelling outside of work.

InsightCloudSec Supports 12 New AWS Services Announced at re:Invent

Post Syndicated from Chris DeRamus original https://blog.rapid7.com/2021/12/06/insightcloudsec-supports-12-new-aws-services-announced-at-re-invent/

InsightCloudSec Supports 12 New AWS Services Announced at re:Invent

In case you didn’t hear, Amazon hosted AWS re:Invent in Las Vegas last week. As has come to be expected at the annual mega-event, Amazon made a number of huge announcements and launched a significant number of improvements and brand-new services and settings to enhance their public cloud platform, including an improved version of Amazon Inspector, S3 Object Ownership, Recycle Bin, EBS Archive Mode, and more.

Along with these announcements comes plenty of excitement and fanfare from the developer community who gets to take advantage of the new functionality. And that excitement is warranted. But these announcements also usually come with a hint of hesitation from their colleagues in security, who are responsible for analyzing all of these new services and settings to ensure that they are used properly and don’t introduce unintended consequences to their AWS environment. Yes, security is a factor here, but those unintended consequences also include costs associated with rolling out these new services. Rightfully so: It can often take weeks or months for organizations to vet these services, define governance policies, and actually start taking advantage of them.

But in order to help extinguish some of that announcement-induced anxiety and allow our customers to start taking advantage of these services as quickly as possible, the InsightCloudSec team has worked day and night for the last week to deliver support for a dozen of the new services that AWS rolled out last week.

In all of these cases, InsightCloudSec gathers the data related to these services across all AWS accounts and regions and consolidates it, giving security teams a single place to see all of the information across the entire AWS footprint. In many cases, the support also enhances the services provided by Amazon by providing additional context about the service or the resources associated with it.

Rather than choosing between slowing down innovation or taking on unmitigated risk, our customers will have the ability to take full advantage of each of these services as soon as they are available.

The list of newly supported AWS services and settings includes:

Let’s take a look at a few of the most critical services and what they mean for DevOps and Security teams.

The new AWS Inspector

Amazon Inspector is a vulnerability management service that continually scans AWS workloads for software vulnerabilities and unintended network exposure. As an AWS-built service, Amazon Inspector is designed to exchange data and interact with other core AWS services not only to identify potential security findings, but also to automate addressing those findings.

By joining insights from both AWS Inspector and Rapid7, customers benefit from immediate value in the form of multiple enhancements across the board. These include enhanced risk assessment of containers and workloads, unified visibility and control, and robust context across AWS environments.

By consolidating AWS's vulnerability management solutions with Rapid7's cloud security capabilities, organizations gain a highly scalable service, equipped with optimized security controls to better handle their most valuable assets.

InsightCloudSec seamlessly complements the new and improved AWS Inspector, allowing customers to leverage enhanced capabilities including:

  • Identify regions, accounts, and compute instances where AWS Inspector is not enabled, along with a new bot action to turn on the capability across EC2/ECR
  • Identify compute resources by risk score and/or specific findings
  • Identify accounts and regions with the highest overall risk
  • Add Inspector as an agent type so customers can switch to the “Vulnerability View,” which provides a single pane of glass to view and naturally sort assets by risk/severity findings across their entire fleet of accounts
  • Use Inspector data to enrich existing Insights such as resources on a public subnet, resources with an IAM role attached, etc. to build new Insights (e.g., EC2 instance on a public subnet with a security group exposing SSH that has been identified as high-risk by Inspector)

VPC Network Access Analyzer

Another great new service that Amazon rolled out is the Amazon VPC Network Access Analyzer, which helps their customers identify network configurations that lead to unintended network access. The tool essentially allows you to create a scope or query — for example, you could create a scope to find all web apps that do not use a firewall to access internet resources — then analyze your AWS account against that scope. It then serves up a list of the unexpected network paths between resources defined in the scope.

InsightCloudSec supports this new network analyzer by consuming all findings from the analyses our customers run in their entire AWS environment. This gives cloud security teams a single place to see all results, rather than having to jump from account to account to gather all the information across the entire AWS footprint. It also enriches the data provided by that network analysis with additional context about the resources, such as whether they have misconfigurations or overly permissive IAM policies attached to them, helping the user see the bigger picture and more effectively prioritize their work. Finally, the automation capabilities in InsightCloudSec allow our users to automatically schedule these network scans on a regular basis across target accounts, eliminating all manual effort.
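As a rough sketch of how those scheduled analyses could be triggered programmatically (assuming a Network Access Scope has already been created; the scope ID below is a placeholder and the calls shown are the EC2 Network Access Analyzer APIs), a scheduled job per target account might do something like this:

import boto3

ec2 = boto3.client("ec2")

# Placeholder ID of a Network Access Scope defined ahead of time
SCOPE_ID = "nis-0123456789abcdef0"

# Kick off an analysis of the scope; this is the call a scheduled job would make
analysis = ec2.start_network_insights_access_scope_analysis(
    NetworkInsightsAccessScopeId=SCOPE_ID
)
analysis_id = analysis["NetworkInsightsAccessScopeAnalysis"][
    "NetworkInsightsAccessScopeAnalysisId"
]

# Once the analysis completes, pull the findings for downstream enrichment
findings = ec2.get_network_insights_access_scope_analysis_findings(
    NetworkInsightsAccessScopeAnalysisId=analysis_id
)
print(len(findings.get("AnalysisFindings", [])), "findings returned")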

S3 Object Ownership

The sheer scale of S3 makes access management a blind spot for a number of organizations. For years, customers who use S3 have had the ability to set object-level permissions, effectively superseding the access permissions established at the bucket level. While enhancements have been introduced over the years such as Block Public Access, which can help mitigate the chance of objects being made public via direct Access Control Lists (ACLs), not all customers leverage the capability. Amazon has gone a step further by introducing a capability known as S3 Object Ownership, which gives administrators the ability to completely disable object-level ACLs. This setting is now the default value on all newly created S3 buckets, and customers can now migrate their existing S3 buckets to leverage this capability.
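For reference, enforcing this setting on an existing bucket is a single API call; the following sketch (the bucket name is a placeholder) applies the BucketOwnerEnforced setting, which disables object ACLs on the bucket, and then reads it back.

import boto3

s3 = boto3.client("s3")

# Disable object ACLs entirely by enforcing bucket-owner ownership
s3.put_bucket_ownership_controls(
    Bucket="example-bucket",  # placeholder bucket name
    OwnershipControls={"Rules": [{"ObjectOwnership": "BucketOwnerEnforced"}]},
)

# Verify the current setting
resp = s3.get_bucket_ownership_controls(Bucket="example-bucket")
print(resp["OwnershipControls"]["Rules"][0]["ObjectOwnership"])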

InsightCloudSec now detects the presence of this capability and renders it in the product, as well as through the API response via the `Object Ownership` property. A new filter was created to identify S3 buckets based on the value of this property, and the team has also expanded the core Insight Storage Container Without Uniform Bucket Level Access to work across AWS, AWS GovCloud, and AWS China.

InsightCloudSec Supports 12 New AWS Services Announced at re:Invent

Recycle Bin

Data Lifecycle Management can be challenging as customer cloud footprints grow from dozens of cloud accounts to hundreds or even thousands. At InsightCloudSec we’ve seen customers with millions of EBS Snapshots across their fleet of accounts. While many of our customers have embraced AWS Backup to help centralize their backup and retention management, there’s always concern with the accidental removal of an important snapshot while performing cleanup operations across accounts.

AWS offers a new Recycle Bin service that can be used to reduce the risk of accidental deletion. Think of Recycle Bin in the same way that Recycle Bin operates on your own computer. When enabled, the capability will store snapshots for a period of time defined by the customer and allow them to be recovered. Customers define the length of time they’d like these snapshots to remain in the recycle bin before being permanently deleted.
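As a hedged sketch of what enabling this could look like programmatically (the retention period and description are example values), a Recycle Bin retention rule for EBS snapshots can be created with the rbin API:

import boto3

rbin = boto3.client("rbin")

# Keep deleted EBS snapshots recoverable for 7 days before permanent deletion
rule = rbin.create_rule(
    RetentionPeriod={
        "RetentionPeriodValue": 7,
        "RetentionPeriodUnit": "DAYS",
    },
    Description="Recover accidentally deleted snapshots for 7 days",
    ResourceType="EBS_SNAPSHOT",
)
print(rule["Identifier"])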

InsightCloudSec now provides visibility into these Recycle Bin Rules directly within the Resources section of the product. We’ve also included filters to identify accounts/regions where snapshots exist and recycle bin rules are not in place. These filters can help InsightCloudSec customers continue to meet their evolving governance needs.

InsightCloudSec Supports 12 New AWS Services Announced at re:Invent

EBS Snapshot Archive

Going beyond Recycle Bin, AWS now offers a new archive mode storage class that, when enabled, can help customers reduce their storage costs. EBS Snapshot Archive is a new storage tier that is up to 75% cheaper than the standard storage tier. Converting to this tier is quite straightforward and can be done via the AWS Console or programmatic API.
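For example, moving an existing snapshot to the archive tier programmatically is a single call, and it can later be temporarily restored before use; the snapshot ID and restore window below are placeholders.

import boto3

ec2 = boto3.client("ec2")

# Move a snapshot to the lower-cost archive storage tier
ec2.modify_snapshot_tier(
    SnapshotId="snap-0123456789abcdef0",  # placeholder snapshot ID
    StorageTier="archive",
)

# Later, restore it temporarily (for example, for 3 days) before using it
ec2.restore_snapshot_tier(
    SnapshotId="snap-0123456789abcdef0",
    TemporaryRestoreDays=3,
)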

To help our customers further reduce spend with this service, we’ve introduced visibility into this storage tier, along with a new filter and Bot action to help customers begin migrating to this new tier where applicable.

InsightCloudSec Supports 12 New AWS Services Announced at re:Invent

More to come

This is one of our team’s favorite weeks each year. It’s always great to see the new capabilities that the teams at Amazon have been hard at work on and how they take in customer feedback to mature their offering. The InsightCloudSec team will be introducing support for a number of these enhancements with our release this week (21.7.3) and will be working closely with our customers to add additional features and capabilities.

NEVER MISS A BLOG

Get the latest stories, expertise, and news about security today.

Deploying Zabbix in Amazon Web Services cloud platform

Post Syndicated from Arturs Lontons original https://blog.zabbix.com/deploying-zabbix-in-amazon-web-services-cloud-platform/17283/

With the rapid evolution and proliferation of different cloud services, many organizations have decided to move parts of their infrastructures from on-prem to cloud. As an essential part of your infrastructure, Zabbix is no exception – you always have the option to either deploy Zabbix on-prem or select from one of the many supported cloud service providers to deploy your Zabbix Server or Zabbix Proxy on.

In this blog post, let’s look at how we can quickly deploy Zabbix Server and Zabbix Proxy nodes in Amazon Web Services cloud platform.

Deploying the Zabbix Server in AWS

Let’s begin with the Zabbix download page. Under the Zabbix Cloud Images section, select the AWS cloud vendor and then the Cloud Image you wish to deploy. Let’s start with Zabbix Server 5.0 with MySQL DB backend and Nginx Web server backend for our frontend.

Next, we will be redirected to the AWS marketplace, where we will have to subscribe to the Zabbix Server 5.0 image.

Once we have subscribed to the Zabbix Server image, we can continue with the deployment configuration.

Next, we must select our Region, Zabbix minor version (usually the latest available), and the Fulfillment option. Once that is done, we can finalize the launch configuration.

Select the preferred Launch option, EC2 Instance Type, VPC, and Subnet settings on the Launch page.

Next – We have to select or create a security group.

We also have to select or generate an EC2 key pair – make sure to save your private key in a safe location!

Note that creating a security group based on seller settings does not guarantee that the group will have an inbound SSH access rule! Make sure to double-check the security group and manually add the SSH inbound rule if it hasn’t yet been added. We will need to access this instance via SSH to obtain the initial frontend login credentials!

Once you click on the Launch button, the deployment process for your Zabbix application will be initiated.

Accessing the application

Let's open up the Instances section and open our newly deployed Zabbix instance.

We can access the Zabbix frontend by opening the Public IPv4 address or Public IPv4 DNS of the Zabbix instance.

Note that the Zabbix frontend password is still unknown to us. Recall how I mentioned that we would need to access the instance via SSH to obtain the frontend password. Let's do so now.

Write down the login credentials and use them to log in to the Zabbix instance.

Accessing the database

In case we wish to access the Zabbix database backend, we can do so from the command line. The Zabbix database can be accessed as the root user, which by default requires no manually entered password.

The MySQL root password is stored in the /root/.my.cnf configuration file.

Modifying the Zabbix Frontend timezone

By default, the Zabbix frontend uses the “UTC” timezone. If you need to change it, edit php_value[date.timezone] PHP variable in /etc/php-fpm.d/zabbix.conf and restart php-fpm process:

systemctl restart php-fpm

Zabbix proxy

If you wish to deploy a Zabbix proxy instance in your AWS cloud, the deployment steps are very much the same. Most likely, you will still require SSH access if you wish to perform some configuration changes in the Zabbix proxy configuration file.

Note, that by default, the SQLite proxy database is stored in /tmp/zabbix_proxy.sqlite3

As always, don't forget to point the proxy at your Zabbix server instance by modifying the Server parameter in the Zabbix proxy configuration file, located at /etc/zabbix/zabbix_proxy.conf

And that’s all! With just a few clicks, we are able to deploy a fully functional Zabbix instance or a small Zabbix proxy to distribute or scale our monitoring. Don’t forget that AWS is just one of the many cloud service providers you can use with Official Zabbix images. If you have any questions about the AWS deployment – you are very much encouraged to leave a comment under this blog post.

If you wish to learn more about the Zabbix Monitoring solution, check out the official documentation https://www.zabbix.com/documentation/current/manual/quickstart.

Creating a costs analytics view to email campaign generated by Amazon Pinpoint

Post Syndicated from rafaaws original https://aws.amazon.com/blogs/messaging-and-targeting/creating-a-costs-analytics-view-to-email-campaign-generated-by-amazon-pinpoint/

Introduction

Many companies have multiple departments running different campaigns in the same AWS account on Amazon Pinpoint, and they need to split costs at the end of each month between the owners of each campaign. To do this, companies need an easy way to find out how much cost each campaign has generated, since the Amazon Pinpoint console does not provide this information. To solve this, it is possible to combine several AWS services to find these costs.

In this post, I will demonstrate how AWS analytics and storage services such as Amazon Kinesis, AWS Glue, Amazon Athena, Amazon Quicksight, and Amazon Simple Storage Service (Amazon S3) can help you create an analytical view of the costs generated by emails sent through each campaign on Amazon Pinpoint. The example does not include transactional emails or Amazon Pinpoint journey emails.

Amazon Pinpoint is an AWS service that helps companies engage their customers across multiple channels. You can use Amazon Pinpoint to send email, SMS, push notifications, and voice messages, either for one-off demands or across campaigns.

In this blog, you will learn how to create a dashboard with the total cost of emails sent and the MTA (Monthly Targeted Audience) cost for each campaign. With this information, you will be able to distribute costs internally to each department responsible for each campaign on Amazon Pinpoint.

Solution Overview

To create this dashboard, we will take advantage of the Digital User Engagement Events Database solution. We can use an AWS CloudFormation template that sets up the Amazon Pinpoint event flow. This solution uses Amazon Kinesis to stream all campaign events to a bucket on Amazon S3. After that, a data processing task is performed by AWS Lambda and cataloged in AWS Glue. Some views will be created in Amazon Athena to organize all the data, and we will use them for calculating and analyzing the Pinpoint costs. For more information about the solution, the architecture, and how the AWS CloudFormation template automates the deployment, please visit the Implementation Guide page.

During the deployment process, you will have the option to create a new Amazon Pinpoint project to manage your campaigns or use an existing one.

Prerequisites

1. Complete the Digital User Engagement Events Database implementation.

2. Have an e-mail campaign on Amazon Pinpoint created with some emails already sent.

Analytics View

All events created by Amazon Pinpoint campaigns after the Digital User Engagement Events Database implementation is complete should appear in Parquet file format in Amazon S3. If your campaign did not generate any events after the AWS CloudFormation stack was completed, I recommend creating and executing a new email campaign just to test this solution. Any test email sent during the implementation of this solution will incur charges; the email costs are explained later in this blog.

After the entire Amazon Pinpoint event flow is working correctly, some modifications to the views created in Amazon Athena must be made. These changes will provide access to information about the quantity of endpoints registered by each campaign in your project. The following steps are required:

To create a new view

1. Open the Amazon Athena console.

2. Under Database, choose the database name created by AWS CloudFormation template.

You will notice a table called "all_events" and some views, e.g. campaign_send, email_open, email_send, and others. These views are responsible for improving the organization of the data sent by Amazon Pinpoint; e.g., in the campaign_send view it is possible to see all information about all the events that Amazon Pinpoint sends across campaigns on multiple channels.

3. Choose Create view.

A tab will be added in the center of the page so you can enter the command that will create the new view.

4. Replace the existing text with the command below and choose Run query.

CREATE OR REPLACE VIEW endpoint_unit AS
SELECT DISTINCT
client.client_id endpoint_id
, "min"("from_unixtime"((event_timestamp / 1000))) event_timestamp
, "month"("from_unixtime"((event_timestamp / 1000))) month_data
, "year"("from_unixtime"((event_timestamp / 1000))) year_data
FROM
all_events
WHERE (event_type = '_campaign.send')
GROUP BY client.client_id, "month"("from_unixtime"((event_timestamp / 1000))), "year"("from_unixtime"((event_timestamp / 1000)))

In this command we are creating a new view that groups all the endpoints already used by campaigns, along with the earliest date and time at which each endpoint was registered. This will help you identify the first campaign that used each endpoint in each month and year.

Once the new view is created, you will notice that it is listed in the views pane with the name endpoint_unit. You can run this view to check which values are returned.

5. Choose Preview in the endpoint_unit view to return the results.

Example:

The data displayed refer only to the information of the endpoints used in campaigns after the implementation of the Digital User Engagement Events Database.

Now is the time to create the analytical view in Amazon Quicksight.

To check Amazon Quicksight Region

1. Open the Amazon Quicksight console.

If this is your first time using Amazon Quicksight, a page will appear to subscribe to the service; feel free to choose the option that best fits your business.

Warning: Some costs might occur by using Amazon Quicksight. Check the Amazon Quicksight Pricing page for more information.

2. On top of the screen, choose <Role>/<Account-Name> and select the region where you have the Amazon Pinpoint project.

To check Amazon Quicksight Permissions

Check that Amazon Quicksight has permission to access the Amazon S3 bucket.

1. On top of the screen, choose <Role>/<Account-Name>, Manage Quicksight.

2. In navigation pane, choose Security & permissions.

3. Under QuickSight access to AWS services, choose Add or remove.

4. Inside the QuickSight access to AWS services table, under Amazon S3, choose Details.

5. Choose Select S3 buckets.

6. Check whether the checkbox for the Amazon S3 bucket containing all the stream files is selected. If not, select the checkbox and choose Finish and Update.

Create a Dataset for Campaign cost

1. Back to the Amazon Quicksight main page.

2. In the navigation page, choose Datasets, New dataset.

3. Choose Athena.

Now, let’s add the table containing information regarding the number of emails sent per Amazon Pinpoint campaign.

4. In the New Athena data source dialog box, do the following:

a. For Data source name, type a name.

b. Choose Create data source.

5. In the Choose your table dialog box, do the following:

a. Choose Use custom SQL.

b. In first field, enter a name for the custom SQL.

c. In second field, paste the command below.

SELECT * FROM "due_eventdb"."campaign_send" where (message_tags['delivery_type'] = 'EMAIL')

This command filters for email-based campaign events only.

d. Choose Confirm query.

6. In the Finish dataset creation dialog box, you will be asked to select between storing a copy of the data of this table in SPICE or performing a query directly from data source. Feel free to choose the best option for your business.

7. Choose Visualize.

After creating the dataset, you will be redirected to the Amazon Quicksight analytics creation page.

As the information sent by Amazon Pinpoint does not include the unit cost of each email sent, we will use three features called Parameters, Controls, and Calculated Fields to calculate these amounts.

Parameters are named variables that can transfer a value for use by an action or an object. To make the parameters accessible to the dashboard viewer, you add a parameter control.

Calculated fields help you transform your data by using one or more of the following: operators, functions, aggregate functions, fields that contain data, or other calculated fields.

Create an Amazon QuickSight parameter

1. In the navigation bar, choose + Add at the top of the screen.

2. Choose Add parameter.

3. In the Create new parameter dialog box, do the following:

a. For Name, type the name for the parameter, eg. Costemail.

b. For Data type, choose number.

c. For Values, select Single value.

d. For Static default value enter the unit cost of email. The cost of each Amazon Pinpoint email can be found on Amazon Pinpoint Pricing page.

e. Choose Create and Close.

Important: All costs in this blog will be calculated in USD.

After creating the first parameters, we need to create a manual control of these costs.

Create an Amazon QuickSight control

1. In the navigation pane, choose Parameters.

2. Under the name of parameters that you previously created, choose Add control.

3.  In the Add control dialog box, do the following:

a. For Display name, enter a name.

b. For Style, choose Text field.

c. Choose Add.

This control will help you in the future if you need to change the unit price without changing the Parameters configuration.

We now need to create the Calculated Field, which multiplies the total number of emails sent by the unit cost.

Create an Amazon QuickSight calculated field

1. In the navigation bar, choose + Add at the top of the screen again.

2. Choose add calculated field.

3. In add name field, type a name.

4. Paste the expression below.

count({pinpoint_campaign_id}) * ${name_parameters}

5. Replace the “name_parameters” in the expression with the name of the Parameter you created earlier.

6. Choose Save.

We now have all fields available to create the chart on Quicksight.

Create an Amazon QuickSight dashboard

1. In the navigation page, choose Visualize.

2. Under Visual type, choose Vertical bar chart.

Note: If you prefer, you can change it later to other visual type.

3. Choose the fields pinpoint_campaign_id, the calculated field that you just created, and event_timestamp. Drag each field to X axis, Value, and Group/color respectively to create the chart.

Example:

In this example, you can see the cost in USD x Amazon Pinpoint Campaign ID between April and May 2021.

If you prefer, you can customize your chart in Format Visual. You can also build another view to show the amount of emails sent per campaign.

4. In the navigation bar, choose Share.

5. Choose Publish dashboard.

6. In the Publish a dashboard dialog box, under Publish new dashboard as, type a name.

7. Choose Publish dashboard.

If you want, you can share your dashboard with other usernames, groups, or email addresses.

Now that we have the total costs of emails per each campaign, let’s create the chart for the total endpoint cost for each campaign.

Create a Dataset for MTA cost

1. Back to the Amazon Quicksight main page.

2. In the navigation page, choose Datasets, New dataset.

3. Choose Athena.

4. In the New Athena data source dialog box, do the following:

a. For Data source name, type a name.

b. Choose Create data source.

5. In the Choose your table dialog box, do the following:

a. Choose Use custom SQL.

b. In first field, enter a name for the custom SQL.

c. In second field, paste the command below.

SELECT distinct c.endpoint_id, e.pinpoint_campaign_id, c.event_timestamp
FROM "due_eventdb"."campaign_send" e
INNER JOIN "due_eventdb"."endpoint_unit" c ON c.event_timestamp = e.event_timestamp

d. Choose Confirm query.

6. In the Finish dataset creation dialog box, you will be asked to select between storing a copy of the data of this table in SPICE or performing a query directly from data source. Feel free to choose the best option for your business.

7. Choose Visualize.

This command joins the campaign_send view with the endpoint_unit view. It returns the campaign ID responsible for contacting each endpoint for the first time in each month.

8. After the dataset is created, repeat the same steps to create a new Parameter, Control, and Calculated Field as described in the sections Create an Amazon QuickSight parameter, Create an Amazon QuickSight control, and Create an Amazon QuickSight calculated field. However, when creating the new Parameter, use the unit cost of each endpoint, currently described on the Amazon Pinpoint Pricing page as the Monthly Targeted Audience (MTA) price.

If you send messages from an Amazon Pinpoint campaign or journey, the unique endpoints you contact are known as the monthly targeted audience (MTA). You are charged based on the number of MTAs targeted in a calendar month.

Important: In this calculation we are not considering the subtraction of the free tier values.

In the process of creating Calculated Field, use the following expression:

count({endpoint_id}) * ${name_parameters}

Also remember to replace the name_parameters field in the expression with the name of the parameter that you created in point 8 of the Create a Dataset for MTA cost section to calculate the endpoint costs.
In this expression you are calculating the MTA cost for the distinct endpoints contacted per month.

After this, you will also have all the required fields to create your chart of total endpoint cost per campaign. In this case, use the fields pinpoint_campaign_id, the calculated field you created earlier, and event_timestamp.

Example:

In this example, you can see the MTA cost in USD x Amazon Pinpoint Campaign ID between April and May 2021.

Some customers have tens or hundreds of campaigns; in that case, you can use the Filter option in the navigation pane to specify a date range.

Optional: If you prefer, you can combine the email and MTA costs in the same analysis and dashboard.

Add more datasets in the same Analyses and Dashboard

1. Back to the Amazon Quicksight main page.

2. In the navigation page, choose Analyses.

3. Choose the email cost analysis that you created in the section Create an Amazon QuickSight dashboard.

4. Choose the pencil icon on the Dataset option next to the navigation pane.

5. Choose add dataset.

6. In the Choose dataset to add dialog box, choose the dataset you created on section Create a Dataset for MTA cost.

7. Choose Select.

Now you can create each parameter, control, and calculated field per dataset in the same analysis and publish all the charts on the same dashboard.

Cleanup

To avoid incurring charges, navigate to the AWS CloudFormation console and delete the stack that you used in the Digital User Engagement Events Database solution procedure.

After the stack is deleted, you also need to delete your dashboards, analyses and datasets on Amazon Quicksight. You can also delete the stream events data in your Amazon S3 bucket.

Conclusion

In this blog, we used the total cost of emails sent and endpoints targeted to create the charts, but it is possible to build several other analyses in Quicksight using the views made available by the Digital User Engagement Events Database solution, such as costs for push notifications and other channel types.

Also try creating dashboards for other Amazon Pinpoint channels

To do this, use the same procedure, explore the campaign_send view to find the data about other channels, and modify the SQL queries in Amazon Quicksight to create your dashboards.


About the Author

Rafael Rodrigues is an Enterprise Solution Architect for AWS based in Sao Paulo, Brazil. He helps customers innovate with modern IT architecture on cloud computing.

Amazon Simple Email Service Celebrates 50 Years of Email

Post Syndicated from Matt Strzelecki original https://aws.amazon.com/blogs/messaging-and-targeting/amazon-simple-email-service-celebrates-50-years-of-email/

Email as we know it turns 50 years old this month (October 2021). The first email sent over a network — the beginning of email as we use it today — was sent in October 1971, by MIT graduate Ray Tomlinson (April 23, 1941–March 5, 2016). Tomlinson was the first to use the @ symbol to identify a message recipient on a remote computer system. Using this address format, he became the first person to send an email between two computers. That first email traveled 10 feet between two computers in Cambridge, Massachusetts. Tomlinson stated when interviewed that the first email was “something like QWERTYUIOP”.

Tomlinson leveraged existing software at the time, including SNDMSG and CPYNET, which allowed people to send messages to others who used the same computer, to send the first email over a network – back then multiple users would share computers, rather than having their own dedicated computers. His work enabled the exchange of messages between computers for the first time. Creating email was a side project at work for Tomlinson, and when he showed his work to another employee for the first time, he reportedly said: “Don’t tell anyone! This isn’t what we’re supposed to be working on.”

Ray Tomlinson was inducted into the Internet Hall of Fame in 2012, and his work is ranked fourth in Boston Globe’s top 150 MIT-related “Ideas, Inventions, and Innovators”.

According to the Guinness Book of Records, the first unsolicited email was sent in May 1978 to 397 recipients, advertising an upcoming product demonstration of computers. That's right: spam is almost as old as email itself! In 1991, the first email was sent from space by astronauts on the NASA shuttle Atlantis. That message began with "Hello Earth!" and was delivered to Mission Control at the Johnson Space Center in Houston, Texas.

Over the past 50 years, there’s been a lot of firsts in email. For us at Amazon Simple Email Service (Amazon SES), our email first was when we launched our service back in January 2011. We initially started as a service that delivered email for Amazon.com, and grew over time into launching as a public service in Amazon Web Services (AWS).

Customers told us that building large-scale email solutions to send marketing and transactional messages was often a complex and costly challenge for businesses. Amazon SES eliminates these challenges and enables businesses to benefit from the years of experience and sophisticated email infrastructure Amazon.com has built to serve its own large-scale customer base. With Amazon.com as our first customer, scalability, reliability, and deliverability have been our highest priorities from day one. This same service has also powered the email sending capabilities of Amazon Pinpoint since 2017, as well as email-related features in several other AWS services.

Today, Amazon SES is a cost-effective, flexible, and scalable email service that enables developers to send mail from within any application – supporting multiple email use cases, including transactional, marketing, or mass email communications, as well as inbound email.

We encourage our readers to share their own stories of their email firsts, or any other interesting email anecdotes. #QWERTYUIOP #50yrsofemail

Replace traditional email mailbox polling with real-time reads using Amazon SES and Lambda

Post Syndicated from agardezi original https://aws.amazon.com/blogs/messaging-and-targeting/replace-traditional-email-mailbox-polling-with-real-time-reads-using-amazon-ses-and-lambda/

Integrating email into an automated workflow can be challenging. Traditionally, applications have had to use the POP protocol to connect to mail servers, poll for emails to arrive in a mailbox, and then process the messages inline and perform actions on them. This mechanism is inefficient and prone to errors that result in the workflow missing messages, and because it relies on polling it is poorly suited to real-time processing and introduces inefficiencies in the design. Amazon Simple Email Service (Amazon SES) is a cost-effective, scalable, and flexible email service with support for different workflows, including the ability to perform spam checks and virus scans. In this blog you will see how to use Amazon SES with AWS Lambda and Amazon S3 to automate the processing of emails in real time and integrate with an application without the need for polling.

The use case explored in this blog focuses on automation for CRM or order processing platforms and the processing of email related to customer contact or direct email requests. An example of this use case is copying a client engagement email to Salesforce (or any other database), where it is recorded and can later be categorized or attached to the appropriate client account or opportunity. When designing an application that needs to read emails from a mailbox, a developer would traditionally have to use a mail library (like JavaMail if using Java) to make a call to the mailbox, authenticate, and then pull messages into an application object. This would mean polling the mailbox every 10 – 15 minutes to check for new messages, handling errors when the mailbox is unavailable, and maintaining a fully functioning mailbox. This solution can help you implement automated processing of emails arriving in a mailbox without the need to poll the mailbox. The entire solution will be implemented in a serverless fashion.

Solution

This blog post shows how to use SES to perform automated processing of email in an application workflow. I will use the option in SES to save received emails in S3 and trigger a Lambda function to process the message without having to poll a mailbox. This sample application uses email to receive simple orders, which are automatically processed and their details stored in DynamoDB. The following diagram shows the high-level architecture:

Step 1: Create an S3 Bucket for Email Storage

Start by creating an S3 bucket where received emails will be stored so that the full email can be processed by the Lambda function. The bucket must have a policy attached so that SES can put objects in the bucket on your behalf:

{
  "Version":"2012-10-17",
  "Statement":[
    {
      "Sid":"AllowSESPuts",
      "Effect":"Allow",
      "Principal":{
        "Service":"ses.amazonaws.com"
      },
      "Action":"s3:PutObject",
      "Resource":"arn:aws:s3:::myBucket/*",
      "Condition":{
        "StringEquals":{
          "aws:Referer":"111122223333"
        }
      }
    }
  ]
}

Make the following changes to the preceding policy example:

  1. Replace myBucket with the name of the Amazon S3 bucket that you want to write to.
  2. Replace 111122223333 with your AWS account ID.

You can find out more about the policy here.
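If you prefer to script this step rather than editing the bucket policy in the console, here is a minimal sketch using boto3; the bucket name and account ID are the same placeholders as in the policy above:

import json
import boto3

s3 = boto3.client("s3")

# Same policy as above: allow SES to put received emails into the bucket
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowSESPuts",
            "Effect": "Allow",
            "Principal": {"Service": "ses.amazonaws.com"},
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::myBucket/*",
            "Condition": {"StringEquals": {"aws:Referer": "111122223333"}},
        }
    ],
}

s3.put_bucket_policy(Bucket="myBucket", Policy=json.dumps(bucket_policy))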

Step 2: Create DynamoDB Table to Simulate Application

Next, add a DynamoDB table. The DynamoDB table will store the incoming order info. For this sample we will keep it simple and have a table with email as the partition key. Here is the data model:

{   
    "email_order_received": {
        "email": "string",
        "itemname": "string",
        "quantity": "number"
    }   
}
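If you prefer to create the table programmatically instead of through the console, here is a minimal sketch using boto3; the table name and partition key follow the data model above, and on-demand billing is an assumption for this demo:

import boto3

dynamodb = boto3.client("dynamodb")

# Create the demo table with "email" as the partition key
dynamodb.create_table(
    TableName="email_order_received",
    AttributeDefinitions=[{"AttributeName": "email", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "email", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",  # assumption: on-demand capacity for the demo
)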

Step 3: Create Lambda Function triggered by SES to Process Email

Now that the DynamoDB table is ready, create the Lambda function to process the email and send data to the DynamoDB table. The Lambda function needs an execution role that has permissions to access the S3 bucket and the DynamoDB table, and to create the CloudWatch log group. It also needs a resource-based policy so that SES can invoke the Lambda function. In the final step, when we configure SES to call the Lambda function, SES automatically adds the necessary permissions to the function as detailed here. This is a sample policy statement:

{
  "Version": "2012-10-17",
  "Id": "default",
  "Statement": [
    {
      "Sid": "allowSesInvoke",
      "Effect": "Allow",
      "Principal": {
        "Service": "ses.amazonaws.com"
      },
      "Action": "lambda:InvokeFunction",
      "Resource": "arn:aws:lambda:eu-west-1:111122223333:function:email-event-ses",
      "Condition": {
        "StringEquals": {
          "AWS:SourceAccount": "111122223333"
        }
      }
    }
  ]
}
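SES adds this permission for you in the final step, but if you are automating the setup and want to attach it yourself, here is a minimal sketch using boto3 with the sample function name and account ID from the policy above:

import boto3

lambda_client = boto3.client("lambda")

# Allow SES in account 111122223333 to invoke the function
lambda_client.add_permission(
    FunctionName="email-event-ses",
    StatementId="allowSesInvoke",
    Action="lambda:InvokeFunction",
    Principal="ses.amazonaws.com",
    SourceAccount="111122223333",
)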

Sample Lambda code in Python:

import boto3
import email


def lambda_handler(event, context):
    s3 = boto3.client("s3")
    dynamodb = boto3.resource("dynamodb")
    table = dynamodb.Table('email_order_received')
    
    print("Spam filter")
    # Check the SES spam and virus filter settings
    if (
        event["Records"][0]["ses"]["receipt"]["spfVerdict"]["status"] == "FAIL" or
        event["Records"][0]["ses"]["receipt"]["dkimVerdict"]["status"] == "FAIL" or
        event["Records"][0]["ses"]["receipt"]["spamVerdict"]["status"] == "FAIL" or
        event["Records"][0]["ses"]["receipt"]["virusVerdict"]["status"] == "FAIL"
       ):
        print("Dropping Spam")
    else:
        print("Not Spam")
        email_bucket = "email-handling-test"
        bucketkey = "monitor/" + event["Records"][0]["ses"]["mail"]["messageId"]
    
        fileObj = s3.get_object(Bucket = email_bucket, Key=bucketkey)
    
        msg = email.message_from_bytes(fileObj['Body'].read())
        From = msg['From']
        itemname = msg['Subject']
        body = ""
        if msg.is_multipart():
            for part in msg.walk():
                content_type = part.get_content_type()
                disp = str(part.get('Content-Disposition'))
                # look for plain text parts, but skip attachments
                if content_type == 'text/plain' and 'attachment' not in disp:
                    # fall back to UTF-8 if the part does not declare a charset
                    charset = part.get_content_charset() or "utf-8"
                    # decode the bytestring payload into plain text
                    body = part.get_payload(decode=True).decode(encoding=charset, errors="ignore")
                    # if we've found the text/plain part, stop looping through the parts
                    break
        else:
            # not multipart - i.e. plain text, no attachments
            charset = msg.get_content_charset() or "utf-8"
            body = msg.get_payload(decode=True).decode(encoding=charset, errors="ignore")
            
        table.put_item(
            Item={
                'email': From,
                'itemname': itemname,
                'quantity': body
            }
        )
        print("inserted data into dynamodb")

When you add a Lambda action to a receipt rule, Amazon SES sends an event record to Lambda every time it receives an incoming message. This event contains information about the email headers of the incoming message, as well as the results of the tests (spam filtering and virus scanning) that Amazon SES performs on incoming messages; however, it omits the body of the incoming email. This is why the Lambda function has to process the body from the email stored in S3. You can see details of the event here. In this demo app we assume the item name is in the subject and the body of the email contains the quantity of the items, and this data is written to the DynamoDB table.
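For orientation, here is an abbreviated sketch of the shape of that event, trimmed to the fields the sample Lambda function reads (the values shown are illustrative):

{
  "Records": [
    {
      "eventSource": "aws:ses",
      "ses": {
        "mail": {
          "messageId": "EXAMPLE-MESSAGE-ID",
          "source": "sender@example.com"
        },
        "receipt": {
          "spfVerdict": { "status": "PASS" },
          "dkimVerdict": { "status": "PASS" },
          "spamVerdict": { "status": "PASS" },
          "virusVerdict": { "status": "PASS" }
        }
      }
    }
  ]
}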

Step 4: Configure SES to Send Emails to S3 and Trigger Lambda Function

The final step is to configure Amazon SES. Start by verifying a domain so SES can use it to send and receive emails. Domain verification helps ensure you are the owner of the domain and are thus authorised to manage the sending and receiving of the emails from addresses in the domain. To verify your domain:

  1. In the SES console in the navigation pane under Identity Management, choose Domains.
  2. Choose Verify new Domain
  3. In the Verify new Domain dialog enter your domain name
  4. Choose Verify This Domain
  5. In the dialogue box you will see a Domain verification record set. You need to add this record to your domain DNS server. You will also have to add the email receiving record (MX record) to your domain DNS server.
  6. If your DNS server is Route53 and it is registered under the same account then SES also gives you the option to update your DNS server from within the SES console.

Once the domain is verified, its status changes from “pending verification” to “verified”, and you can then use it to send and receive emails.

Next, create a receipt rule set. The rule set lets you specify what SES does with emails it receives for the domains you own. You can create rules for individual addresses or for any address under the domain. To create the rule set:

  1. In the left navigation pane, under Email Receiving, choose Rule Sets.
  2. Choose Create Rule.
  3. Enter the recipient email address you want to configure the rule for. You can add up to 100 recipient addresses, or set the rule up for any address in the domain by using just the domain name as a wildcard.
  4. Once the addresses have been added, add the actions for the rule. Add two actions:
    1. The first is of type S3; this saves a copy of the email to the S3 bucket created in Step 1. Select the bucket created in Step 1 from the drop-down list. You can also add an object key prefix to categorise the output of different rules.
    2. The second is of type Lambda, to trigger the Lambda function that processes the email. Select the function created in Step 3 from the drop-down list.

Once the SES rule is configured, the full workflow is in place. Now any email sent to the configured recipient address will be processed by the Lambda function. In this way you can make email processing part of your application workflow without having to perform polling.
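To confirm the end-to-end flow, send a test email with the item name in the subject and the quantity in the body to an address covered by the rule, then check that the order landed in the table. Here is a minimal sketch using boto3 and the table from Step 2:

import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("email_order_received")

# Print all orders captured so far (a full scan is fine for this small demo table)
response = table.scan()
for item in response["Items"]:
    print(item["email"], item["itemname"], item["quantity"])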

Clean-up

To clean up the resources used in your account:

  1. Navigate to Amazon S3 and delete the contents of the bucket you created where your emails are stored.
  2. Once the bucket is empty, delete the bucket.
  3. Navigate to the DynamoDB console and delete the table you created above. Make sure you select the option to “Delete all CloudWatch alarms for this table”
  4. Remove the domain from Amazon SES. To do this, navigate to the Amazon SES console and choose Domains from the left navigation. Select the domain you want to remove and choose the Remove button to remove it from Amazon SES.
  5. From the Amazon SES console, navigate to Rule Sets from the left navigation. In the Active Rule Set section, choose the View Active Rule Set button and delete all the rules you have created by selecting each rule and choosing Action, Delete.
  6. On the Rule Sets page, choose the Disable Active Rule Set button to stop listening for incoming email messages.
  7. On the Rule Sets page, Inactive Rule Sets section, delete the only rule set, by selecting the rule set and choosing Action, Delete.
  8. Navigate to the Lambda console and delete the Lambda you created earlier. Select the Lambda and choose Delete from the Actions menu.
  9. Navigate to CloudWatch console and from the left navigation choose Logs, Log groups. Find the log group that belongs to the resources and delete it by selecting it and choosing Actions, Delete log group(s).

Conclusion

In this post, we have shown you how to integrate email processing into an application workflow without having to resort to polling a mailbox.

By using SES to receive emails, you can create a modular serverless architecture that allows emails to be processed and checked for spam and viruses, with the output then sent to any downstream system or stored in a database for application use.


About the Author

Syed Ali Abbas Gardezi is a Sr. Solution Architect for AWS based in London, United Kingdom. He works with AWS GSI Partners architecting, designing, and implementing various large-scale IT solutions. Before joining AWS, he worked in several architecture roles at a tier 1 financial organisation in London.

How to use domain with Amazon SES in multiple accounts or regions

Post Syndicated from Leonardo Azize original https://aws.amazon.com/blogs/messaging-and-targeting/how-to-use-domain-with-amazon-ses-in-multiple-accounts-or-regions/

Sometimes customers want to use their email domain with Amazon Simple Email Service (Amazon SES) across multiple accounts, or in the same account but across multiple regions.

For example, AnyCompany is an insurance company with marketing and operations business units. The operations department sends transactional emails every time customers perform insurance simulations. The marketing department sends email advertisements to existing and prospective customers. Since they are different organizations inside AnyCompany, they want to have their own Amazon SES billing. At the same time, they still want to use the same AnyCompany domain.

Other use cases include customers who want to set up multi-region redundancy, need to satisfy data residency requirements, or need to send emails on behalf of several different clients. In all of these cases, customers can use different regions, either in the same account or across different accounts.

This post shows how to verify and configure your domain on Amazon SES across multiple accounts or multiple regions.

Overview of solution

You can use the same domain with Amazon SES across multiple accounts or regions. Your options are: different accounts but the same region, different accounts and different regions, and the same account but different regions.

In all of these scenarios, you will have two SES instances running, each sending email for the example.com domain – let’s call them SES1 and SES2. Every time you configure a domain in Amazon SES, it generates a series of DNS records that you have to add to your domain’s authoritative DNS server, and those records are unique to your domain. The records are different for each SES instance.

You will need to modify your DNS to add one TXT record, with multiple values, for domain verification. If you decide to use DomainKeys Identified Mail (DKIM), you will modify your DNS to add six CNAME records, three records from each SES instance.

When you configure a domain on Amazon SES, you can also configure a MAIL FROM domain. If you decide to do so, you will need to modify your DNS to add one TXT record for Sender Policy Framework (SPF) and one MX record for bounce and complaint notifications that email providers send you.

Furthermore, your domain can be configured to support DMARC for email spoofing detection. DMARC relies on the SPF or DKIM configuration described above. Below we walk you through these steps.

  • Verify domain
    You will take TXT values from both the SES1 and SES2 instances and add them in DNS, so SES can validate that you own the domain
  • Complying with DMARC
    You will add a TXT value with the DMARC policy that applies to your domain. This is not tied to any specific SES instance
  • Custom MAIL FROM Domain and SPF
    You will take the TXT and MX records related to your MAIL FROM domain from both the SES1 and SES2 instances and add them in DNS, so SES can comply with DMARC

Here is a sample matrix of the various configurations:

The three configurations are: two accounts in the same region, two accounts in different regions, and one account in two regions. The DNS records needed are:

TXT record for domain verification* (same for all three configurations)

1 record with multiple values:

_amazonses.example.com = “VALUE FROM SES1”
                         “VALUE FROM SES2”

CNAME records for DKIM verification (same for all three configurations)

6 records, 3 from each SES instance:

record1-SES1._domainkey.example.com = VALUE FROM SES1
record2-SES1._domainkey.example.com = VALUE FROM SES1
record3-SES1._domainkey.example.com = VALUE FROM SES1
record1-SES2._domainkey.example.com = VALUE FROM SES2
record2-SES2._domainkey.example.com = VALUE FROM SES2
record3-SES2._domainkey.example.com = VALUE FROM SES2

TXT record for DMARC (same for all three configurations)

1 record; it is not tied to any SES instance or region:

_dmarc.example.com = DMARC VALUE

MAIL FROM MX record to define the message sender for SES

Same region: 1 record for the entire region

mail.example.com = 10 feedback-smtp.us-east-1.amazonses.com

Different regions: 2 records, one for each region

mail1.example.com = 10 feedback-smtp.us-east-1.amazonses.com
mail2.example.com = 10 feedback-smtp.eu-west-1.amazonses.com

MAIL FROM TXT record for SPF

Same region: 1 record for the entire region

mail.example.com = “v=spf1 include:amazonses.com ~all”

Different regions: 2 records, one for each region

mail1.example.com = “v=spf1 include:amazonses.com ~all”
mail2.example.com = “v=spf1 include:amazonses.com ~all”

* This assumes your DNS provider supports multiple values for a TXT record

Setup SES1 and SES2

In this blog, we call SES1 your primary or existing SES instance. We assume that you have already set up SES1, but if not, you can still follow the instructions and set up both at the same time. The settings on SES2 will differ slightly, and therefore you will need to add new DNS entries to support the two-instance setup.

In this document we will use the configurations from the “Verification,” “DKIM,” and “Mail FROM Domain” sections of the SES Domains screen to configure SES2 and set up DNS correctly for the two-instance configuration.

Verify domain

Amazon SES requires that you verify your domain in DNS, to confirm that you own it and to prevent others from using it. When you verify an entire domain, you are verifying all email addresses from that domain, so you don’t need to verify email addresses from that domain individually.

You can instruct multiple SES instances, across multiple accounts or regions, to verify your domain. The process to verify your domain requires you to add some records to your DNS provider. In this post I am assuming Amazon Route 53 is the authoritative DNS server for the example.com domain.

Verifying a domain for SES purposes involves initiating the verification in the SES console, and adding DNS records and values to confirm you have ownership of the domain. SES will automatically check DNS to complete the verification process. We assume you have done this step for the SES1 instance, and already have an _amazonses.example.com TXT record with one value in your DNS. In this section you will add a second value, from SES2, to that TXT record. If you do not have SES1 set up in DNS, complete these steps twice, once for SES1 and again for SES2. This will prove to both SES instances that you own the domain and are entitled to send email from them.

Initiate Verification in SES Console

Just like you have done on SES1, initiate a verification process for the same domain in the second SES instance (SES2); in our case, example.com.

  1. Sign in to the AWS Management Console and open the Amazon SES console.
  2. In the navigation pane, under Identity Management, choose Domains.
  3. Choose Verify a New Domain.
  4. In the Verify a New Domain dialog box, enter the domain name (i.e. example.com).
  5. If you want to set up DKIM signing for this domain, choose Generate DKIM Settings.
  6. Click on Verify This Domain.
  7. In the Verify a New Domain dialog box, you will see a Domain Verification Record Set containing a Name, a Type, and a Value. Copy Name and Value and store them for the step below, where you will add this value to DNS.
    (This information is also available by choosing the domain name after you close the dialog box.)

To complete domain verification, add a TXT record with the displayed Name and Value to your domain’s DNS server. For information about Amazon SES TXT records and general guidance about how to add a TXT record to a DNS server, see Amazon SES domain verification TXT records.

Add DNS Values for SES2

To complete domain verification for your second account, edit the current _amazonses TXT record and add the Value from SES2 to it. If you do not have an _amazonses TXT record, create it and add the Domain Verification values from both SES1 and SES2 to it. We are showing how to add the record in Route 53 DNS, but the steps should be similar in any DNS management service you use.

  1. Sign in to the AWS Management Console and open the Amazon Route 53 console.
  2. In the navigation pane, choose Hosted zones.
  3. Choose the domain name you are verifying.
  4. Choose the _amazonses TXT record you created when you verified your domain for SES1.
  5. Under Record details, choose Edit record.
  6. In the Value box, go to the end of the existing attribute value, and then press Enter.
  7. Add the attribute value for the additional account or region.
  8. Choose Save.
  9. To validate, run the following command:
    dig TXT _amazonses.example.com +short
  10. You should see the two values returned:
    "4AjLMzUu4nSjrz4QVqDD8rXq8X2AHr+JhGSl4foiMmU="
    "abcde12345Sjrz4QVqDD8rXq8X2AHr+JhGSl4foiMmU="

Please note:

  1. if your DNS provider does not allow underscores in record names, you can omit _amazonses from the Name.
  2. to help you easily identify this record within your domain’s DNS settings, you can optionally prefix the Value with “amazonses:”.
  3. some DNS providers automatically append the domain name to DNS record names. To avoid duplication of the domain name, you can add a period to the end of the domain name in the DNS record. This indicates that the record name is fully qualified and the DNS provider need not append an additional domain name.
  4. if your DNS server does not support two values for a TXT record, you can have one record named _amazonses.example.com and another one called example.com.

Finally, after some time SES will complete its validation of the domain name and you should see the status change from “pending verification” to “verified”.

Verify DKIM

DomainKeys Identified Mail (DKIM) is a standard that allows senders to sign their email messages with a cryptographic key. Email providers then use these signatures to verify that the messages weren’t modified by a third party while in transit.

An email message that is sent using DKIM includes a DKIM-Signature header field that contains a cryptographically signed representation of the message. A provider that receives the message can use a public key, which is published in the sender’s DNS record, to decode the signature. Email providers then use this information to determine whether messages are authentic.

When you enable DKIM, Amazon SES generates CNAME records that you need to add to your DNS. Because it generates different values for each SES instance, you can use DKIM with multiple accounts and regions.

To complete the DKIM verification, copy the three (3) DKIM Names and Values from SES1 and three (3) from SES2 and add them to your DNS authoritative server as CNAME records.

You will know you are successful because, after some time, SES will complete the DKIM verification and the status will change from “pending verification” to “verified”.
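To spot-check that one of the CNAME records is published, you can query DNS directly; the record name below is the illustrative one from the sample matrix above:

dig CNAME record1-SES1._domainkey.example.com +short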

Configuring for DMARC compliance

Domain-based Message Authentication, Reporting and Conformance (DMARC) is an email authentication protocol that uses Sender Policy Framework (SPF) and/or DomainKeys Identified Mail (DKIM) to detect email spoofing. In order to comply with DMARC, you need to set up a “_dmarc” DNS record and either SPF or DKIM, or both. The DNS record for compliance with DMARC is set up once per domain, but SPF and DKIM require DNS records for each SES instance.

  1. Set up the “_dmarc” record in DNS for your domain; this is done one time per domain. See instructions here
  2. To validate it, run the following command:
    dig TXT _dmarc.example.com +short
    "v=DMARC1;p=quarantine;pct=25;rua=mailto:[email protected]"
  3. For DKIM and SPF follow the instructions below

Custom MAIL FROM Domain and SPF

Sender Policy Framework (SPF) is an email validation standard that’s designed to prevent email spoofing. Domain owners use SPF to tell email providers which servers are allowed to send email from their domains. SPF is defined in RFC 7208.

To comply with Sender Policy Framework (SPF) you will need to use a custom MAIL FROM domain. When you enable a MAIL FROM domain in the SES console, the service generates two records that you need to configure in your DNS to document who is authorized to send messages for your domain. One record is an MX record and the other a TXT record; see the screenshot for mail.example.com. Save these records and enter them in your DNS authoritative server for example.com.

Configure MAIL FROM Domain for SES2

  1. Open the Amazon SES console at https://console.aws.amazon.com/ses/.
  2. In the navigation pane, under Identity Management, choose Domains.
  3. In the list of domains, choose the domain and proceed to the next step.
  4. Under MAIL FROM Domain, choose Set MAIL FROM Domain.
  5. On the Set MAIL FROM Domain window, do the following:
    • For MAIL FROM domain, enter the subdomain that you want to use as the MAIL FROM domain. In our case mail.example.com.
    • For Behavior if MX record not found, choose one of the following options:
      • Use amazonses.com as MAIL FROM – If the custom MAIL FROM domain’s MX record is not set up correctly, Amazon SES will use a subdomain of amazonses.com. The subdomain varies based on the AWS Region in which you use Amazon SES.
      • Reject message – If the custom MAIL FROM domain’s MX record is not set up correctly, Amazon SES will return a MailFromDomainNotVerified error. Emails that you attempt to send from this domain will be automatically rejected.
    • Click Set MAIL FROM Domain.

You will need to complete this step on SES1, as well as SES2. The MAIL FROM records are regional and you will need to add them both to your DNS authoritative server.

Set MAIL FROM records in DNS

From both SES1 and SES2, take the MX and TXT records provided by the MAIL FROM configuration and add them to the DNS authoritative server. If SES1 and SES2 are in the same region (us-east-1 in our example), you will publish exactly one MX record (mail.example.com in our example) in DNS, pointing to the endpoint for that region. If SES1 and SES2 are in different regions, you will create two different records (mail1.example.com and mail2.example.com) in DNS, each pointing to the endpoint for its specific region.

Verify MX record

Example of MX record where SES1 and SES2 are in the same region

dig MX mail.example.com +short
10 feedback-smtp.us-east-1.amazonses.com.

Example of MX records where SES1 and SES2 are in different regions

dig MX mail1.example.com +short
10 feedback-smtp.us-east-1.amazonses.com.

dig MX mail2.example.com +short
10 feedback-smtp.eu-west-1.amazonses.com.
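You can verify the corresponding SPF TXT records in the same way, for example where SES1 and SES2 are in different regions:

dig TXT mail1.example.com +short
"v=spf1 include:amazonses.com ~all"

dig TXT mail2.example.com +short
"v=spf1 include:amazonses.com ~all"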

Verify if it works

On both SES instances (SES1 and SES2), check that validations are complete. In the SES Console:

  • In Verification section, Status should be “verified” (in green color)
  • In DKIM section, DKIM Verification Status should be “verified” (in green color)
  • In MAIL FROM Domain section, MAIL FROM domain status should be “verified” (in green color)

If you have it all verified on both accounts or regions, it is correctly configured and ready to use.

Conclusion

In this post, we explained how to verify and use the same domain with Amazon SES in multiple accounts and regions while maintaining DMARC, DKIM, and SPF compliance and the security features related to email exchange.

While each customer has different needs, Amazon SES is flexible enough to let customers decide, organize, and stay in control of how they want to use Amazon SES to send email.

Author bio

Leonardo Azize Martins is a Cloud Infrastructure Architect at Professional Services for Public Sector.

His background is in development and infrastructure for web applications, working at large enterprises.

When not working, Leonardo enjoys spending time with family, reading technical content, watching movies and series, and playing with his daughter.

Contributor

Daniel Tet is a senior solutions architect at AWS specializing in Low-Code and No-Code solutions. For over twenty years, he has worked on projects for Franklin Templeton, Blackrock, Stanford Children’s Hospital, Napster, and Twitter. He has a Bachelor of Science in Computer Science and an MBA. He is passionate about making technology easy for common people; he enjoys camping and adventures in nature.