Tag Archives: AWS

Sandcastle – AWS S3 Bucket Enumeration Tool

Post Syndicated from Darknet original https://www.darknet.org.uk/2020/03/sandcastle-aws-s3-bucket-enumeration-tool/?utm_source=rss&utm_medium=social&utm_campaign=darknetfeed

Sandcastle – AWS S3 Bucket Enumeration Tool

Sandcastle is a Python-based Amazon AWS S3 Bucket Enumeration Tool, formerly known as bucketCrawler. The script takes a target’s name as the stem argument (e.g. shopify) and iterates through a file of bucket name permutations.
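The core idea is simple enough to sketch. The following Python snippet is not Sandcastle's actual code – just a minimal illustration of the same approach, with a made-up list of permutation patterns – that combines the target name with each pattern and checks whether the resulting bucket exists by issuing an unauthenticated HEAD request:

import requests  # assumes the third-party 'requests' library is installed

target = "shopify"  # the stem argument
patterns = ["{t}", "{t}-backup", "{t}-dev", "{t}-assets"]  # normally read from a permutations file

for pattern in patterns:
    bucket = pattern.format(t=target)
    url = f"http://{bucket}.s3.amazonaws.com"
    status = requests.head(url, timeout=5).status_code
    # 404: bucket does not exist; 403: bucket exists but access is denied; 200: bucket is listable
    if status != 404:
        print(f"[+] {bucket}: HTTP {status}")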

Amazon S3 [Simple Storage Service] is cloud storage for the Internet. To upload your data (photos, videos, documents etc.), you first create a bucket in one of the AWS Regions. You can then upload any number of objects to the bucket.


Retrying Undelivered Voice Messages with Amazon Pinpoint

Post Syndicated from Heidi Gloudemans original https://aws.amazon.com/blogs/messaging-and-targeting/retrying-undelivered-voice-messages-with-amazon-pinpoint/

Note: This post was written by Murat Balkan, an AWS Senior Solutions Architect.


Many of our customers use voice notifications to deliver mission-critical and time-sensitive messages to their users. Customers often configure their systems to retry delivery when these voice messages aren’t delivered the first time around. Other customers set up their systems to fall back to another channel in this situation.

This blog post shows you how to retry the delivery of a voice message if the initial attempt wasn’t successful.

Architecture

By completing the steps in this post, you can create a system that uses the architecture illustrated in the following image:

 

First, a Lambda function calls the SendMessage operation in the Amazon Pinpoint API. The SendMessage operation then initiates a phone call to the recipient and generates a unique message ID, which is returned to the Lambda function. The Lambda function then adds this message ID to a DynamoDB table.
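As a rough sketch of what the CallGenerator function might look like (this is not the code from the GitHub repository; the environment variable names and the DynamoDB key schema are assumptions, and the LanguageCode and VoiceId values are arbitrary), using the Amazon Pinpoint SMS and Voice API via boto3:

import os
import boto3

voice = boto3.client("pinpoint-sms-voice")
table = boto3.resource("dynamodb").Table("call_attempts")  # table name described later in this post

def handler(event, context):
    # 'event' has the same shape as the sample test payload shown later in this post
    response = voice.send_voice_message(
        Content={"SSMLMessage": {"Text": event["Message"],
                                 "LanguageCode": "en-US",
                                 "VoiceId": "Joanna"}},
        DestinationPhoneNumber=event["PhoneNumber"],
        OriginationPhoneNumber=os.environ["LONG_CODE"],        # assumed variable name
        ConfigurationSetName=os.environ["CONFIG_SET_NAME"],    # configuration set mentioned below
    )
    message_id = response["MessageId"]
    # Record the attempt so that the retry function can look it up later
    table.put_item(Item={
        "message_id": message_id,
        "phone_number": event["PhoneNumber"],
        "message": event["Message"],
        "retry_count": event.get("RetryCount", 0),
    })
    return message_id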

While Amazon Pinpoint attempts to deliver a message to a recipient, it emits several event records. These records indicate when the call is initiated, when the phone is ringing, when the call is answered, and so forth. Amazon Pinpoint publishes these events to an Amazon SNS topic. In this example, we’re only interested in the BUSY, FAILED, and NO_ANSWER event types, so we add some filtering criteria.

An Amazon SQS queue then subscribes to the Amazon SNS topic and receives the incoming events. A Delivery Delay is configured at the queue level, so each incoming message is held for a fixed period before it becomes visible to consumers. This configuration provides a back-off retry mechanism for failed voice messages.

When the delivery delay elapses and the message becomes visible, another Lambda function polls the queue and extracts the MessageId attribute from the polled message. It uses this attribute to locate the DynamoDB record for the original call. This record also tells us how many times Amazon Pinpoint has attempted to deliver the message.

The Lambda function compares the number of retries to a MAX_RETRY environment variable to determine whether it should attempt to send the message again.
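A minimal sketch of the retry decision in the RetryCallGenerator function might look like the following (again, not the repository code: the field names inside the Pinpoint event, the environment variable names, and the table key schema are assumptions; in the deployed solution the event arrives wrapped in an SNS notification inside the SQS message body):

import json
import os
import boto3

table = boto3.resource("dynamodb").Table("call_attempts")
lambda_client = boto3.client("lambda")

MAX_RETRY = int(os.environ["MAX_RETRY"])

def handler(event, context):
    for record in event["Records"]:                    # one record per SQS message
        sns_envelope = json.loads(record["body"])      # the SQS body carries the SNS notification
        pinpoint_event = json.loads(sns_envelope["Message"])
        message_id = pinpoint_event["message_id"]      # field name is an assumption
        item = table.get_item(Key={"message_id": message_id}).get("Item")
        if not item:
            continue
        retries = int(item["retry_count"])
        if retries < MAX_RETRY:
            # Re-invoke the CallGenerator function with an incremented retry counter
            lambda_client.invoke(
                FunctionName=os.environ["CALL_GENERATOR_NAME"],   # assumed variable name
                InvocationType="Event",
                Payload=json.dumps({
                    "Message": item["message"],
                    "PhoneNumber": item["phone_number"],
                    "RetryCount": retries + 1,
                }),
            )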

Prerequisites

To start sending transactional voice messages, create an Amazon Pinpoint project and then request a phone number. Next, clone this GitHub repository to your local machine.

After you add a long code to your account, use AWS SAM to deploy the remaining parts of this serverless architecture. You provide this long code as an input parameter to the template.

The AWS SAM template creates the following resources:

  • A Lambda function (CallGenerator) that initiates a voice call using the Amazon Pinpoint API.
  • An Amazon SNS topic that collects state change events from Amazon Pinpoint.
  • An Amazon SQS queue that queues the messages.
  • A Lambda function (RetryCallGenerator) that polls the Amazon SQS queue and re-initiates the previously failed call attempt by calling the CallGenerator function.
  • A DynamoDB table that contains information about previous attempts to deliver the call.

The template also defines a custom Lambda resource, CustomResource, which creates a configuration set in Amazon Pinpoint. This configuration set specifies the events to send to Amazon SNS. A Lambda environment variable, CONFIG_SET_NAME, contains the name of the configuration set.

This architecture consists of two Lambda functions, which are represented as two different apps in the AWS SAM template. These functions are named CallGenerator and RetryCallGenerator. The CallGenerator function initiates the voice message with Amazon Pinpoint. The SendMessage API in Amazon Pinpoint returns a MessageId. The architecture uses this ID as a key to connect messages to the various events that they generate. The CallGenerator function also retains this ID in a DynamoDB table called call_attempts. The RetryCallGenerator function looks up the MessageId in the call_attempts table. If necessary, the function tries to send the message again by invoking the CallGenerator function.

 

Deploying and Testing

Start by downloading the template from the GitHub repository. AWS SAM requires you to specify an Amazon Simple Storage Service (Amazon S3) bucket to hold the deployment artifacts. If you haven’t already created a bucket for this purpose, create one now. The bucket should be accessible to an AWS Identity and Access Management (IAM) user.

At the command line, enter the following command to package the application:

sam package --template template.yaml --output-template-file output_template.yaml --s3-bucket BUCKET_NAME_HERE

In the preceding command, replace BUCKET_NAME_HERE with the name of the Amazon S3 bucket that should hold the deployment artifacts.

AWS SAM packages the application and copies it into the Amazon S3 bucket. This AWS SAM template requires you to specify three parameters: longCode, the phone number that’s used to make the outbound calls; maxRetry, which is used to set the MAX_RETRY environment variable for the RetryCallGenerator application; and retryDelaySeconds, which sets the delivery delay time for the Amazon SQS queue.

When the AWS SAM package command finishes running, enter the following command to deploy the package:

sam deploy --template-file output_template.yaml --stack-name blogstack --capabilities CAPABILITY_IAM --parameter-overrides maxRetry=2 longCode=LONG_CODE retryDelaySeconds=60

In the preceding command, replace LONG_CODE with the dedicated phone number that you acquired earlier.

When you run this command, AWS SAM shows the progress of the deployment. When the deployment finishes, you can test it by sending a sample event to the CallGenerator Lambda function. Use the following sample event to test the Lambda function:

{
  "Message": "<speak>Thank you for visiting the AWS <emphasis>Messaging and Targeting Blog</emphasis>.</speak>",
  "PhoneNumber": "DESTINATION_PHONE_NUMBER",
  "RetryCount": 0
}

In the preceding event, replace DESTINATION_PHONE_NUMBER with the phone number to which you want to send a test message.
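If you prefer to send the test event from a script rather than from the Lambda console, a short boto3 call like the following works. The function name shown here is a placeholder – AWS SAM prefixes generated function names with the stack name, so check the actual name in the Lambda console or the stack outputs first.

import json
import boto3

lambda_client = boto3.client("lambda")

event = {
    "Message": "<speak>Thank you for visiting the AWS <emphasis>Messaging and Targeting Blog</emphasis>.</speak>",
    "PhoneNumber": "DESTINATION_PHONE_NUMBER",  # replace with a real destination number
    "RetryCount": 0,
}

response = lambda_client.invoke(
    FunctionName="blogstack-CallGenerator",     # placeholder - use the deployed function's name
    Payload=json.dumps(event),
)
print(response["StatusCode"])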

Important: Telecommunication providers take several steps to limit unsolicited voice messages. For example, providers in the United States only deliver a certain number of automated voice messages to each recipient per day. For this reason, you can only use Amazon Pinpoint to send 10 calls per day to each recipient. Keep this limit in mind during the testing process.

Conclusion

This architecture shows how Amazon Pinpoint can deliver state change events to Amazon SNS and how a serverless application can consume those events. You can adapt this architecture to other use cases, such as call auditing and advanced call analytics.

An AWS Elasticsearch Post-Mortem

Post Syndicated from Bozho original https://techblog.bozho.net/aws-elasticsearch-post-mortem/

So it happened that we had a production issue on the SaaS version of LogSentinel – our Elasticsearch stopped indexing new data. There was no data loss, as Elasticsearch is just secondary storage, but it caused some issues for our customers (they could not see the real-time data on their dashboards). Below is a post-mortem analysis – what happened, why it happened, how we handled it and how we can prevent it.

Let me start with a background of how the system operates – we accept audit trail entries (logs) through a RESTful API (or syslog), and push them to a Kafka topic. Then the Kafka topic is consumed to store the data in the primary storage (Cassandra) and index it for better visualization and analysis in Elasticsearch. The managed AWS Elasticsearch service was chosen because it saves you all the overhead of cluster management, and as a startup we want to minimize our infrastructure management efforts. That’s a blessing and a curse, as we’ll see below.

We have alerting enabled on many elements, including the Elasticsearch storage space and the number of application errors in the log files. This allows us to respond quickly to issues. So the “high number of application errors” alarm triggered. Indexing was blocked due to FORBIDDEN/8/index write. We have a system call that enables it, so I tried to run it, but after less than a minute it was blocked again. This meant that our Kafka consumers failed to process the messages, which is fine, as we have a sufficient message retention period in Kafka, so no data can be lost.

I investigated the possible reasons for such a block. And there are two, according to Amazon – increased JVM memory pressure and low disk space. I checked the metrics and everything looked okay – JVM memory pressure was barely reaching 70% (and 75% is the threshold), and there was more than 200 GiB of free storage. There was only one WARN in the Elasticsearch application logs (it was “node failure”, but after that there were no issues reported).

There was another strange aspect of the issue – there were twice as many nodes as configured. This usually happens during upgrades, as AWS is using blue/green deployment for Elasticsearch, but we haven’t done any upgrade recently. These additional nodes usually go away after a short period of time (after the redeployment/upgrade is ready), but they wouldn’t go away in this case.

Being unable to SSH into the actual machine, unable to unblock the indexing through Elasticsearch means, and unable to shut down or restart the nodes, I raised a ticket with support. And after a few hours and a few exchanged messages, the problem was clear and resolved.

The main reason for the issue is twofold. First, we had a configuration that didn’t reflect the cluster status – we had assumed a few more nodes, and our shard and replica configuration meant we had unassigned replicas (more on shards and replicas here and here). The best practice is to have nodes > number of replicas, so that each node gets one replica (plus the main shard). Having unassigned shard replicas is not bad per se, and there are legitimate cases for it. Ours can probably be seen as a misconfiguration, but not one with immediate negative effects. We chose those settings in part because it’s not possible to change some settings in AWS after a cluster is created. And opening and closing indexes is not supported.

The second issue is the AWS Elasticsearch logic for calculating free storage in the circuit breaker that blocks indexing. So even though there was 200+ GiB of free space on each of the existing nodes, AWS Elasticsearch thought we were out of space and blocked indexing. There was no way for us to see that, as we only see the available storage, not what AWS thinks is available. The calculation takes the total number of shards plus replicas and multiplies it by the per-shard storage, which means that unassigned replicas, which take up no actual space, are counted as if they did. That logic is counterintuitive (if not plain wrong), and there is hardly a way to predict it.
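To make the effect concrete, here is a small back-of-the-envelope calculation with invented numbers (not our actual cluster configuration) showing how counting unassigned replicas inflates the estimated usage:

# Illustrative numbers only - not our actual cluster configuration
nodes = 3
primary_shards = 6
replicas_per_shard = 3          # more replica copies than the 3 nodes can hold
per_shard_storage_gib = 40

total_copies = primary_shards * (1 + replicas_per_shard)                      # 24 shard copies in total
assigned_copies = primary_shards * (1 + min(replicas_per_shard, nodes - 1))   # only 18 can be assigned

actual_usage = assigned_copies * per_shard_storage_gib       # 720 GiB really on disk
estimated_usage = total_copies * per_shard_storage_gib       # 960 GiB as the check appears to count it

print(actual_usage, estimated_usage)   # the 240 GiB gap is "phantom" usage from unassigned replicas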

This logic appears to be triggered only when a blue/green deployment occurs – in normal operation the actual remaining storage space is checked, but during upgrades the shard-based check kicks in. That is what blocked the entire cluster. But what triggered the blue/green deployment process?

We occasionally need access to Kibana, and because of our strict security rules it is not accessible to anyone by default. So we temporarily change the access policy to allow access from our office IP(s). This change is not expected to trigger a new deployment, and had never led to one before. The AWS documentation, however, states:

In most cases, the following operations do not cause blue/green deployments: changing the access policy, changing the automated snapshot hour, and (if your domain has dedicated master nodes) changing the data instance count.
There are some exceptions. For example, if you haven’t reconfigured your domain since the launch of three Availability Zone support, Amazon ES might perform a one-time blue/green deployment to redistribute your dedicated master nodes across Availability Zones.

There are other exceptions, apparently, and one of them happened to us. That led to the blue/green deployment, which in turn, because of our flawed configuration, triggered the index block based on the odd logic that counts unassigned replicas as taking up storage space.

How we fixed it – we recreated the index with fewer replicas and started a reindex (it takes data from the primary source and indexes it in batches). That reduced the calculated size, and AWS manually intervened to “unstick” the blue/green deployment. Once the problem was known, the fix was easy (and we had to recreate the index anyway due to other index configuration changes). It’s appropriate to (once again) say how good AWS support is, in both fixing the issue and communicating it.

As I said in the beginning, this did not mean there was data loss, because we have Kafka keep the messages for a sufficient amount of time. However, once the index was writable, we expected the consumer to continue from the last successful message – we had specifically written transactional behaviour that committed the offsets only after successfully storing the data in the primary storage and successfully indexing it. Unfortunately, the Kafka client we were using had auto-commit turned on, which we had overlooked. So the consumer skipped past the failed messages. They are still in Kafka and we are processing them with a separate tool, but that showed us that our assumption was wrong and that the fact that the code calls “commit” doesn’t necessarily mean anything.
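For illustration, here is a minimal consumer sketch in Python (kafka-python) of the behaviour we intended – auto-commit explicitly disabled so that calling commit() is what actually advances the offsets. Our real consumers look different; the topic and group names and the processing functions here are placeholders.

from kafka import KafkaConsumer   # pip install kafka-python

def store_in_cassandra(value):
    print("stored", value)        # placeholder for the real primary-storage write

def index_in_elasticsearch(value):
    print("indexed", value)       # placeholder for the real Elasticsearch indexing

consumer = KafkaConsumer(
    "audit-trail",                          # placeholder topic name
    bootstrap_servers="localhost:9092",
    group_id="indexer",                     # placeholder group id
    enable_auto_commit=False,               # the setting we had overlooked
)

for message in consumer:
    store_in_cassandra(message.value)
    index_in_elasticsearch(message.value)
    # Only after both writes succeed do we move the offset forward. With
    # enable_auto_commit=True (the default), offsets advance on a timer
    # regardless of whether this line is ever reached.
    consumer.commit()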

So, the morals of the story:

  • Monitor everything. Bad things happen, it’s good to learn about them quickly.
  • Check your production configuration and make sure it’s adequate to the current needs. Be it replicas, JVM sizes, disk space, number of retries, auto-scaling rules, etc.
  • Be careful with managed cloud services. They save a lot of effort but also take control away from you. And they may have issues for which your only choice is contacting support.
  • If providing managed services, make sure you show enough information about potential edge cases – an error console, an activity console, or something similar that lets the customer know what is happening.
  • Validate your assumptions about default settings of your libraries. (Ideally, libraries should warn you if you are doing something not expected in the current state of configuration)
  • Make sure your application is fault-tolerant, i.e. that failure in one component doesn’t stop the world and doesn’t lead to data loss.

To sum it up, a rare event unexpectedly triggered a blue/green deployment, where a combination of flawed configuration and flawed free space calculation resulted in an unwritable cluster. Fortunately, no data is lost and at least I learned something.

The post An AWS Elasticsearch Post-Mortem appeared first on Bozho's tech blog.

Running a Safe Database Cluster in AWS With Auto-Scaling Groups

Post Syndicated from Bozho original https://techblog.bozho.net/running-a-safe-database-cluster-in-aws-with-auto-scaling-groups/

When you have to run a scalable application on AWS, your database must also be scalable. It’s easier to scale the stateless application layer, where each node is mostly disposable – even if a node in a 3-node cluster fails, you can just fire up another one and nobody notices.

The database layer is stateful, and therefore there’s a risk of losing data. Having just a single node is not an option, as a node can always go down and that would mean downtime. So you need multiple nodes in a cluster to make sure your application is highly available and fault tolerant (I won’t go into the differences in terminology).

What database am I talking about? It doesn’t matter. It can be a SQL or a NoSQL database – each has some form of clustering available, whether active-active or active-passive.

Now, for AWS in particular, you can choose RDS (or another managed option), which will handle it for you. But if there’s no managed option (e.g. Cassandra), or you don’t feel the managed option gives you enough control, or it’s more expensive, or the version you require is not available, you have to manage the database layer yourself. I won’t go into the details of how to configure the database-specific clustering – you should check the documentation of the particular database for that. I’ll try to give some tips on how to safely run the infrastructure that supports the database cluster.

And here come auto-scaling groups. They allow you to have a group of identical nodes (based on a launch configuration), and the ASG makes sure you always have at least X healthy nodes by starting new ones if existing ones fail. It can also automatically kill unhealthy nodes (that is, nodes that do not respond to automated health checks).

That’s awesome for application nodes, but it can be an issue with database nodes. You don’t necessarily want your database node killed if it gets unresponsive for a while. That’s why below I’ve compiled a list of tips to avoid pitfalls, followed by a short scripted sketch of some of the settings. Unfortunately, many of them are not available through CloudFormation, so you have to do them manually. And document them so that you don’t forget in case you need to recreate the stack:

  • Set the minimum number of nodes to 1. It protects from accidentally setting the “Desired” count to 0 as part of experimenting with other, unrelated ASGs
  • Make sure you have enabled termination protection for each instance as well as scale-in termination protection per ASG.
  • In the ASG settings there’s “Suspended Processes”. Make sure you suspend “Terminate” and “ReplaceUnhealthy”.
  • Make sure that in your LaunchConfiguration the EBS volume is not deleted in case of termination. Why do you need that, given that you have disabled all termination options? Well, termination can occasionally happen due to issues with the underlying host, or a node may be scheduled for decommission
  • If at some point you need to restore from an EBS volume: 1) let the ASG spawn a new node; 2) temporarily add “Launch” to the suspended processes; 3) detach the root volume of the node; 4) attach the old EBS volume as /dev/xvda; 5) start the node.
  • Setup a lifecycle policy (through CloudFormation or manually) to do a backup on the database EBS volumes. Make sure you set the proper tags to the volumes (and this can only be done manually)
  • Make sure the ASG can spawn instances in multiple availability zones (in case one goes down)
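Here is the scripted sketch mentioned above – a few boto3 calls that cover the suspended processes, scale-in protection and API termination protection items from the list (the ASG name is a placeholder, and the same settings can of course be applied in the console):

import boto3

asg_name = "database-cluster-asg"   # placeholder name
autoscaling = boto3.client("autoscaling")
ec2 = boto3.client("ec2")

# Keep the ASG from terminating or "healing" database nodes on its own
autoscaling.suspend_processes(
    AutoScalingGroupName=asg_name,
    ScalingProcesses=["Terminate", "ReplaceUnhealthy"],
)

# Protect the existing instances from scale-in events
group = autoscaling.describe_auto_scaling_groups(
    AutoScalingGroupNames=[asg_name]
)["AutoScalingGroups"][0]
instance_ids = [i["InstanceId"] for i in group["Instances"]]

autoscaling.set_instance_protection(
    AutoScalingGroupName=asg_name,
    InstanceIds=instance_ids,
    ProtectedFromScaleIn=True,
)

# API/console termination protection is a per-instance EC2 setting
for instance_id in instance_ids:
    ec2.modify_instance_attribute(
        InstanceId=instance_id,
        DisableApiTermination={"Value": True},
    )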

If you follow these tips, your auto-scaling groups will not behave exactly like auto-scaling groups. You can still configure them to automatically increase the number of nodes under increased load, but the rest of the features are rarely a good idea for database layers – you’d prefer to resolve your database issues on existing machines, even if they are stopped temporarily, rather than just spawn new ones.

But you should embrace failure. Even with all termination protections, you have to assume everything may fail and die and you should have a clear path to restoring your nodes.

The post Running a Safe Database Cluster in AWS With Auto-Scaling Groups appeared first on Bozho's tech blog.

Creating a Seamless Handoff Between Amazon Pinpoint and Amazon Connect

Post Syndicated from Brent Meyer original https://aws.amazon.com/blogs/messaging-and-targeting/creating-a-seamless-handoff-between-amazon-pinpoint-and-amazon-connect/

Note: This post was written by Ilya Pupko, Senior Consultant for the AWS Digital User Engagement team.


Time to read: 5 minutes
Learning level: Intermediate (200)
Services used: Amazon Pinpoint, Amazon SNS, AWS Lambda, Amazon Lex, Amazon Connect

Your customers deserve to have helpful communications with your brand, regardless of the channel that you use to interact with them. There are many situations in which you might have to move customers from one channel to another—for example, when a customer is interacting with a chatbot over SMS, but their needs suddenly change to require voice assistance. To create a great customer experience, your communications with your customers should be seamless across all communication channels.

Welcome aboard Customer Obsessed Airlines

In this post, we look at a scenario that involves our fictitious airline, Customer Obsessed Airlines. Severe storms in one area of the country have caused Customer Obsessed Airlines to cancel a large number of flights. Customer Obsessed Airlines has to notify all of the affected customers of the cancellations right away. But most importantly, to keep customers as happy as possible in this unfortunate and unavoidable situation, Customer Obsessed Airlines has to make it easy for customers to rebook their flights.

Fortunately, Customer Obsessed Airlines has implemented the solution that’s outlined later in this post. This solution uses Amazon Pinpoint to send messages to a targeted segment of customers—in this case, the specific customers who were booked on the affected flights. Some of these customers might have straightforward travel itineraries that can simply be rebooked through interactions with a chatbot. Other customers who have more complex itineraries, or those who simply prefer to interact with a human over the phone, can be handed off to an agent in your call center.

About the solution

The solution that we’ll build to handle this scenario can be deployed in under an hour. The following diagram illustrates the interactions in this solution.

At a high level, this solution uses the following workflow:

  1. An event occurs. Automated impact analysis systems trigger the creation of custom segments—in this case, all passengers whose flights were cancelled.
  2. Amazon Pinpoint sends a message to the affected passengers through their preferred channels. Amazon Pinpoint supports the email, SMS, push, and voice channels, but in this example, we focus exclusively on SMS.
  3. Passengers who receive the message can respond. When they do, they interact with a chatbot that helps them book a different flight.
  4. If a passenger requests a live agent, or if their situation can’t be handled by a chatbot, then Amazon Pinpoint passes information about the customer’s situation and communication history to Amazon Connect. The passenger is entered into a queue. When the passenger reaches the front of the queue, they receive a phone call from an agent.
  5. After being re-booked, the passenger receives a written confirmation of the changes to their itinerary through their preferred channel. Passengers are also given the option of providing feedback on their interaction when the process is complete.

To build this solution, we use Amazon Pinpoint to segment our customers based on their attributes (such as which flight they’ve booked), and to deliver messages to those segments.

We also use Amazon Connect to manage the voice calling part of the solution, and Amazon Lex to power the chatbot. Finally, we connect these services using logic that’s defined in AWS Lambda functions.

Setting up the solution

Step 1: Set up Amazon Pinpoint and link it with Amazon Lex

The first step in setting up this solution is to create a new Amazon Pinpoint project and configure the SMS channel. When that’s done, you can create an Amazon Lex chatbot and link it to the Amazon Pinpoint project.

We described this process in detail in an earlier blog post. Complete the procedures in Create an SMS Chatbot with Amazon Pinpoint and Amazon Lex, and then proceed to step 2.

Step 2: Set up Amazon Connect and link it with your Amazon Lex chatbot

By completing step 1, we’ve created a system that can send messages to our passengers and receive messages from them. The next step is to create a way for passengers to communicate with our call center.

The Amazon Connect Administrator Guide provides instructions for linking an Amazon Lex bot to an Amazon Connect instance. For complete procedures, see Add an Amazon Lex Bot.

When you complete these procedures, link your Amazon Connect instance to the same Amazon Lex bot that you created in step 1. This step is intended to provide customers with a consistent, cohesive experience across channels.

Step 3: Set up an Amazon Connect callback queue and use Amazon Pinpoint keyword logic to trigger it

Now that we’ve configured Amazon Pinpoint and Amazon Connect, we can connect them.

Linking the two services makes it possible for passengers to request additional assistance. Traditionally, passengers in this situation would have to call a call center themselves and then wait on hold for an agent to become available. However, in this solution, our call center calls the passenger directly as soon as an agent is available. When the agent calls the passenger, the agent has all of the information about the passenger’s issue, as well as a transcript of the passenger’s interactions with your chatbot.

To implement an automatic callback mechanism, use the Amazon Pinpoint Connect Callback Requestor, which is available on the AWS GitHub page.

Next steps

By completing the preceding three steps, you can send messages to a subset of your users based on the criteria you choose and the type of message you want to send. Your customers can interact with your message by replying with questions. When they do, a chatbot responds intelligently and appropriately.

You can add to this solution by expanding it to cover other communication channels, such as push notifications. You can also automate the initial communication by integrating the solution with your systems of record.

We’re excited to see what you build using the solution that we outlined in this post. Let us know of your ideas and your successes in the comments.

The value of using an Email Service Provider (ESP) for end customer communications

Post Syndicated from Heidi Gloudemans original https://aws.amazon.com/blogs/messaging-and-targeting/the-value-of-using-an-email-service-provider/

In the world of pervasive consumer chat and messaging applications, email has retained its place as a ubiquitous channel for end customer communications. The Radicati Group reports that email usage will continue to grow from an estimate of 3.9 billion users in 2019 to 4.4 billion in 2023.[1] Email is a welcome (and preferred) conduit for many consumers to stay connected to their favorite brands.[2] Marketers and developers have found that email has the highest ROI of any end customer communications channel because of its low barrier to entry, affordability and ability to target specific recipients.

However, the very strength of email may also be its biggest weakness. The ubiquity of email as a communication channel means that many consumer mailboxes are inundated with potentially malicious or unwanted messages. Brands must then take action to ensure that their marketing and transactional messages are received by the end customer in a timely manner. While personalized content is one of the fastest-growing ways to facilitate deeper customer engagement, organizations should first understand the value of partnering with a mature and trusted email service provider (ESP).

Many organizations have built internal competencies around business email. Some businesses use an on-premises email server instance or subscribe to a managed email cloud offering (like Amazon WorkMail), while others use email servers on cloud-hosted infrastructure like Amazon EC2. While the fundamentals of email transport are shared between business and marketing email platforms, marketing and transactional email has unique parameters that require additional consideration. ESPs have built specialized expertise in delivering email at scale.

Security and Scale

ESPs have the ability to send email on behalf of an organization’s domain or subdomains. ESPs facilitate secure mail delivery through both DomainKeys Identified Mail (DKIM) records and Sender Policy Framework (SPF) records. SPF lets the receiving server validate that the IP address of the sending mail server is on the domain’s list of approved senders. The second authentication protocol, DKIM, adds a cryptographic signature to each message, using a public/private key pair, so the receiver can verify that the purported sending domain is the actual domain that sent the message. The combination of both security methods is the first step in authenticating a message from an ESP to the mailbox provider or recipient domain. An additional, optional authentication and reporting mechanism is Domain-based Message Authentication, Reporting and Conformance (DMARC), which can report back any attempts by non-authenticated parties to spoof sending domains.
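As a rough illustration (example.com, the selector, the included sender list and the report address are placeholders; your ESP provides the actual values to publish), the DNS TXT records behind these three mechanisms look roughly like this:

# Illustrative DNS TXT records only - placeholder values, not real ones
dns_txt_records = {
    # SPF: lists the servers allowed to send mail for the domain
    "example.com": "v=spf1 include:amazonses.com ~all",
    # DKIM: a public key published under a selector; the ESP signs each message with the private key
    "selector1._domainkey.example.com": "v=DKIM1; k=rsa; p=MIGfMA0GCSq...",  # key truncated
    # DMARC: tells receivers what to do with mail that fails SPF/DKIM alignment and where to send reports
    "_dmarc.example.com": "v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@example.com",
}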

In addition to security, the best ESPs can also deliver mail at a very large scale. Scale is not just the compute power to deliver hundreds of millions of emails a day. It also includes the ability to sustain significant sending volumes over time while processing the delivery feedback coming back from receiving endpoints. Beyond infrastructure, there are other insights that ESPs can provide around the deliverability of a message.

Deliverability

ESPs typically give mail senders the ability to manage the reputation assigned to them by major internet service providers (ISPs) and mailbox providers such as Gmail and Outlook.com. Reputation is assigned to sending IP addresses and domains based on a combination of influencing factors. Evaluation criteria include, but are not limited to, the following:

  • Type of message and content – Marketing vs. Transactional
  • Valid email addresses and hard bounce rates
  • Feedback from end users (i.e. spam complaints and unsubscribes)
  • Open rates and engagement from end users
  • Time of delivery and cadence of delivery
  • Major blacklist listings, like Spamhaus

Reputation impacts deliverability, which impacts whether a message ends up in an end customer’s mailbox. Transactional messages and marketing messages are typically treated differently by ISPs. Transactional messages (like order confirmations, password resets, etc.) need to be sent and received within a short period of time after a transaction and have less strict rules when it comes to content and scheduling. Marketing messages must adhere to specific regulatory rules per country.

A poor deliverability designation may cause your messages to be throttled, routed to the junk folder, or blocked altogether before they reach their destination. Top ESPs monitor deliverability and provide feedback, ensuring mail senders have the ability to take action. However, ESPs also have a responsibility to keep bad actors off their platforms, and they will block bad senders who try to use them as an ESP resource.

Most ESPs give mail senders the flexibility to manage their sender reputation through both deployment options and delivery statistics. Common deployment options include shared and dedicated IP addresses and pools. Shared IP pools comprise many individual senders all using the same IP space. The IP reputation is shared across the pool and can both positively and negatively influence senders in the pool, based on the majority of traffic. Dedicated IPs are managed directly by a single customer, and can even be dedicated to specific functions. For example, a set of IPs can be dedicated to transactional email, separate from pure marketing communications. Many customers prefer dedicated environments because of the ability to directly influence deliverability and to prevent external influence on their sending reputation.

Statistics that may prove to be leading indicators of deliverability problems include:

  • Mailbox placement (Inbox vs. Junk folder)
  • Detailed email deliverability statistics (bounce rates, etc.)
  • Blacklist monitoring

However, the most important factor impacting mailbox placement is having explicit permission from the end user, with agreed upon content and frequency. End users should know when they are opting into email communications with senders. It is not recommended to use purchased or rented lists, and doing so is typically in violation of the use policy of most ESPs. To stay compliant, it is important to keep your contact preferences up to date and quickly react to unsubscribes and complaints.

Email Analytics

In addition to measuring your sender reputation, metrics are also an important part of measuring the success of your email campaign. Part of calculating the ROI of your outbound email effort is understanding how many emails were opened and whether the end user completed any calls to action (CTAs) in the mail, like clicking a link.

Email analytics from your ESP should include the end-to-end lifecycle of the mail, including:

  • Deliverability rates
  • Delivery rates per ISP
  • Unique open rates
  • Unique click-through rates
  • Unsubscribes
  • Complaints

Ultimately, in combination with web commerce integration, these metrics can help you determine conversion rates of your campaign to a paying customer.

Conclusion

Connecting to end customers and driving engagement is the goal of every brand. But to make each connection count, brands must ensure that their messages are being delivered through purpose-built infrastructure. Email Service Providers (ESPs) should be part of every organization’s successful email marketing strategy.

Amazon SES has been the trustworthy, flexible and affordable email service provider for developers and marketers since 2012. Learn more here: https://aws.amazon.com/ses/

[1] https://www.statista.com/statistics/255080/number-of-e-mail-users-worldwide/

[2] https://www.statista.com/statistics/984615/consumer-brand-communications-channels/

 

About the Author
Heidi Gloudemans is a Senior PMM on Amazon SES and Amazon Pinpoint

Netflix at AWS re:Invent 2019

Post Syndicated from Netflix Technology Blog original https://medium.com/netflix-techblog/netflix-at-aws-re-invent-2019-e09bfc144831?source=rss----2615bd06b42e---4

by Shefali Vyas Dalal

AWS re:Invent is a couple of weeks away and our engineers & leaders are thrilled to be in attendance yet again this year! Please stop by our “Living Room” for an opportunity to connect or reconnect with Netflixers. We’ve compiled our speaking events below so you know what we’ve been working on. We look forward to seeing you there!

Monday — December 2

1pm-2pm CMP 326-R Capacity Management Made Easy with Amazon EC2 Auto Scaling

Vadim Filanovsky, Senior Performance Engineer & Anoop Kapoor, AWS

Abstract: Amazon EC2 Auto Scaling offers a hands-free capacity management experience to help customers maintain a healthy fleet, improve application availability, and reduce costs. In this session, we deep-dive into how Amazon EC2 Auto Scaling works to simplify continuous fleet management and automatic scaling with changing load. Netflix delivers shows like Sacred Games, Stranger Things, Money Heist, and many more to more than 150 million subscribers across 190+ countries around the world. Netflix shares how Amazon EC2 Auto Scaling allows its infrastructure to automatically adapt to changing traffic patterns in order to keep its audience entertained and its costs on target.

4:45pm-5:45pm NFX 202 A day in the life of a Netflix Engineer

Dave Hahn, SRE Engineering Manager

Abstract: Netflix is a large, ever-changing ecosystem serving millions of customers across the globe through cloud-based systems and a globally distributed CDN. This entertaining romp through the tech stack serves as an introduction to how we think about and design systems, the Netflix approach to operational challenges, and how other organizations can apply our thought processes and technologies. In this session, we discuss the technologies used to run a global streaming company, growing at scale, billions of metrics, benefits of chaos in production, and how culture affects your velocity and uptime.

4:45pm-5:45pm NFX 209 File system as a service at Netflix

Kishore Kasi, Senior Software Engineer

Abstract: As Netflix grows in original content creation, its need for storage is also increasing at a rapid pace. Technology advancements in content creation and consumption have also increased its data footprint. To sustain this data growth at Netflix, it has deployed open-source software Ceph using AWS services to achieve the required SLOs of some of the post-production workflows. In this talk, we share how Netflix deploys systems to meet its demands, Ceph’s design for high availability, and results from our benchmarking.

Tuesday — December 3

11:30am-12:30pm NFX 208 Netflix’s container journey to bare metal Amazon EC2

Andrew Spyker, Compute Platform Engineering Manager

Abstract: In 2015, Netflix started supporting containers as part of their compute platform. Over the years, this platform took on support for both elastic online services and fully featured batch workloads supporting use cases across Netflix engineering. It launches more than four million containers per week across thousands of underlying hosts. The release of Amazon EC2 bare metal instances gave direct access to host processors and memory while providing a control plane for these container hosts. In 2019, Netflix moved thousands of container hosts to bare metal. This talk explores the journey, learnings, and improvements to performance analysis, efficiency, reliability, and security.

5:30pm-6:30pm CMP 326-R Capacity Management Made Easy

Vadim Filanovsky, Senior Performance Engineer & Anoop Kapoor, AWS

Abstract: Amazon EC2 Auto Scaling offers a hands-free capacity management experience to help customers maintain a healthy fleet, improve application availability, and reduce costs. In this session, we deep-dive into how Amazon EC2 Auto Scaling works to simplify continuous fleet management and automatic scaling with changing load. Netflix delivers shows like Sacred Games, Stranger Things, Money Heist, and many more to more than 150 million subscribers across 190+ countries around the world. Netflix shares how Amazon EC2 Auto Scaling allows its infrastructure to automatically adapt to changing traffic patterns in order to keep its audience entertained and its costs on target.

Wednesday — December 4

10am-11am NFX 203 From Pitch to Play: The technology behind going from ideas to streaming

Ryan Schroeder, Senior Software Engineer

Abstract: It takes a lot of different technologies and teams to get entertainment from the idea stage through being available for streaming on the service. This session looks at what it takes to accept, produce, encode, and stream your favorite content. We explore all the systems necessary to make and stream content from Netflix.

1pm-2pm NFX 207 Benchmarking stateful services in the cloud

Vinay Chella, Data Platform Engineering Manager

Abstract: AWS cloud services make it possible to achieve millions of operations per second in a scalable fashion across multiple regions. Netflix runs dozens of stateful services on AWS under strict sub-millisecond tail-latency requirements, which brings unique challenges. In order to maintain performance, benchmarking is a vital part of our system’s lifecycle. In this session, we share our philosophy and lessons learned over the years of operating stateful services in AWS. We showcase our case studies, open-source tools in benchmarking, and how we ensure that AWS cloud services are serving our needs without compromising on tail latencies.

3:15pm-4:15pm OPN 209 Netflix’s application deployment at scale

Andy Glover, Director Delivery Engineering & Paul Roberts, AWS

Abstract: Spinnaker is an open-source continuous-delivery platform created by Netflix to improve its developers’ efficiency and reduce the time it takes to get an application into production. Netflix has over 140 million members, and in this session, Netflix shares the tooling it uses to deploy applications to meet its customers’ needs. Join us to learn why Netflix created Spinnaker, how the platform is being used at scale, how the company works with the broader open-source community, and the work it’s doing with AWS to build out a new functions compute primitive.

4pm-5pm OPN 303-R BPF Performance Analysis

Brendan Gregg, Senior Performance Engineer

Abstract: Extended BPF (eBPF) is an open-source Linux technology that powers a whole new class of software: mini programs that run on events. Among its many uses, BPF can be used to create powerful performance-analysis tools capable of analyzing everything: CPUs, memory, disks, file systems, networking, languages, applications, and more. In this session, Netflix’s Brendan Gregg tours BPF tracing capabilities, including many new open-source performance analysis tools he developed for his new book “BPF Performance Tools: Linux System and Application Observability.” The talk also includes examples of using these tools in the Amazon Elastic Compute Cloud (Amazon EC2) cloud.

Thursday — December 5

12:15pm-1:15pm NFX 205 Monitoring anomalous application behavior

Travis McPeak, Application Security Engineering Manager & William Bengston, Director HashiCorp

Abstract: AWS CloudTrail provides a wealth of information on your AWS environment. In addition, teams can use it to perform basic anomaly detection by adding state. In this talk, Travis McPeak of Netflix and Will Bengtson introduce a system built strictly with off-the-shelf AWS components that tracks CloudTrail activity across multi-account environments and sends alerts when applications perform anomalous actions. By watching applications for anomalous actions, security and operations teams can monitor unusual and erroneous behavior. We share everything attendees need to implement CloudTrail in their own organizations.

1pm-2pm OPN 303-R1 BPF Performance Analysis

Brendan Gregg, Senior Performance Engineer

Abstract: Extended BPF (eBPF) is an open-source Linux technology that powers a whole new class of software: mini programs that run on events. Among its many uses, BPF can be used to create powerful performance-analysis tools capable of analyzing everything: CPUs, memory, disks, file systems, networking, languages, applications, and more. In this session, Netflix’s Brendan Gregg tours BPF tracing capabilities, including many new open-source performance analysis tools he developed for his new book “BPF Performance Tools: Linux System and Application Observability.” The talk also includes examples of using these tools in the Amazon Elastic Compute Cloud (Amazon EC2) cloud.

1:45pm-2:45pm NFX 201 More Data Science with less engineering: ML Infrastructure

Ville Tuulos, Machine Learning Infrastructure Engineering Manager

Abstract: Netflix is known for its unique culture that gives an extraordinary amount of freedom to individual engineers and data scientists. Our data scientists are expected to develop and operate large machine learning workflows autonomously without the need to be deeply experienced with systems or data engineering. Instead, we provide them with delightfully usable ML infrastructure that they can use to manage a project’s lifecycle. Our end-to-end ML infrastructure, Metaflow, was designed to leverage the strengths of AWS: elastic compute; high-throughput storage; and dynamic, scalable notebooks. In this session, we present our human-centric design principles that enable the autonomy our engineers enjoy.


Netflix at AWS re:Invent 2019 was originally published in Netflix TechBlog on Medium, where people are continuing the conversation by highlighting and responding to this story.

Predictive Segmentation Using Amazon Pinpoint and Amazon SageMaker Solution

Post Syndicated from Brent Meyer original https://aws.amazon.com/blogs/messaging-and-targeting/predictive-segmentation-using-amazon-pinpoint-and-amazon-sagemaker-solution/

Note: This post was written by Ryan Lowe, an AWS Solution Architect and the author of the Predictive Segmentation Using Amazon Pinpoint and Amazon SageMaker Solution.


Businesses increasingly find themselves spending more time competing for a decreasing share of customers’ attention. A natural result of this competition is an increase in the cost of acquiring new customers. For this reason, it’s important for businesses not only to retain their customer bases, but also to ensure that they stay engaged. Machine learning is a powerful tool that can help you meet this need. By using machine learning to create customer segments in Amazon Pinpoint, you can create more relevant and targeted communication experiences for your customers, which can lower your churn rates, increase loyalty, and drive higher conversion rates.

Today, we’re launching a new AWS Solution: Predictive Segmentation Using Amazon Pinpoint and Amazon SageMaker. This solution provides a simple way to use machine learning to more effectively target your customers. Your Data Science team can use Amazon SageMaker to train their models using existing customer engagement data. They can then quickly connect their models directly to Amazon Pinpoint to create highly targeted customer segments. Marketers can then use these segments to start sending messages. The engagement data from those messages is then fed back into Amazon SageMaker, where it helps to train and refine the machine learning model.

The following diagram illustrates the flow of data in this solution.

 

In this solution, customer engagement and segmentation data are sent from Amazon Pinpoint to Amazon S3 buckets via Amazon Kinesis. The data in these S3 buckets is crawled daily and added to an AWS Glue Data Catalog.

Separately, a daily process, orchestrated by AWS Step Functions, uses a series of Lambda functions to query customer data. The process uses Amazon Athena to execute a series of queries against the AWS Glue Data Catalog. Amazon SageMaker uses this data to create predictions based on a trained ML model.
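As a hedged illustration of that querying step (the database, table, and output-bucket names below are invented; the deployed solution defines its own through its Glue crawler), a Lambda function might kick off an Athena query like this:

import boto3

athena = boto3.client("athena")

# All names below are invented for illustration only
response = athena.start_query_execution(
    QueryString="SELECT endpoint_id, COUNT(*) AS events FROM engagement_events GROUP BY endpoint_id",
    QueryExecutionContext={"Database": "pinpoint_data"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results-bucket/predictive-segmentation/"},
)
print(response["QueryExecutionId"])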

When you deploy this solution, it creates fictitious customer and engagement data. You can combine this sample data with the Jupyter notebook provided with this solution to train and deploy a basic churn model.

Before you deploy this solution in production, we recommend that you consult with a data scientist who can train a machine learning model that’s tailored to your actual customer data.

To get started, visit the Predictive Segmentation Using Amazon Pinpoint and Amazon SageMaker page on the AWS Solutions site.

Building Your First Journey in Amazon Pinpoint

Post Syndicated from Brent Meyer original https://aws.amazon.com/blogs/messaging-and-targeting/building-your-first-journey-in-amazon-pinpoint/

Note: This post was written by Zach Barbitta, the Product Lead for Pinpoint Journeys, and Josh Kahn, an AWS Solution Architect.


We recently added a new feature called journeys to Amazon Pinpoint. With journeys, you can create fully automated, multi-step customer engagements through an easy-to-use, drag-and-drop interface.

In this post, we’ll provide a quick overview of the features and capabilities of Amazon Pinpoint journeys. We’ll also provide a blueprint that you can use to build your first journey.

What is a journey?

Think of a journey as being similar to a flowchart. Every journey begins by adding a segment of participants to it. These participants proceed through steps in the journey, which we call activities. Sometimes, participants split into different branches. Some participants end their journeys early, while some proceed through several steps. The experience of going down one path in the journey can be very different from the experience of going down a different path. In a journey, you determine the outcomes that are most relevant for your customers, and then create the structure that leads to those outcomes.

Your journey can contain several different types of activities, each of which does something different when journey participants arrive on it. For example, when participants arrive on an Email activity, they receive an email. When they land on a Random Split activity, they’re randomly separated into one of up to five different groups. You can also separate customers based on their attributes, or based on their interactions with previous steps in the journey, by using a Yes/no Split or a Multivariate Split activity. There are six different types of activities that you can add to your journeys. You can find more information about these activity types in our User Guide.

The web-based Amazon Pinpoint management console includes an easy-to-use, drag-and-drop interface for creating your journeys. You can also create journeys programmatically by using the Amazon Pinpoint API, the AWS CLI, or an AWS SDK. Best of all, you can use the API to modify journeys that you created in the console, and vice-versa.

Planning our journey

The flexible nature of journeys makes it easy to create customer experiences that are timely, personalized, and relevant. In this post, we’ll assume the role of a music streaming service. We’ll create a journey that takes our customers through the following steps:

  1. When customers sign up, we’ll send them an email that highlights some of the cool features they’ll get by using our service.
  2. After 1 week, we’ll divide customers into two separate groups based on whether or not they opened the email we sent them when they signed up.
  3. To the group of customers who opened the first message, we’ll send an email that contains additional tips and tricks.
  4. To the group who didn’t open the first message, we’ll send an email that reiterates the basic information that we mentioned in the first message.

The best part about journeys in Amazon Pinpoint is that you can set them up in just a few minutes, without having to do any coding, and without any specialized training (unless you count reading this blog post as training).

After we launch this journey, we’ll be able to view metrics through the same interface that we used to create the journey. For example, for each split activity, we’ll be able to see how many participants were sent down each path. For each Email activity, we’ll be able to view the total number of sends, deliveries, opens, clicks, bounces, and unsubscribes. We’ll also be able to view these metrics as aggregated totals for the entire journey. These metrics can help you discover what worked and what didn’t work in your journey, so that you can build more effective journeys in the future.

First steps

There are a few prerequisites that we have to complete before we can start creating the journey.

Create Endpoints

When a new user signs up for your service, you have to create an endpoint record for that user. One way to do this is by using AWS Amplify (if you use Amplify’s built-in analytics tools, Amplify creates Amazon Pinpoint endpoints). You can also use AWS Lambda to call Amazon Pinpoint’s UpdateEndpoint API. The exact method that you use to create these endpoints is up to your engineering team.

When your app creates new endpoints, they should include a user attribute that identifies them as someone who should participate in the journey. We’ll use this attribute to create our segment of journey participants.
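For example, a minimal sketch of such an UpdateEndpoint call via boto3 might look like the following. The project ID, endpoint ID, address, and attribute name are placeholders – use whatever identifiers and attribute your own application relies on.

import boto3

pinpoint = boto3.client("pinpoint")

# Called when a new user signs up; all IDs and the attribute name are placeholders
pinpoint.update_endpoint(
    ApplicationId="YOUR_PINPOINT_PROJECT_ID",
    EndpointId="user-12345-email",
    EndpointRequest={
        "ChannelType": "EMAIL",
        "Address": "new.customer@example.com",
        "User": {
            "UserId": "user-12345",
            "UserAttributes": {
                # The attribute that the journey's entry segment will filter on
                "OnboardingJourney": ["true"],
            },
        },
    },
)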

If you just want to get started without engaging your engineering team, you can import a segment of endpoints to use in your journey. To learn more, see Importing Segments in the Amazon Pinpoint User Guide.

Verify an Email Address

In Amazon Pinpoint, you have to verify the email addresses that you use to send email. By verifying an email address, you prove that you own it, and you grant Amazon Pinpoint permission to send your email from that address. For more information, see Verifying Email Identities in the Amazon Pinpoint User Guide.

Create a Template

Before you can send email from a journey, you have to create email templates. Email templates are pre-defined message bodies that you can use to send email messages to your customers. You can learn more about creating email templates in the Amazon Pinpoint User Guide.

To create the journey that we discussed earlier, we’ll need at least three email templates: one that contains the basic information that we want to share with new customers, one with more advanced content for users who opened the first email, and another that reiterates the content in the first message.

Create a Segment

Finally, you need to create the segment of users who will participate in the journey. If your service creates an endpoint record that includes a specific user attribute, you can use this attribute to create your segment. You can learn more about creating segments in the Amazon Pinpoint User Guide.
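If you prefer to script this step as well, a segment keyed off that user attribute can be created with a call along these lines (the project ID and attribute name are the same placeholders as in the earlier endpoint sketch, and must match what your endpoints actually carry):

import boto3

pinpoint = boto3.client("pinpoint")

pinpoint.create_segment(
    ApplicationId="YOUR_PINPOINT_PROJECT_ID",   # placeholder
    WriteSegmentRequest={
        "Name": "New onboarding users",
        "Dimensions": {
            "UserAttributes": {
                "OnboardingJourney": {          # placeholder attribute from the endpoint sketch
                    "AttributeType": "INCLUSIVE",
                    "Values": ["true"],
                }
            }
        },
    },
)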

Building the Journey

Now that we’ve completed the prerequisites, we can start building our journey. We’ll break the process down into a few quick steps.

Set it up

Start by signing in to the Amazon Pinpoint console. In the navigation pane on the right side of the page, choose Journeys, and then choose Create a journey. In the upper left corner of the page, you’ll see an area where you can enter a name for the journey. Give the journey a name that helps you identify it.

Add participants

Every journey begins with a Journey entry activity. In this activity, you define the segment that will participate in the journey. On the Journey entry activity, choose Set entry condition. In the list, choose the segment that contains the participants for the journey.

On this activity, you can also control how often new participants are added to the journey. Because new customers could sign up for our service at any time, we want to update the segment regularly. For this example, we’ll set up the journey to look for new segment members once every hour.

Send the initial email

Now we can configure the first message that we’ll send in our journey. Under the Journey entry activity that you just configured, choose Add activity, and then choose Send email. Choose the email template that you want to use in the email, and then specify the email address that you want to send the email from.

Add the split

Under the Email activity, choose Add activity, and then choose Yes/no split. This type of activity looks for all journey participants who meet a condition and sends them all down a path (the “Yes” path). All participants who don’t meet that condition are sent down a “No” path. This is similar to the Multivariate split activity. The difference between the two is that the Multivariate split can produce up to four branches, each with its own criteria, plus an “Else” branch for everyone who doesn’t meet the criteria in the other branches. A Yes/no split activity, on the other hand, can only ever have two branches, “Yes” and “No”. However, unlike the Multivariate split activity, the “Yes” branch in a Yes/no split activity can contain more than one matching criteria.

In this case, we’ll set up the “Yes” branch to look for email click events in the Email activity that occurred earlier in the journey. We also know that the majority of recipients who open the email aren’t going to do so the instant they receive the email. For this reason, we’ll adjust the Condition evaluation setting for the activity to wait for 7 days before looking for open events. This gives our customers plenty of time to receive the message, check their email, open the message, and, if they’re interested in the content of the message, click a link in the message. When you finish setting up the split activity, it should resemble the example shown in the following image.

Continue building the branches

From here, you can continue building the branches of the journey. In each branch, you’ll send a different email template based on the needs of the participants in that branch. Participants in the “Yes” branch will receive an email template that contains more advanced information, while those in the “No” branch will receive a template that revisits the content from the original message.

To learn more about setting up the various types of journey activities, see Setting Up Journey Activities in the Amazon Pinpoint User Guide.

Review the journey

When you finish adding branches and activities to your journey, choose the Review button in the top right corner of the page. When you do, the Review your journey pane opens on the left side of the screen. The Review your journey pane shows you errors that need to be fixed in your journey. It also gives you some helpful recommendations, and provides some best practices that you can use to optimize your journey. You can click any of the notifications in the Review your journey page to move immediately to the journey activity that the message applies to. When you finish addressing the errors and reviewing the recommendations, you can mark the journey as reviewed.

Testing and publishing the journey

After you review the journey, you have a couple options. You can either launch the journey immediately, or test the journey before you launch it.

If you want to test the journey, close the Review your journey pane. Then, on the Actions menu, choose Test. When you test your journey, you have to choose a segment of testers. The segment that you choose should only contain the internal recipients on your team who you want to test the journey before you publish it. You also have the option of changing the wait times in your journey, or skipping them altogether. If you choose a wait time of 1 hour here, it makes it much easier to test the Yes/no split activity (which, in the production version of the journey, contains a 7 day wait). To learn more about testing your journey, including some tips and best practices, see Testing Your Journey in the Amazon Pinpoint User Guide.

When you finish your testing, complete the review process again. On the final page of content in the Review your journey pane, choose Publish. After a short delay, your journey goes live. That’s it! You’ve created and launched your very first journey!

Wrapping up

We’re very excited about Pinpoint Journeys and look forward to seeing what you build with it. If you have questions, leave us a message on the Amazon Pinpoint Developer Forum and check out the Amazon Pinpoint documentation.

Visit the AWS Digital User Engagement team at AWS re:Invent 2019

Post Syndicated from Brent Meyer original https://aws.amazon.com/blogs/messaging-and-targeting/visit-the-aws-digital-user-engagement-team-at-aws-reinvent-2019/

AWS re:Invent 2019 is less than 50 days away, and that means it’s time to start planning your agenda. The Digital User Engagement team is hosting several builders sessions, chalk talks, and workshops this year. Come join us and learn more about using Amazon Pinpoint and Amazon SES to engage with and delight your customers.

Visit our booth

You’ll find our booth in the Expo Hall in the Venetian. Stop by to meet the team, see a demo, and pick up some swag!

Leadership session

EUC206: How AWS is defining the future of engagement and messaging

  • What: Simon Poile, the General Manager of the AWS Digital User Engagement team, talks about how AWS is building on Amazon’s customer-centric culture of innovation to help you better engage your customers. You’ll also hear from AWS customer Coinbase, which uses Amazon Pinpoint to delight its customers while growing its business.
  • When: Wednesday, Dec 4, 10:45 AM – 11:45 AM
  • Where: MGM, Level 3, Chairman’s Ballroom 368

Sessions

EUC207: Build high-volume email applications with Amazon SES

  • What: Companies in many industries use AWS to send millions of emails every day, including Amazon.com. In this session, learn how to build applications using the highly scalable, highly reliable, and multi-tenant-capable email infrastructure of Amazon Simple Email Service (Amazon SES). You also learn how to monitor delivery rates and other important metrics, and how to use this data to improve deliverability. Members of the Amazon.com team discuss the architecture of their multi-tenant email-sending platform, the historical challenges they faced, and the ways Amazon Pinpoint and Amazon SES helped them meet their goals around Prime Day, Cyber Monday, and other retail events.
  • When: Monday, Dec 2, 1:00 PM – 2:00 PM
  • Where: MGM, Level 1, Grand Ballroom 119

Chalk talks

EUC328: Engage with your customers using SMS text messages

  • What: Text messages form a vital part of customer-engagement strategy for organizations around the world. In this chalk talk, learn how to use Amazon Pinpoint to send promotional, transactional, and two-way SMS messages. You also see demonstrations of how other AWS customers use SMS messaging to engage with their customers.
  • When: Wednesday, Dec 4, 2:30 PM – 3:30 PM
  • Where: Bellagio, Bellagio Ballroom 5

EUC336: Surprise and delight customers with location-based notifications

  • What: In this chalk talk, learn how to use AWS Amplify, AWS AppSync, and Amazon Pinpoint to geo-target customers. We teach you how to build and configure geofences to trigger location-based mobile-app notifications. We also walk you through the published solution and provide dedicated time for Q&A with an AWS solutions architect.
  • When: Thursday, Dec 5, 2:30 PM – 3:30 PM
  • Where: Aria, Plaza Level East, Orovada 3

AIM346-R and AIM346-R1: Personalized user engagement with machine learning

  • What: In this chalk talk, we discuss how to use Amazon Personalize and Amazon Pinpoint to provide a personalized, omni-channel experience starting in your mobile application. We discuss best practices for real-time updates, personalized notifications (push), and messaging (email and text) that drives user engagement and product discovery. We also demonstrate how other mobile services can be used to facilitate rapid prototyping.
  • When and where:
    • AIM346-R: Monday, Dec 2, 11:30 AM – 12:30 PM (Bellagio, Bellagio Ballroom 7)
    • AIM346-R1: Tuesday, Dec 3, 4:00 PM – 5:00 PM (Aria, Plaza Level East, Orovada 3)

ENT315: Improve message deliverability to ensure customer reach

  • What: Do you have outbound and inbound email requirements? Is email a critical workload for your enterprise? Several factors determine whether your email messages reach your recipients. In this chalk talk, learn how to safely migrate your outbound and inbound email volumes over to AWS and Amazon Simple Email Service (Amazon SES). Learn how to onboard, safely ramp up, and ensure that business continues without disruption. Also learn best practices for delivering email messages into your customers’ inboxes rather than their spam folders, and receive guidance on scaling and improving the deliverability of your email campaigns.
  • When: Thursday, Dec 5, 11:30 AM – 12:30 PM
  • Where: Aria, Plaza Level East, Orovada 3

Builder’s sessions

EUC308-R and EUC308-R1: Build and deploy your own two-way text chatbot

  • What: In this builders session, you build an AI-powered chatbot that your customers can engage with by sending SMS messages. Your chatbot can help customers quickly ask questions, get answers, book appointments, check order status, and much more.
  • When and where:
    • EUC308-R: Tuesday, Dec 3, 1:00 PM – 2:00 PM (Mirage, Grand Ballroom B – Table 9)
    • EUC308-R1: Wednesday, Dec 4, 4:00 PM – 5:00 PM (Aria, Level 1 West, Bristlecone 2 – Table 2)

EUC309-R and EUC309-R1: Build your own omnichannel e-commerce experience

  • What: In this hands-on session, you learn how to integrate AWS Amplify and Amazon Pinpoint to create a retail website. You use the event data that’s generated by customers’ activities on your site to send custom-tailored emails and push notifications, creating a curated, omnichannel experience. This session is intended for builders who want to expand the user-engagement capabilities of their sites and apps.
  • When and where:
    • EUC309-R: Monday, Dec 2, 12:15 PM – 1:15 PM (Aria, Level 1 West, Bristlecone 4 – Table 8)
    • EUC309-R1: Tuesday, Dec 3, 11:30 AM – 12:30 PM (Aria, Level 1 West, Bristlecone 2 – Table 1)

EUC322: Improve customer engagement by predicting user behavior

  • What: In this hands-on session, you learn how to use Amazon SageMaker and Amazon Pinpoint to create customer engagement scenarios powered by machine learning. You use cross-channel customer-activity and demographic data to train your own behavioral models. After you use your model to categorize your customers, you use Amazon Pinpoint to send engagement campaigns that are optimized to reengage users. This session is intended for builders, marketers, or data scientists who want to improve user engagement using machine learning.
  • When: Monday, Dec 2, 10:45 AM – 11:45 AM
  • Where: Aria, Level 1 West, Bristlecone 4 – Table 2

Sending Push Notifications to iOS 13 Devices with Amazon SNS

Post Syndicated from Brent Meyer original https://aws.amazon.com/blogs/messaging-and-targeting/sending-push-notifications-to-ios-13-devices-with-amazon-sns/

Note: This post was written by Alan Chuang, a Senior Product Manager on the AWS Messaging team.


On September 19, 2019, Apple released iOS 13. This update introduced changes to the Apple Push Notification service (APNs) that can impact your existing workloads. Amazon SNS has made some changes that ensure uninterrupted delivery of push notifications to iOS 13 devices.

iOS 13 introduced a new and required header called apns-push-type. The value of this header informs APNs of the contents of your notification’s payload so that APNs can respond to the message appropriately. The possible values for this header are:

  • alert
  • background
  • voip
  • complication
  • fileprovider
  • mdm

Apple’s documentation indicates that the value of this header “must accurately reflect the contents of your notification’s payload. If there is a mismatch, or if the header is missing on required systems, APNs may return an error, delay the delivery of the notification, or drop it altogether.”

We’ve made some changes to the Amazon SNS API that make it easier for developers to handle this breaking change. When you send push notifications of the alert or background type, Amazon SNS automatically sets the apns-push-type header to the appropriate value for your message. For more information about creating an alert type and a background type notification, see Generating a Remote Notification and Pushing Background Updates to Your App on the Apple Developer website.

Because you might not have enough time to react to this breaking change, Amazon SNS provides two options:

  • If you want to set the apns-push-type header value yourself, or the contents of your notification’s payload require the header value to be set to voip, complication, fileprovider, or mdm, Amazon SNS lets you set the header value as a message attribute using the Amazon SNS Publish API action (a sketch follows this list). For more information, see Specifying Custom APNs Header Values and Reserved Message Attributes for Mobile Push Notifications in the Amazon SNS Developer Guide.
  • If you send push notifications as alert or background type, and if the contents of your notification’s payload follow the format described in the Apple Developer documentation, then Amazon SNS automatically sets the correct header value. To send a background notification, the format requires the aps dictionary to have only the content-available field set to 1. For more information about creating an alert type and a background type notification, see Generating a Remote Notification and Pushing Background Updates to Your App on the Apple Developer website.
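
For the first option, a minimal boto3 sketch of setting the header through the reserved message attribute might look like the following. The endpoint ARN and payload are hypothetical, and the reserved attribute name is the one documented in the Amazon SNS Developer Guide; treat this as a starting point rather than a complete implementation.

    import json
    import boto3

    sns = boto3.client("sns")

    # Hypothetical APNs platform endpoint ARN; substitute your own.
    endpoint_arn = "arn:aws:sns:us-east-1:111122223333:endpoint/APNS/MyApp/example-endpoint-id"

    # Example: a VoIP push. SNS can't infer this push type from the payload,
    # so we set the apns-push-type header explicitly via the reserved attribute.
    payload = {"caller": "example"}

    sns.publish(
        TargetArn=endpoint_arn,
        MessageStructure="json",
        Message=json.dumps({
            "default": "VoIP notification",
            "APNS": json.dumps(payload),
        }),
        MessageAttributes={
            "AWS.SNS.MOBILE.APNS.PUSH_TYPE": {
                "DataType": "String",
                "StringValue": "voip",
            }
        },
    )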

We hope these changes make it easier for you to send messages to your customers who use iOS 13. If you have questions about these changes, please open a ticket with our Customer Support team through the AWS Management Console.

Predictive User Engagement using Amazon Pinpoint and Amazon Personalize

Post Syndicated from Brent Meyer original https://aws.amazon.com/blogs/messaging-and-targeting/predictive-user-engagement-using-amazon-pinpoint-and-amazon-personalize/

Note: This post was written by John Burry, a Solution Architect on the AWS Customer Engagement team.


Predictive User Engagement (PUE) refers to the integration of machine learning (ML) and customer engagement services. By implementing a PUE solution, you can combine ML-based predictions and recommendations with real-time notifications and analytics, all based on your customers’ behaviors.

This blog post shows you how to set up a PUE solution by using Amazon Pinpoint and Amazon Personalize. Best of all, you can implement this solution even if you don’t have any prior machine learning experience. By completing the steps in this post, you’ll be able to build your own model in Personalize, integrate it with Pinpoint, and start sending personalized campaigns.

Prerequisites

Before you complete the steps in this post, you need to set up the following:

  • Create an admin user in AWS Identity and Access Management (IAM). For more information, see Creating Your First IAM Admin User and Group in the IAM User Guide. You need to specify the credentials of this user when you set up the AWS Command Line Interface.
  • Install Python 3 and the pip package manager. Python 3 is installed by default on recent versions of Linux and macOS. If it isn’t already installed on your computer, you can download an installer from the Python website.
  • Use pip to install the following modules:
    • awscli
    • boto3
    • jupyter
    • matplotlib
    • sklearn
    • sagemaker

    For more information about installing modules, see Installing Python Modules in the Python 3.X Documentation.

  • Configure the AWS Command Line Interface (AWS CLI). During the configuration process, you have to specify a default AWS Region. This solution uses Amazon SageMaker to build a model, so the Region that you specify has to be one that supports Amazon SageMaker. For a complete list of Regions where SageMaker is supported, see AWS Service Endpoints in the AWS General Reference. For more information about setting up the AWS CLI, see Configuring the AWS CLI in the AWS Command Line Interface User Guide.
  • Install Git. Git is installed by default on most versions of Linux and macOS. If Git isn’t already installed on your computer, you can download an installer from the Git website.

Step 1: Create an Amazon Pinpoint Project

In this section, you create and configure a project in Amazon Pinpoint. This project contains all of the customers that we will target, as well as the recommendation data that’s associated with each one. Later, we’ll use this data to create segments and campaigns.

To set up the Amazon Pinpoint project

  1. Sign in to the Amazon Pinpoint console at http://console.aws.amazon.com/pinpoint/.
  2. On the All projects page, choose Create a project. Enter a name for the project, and then choose Create.
  3. On the Configure features page, under SMS and voice, choose Configure.
  4. Under General settings, select Enable the SMS channel for this project, and then choose Save changes.
  5. In the navigation pane, under Settings, choose General settings. In the Project details section, copy the value under Project ID. You’ll need this value later.

Step 2: Create an Endpoint

In Amazon Pinpoint, an endpoint represents a specific method of contacting a customer, such as their email address (for email messages) or their phone number (for SMS messages). Endpoints can also contain custom attributes, and you can associate multiple endpoints with a single user. In this example, we use these attributes to store the recommendation data that we receive from Amazon Personalize.

In this section, we create a new endpoint and user by using the AWS CLI. We’ll use this endpoint to test the SMS channel, and to test the recommendations that we receive from Personalize.

To create an endpoint by using the AWS CLI

  1. At the command line, enter the following command:
    aws pinpoint update-endpoint --application-id <project-id> \
    --endpoint-id 12456 --endpoint-request "Address='<mobile-number>', \
    ChannelType='SMS',User={UserAttributes={recommended_items=['none']},UserId='12456'}"

    In the preceding example, replace <project-id> with the Amazon Pinpoint project ID that you copied in Step 1. Replace <mobile-number> with your phone number, formatted in E.164 format (for example, +12065550142).

Note that this endpoint contains hard-coded UserId and EndpointId values of 12456. These IDs match an ID that we’ll create later when we generate the Personalize data set.
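
If you prefer Python over the AWS CLI, a roughly equivalent boto3 call looks like the following sketch. Replace the placeholder project ID and phone number with your own values.

    import boto3

    pinpoint = boto3.client("pinpoint")

    project_id = "<project-id>"          # the Amazon Pinpoint project ID from Step 1
    mobile_number = "+12065550142"       # your phone number, in E.164 format

    pinpoint.update_endpoint(
        ApplicationId=project_id,
        EndpointId="12456",
        EndpointRequest={
            "Address": mobile_number,
            "ChannelType": "SMS",
            "User": {
                "UserId": "12456",
                "UserAttributes": {"recommended_items": ["none"]},
            },
        },
    )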

Step 3: Create a Segment and Campaign in Amazon Pinpoint

Now that we have an endpoint, we need to add it to a segment so that we can use it within a campaign. By sending a campaign, we can verify that our Pinpoint project is configured correctly, and that we created the endpoint correctly.

To create the segment and campaign

  1. Open the Pinpoint console at http://console.aws.amazon.com/pinpoint, and then choose the project that you created in Step 1.
  2. In the navigation pane, choose Segments, and then choose Create a segment.
  3. Name the segment “No recommendations”. Under Segment group 1, on the Add a filter menu, choose Filter by user.
  4. On the Choose a user attribute menu, choose recommended_items. Set the value of the filter to “none”.
  5. Confirm that the Segment estimate section shows that there is one eligible endpoint, and then choose Create segment.
  6. In the navigation pane, choose Campaigns, and then choose Create a campaign.
  7. Name the campaign “SMS to users with no recommendations”. Under Choose a channel for this campaign, choose SMS, and then choose Next.
  8. On the Choose a segment page, choose the “No recommendations” segment that you just created, and then choose Next.
  9. In the message editor, type a test message, and then choose Next.
  10. On the Choose when to send the campaign page, keep all of the default values, and then choose Next.
  11. On the Review and launch page, choose Launch campaign. Within a few seconds, you receive a text message at the phone number that you specified when you created the endpoint.

Step 4: Load sample data into Amazon Personalize

At this point, we’ve finished setting up Amazon Pinpoint. Now we can start loading data into Amazon Personalize.

To load the data into Amazon Personalize

  1. At the command line, enter the following command to clone the sample data and Jupyter Notebooks to your computer:
    git clone https://github.com/markproy/personalize-car-search.git

  2. At the command line, change into the directory that contains the data that you just cloned. Enter the following command:
    jupyter notebook

    A new window opens in your web browser.

  3. In your web browser, open the first notebook (01_generate_data.ipynb). On the Cell menu, choose Run all. Wait for the commands to finish running.
  4. Open the second notebook (02_make_dataset_group.ipynb). In the first step, replace the value of the account_id variable with the ID of your AWS account. Then, on the Cell menu, choose Run all. This step takes several minutes to complete. Make sure that all of the commands have run successfully before you proceed to the next step.
  5. Open the third notebook (03_make_campaigns.ipynb). In the first step, replace the value of the account_id variable with the ID of your AWS account. Then, on the Cell menu, choose Run all. This step takes several minutes to complete. Make sure that all of the commands have run successfully before you proceed to the next step.
  6. Open the fourth notebook (04_use_the_campaign.ipynb). In the first step, replace the value of the account_id variable with the ID of your AWS account. Then, on the Cell menu, choose Run all. This step takes several minutes to complete.
  7. After the fourth notebook is finished running, choose Quit to terminate the Jupyter Notebook. You don’t need to run the fifth notebook for this example.
  8. Open the Amazon Personalize console at http://console.aws.amazon.com/personalize. Verify that Amazon Personalize contains one dataset group named car-dg.
  9. In the navigation pane, choose Campaigns. Verify that it contains all of the following campaigns, and that the status for each campaign is Active:
    • car-popularity-count
    • car-personalized-ranking
    • car-hrnn-metadata
    • car-sims
    • car-hrnn

Step 5: Create the Lambda function

We’ve loaded the data into Amazon Personalize, and now we need to create a Lambda function to update the endpoint attributes in Pinpoint with the recommendations provided by Personalize.

The version of the AWS SDK for Python that’s included with Lambda doesn’t include the libraries for Amazon Personalize. For this reason, you need to download these libraries to your computer, put them in a .zip file, and upload the entire package to Lambda.

To create the Lambda function

  1. In a text editor, create a new file. Paste the following code.
    # Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
    #
    # This file is licensed under the Apache License, Version 2.0 (the "License").
    # You may not use this file except in compliance with the License. A copy of the
    # License is located at
    #
    # http://aws.amazon.com/apache2.0/
    #
    # This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS
    # OF ANY KIND, either express or implied. See the License for the specific
    # language governing permissions and limitations under the License.
    
    AWS_REGION = "<region>"
    PROJECT_ID = "<project-id>"
    CAMPAIGN_ARN = "<car-hrnn-campaign-arn>"
    USER_ID = "12456"
    endpoint_id = USER_ID
    
    from datetime import datetime
    import json
    import boto3
    import logging
    from botocore.exceptions import ClientError
    
    DATE = datetime.now()
    
    personalize           = boto3.client('personalize')
    personalize_runtime   = boto3.client('personalize-runtime')
    personalize_events    = boto3.client('personalize-events')
    pinpoint              = boto3.client('pinpoint')
    
    def lambda_handler(event, context):
        itemList = get_recommended_items(USER_ID,CAMPAIGN_ARN)
        response = update_pinpoint_endpoint(PROJECT_ID,endpoint_id,itemList)
    
        return {
            'statusCode': 200,
            'body': json.dumps('Lambda execution completed.')
        }
    
    def get_recommended_items(user_id, campaign_arn):
        response = personalize_runtime.get_recommendations(campaignArn=campaign_arn, 
                                                           userId=str(user_id), 
                                                           numResults=10)
        itemList = response['itemList']
        return itemList
    
    def update_pinpoint_endpoint(project_id,endpoint_id,itemList):
        itemlistStr = []
        
        for item in itemList:
            itemlistStr.append(item['itemId'])
    
        pinpoint.update_endpoint(
        ApplicationId=project_id,
        EndpointId=endpoint_id,
        EndpointRequest={
                            'User': {
                                'UserAttributes': {
                                    'recommended_items': 
                                        itemlistStr
                                }
                            }
                        }
        )    
    
        return
    

    In the preceding code, make the following changes:

    • Replace <region> with the name of the AWS Region that you want to use, such as us-east-1.
    • Replace <project-id> with the ID of the Amazon Pinpoint project that you created earlier.
    • Replace <car-hrnn-campaign-arn> with the Amazon Resource Name (ARN) of the car-hrnn campaign in Amazon Personalize. You can find this value in the Amazon Personalize console.
  2. Save the file as pue-get-recs.py.
  3. Create and activate a virtual environment. In the virtual environment, use pip to download the latest versions of the boto3 and botocore libraries. For complete procedures, see Updating a Function with Additional Dependencies With a Virtual Environment in the AWS Lambda Developer Guide. Also, add the pue-get-recs.py file to the .zip file that contains the libraries.
  4. Open the IAM console at http://console.aws.amazon.com/iam. Create a new role. Attach the following policy to the role:
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "logs:CreateLogStream",
                    "logs:DescribeLogGroups",
                    "logs:CreateLogGroup",
                    "logs:PutLogEvents",
                    "personalize:GetRecommendations",
                    "mobiletargeting:GetUserEndpoints",
                    "mobiletargeting:GetApp",
                    "mobiletargeting:UpdateEndpointsBatch",
                    "mobiletargeting:GetApps",
                    "mobiletargeting:GetEndpoint",
                    "mobiletargeting:GetApplicationSettings",
                    "mobiletargeting:UpdateEndpoint"
                ],
                "Resource": "*"
            }
        ]
    }
    
  5. Open the Lambda console at http://console.aws.amazon.com/lambda, and then choose Create function.
  6. Create a new Lambda function from scratch. Choose the Python 3.7 runtime. Under Permissions, choose Use an existing role, and then choose the IAM role that you just created. When you finish, choose Create function.
  7. Upload the .zip file that contains the Lambda function and the boto3 and botocore libraries.
  8. Under Function code, change the Handler value to pue-get-recs.lambda_handler. Save your changes.

Step 6: Test the Lambda function

When you finish creating the function, you can test it to make sure it was set up correctly.

To test the Lambda function

  1. On the Select a test event menu, choose Configure test events. On the Configure test events window, specify an Event name, and then choose Create.
  2. Choose the Test button to execute the function.
  3. If the function executes successfully, open the Amazon Pinpoint console at http://console.aws.amazon.com/pinpoint.
  4. In the navigation pane, choose Segments, and then choose the “No recommendations” segment that you created earlier. Verify that the number under total endpoints is 0. This is the expected value; the segment is filtered to only include endpoints with no recommendation attributes, but when you ran the Lambda function, it added recommendations to the test endpoint. You can also confirm the update programmatically, as shown in the sketch after this list.
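
The following is a short boto3 sketch of that programmatic check. It fetches the test endpoint and prints the recommended_items attribute, which should now contain item IDs from Amazon Personalize instead of the initial value of "none".

    import boto3

    pinpoint = boto3.client("pinpoint")

    project_id = "<project-id>"   # the same project ID used earlier
    endpoint_id = "12456"         # the hard-coded endpoint ID from Step 2

    response = pinpoint.get_endpoint(
        ApplicationId=project_id,
        EndpointId=endpoint_id,
    )

    # After the Lambda function runs, this attribute holds the recommendations.
    user = response["EndpointResponse"].get("User", {})
    print(user.get("UserAttributes", {}).get("recommended_items"))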

Step 7: Create segments and campaigns based on recommended items

In this section, we’ll create a targeted segment based on the recommendation data provided by our Personalize dataset. We’ll then use that segment to create a campaign.

To create a segment and campaign based on personalized recommendations

  1. Open the Amazon Pinpoint console at http://console.aws.amazon.com/pinpoint. On the All projects page, choose the project that you created earlier.
  2. In the navigation pane, choose Segments, and then choose Create a segment. Name the new segment “Recommendations for product 26304”.
  3. Under Segment group 1, on the Add a filter menu, choose Filter by user. On the Choose a user attribute menu, choose recommended_items. Set the value of the filter to “26304”. Confirm that the Segment estimate section shows that there is one eligible endpoint, and then choose Create segment.
  4. In the navigation pane, choose Campaigns, and then choose Create a campaign.
  5. Name the campaign “SMS to users with recommendations for product 26304”. Under Choose a channel for this campaign, choose SMS, and then choose Next.
  6. On the Choose a segment page, choose the “Recommendations for product 26304” segment that you just created, and then choose Next.
  7. In the message editor, type a test message, and then choose Next.
  8. On the Choose when to send the campaign page, keep all of the default values, and then choose Next.
  9. On the Review and launch page, choose Launch campaign. Within a few seconds, you receive a text message at the phone number that you specified when you created the endpoint.

Next steps

Your PUE solution is now ready to use. From here, there are several ways that you can make the solution your own:

  • Expand your usage: If you plan to continue sending SMS messages, you should request a spending limit increase.
  • Extend to additional channels: This post showed the process of setting up an SMS campaign. You can add more endpoints—for the email or push notification channels, for example—and associate them with your users. You can then create new segments and new campaigns in those channels.
  • Build your own model: This post used a sample data set, but Amazon Personalize makes it easy to provide your own data. To start building a model with Personalize, you have to provide a data set that contains information about your users, items, and interactions. To learn more, see Getting Started in the Amazon Personalize Developer Guide.
  • Optimize your model: You can enrich your model by sending your mobile, web, and campaign engagement data to Amazon Personalize. In Pinpoint, you can use event streaming to move data directly to S3, and then use that data to retrain your Personalize model. To learn more about streaming events, see Streaming App and Campaign Events in the Amazon Pinpoint User Guide.
  • Update your recommendations on a regular basis: Use the create-campaign API to create a new recurring campaign (see the sketch after this list). Rather than sending messages, include the hook property with a reference to the ARN of the pue-get-recs function. By completing this step, you can configure Pinpoint to retrieve the most up-to-date recommendation data each time the campaign recurs. For more information about using Lambda to modify segments, see Customizing Segments with AWS Lambda in the Amazon Pinpoint Developer Guide.
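
The following is a rough sketch of what that create-campaign call could look like in boto3. The IDs and ARN are hypothetical, and the exact set of required properties (such as the schedule and message configuration) is described in the Amazon Pinpoint API Reference; note also that Amazon Pinpoint must have permission to invoke the function.

    import boto3

    pinpoint = boto3.client("pinpoint")

    project_id = "<project-id>"
    segment_id = "<segment-id>"
    hook_arn = "arn:aws:lambda:us-east-1:111122223333:function:pue-get-recs"  # hypothetical ARN

    pinpoint.create_campaign(
        ApplicationId=project_id,
        WriteCampaignRequest={
            "Name": "Refresh recommendations",
            "SegmentId": segment_id,
            # The hook invokes the Lambda function each time the campaign runs,
            # so endpoint attributes are refreshed with current recommendations.
            "Hook": {
                "LambdaFunctionName": hook_arn,
                "Mode": "FILTER",
            },
            # Recur daily; adjust to match how often you retrain your model.
            "Schedule": {
                "Frequency": "DAILY",
                "StartTime": "2019-12-01T00:00:00Z",
            },
        },
    )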

Forward Incoming Email to an External Destination

Post Syndicated from Brent Meyer original https://aws.amazon.com/blogs/messaging-and-targeting/forward-incoming-email-to-an-external-destination/

Note: This post was written by Vesselin Tzvetkov, an AWS Senior Security Architect, and by Rostislav Markov, an AWS Senior Engagement Manager.


Amazon SES has included support for incoming email for several years now. However, some customers have told us that they need a solution for forwarding inbound emails to domains that aren’t managed by Amazon SES. In this post, you’ll learn how to use Amazon SES, Amazon S3, and AWS Lambda to create a solution that forwards incoming email to an email address that’s managed outside of Amazon SES.

Architecture

This solution uses several AWS services to forward incoming emails to a single, external email address. The following diagram shows the flow of information in this solution.

The following actions occur in this solution:

  1. A new email is sent from an external sender to your domain. Amazon SES handles the incoming email for your domain.
  2. An Amazon SES receipt rule saves the incoming message in an S3 bucket.
  3. An Amazon SES receipt rule triggers the execution of a Lambda function.
  4. The Lambda function retrieves the message content from S3, and then creates a new message and sends it to Amazon SES.
  5. Amazon SES sends the message to the destination server.

Limitations

This solution works in all AWS Regions where Amazon SES is currently available. For a complete list of supported Regions, see AWS Service Endpoints in the AWS General Reference.

Prerequisites

In order to complete this procedure, you need to have a domain that receives incoming email. If you don’t already have a domain, you can purchase one through Amazon Route 53. For more information, see Registering a New Domain in the Amazon Route 53 Developer Guide. You can also purchase a domain through one of several third-party vendors.

Procedures

Step 1: Set up Your Domain

  1. In Amazon SES, verify the domain that you want to use to receive incoming email. For more information, see Verifying Domains in the Amazon SES Developer Guide.
  2. Add the following MX record to the DNS configuration for your domain:
    10 inbound-smtp.<regionInboundUrl>.amazonaws.com

    Replace <regionInboundUrl> with the Region-specific part of the Amazon SES email receiving endpoint for the AWS Region that you use (for example, us-east-1, which results in inbound-smtp.us-east-1.amazonaws.com). For a complete list of receiving endpoints, see AWS Service Endpoints – Amazon SES in the AWS General Reference. If Route 53 hosts your domain’s DNS, you can also add this record programmatically, as shown in the sketch after this list.

  3. If your account is still in the Amazon SES sandbox, submit a request to have it removed. For more information, see Moving Out of the Sandbox in the Amazon SES Developer Guide.
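
If your domain’s DNS is hosted in Amazon Route 53, the following boto3 sketch adds the MX record from step 2. The hosted zone ID and domain are hypothetical, and the example assumes the us-east-1 receiving endpoint; adjust both for your environment.

    import boto3

    route53 = boto3.client("route53")

    hosted_zone_id = "Z0123456789EXAMPLE"   # hypothetical hosted zone ID
    domain = "example.com"                  # your receiving domain

    route53.change_resource_record_sets(
        HostedZoneId=hosted_zone_id,
        ChangeBatch={
            "Comment": "MX record for Amazon SES email receiving",
            "Changes": [
                {
                    "Action": "UPSERT",
                    "ResourceRecordSet": {
                        "Name": domain,
                        "Type": "MX",
                        "TTL": 300,
                        "ResourceRecords": [
                            {"Value": "10 inbound-smtp.us-east-1.amazonaws.com"}
                        ],
                    },
                }
            ],
        },
    )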

Step 2: Configure Your S3 Bucket

  1. In Amazon S3, create a new bucket. For more information, see Create a Bucket in the Amazon S3 Getting Started Guide.
  2. Apply the following policy to the bucket:
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "AllowSESPuts",
                "Effect": "Allow",
                "Principal": {
                    "Service": "ses.amazonaws.com"
                },
                "Action": "s3:PutObject",
                "Resource": "arn:aws:s3:::<bucketName>/*",
                "Condition": {
                    "StringEquals": {
                        "aws:Referer": "<awsAccountId>"
                    }
                }
            }
        ]
    }

  3. In the policy, make the following changes:
    • Replace <bucketName> with the name of your S3 bucket.
    • Replace <awsAccountId> with your AWS account ID.

    For more information, see Using Bucket Policies and User Policies in the Amazon S3 Developer Guide.

Step 3: Create an IAM Policy and Role

  1. Create a new IAM Policy with the following permissions:
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "VisualEditor0",
                "Effect": "Allow",
                "Action": [
                    "logs:CreateLogStream",
                    "logs:CreateLogGroup",
                    "logs:PutLogEvents"
                ],
                "Resource": "*"
            },
            {
                "Sid": "VisualEditor1",
                "Effect": "Allow",
                "Action": [
                    "s3:GetObject",
                    "ses:SendRawEmail"
                ],
                "Resource": [
                    "arn:aws:s3:::<bucketName>/*",
                    "arn:aws:ses:<region>:<awsAccountId>:identity/*"
                ]
            }
        ]
    }

    In the preceding policy, make the following changes:

    • Replace <bucketName> with the name of the S3 bucket that you created earlier.
    • Replace <region> with the name of the AWS Region that you created the bucket in.
    • Replace <awsAccountId> with your AWS account ID. For more information, see Create a Customer Managed Policy in the IAM User Guide.
  2. Create a new IAM role. Attach the policy that you just created to the new role. For more information, see Creating Roles in the IAM User Guide.

Step 4: Create the Lambda Function

  1. In the Lambda console, create a new Python 3.7 function from scratch. For the execution role, choose the IAM role that you created earlier.
  2. In the code editor, paste the following code.
    # Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
    #
    # This file is licensed under the Apache License, Version 2.0 (the "License").
    # You may not use this file except in compliance with the License. A copy of the
    # License is located at
    #
    # http://aws.amazon.com/apache2.0/
    #
    # This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS
    # OF ANY KIND, either express or implied. See the License for the specific
    # language governing permissions and limitations under the License.
    
    import os
    import boto3
    import email
    import re
    from botocore.exceptions import ClientError
    from email.mime.multipart import MIMEMultipart
    from email.mime.text import MIMEText
    from email.mime.application import MIMEApplication
    
    region = os.environ['Region']
    
    def get_message_from_s3(message_id):
    
        incoming_email_bucket = os.environ['MailS3Bucket']
        incoming_email_prefix = os.environ['MailS3Prefix']
    
        if incoming_email_prefix:
            object_path = (incoming_email_prefix + "/" + message_id)
        else:
            object_path = message_id
    
        object_http_path = (f"http://s3.console.aws.amazon.com/s3/object/{incoming_email_bucket}/{object_path}?region={region}")
    
        # Create a new S3 client.
        client_s3 = boto3.client("s3")
    
        # Get the email object from the S3 bucket.
        object_s3 = client_s3.get_object(Bucket=incoming_email_bucket,
            Key=object_path)
        # Read the content of the message.
        file = object_s3['Body'].read()
    
        file_dict = {
            "file": file,
            "path": object_http_path
        }
    
        return file_dict
    
    def create_message(file_dict):
    
        sender = os.environ['MailSender']
        recipient = os.environ['MailRecipient']
    
        separator = ";"
    
        # Parse the email body.
        mailobject = email.message_from_string(file_dict['file'].decode('utf-8'))
    
        # Create a new subject line.
        subject_original = mailobject['Subject']
        subject = "FW: " + subject_original
    
        # The body text of the email.
        body_text = ("The attached message was received from "
                  + separator.join(mailobject.get_all('From'))
                  + ". This message is archived at " + file_dict['path'])
    
        # The file name to use for the attached message. Uses regex to remove all
        # non-alphanumeric characters, and appends a file extension.
        filename = re.sub('[^0-9a-zA-Z]+', '_', subject_original) + ".eml"
    
        # Create a MIME container.
        msg = MIMEMultipart()
        # Create a MIME text part.
        text_part = MIMEText(body_text, _subtype="html")
        # Attach the text part to the MIME message.
        msg.attach(text_part)
    
        # Add subject, from and to lines.
        msg['Subject'] = subject
        msg['From'] = sender
        msg['To'] = recipient
    
        # Create a new MIME object.
        att = MIMEApplication(file_dict["file"], filename)
        att.add_header("Content-Disposition", 'attachment', filename=filename)
    
        # Attach the file object to the message.
        msg.attach(att)
    
        message = {
            "Source": sender,
            "Destinations": recipient,
            "Data": msg.as_string()
        }
    
        return message
    
    def send_email(message):
        aws_region = os.environ['Region']

        # Create a new SES client in the Region specified by the environment variable.
        client_ses = boto3.client('ses', aws_region)
    
        # Send the email.
        try:
            #Provide the contents of the email.
            response = client_ses.send_raw_email(
                Source=message['Source'],
                Destinations=[
                    message['Destinations']
                ],
                RawMessage={
                    'Data':message['Data']
                }
            )
    
        # Display an error if something goes wrong.
        except ClientError as e:
            output = e.response['Error']['Message']
        else:
            output = "Email sent! Message ID: " + response['MessageId']
    
        return output
    
    def lambda_handler(event, context):
        # Get the unique ID of the message. This corresponds to the name of the file
        # in S3.
        message_id = event['Records'][0]['ses']['mail']['messageId']
        print(f"Received message ID {message_id}")
    
        # Retrieve the file from the S3 bucket.
        file_dict = get_message_from_s3(message_id)
    
        # Create the message.
        message = create_message(file_dict)
    
        # Send the email and print the result.
        result = send_email(message)
        print(result)
  3. Create the following environment variables for the Lambda function:

    • MailS3Bucket: The name of the S3 bucket that you created earlier.
    • MailS3Prefix: The path of the folder in the S3 bucket where you will store incoming email.
    • MailSender: The email address that the forwarded message will be sent from. This address has to be verified.
    • MailRecipient: The address that you want to forward the message to.
    • Region: The name of the AWS Region that you want to use to send the email.
  4. Under Basic settings, set the Timeout value to 30 seconds.

(Optional) Step 5: Create an Amazon SNS Topic

You can optionally create an Amazon SNS topic. This step is helpful for troubleshooting purposes, or if you just want to receive additional notifications when you receive a message.

  1. Create a new Amazon SNS topic. For more information, see Creating a Topic in the Amazon SNS Developer Guide.
  2. Subscribe an endpoint, such as an email address, to the topic. For more information, see Subscribing an Endpoint to a Topic in the Amazon SNS Developer Guide.

Step 6: Create a Receipt Rule Set

  1. In the Amazon SES console, create a new Receipt Rule Set. For more information, see Creating a Receipt Rule Set in the Amazon SES Developer Guide.
  2. In the Receipt Rule Set that you just created, add a Receipt Rule. In the Receipt Rule, add an S3 Action. Set up the S3 Action to send your email to the S3 bucket that you created earlier.
  3. Add a Lambda action to the Receipt Rule. Configure the Receipt Rule to invoke the Lambda function that you created earlier.

For more information, see Setting Up a Receipt Rule in the Amazon SES Developer Guide.
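
If you prefer to script this step, the rule set and rule can also be created through the Amazon SES API. The following boto3 sketch uses hypothetical names and a hypothetical function ARN, and assumes that the bucket policy from Step 2 and the Lambda invoke permission are already in place; the console steps above remain the simplest path.

    import boto3

    ses = boto3.client("ses")

    rule_set_name = "forwarding-rule-set"   # hypothetical rule set name
    bucket_name = "<bucketName>"            # the bucket you created in Step 2
    lambda_arn = "arn:aws:lambda:us-east-1:111122223333:function:email-forwarder"  # hypothetical ARN

    ses.create_receipt_rule_set(RuleSetName=rule_set_name)

    ses.create_receipt_rule(
        RuleSetName=rule_set_name,
        Rule={
            "Name": "forward-incoming-email",
            "Enabled": True,
            "ScanEnabled": True,
            "Actions": [
                # The S3 action runs first and stores the raw message. Its key
                # prefix must match the MailS3Prefix environment variable.
                {"S3Action": {"BucketName": bucket_name, "ObjectKeyPrefix": "incoming"}},
                # The Lambda action then forwards the stored message.
                {"LambdaAction": {"FunctionArn": lambda_arn, "InvocationType": "Event"}},
            ],
        },
    )

    # Only one receipt rule set can be active at a time.
    ses.set_active_receipt_rule_set(RuleSetName=rule_set_name)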

Step 7: Test the Function

  • Send an email to an address that corresponds with an address in the Receipt Rule you created earlier. Make sure that the email arrives in the correct S3 bucket. In a minute or two, the email arrives in the inbox that you specified in the MailRecipient variable of the Lambda function.

Troubleshooting

If you send a test message, but it is never forwarded to your destination email address, do the following:

  • Make sure that the Amazon SES Receipt Rule is active.
  • Make sure that the email address that you specified in the MailRecipient variable of the Lambda function is correct.
  • Subscribe an email address or phone number to the SNS topic. Send another test email to your domain. Make sure that SNS sends a Received notification to your subscribed email address or phone number.
  • Check the CloudWatch Log for your Lambda function to see if any errors occurred.

If you send a test email to your receiving domain, but you receive a bounce notification, do the following:

  • Make sure that the verification process for your domain completed successfully.
  • Make sure that the MX record for your domain specifies the correct Amazon SES receiving endpoint.
  • Make sure that you’re sending to an address that is handled by the receipt rule.

Costs of using this solution

The cost of implementing this solution is minimal. If you receive 10,000 emails per month, and each email is 2KB in size, you pay $1.00 for your use of Amazon SES. For more information, see Amazon SES Pricing.

You also pay a small charge to store incoming emails in Amazon S3. The charge for storing 1,000 emails that are each 2KB in size is less than one cent. Your use of Amazon S3 might qualify for the AWS Free Usage Tier. For more information, see Amazon S3 Pricing.

Finally, you pay for your use of AWS Lambda. With Lambda, you pay for the number of requests you make, for the amount of compute time that you use, and for the amount of memory that you use. If you use Lambda to forward 1,000 emails that are each 2KB in size, you pay no more than a few cents. Your use of AWS Lambda might qualify for the AWS Free Usage Tier. For more information, see AWS Lambda Pricing.
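
As a rough sanity check on the Amazon SES figure above, here’s the arithmetic in a few lines of Python, using the per-1,000-message receiving rate implied by that estimate (always confirm current rates on the pricing pages):

    # Back-of-the-envelope check of the Amazon SES receiving estimate above.
    emails_per_month = 10_000
    rate_per_thousand = 0.10  # implied by the $1.00 figure; verify current pricing

    ses_receiving_cost = (emails_per_month / 1_000) * rate_per_thousand
    print(f"Estimated SES receiving cost: ${ses_receiving_cost:.2f} per month")  # $1.00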

Note: These cost estimates don’t include the costs associated with purchasing a domain, since many users already have their own domains. The cost of obtaining a domain is the most expensive part of implementing this solution.

Conclusion

This solution makes it possible to forward incoming email from one of your Amazon SES verified domains to an email address that isn’t necessarily verified. It’s also useful if you have multiple AWS accounts, and you want incoming messages to be sent from each of those accounts to a single destination. We hope you’ve found this tutorial to be helpful!

Sending Push Notifications to iOS 13 and watchOS 6 Devices

Post Syndicated from Brent Meyer original https://aws.amazon.com/blogs/messaging-and-targeting/sending-push-notifications-to-ios-13-and-watchos-6-devices/

Last week, we made some changes to the way that Amazon Pinpoint sends Apple Push Notification service (APNs) push notifications.

In June, Apple announced that push notifications sent to iOS 13 and watchOS devices would require the new apns-push-type header. APNs uses this header to determine whether a notification should be shown on the display of the recipient’s device, or if it should be sent to the background.

When you use Amazon Pinpoint to send an APNs message, you can choose whether you want to send the message as a standard message, or as a silent notification. Amazon Pinpoint uses your selection to determine which value to apply to the apns-push-type header: if you send the message as a standard message, Amazon Pinpoint automatically sets the value of the apns-push-type header to alert; if you send the message as a silent notification, it sets the apns-push-type header to background. Amazon Pinpoint applies these settings automatically; you don’t have to do any additional work in order to send messages to recipients with iOS 13 and watchOS 6 devices.

One last thing to keep in mind: if you specify the raw content of an APNs push notification, the message payload has to include the content-available key. The value of the content-available key has to be an integer, and can only be 0 or 1. If you’re sending an alert, set the value of content-available to 0. If you’re sending a background notification, set the value of content-available to 1. Additionally, background notification payloads can’t include the alert, badge, or sound keys.
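
As a minimal sketch, the raw payloads described above might look like the following (shown here as Python dictionaries for readability; the keys are the ones Apple documents for the aps dictionary):

    import json

    # Alert notification: shown on the device; content-available is 0.
    alert_payload = {
        "aps": {
            "alert": {"title": "New message", "body": "You have a new message."},
            "content-available": 0,
        }
    }

    # Background (silent) notification: content-available is 1, and the
    # payload must not include the alert, badge, or sound keys.
    background_payload = {
        "aps": {
            "content-available": 1
        }
    }

    print(json.dumps(alert_payload))
    print(json.dumps(background_payload))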

To learn more about sending APNs notifications, see Generating a Remote Notification and Pushing Background Updates to Your App on the Apple Developer website.

Create an SMS Chatbot with Amazon Pinpoint and Lex

Post Syndicated from Brent Meyer original https://aws.amazon.com/blogs/messaging-and-targeting/create-an-sms-chatbox-with-amazon-pinpoint-and-lex/

Note: This post was written by Ilya Pupko, Senior Consultant for the AWS Digital User Engagement team, and by Gopinath Srinivasan, an AWS Enterprise Solution Architect.


A major advantage of using Amazon Pinpoint for your customer engagement workflows is its ability to tightly integrate with other AWS services. These integrations make it possible to engage in deeper conversations with your customers, as opposed to simply sending one-directional, one-size-fits-all messages.

In this tutorial, we look at the process of creating an SMS-based chatbot using Amazon Lex. This chatbot will help our customers schedule appointments. We’ll use Amazon Pinpoint to send responses from the chatbot over the SMS channel, and we’ll use AWS Lambda to connect the two services together. The following image illustrates the architecture that we’ll create in this tutorial.

The steps in this post are intended to provide general guidance, rather than specific procedures. If you’ve used other AWS services in the past, most of the concepts here will be familiar. If not, don’t worry—we’ve included links to the documentation to make things easier.

Step 1: Set up a project in Amazon Pinpoint

The first step in setting up the chatbot is to create a new Amazon Pinpoint project that can send and receive SMS messages. To be able to receive incoming SMS messages, you also need to obtain a dedicated phone number.

To create a new SMS project

  1. Sign in to the Amazon Pinpoint console at https://console.aws.amazon.com/pinpoint.
  2. Create a new Amazon Pinpoint project, and enable the SMS channel for the project. For more information, see https://docs.aws.amazon.com/pinpoint/latest/userguide/channels-sms-setup.html.
  3. Request a long code for your country. For more information, see https://docs.aws.amazon.com/pinpoint/latest/userguide/channels-voice-manage.html#channels-voice-manage-request-phone-numbers.
  4. Enable the two-way SMS feature for the dedicated long code that you just purchased. Under Incoming message destination, choose Create a new SNS topic, and name it LexPinpointIntegrationDemo. For more information about setting up two-way SMS, see https://docs.aws.amazon.com/pinpoint/latest/userguide/channels-sms-two-way.html.

Step 2: Create a Lex chatbot

Now it’s time to create your bot. For the purposes of this example, we’ll use a bot that’s pre-configured to handle appointment requests. Later, you can customize this bot to fit your needs by specifying additional intents.

To create your bot

  1. Sign in to the Lex console at https://console.aws.amazon.com/lex.
  2. Create a new bot. On the Create your bot page, choose the ScheduleAppointment sample. Use the default IAM role. For COPPA, choose No. Note the name that you specified for the bot—you need to refer to this name in the Lambda function that you create later. For more information about creating bots in Lex, see https://docs.aws.amazon.com/lex/latest/dg/gs-console.html.
  3. When the bot finishes building, choose Publish. For Create an alias, enter Latest. Choose Publish.

Step 3: Set up the Lambda backend

After you create your Lex bot, you have to create a Lambda function that allows your Lex bot to send messages through Amazon Pinpoint.

To create the Lambda function

  1. Sign in to the Lambda console at https://console.aws.amazon.com/lambda.
  2. Create a new Node.js 10.x function from scratch. Create a new IAM role with the default permissions. For more information about creating functions, see https://docs.aws.amazon.com/lambda/latest/dg/getting-started-create-function.html.
  3. In the Designer section, choose Add trigger. Add a new SNS trigger, and then choose the LexPinpointIntegrationDemo topic that you created earlier.
  4. In the Lambda code editor, paste the following code:
    "use strict";
    
    const AWS = require('aws-sdk');
    AWS.config.update({
        region: process.env.Region
    });
    const pinpoint = new AWS.Pinpoint();
    const lex = new AWS.LexRuntime();
    
    var AppId = process.env.PinpointApplicationId;
    var BotName = process.env.BotName;
    var BotAlias = process.env.BotAlias;
    
    exports.handler = (event, context)  => {
        /*
        * Event info is sent via the SNS subscription: https://console.aws.amazon.com/sns/home
        * 
        * - PinpointApplicationId is your Pinpoint Project ID.
        * - BotName is your Lex Bot name.
        * - BotAlias is your Lex Bot alias (aka Lex function/flow).
        */
        console.log('Received event: ' + event.Records[0].Sns.Message);
        var message = JSON.parse(event.Records[0].Sns.Message);
        var customerPhoneNumber = message.originationNumber;
        var chatbotPhoneNumber = message.destinationNumber;
        var response = message.messageBody.toLowerCase();
        var userId = customerPhoneNumber.replace("+1", "");
    
        var params = {
            botName: BotName,
            botAlias: BotAlias,
            inputText: response,
            userId: userId
        };
        response = lex.postText(params, function (err, data) {
            if (err) {
                console.log(err, err.stack);
                //return null;
            }
            else if (data != null && data.message != null) {
                console.log("Lex response: " + data.message);
                sendResponse(customerPhoneNumber, chatbotPhoneNumber, data.message);
            }
            else {
                console.log("Lex did not send a message back!");
            }
        });
    }
    
    function sendResponse(custPhone, botPhone, response) {
        var paramsSMS = {
            ApplicationId: AppId,
            MessageRequest: {
                Addresses: {
                    [custPhone]: {
                        ChannelType: 'SMS'
                    }
                },
                MessageConfiguration: {
                    SMSMessage: {
                        Body: response,
                        MessageType: "TRANSACTIONAL",
                        OriginationNumber: botPhone
                    }
                }
            }
        };
        pinpoint.sendMessages(paramsSMS, function (err, data) {
            if (err) {
                console.log("An error occurred.\n");
                console.log(err, err.stack);
            }
            else if (data['MessageResponse']['Result'][custPhone]['DeliveryStatus'] != "SUCCESSFUL") {
                console.log("Failed to send SMS response:");
                console.log(data['MessageResponse']['Result']);
            }
            else {
                console.log("Successfully sent response via SMS from " + botPhone + " to " + custPhone);
            }
        });
    }
  5. In the Environment variables section, add the following variables:

    • PinpointApplicationId: The ID of the Amazon Pinpoint project that you created earlier.
    • BotName: The name of the Lex bot that you created earlier.
    • BotAlias: Latest
    • Region: The AWS Region that you created the Amazon Pinpoint project and Lex bot in.
  6. Under Execution role, choose the link to view the function’s execution role in the IAM console.
  7. In the IAM console, add an inline policy. Paste the following into the policy editor:
    {
       "Version":"2012-10-17",
       "Statement":[
          {
             "Sid":"Logs",
             "Effect":"Allow",
             "Action":[
                "logs:CreateLogStream",
                "logs:CreateLogGroup",
                "logs:PutLogEvents"
             ],
             "Resource":[
                "arn:aws:logs:*:*:*"
             ]
          },
          {
             "Sid":"Pinpoint",
             "Effect":"Allow",
             "Action":[
                "mobiletargeting:SendMessages"
             ],
             "Resource":[
                "arn:aws:mobiletargeting:<REGION>:<ACCOUNTID>:apps/*"
             ]
          },
          {
             "Sid":"Lex",
             "Effect":"Allow",
             "Action":[
                "lex:PostContent",
                "lex:PostText"
             ],
             "Resource":[
                "arn:aws:lex:<REGION>:<ACCOUNTID>:bot/<BOTNAME>"
             ]
          }
       ]
    }

    In the preceding code, make the following changes:

    • Replace <REGION> with the name of the AWS Region that you created the Amazon Pinpoint project and the Lex bot in (such as us-east-1).
    • Replace <ACCOUNTID> with your AWS account ID.
    • Replace <BOTNAME> with the name of your Lex bot.
  8. When you finish, save the policy as PinpointLexFunctionRole.

Step 4: Test the chatbot

Your SMS chatbot is now set up and ready to use! You can test it by sending a message (such as “Schedule an appointment”) to the long code that you obtained earlier. The chatbot responds, asking what type of appointment you want to book, and at what time.
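
If you want to verify the Lex side independently of SMS delivery, you can also call the bot directly with the Lex runtime API. A minimal boto3 sketch, using the bot name and alias that you chose earlier:

    import boto3

    lex = boto3.client("lex-runtime")

    response = lex.post_text(
        botName="<BOTNAME>",               # the bot name you noted in Step 2
        botAlias="Latest",
        userId="test-user-1",              # any stable identifier for the test session
        inputText="Schedule an appointment",
    )

    # This is the same text that the Lambda function relays over SMS.
    print(response["message"])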

Conclusion

Now that you’ve created your chatbot, you can start to customize it to fit your specific use case. For example, you can enhance the chatbot’s conversational abilities by adding intents, or you could expand on the Lambda function to integrate it with a third-party scheduling tool.

To learn more about configuring Amazon Lex, see the Amazon Lex Developer Guide.

Finally, you can find the latest updates to the code that’s associated with this tutorial in the amazon-pinpoint-lex-bot repository on Github.

Building progressive web apps that use the analytics features of Amazon Pinpoint

Post Syndicated from Brent Meyer original https://aws.amazon.com/blogs/messaging-and-targeting/building-progressive-web-apps-that-use-the-analytics-features-of-amazon-pinpoint/

Last week, our colleague Ed Lima published a post on the AWS Mobile blog about building apps with Amplify. This post shows how to use a variety of AWS services to build a progressive web app that collects survey information from customers. After collecting the customer information, the application sends usage data to Amazon Pinpoint for additional analysis.

We thought that several of our customers would find this post to be helpful as they develop their own apps. To learn more, visit https://aws.amazon.com/blogs/mobile/building-progressive-web-apps-with-the-amplify-framework-and-aws-appsync/.

Applying Netflix DevOps Patterns to Windows

Post Syndicated from Netflix Technology Blog original https://medium.com/netflix-techblog/applying-netflix-devops-patterns-to-windows-2a57f2dbbf79?source=rss----2615bd06b42e---4

Baking Windows with Packer

By Justin Phelps and Manuel Correa

Customizing Windows images at Netflix was a manual, error-prone, and time consuming process. In this blog post, we describe how we improved the methodology, which technologies we leveraged, and how this has improved service deployment and consistency.

Artisan Crafted Images

In the Netflix full cycle DevOps culture the team responsible for building a service is also responsible for deploying, testing, infrastructure, and operation of that service. A key responsibility of Netflix engineers is identifying gaps and pain points in the development and operation of services. Though the majority of our services run on Linux Amazon Machine Images (AMIs), there are still many services critical to the Netflix Playback Experience running on Windows Elastic Compute Cloud (EC2) instances at scale.

We looked at our process for creating a Windows AMI and discovered it was error-prone and full of toil. First, an engineer would launch an EC2 instance and wait for the instance to come online. Once the instance was available, the engineer would use a remote administration tool like RDP to log in to the instance, install software, and customize settings. This image was then saved as an AMI and used in an Auto Scaling group to deploy a cluster of instances. Because this process was time-consuming and painful, our Windows instances were usually missing the latest security updates from Microsoft.

Last year, we decided to improve the AMI baking process. The challenges with service management included:

  • Stale documentation
  • OS Updates
  • High cognitive overhead
  • A lack of continuous testing

Scaling Image Creation

Our existing AMI baking tool, Aminator, does not support Windows, so we had to leverage other tools. We had several goals in mind when trying to improve the baking methodology:

Configuration as Code

The first part of our new Windows baking solution is Packer. Packer allows you to describe your image customization process as a JSON file. We make use of the amazon-ebs Packer builder to launch an EC2 instance. Once online, Packer uses WinRM to copy files and run PowerShell scripts against the instance. If all of the configuration steps are successful then Packer saves a new AMI. The configuration file, referenced scripts, and artifact dependency definitions all live in an internal git repository. We now have the software and instance configuration as code. This means changes can be tracked and reviewed like any other code change.

Packer requires specific information for your baking environment and extensive AWS IAM permissions. To simplify the use of Packer for our software developers, we bundled Netflix-specific AWS environment information and helper scripts. Initially, we did this with a git repository and Packer variable files. There was also a special EC2 instance where Packer was executed as Jenkins jobs. This setup was better than manually baking images, but we still had some ergonomic challenges. For example, it became cumbersome to ensure users of Packer received updates.

The last piece of the puzzle was finding a way to package our software for installation on Windows. This would allow for reuse of helper scripts and infrastructure tools without requiring every user to copy that solution into their Packer scripts. Ideally, this would work similarly to how applications are packaged in the Aminator process. We solved this by leveraging Chocolatey, the package manager for Windows. Chocolatey packages are created and then stored in an internal artifact repository. This repository is added as a source for the choco install command. This means we can create and reuse packages that help integrate Windows into the Netflix ecosystem.

Leverage Spinnaker for Continuous Delivery

Flow chart showing how Docker image inheritance is used in the creation of a Windows AMI.
The Base Dockerfile allows updates of Packer, helper scripts, and environment configuration to propagate through the entire Windows Baking process.

To make the baking process more robust we decided to create a Docker image that contains Packer, our environment configuration, and helper scripts. Downstream users create their own Docker images based on this base image. This means we can update the base image with new environment information and helper scripts, and users get these updates automatically. With their new Docker image, users launch their Packer baking jobs using Titus, our container management system. The Titus job produces a property file as part of a Spinnaker pipeline. The resulting property file contains the AMI ID and is consumed by later pipeline stages for deployment. Running the bake in Titus removed the single EC2 instance limitation, allowing for parallel execution of the jobs.

Now each change in the infrastructure is tested, canaried, and deployed like any other code change. This process is automated via a Spinnaker pipeline:

Screenshot of an example Spinnaker pipeline showing Docker image, Windows AMI, Canary Analysis, and Deployment stages.
Example Spinnaker pipeline showing the bake, canary, and deployment stages.

In the canary stage, Kayenta is used to compare metrics between a baseline (current AMI) and the canary (new AMI). The canary stage determines a score based on metrics such as CPU, threads, latency, and GC pauses. If this score is within a healthy threshold, the AMI is deployed to each environment. Running a canary for each change and testing the AMI in production allows us to capture insights into the impact of Windows updates, script changes, web server configuration tuning, and other modifications.

Eliminate Toil

Automating these tedious operational tasks allows teams to move faster. Our engineers no longer have to manually update Windows, Java, Tomcat, IIS, and other services. We can easily test server tuning changes, software upgrades, and other modifications to the runtime environment. Every code and infrastructure change goes through the same testing and deployment pipeline.

Reaping the Benefits

Changes that used to require hours of manual work are now easy to modify, test, and deploy. Other teams can quickly deploy secure and reproducible instances in an automated fashion. Services are more reliable, testable, and documented. Changes to the infrastructure are now reviewed like any other code change. This removes unnecessary cognitive load and documents tribal knowledge. Removing toil has allowed the team to focus on other features and bug fixes. All of these benefits reduce the risk of a customer-affecting outage. Adopting the Immutable Server pattern for Windows using Packer and Chocolatey has paid big dividends.


Applying Netflix DevOps Patterns to Windows was originally published in the Netflix TechBlog on Medium.

Creating custom Pinpoint dashboards using Amazon QuickSight, part 3

Post Syndicated from Brent Meyer original https://aws.amazon.com/blogs/messaging-and-targeting/creating-custom-pinpoint-dashboards-using-amazon-quicksight-part-3/

Note: This post was written by Manan Nayar and Aprajita Arora, Software Development Engineers on the AWS Digital User Engagement team.


This is the third and final post in our series about creating custom visualizations of your Amazon Pinpoint metrics using Amazon QuickSight.

In our first post, we used the Metrics APIs to retrieve specific Key Performance Indicators (KPIs), and then created visualizations using QuickSight. In the second post, we used the event stream feature in Amazon Pinpoint to enable more in-depth analyses.

The examples in the first two posts used Amazon S3 to store the metrics that we retrieved from Amazon Pinpoint. This post takes a different approach, using Amazon Redshift to store the data. By using Redshift to store this data, you gain the ability to create visualizations on large data sets. This approach is useful when you have a large volume of event data, and when you need to retain your data for long periods of time.

Step 1: Provision the storage

The first step in setting up this solution is to create the destinations where you’ll store the Amazon Pinpoint event data. Since we’ll be storing the data in Amazon Redshift, we need to create a new Redshift cluster. We’ll also create an S3 bucket, which will house the original event data that’s streamed from Amazon Pinpoint.

To create the Redshift cluster and the S3 bucket

  1. Create a new Redshift cluster. To learn more, see the Amazon Redshift Getting Started Guide.
  2. Create a new table in the Redshift cluster that contains the appropriate columns. Use the following query to create the table:
    create table if not exists pinpoint_events_table(
      rowid varchar(255) not null,
      project_key varchar(100) not null,
      event_type varchar(100) not null,
      event_timestamp timestamp not null,
      campaign_id varchar(100),
      campaign_activity_id varchar(100),
      treatment_id varchar(100),
      PRIMARY KEY (rowid)
    );
  3. Create a new Amazon S3 bucket. For complete procedures, see Create a Bucket in the Amazon S3 Getting Started Guide.

Step 2: Set up the event stream

This example uses the event stream feature of Amazon Pinpoint to send event data to S3. Later, we’ll create a Lambda function that sends the event data to your Redshift cluster when new event data is added to the S3 bucket. This method lets us store the original event data in S3, and transfer a subset of that data to Redshift for analysis.

To set up the event stream

  1. Sign in to the Amazon Pinpoint console at http://console.aws.amazon.com/pinpoint. In the list of projects, choose the project that you want to enable event streaming for.
  2. Under Settings, choose Event stream.
  3. Choose Edit, and then configure the event stream to use Amazon Kinesis Data Firehose. If you don’t already have a Kinesis Data Firehose stream, follow the link to create one in the Kinesis console. Configure the stream to send data to an S3 bucket. For more information about creating streams, see Creating an Amazon Kinesis Data Firehose Delivery Stream.
  4. Under IAM role, choose Automatically create a role. Choose Save.
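
If you prefer to script this configuration instead of using the console, the following minimal sketch (not part of the original post) performs the same setup through the Amazon Pinpoint put_event_stream operation. The project ID, account ID, stream name, and role name are placeholders, and you should adjust the Region if you use Amazon Pinpoint in a different one.

    import boto3

    pinpoint = boto3.client('pinpoint', region_name='us-east-1')

    # Point the project's event stream at an existing Kinesis Data Firehose
    # delivery stream, using an IAM role that Amazon Pinpoint can assume.
    pinpoint.put_event_stream(
        ApplicationId='<projectId>',
        WriteEventStream={
            'DestinationStreamArn': 'arn:aws:firehose:us-east-1:<accountId>:deliverystream/<streamName>',
            'RoleArn': 'arn:aws:iam::<accountId>:role/<pinpointEventStreamRole>'
        }
    )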

Step 3: Create the Lambda function

In this section, you create a Lambda function that processes the raw event stream data, and then writes it to a table in your Redshift cluster.

To create the Lambda function

  1. Download the psycopg2 binary from https://github.com/jkehler/awslambda-psycopg2. This Python library lets you interact with PostgreSQL databases, such as Amazon Redshift. It contains certain libraries that aren’t included in Lambda.
    • Note: This GitHub repository is not an official AWS-managed repository.
  2. Within the awslambda-psycopg2-master folder, you’ll find a folder called psycopg2-37. Rename the folder to psycopg2 (you may need to delete the existing folder with that name), and then compress the entire folder to a .zip file.
  3. Create a new Lambda function from scratch, using the Python 3.7 runtime.
  4. Upload the psycopg2.zip file that you created in step 2 to Lambda.
  5. In Lambda, create a new function called lambda_function.py. Paste the following code into the function:
    import datetime
    import json
    import re
    import uuid
    import os
    import boto3
    import psycopg2
    from psycopg2 import Error
    
    cluster_redshift = "<clustername>"
    dbname_redshift = "<dbname>"
    user_redshift = "<username>"
    password_redshift = "<password>"
    endpoint_redshift = "<endpoint>"
    port_redshift = "5439"
    table_redshift = "pinpoint_events_table"
    
    # Get the file that contains the event data from the appropriate S3 bucket.
    def get_file_from_s3(bucket, key):
        s3 = boto3.client('s3')
        obj = s3.get_object(Bucket=bucket, Key=key)
        text = obj["Body"].read().decode()
    
        return text
    
    # If the object that we retrieve contains newline-delimited JSON, split it into
    # multiple objects.
    def clean_and_split(json_raw):
        json_delimited = re.sub(r'}\s{', '}---X-DELIMITER---{', json_raw)
        json_clean = re.sub(r'\s+', '', json_delimited)
        data = json_clean.split("---X-DELIMITER---")
    
        return data
    
    # Set all of the variables that we'll use to create the new row in Redshift.
    def set_variables(in_json):
    
        for line in in_json:
            content = json.loads(line)
            app_id = content['application']['app_id']
            event_type = content['event_type']
            event_timestamp = datetime.datetime.fromtimestamp(content['event_timestamp'] / 1e3).strftime('%Y-%m-%d %H:%M:%S')
    
            if (content['attributes'].get('campaign_id') is None):
                campaign_id = ""
            else:
                campaign_id = content['attributes']['campaign_id']
    
            if (content['attributes'].get('campaign_activity_id') is None):
                campaign_activity_id = ""
            else:
                campaign_activity_id = content['attributes']['campaign_activity_id']
    
            if (content['attributes'].get('treatment_id') is None):
                treatment_id = ""
            else:
                treatment_id = content['attributes']['treatment_id']
    
            write_to_redshift(app_id, event_type, event_timestamp, campaign_id, campaign_activity_id, treatment_id)
                
    # Write the event stream data to the Redshift table.
    def write_to_redshift(app_id, event_type, event_timestamp, campaign_id, campaign_activity_id, treatment_id):
        row_id = str(uuid.uuid4())
    
        query = ("INSERT INTO " + table_redshift + "(rowid, project_key, event_type, "
                + "event_timestamp, campaign_id, campaign_activity_id, treatment_id) "
                + "VALUES ('" + row_id + "', '"
                + app_id + "', '"
                + event_type + "', '"
                + event_timestamp + "', '"
                + campaign_id + "', '"
                + campaign_activity_id + "', '"
                + treatment_id + "');")
    
        # Initialize the connection variable so the finally block does not fail
        # if the connection attempt itself raises an exception.
        conn = None

        try:
            conn = psycopg2.connect(user = user_redshift,
                                    password = password_redshift,
                                    host = endpoint_redshift,
                                    port = port_redshift,
                                    database = dbname_redshift)
    
            cur = conn.cursor()
            cur.execute(query)
            conn.commit()
            print("Updated table.")
    
        except (Exception, psycopg2.DatabaseError) as error:
            print("Database error: ", error)
        finally:
            if (conn):
                cur.close()
                conn.close()
                print("Connection closed.")
    
    # Handle the event notification that we receive when a new item is sent to the 
    # S3 bucket.
    def lambda_handler(event,context):
        print("Received event: \n" + str(event))
    
        bucket = event['Records'][0]['s3']['bucket']['name']
        key = event['Records'][0]['s3']['object']['key']
        data = get_file_from_s3(bucket, key)
    
        in_json = clean_and_split(data)
    
        set_variables(in_json)

    In the preceding code, make the following changes:

    • Replace <clustername> with the name of the cluster.
    • Replace <dbname> with the name of the database.
    • Replace <username> with the user name that you specified when you created the Redshift cluster.
    • Replace <password> with the password that you specified when you created the Redshift cluster.
    • Replace <endpoint> with the endpoint address of the Redshift cluster.
  6. In IAM, update the execution role that’s associated with the Lambda function to include the GetObject permission for the S3 bucket that contains the event data. For more information, see Editing IAM Policies in the AWS IAM User Guide.
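
Before you wire up the S3 notification in the next step, you can sanity-check the function with a hypothetical local invocation like the one below (not part of the original post). It builds only the fields of an S3 event notification that lambda_handler actually reads, then calls the handler directly. This assumes that the psycopg2 package is importable in your local environment, that your AWS credentials are available, and that your machine can reach the Redshift endpoint; the bucket name and object key are placeholders for a real event file in your bucket.

    import lambda_function

    # Minimal S3 event notification payload: lambda_handler only reads the
    # bucket name and object key from the first record.
    sample_event = {
        "Records": [
            {
                "s3": {
                    "bucket": {"name": "<bucketName>"},
                    "object": {"key": "<key-of-an-event-file>"}
                }
            }
        ]
    }

    lambda_function.lambda_handler(sample_event, None)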

Step 4: Set up notifications on the S3 bucket

Now that we’ve created the Lambda function, we’ll set up a notification on the S3 bucket. In this case, the notification will refer to the Lambda function that we created in the previous section. Every time a new file is added to the bucket, the notification will cause the Lambda function to run.

To create the event notification

  1. In S3, create a new bucket notification. The notification should be triggered when PUT events occur, and should trigger the Lambda function that you created in the previous section. For more information about creating notifications, see Configuring Amazon S3 Event Notifications in the Amazon S3 Developer Guide.
  2. Test the event notification by sending a test campaign. If you send an email campaign, your Redshift database should contain events such as _campaign.send, _email.send, _email.delivered, and others. You can check the contents of the Redshift table by running the following query in the Query Editor in the Redshift console:
    select * from pinpoint_events_table;
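
If you want to script the notification setup from step 1 rather than use the console, a sketch like the following (not part of the original post) does the same thing with the S3 put_bucket_notification_configuration operation. Note that, unlike the console, this call does not automatically grant S3 permission to invoke the function; you would also need to add that permission (for example, with the Lambda add_permission operation). The bucket name, account ID, and function name are placeholders.

    import boto3

    s3 = boto3.client('s3')

    # Invoke the Lambda function whenever an object is PUT into the bucket.
    s3.put_bucket_notification_configuration(
        Bucket='<bucketName>',
        NotificationConfiguration={
            'LambdaFunctionConfigurations': [
                {
                    'LambdaFunctionArn': 'arn:aws:lambda:us-east-1:<accountId>:function:<functionName>',
                    'Events': ['s3:ObjectCreated:Put']
                }
            ]
        }
    )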

Step 5: Add the data set in Amazon QuickSight

If your Lambda function is sending event data to Redshift as expected, you can use your Redshift database to create a new data set in Amazon QuickSight. QuickSight includes an automatic database discovery feature that helps you add your Redshift database as a data set with only a few clicks. For more information, see Creating a Data Set from a Database in the Amazon QuickSight User Guide.

Step 6: Create your visualizations

Now that QuickSight is retrieving information from your Redshift database, you can use that data to create visualizations. To learn more about creating visualizations in QuickSight, see Creating an Analysis in the Amazon QuickSight User Guide.

This brings us to the end of our series. While these posts focused on using Amazon QuickSight to visualize your analytics data, you can also use these same techniques to create visualizations using third-party applications. We hope you enjoyed this series, and we can’t wait to see what you build using these examples!

Creating custom Pinpoint dashboards using Amazon QuickSight, part 2

Post Syndicated from Brent Meyer original https://aws.amazon.com/blogs/messaging-and-targeting/creating-custom-pinpoint-dashboards-using-amazon-quicksight-part-2/

Note: This post was written by Manan Nayar and Aprajita Arora, Software Development Engineers on the AWS Digital User Engagement team.


In our previous post, we discussed the process of visualizing specific, pre-aggregated Amazon Pinpoint metrics—such as delivery rate or open rate—using the Amazon Pinpoint Metrics APIs. In that example, we showed how to create a Lambda function that retrieves your metrics, and then make those metrics available for creating visualizations in Amazon QuickSight.

This post shows a different approach to exporting data from Amazon Pinpoint and using it to build visualizations. Rather than retrieve specific metrics, you can use the event stream feature in Amazon Pinpoint to export raw event data. You can use this data in Amazon QuickSight to create in-depth analyses of your data, as opposed to visualizing pre-calculated metrics. As an added benefit, when you use this solution, you don’t have to modify any code, and the underlying data is updated every few minutes.

Step 1: Configure the event stream in Amazon Pinpoint

The Amazon Pinpoint event stream includes information about campaign events (such as campaign.send) and application events (such as session.start). It also includes response information related to all of the emails and SMS messages that you send from your Amazon Pinpoint project, regardless of whether they were sent from campaigns or on a transactional basis. When you enable event streams, Amazon Pinpoint automatically sends this data to your S3 bucket (via Amazon Kinesis Data Firehose) every few minutes.

To set up the event stream

  1. Sign in to the Amazon Pinpoint console at http://console.aws.amazon.com/pinpoint. In the list of projects, choose the project that you want to enable event streaming for.
  2. Under Settings, choose Event stream.
  3. Choose Edit, and then configure the event stream to use Amazon Kinesis Data Firehose. If you don’t already have a Kinesis Data Firehose stream, follow the link to create one in the Kinesis console. Configure the stream to send data to an S3 bucket. For more information about creating streams, see Creating an Amazon Kinesis Data Firehose Delivery Stream.
  4. Under IAM role, choose Automatically create a role. Choose Save.
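
After you save these settings, you can optionally confirm that the event stream is configured by calling the Amazon Pinpoint get_event_stream operation. The following quick sketch (not part of the original post) prints the delivery stream that your project is writing to; replace the project ID and Region with your own values.

    import boto3

    pinpoint = boto3.client('pinpoint', region_name='us-east-1')

    # Returns the Kinesis Data Firehose stream and IAM role that the project's
    # event stream is currently configured to use.
    response = pinpoint.get_event_stream(ApplicationId='<projectId>')
    print(response['EventStream']['DestinationStreamArn'])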

Step 2: Add a data set in Amazon QuickSight

Now that you’ve started streaming your Amazon Pinpoint data to S3, you can set Amazon QuickSight to look for data in the S3 bucket. You connect QuickSight to sources of data by creating data sets.

To create a data set

    1. In a text editor, create a new file. Paste the following code:
      {
          "fileLocations": [
              {
                  "URIPrefixes": [ 
                      "s3://<bucketName>/"          
                  ]
              }
          ],
          "globalUploadSettings": {
              "format": "JSON"
          }
      }

      In the preceding code, replace <bucketName> with the name of the S3 bucket that you’re using to store the event stream data. Save the file as manifest.json.

    2. Sign in to the QuickSight console at https://quicksight.aws.amazon.com.
    3. Create a new S3 data set. When prompted, choose the manifest file that you created in step 1. For more information about creating S3 data sets, see Creating a Data Set Using Amazon S3 Files in the Amazon QuickSight User Guide.
    4. Create a new analysis. From here, you can start creating visualizations of your data. To learn more, see Creating an Analysis in the Amazon QuickSight User Guide.

Step 3: Set the refresh rate for the data set

You can configure your data sets in Amazon QuickSight to automatically refresh on a certain schedule. In this section, you configure the data set to refresh every day, one minute before midnight.

To set the refresh schedule

  1. Go to the QuickSight start page at https://quicksight.aws.amazon.com/sn/start. Choose Manage data.
  2. Choose the data set that you created in the previous section.
  3. Choose Schedule refresh. Follow the prompts to set up a daily refresh schedule.

Step 4: Create your visualizations

From this point, you can start creating visualizations of your data. To learn more about creating visualizations, see Creating an Analysis in the Amazon QuickSight User Guide.

Creating custom Pinpoint dashboards using Amazon QuickSight, part 1

Post Syndicated from Brent Meyer original https://aws.amazon.com/blogs/messaging-and-targeting/creating-custom-pinpoint-dashboards-using-amazon-quicksight-part-1/

Note: This post was written by Manan Nayar and Aprajita Arora, Software Development Engineers on the AWS Digital User Engagement team.


Amazon Pinpoint helps you create customer-centric engagement experiences across the mobile, web, and other messaging channels. It also provides a variety of Key Performance Indicators (KPIs) that you can use to track the performance of your messaging programs.

You can access these KPIs through the console, or by using the Amazon Pinpoint API. In some cases, you might want to create custom dashboards that aren’t included by default, or even combine these metrics with other data. Over the next few days, we’ll discuss several different methods that you can use to create your own custom dashboards.

In this post, you’ll learn how to use the Amazon Pinpoint API to retrieve metrics, and then display them in visualizations that you create in Amazon QuickSight. This option is ideal for creating custom dashboards that highlight a specific set of metrics, or for embedding these metrics in your existing application or website.

In the next post (which we’ll post on Monday, August 19), you’ll learn how to export raw event data to an S3 bucket, and use that data to create dashboards by using QuickSight’s Super-fast, Parallel, In-memory Calculation Engine (SPICE). This option enables you to perform in-depth analyses and quickly update visualizations. It’s also cost-effective, because all of the event data is stored in an S3 bucket.

The final post (which we’ll post on Wednesday, August 21) will also discuss the process of creating visualizations from event stream data. However, in this solution, the data will be sent from Amazon Kinesis to a Redshift cluster. This option is ideal if you need to process very large volumes of event data.

Creating a QuickSight dashboard that uses specific metrics

You can use the Amazon Pinpoint API to programmatically access many of the metrics that are shown on the Analytics pages of the Amazon Pinpoint console. You can learn more about using the API to obtain specific KPIs in our recent blog post, Tracking Campaign Performance Using the Metrics APIs.

The following sections show you how to parse and store those results in Amazon S3, and then create custom dashboards by using Amazon QuickSight. The steps below are meant to provide general guidance, rather than specific procedures. If you’ve used other AWS services in the past, most of the concepts here will be familiar. If not, don’t worry—we’ve included links to the documentation to make things easier.

Step 1: Package the Dependencies

Lambda currently uses a version of the AWS SDK that is a few versions behind the current version. However, the ability to retrieve Pinpoint metrics programmatically is a relatively new feature. For this reason, you have to download the latest version of the SDK libraries to your computer, create a .zip archive, and then upload that archive to Lambda.

To package the dependencies

    1. Paste the following code into a text editor:
      from datetime import datetime
      import boto3
      import json
      
      AWS_REGION = "<us-east-1>"
      PROJECT_ID = "<projectId>"
      BUCKET_NAME = "<bucketName>"
      BUCKET_PREFIX = "quicksight-data"
      DATE = datetime.now()
      
      # Get today's values for the specified KPI.
      def get_kpi(kpi_name):
      
          client = boto3.client('pinpoint',region_name=AWS_REGION)
      
          response = client.get_application_date_range_kpi(
              ApplicationId=PROJECT_ID,
              EndTime=DATE.strftime("%Y-%m-%d"),
              KpiName=kpi_name,
              StartTime=DATE.strftime("%Y-%m-%d")
          )
          rows = response['ApplicationDateRangeKpiResponse']['KpiResult']['Rows'][0]['Values']
      
          # Create a JSON object that contains the values we'll use to build QuickSight visualizations.
          data = construct_json_object(rows[0]['Key'], rows[0]['Value'])
      
          # Send the data to the S3 bucket.
          write_results_to_s3(kpi_name, json.dumps(data).encode('UTF-8'))
      
      # Create the JSON object that we'll send to S3.
      def construct_json_object(kpi_name, value):
          data = {
              "applicationId": PROJECT_ID,
              "kpiName": kpi_name,
              "date": str(DATE),
              "value": value
          }
      
          return data
      
      # Send the data to the designated S3 bucket.
      def write_results_to_s3(kpi_name, data):
          # Create a file path with folders for year, month, date, and hour.
          path = (
              BUCKET_PREFIX + "/"
              + DATE.strftime("%Y") + "/"
              + DATE.strftime("%m") + "/"
              + DATE.strftime("%d") + "/"
              + DATE.strftime("%H") + "/"
              + kpi_name
          )
      
          client = boto3.client('s3')
      
          # Send the data to the S3 bucket.
          response = client.put_object(
              Bucket=BUCKET_NAME,
              Key=path,
              Body=bytes(data)
          )
      
      def lambda_handler(event, context):
          get_kpi('email-open-rate')
          get_kpi('successful-delivery-rate')
          get_kpi('unique-deliveries')

      In the preceding code, make the following changes:

      • Replace <us-east-1> with the name of the AWS Region that you use Amazon Pinpoint in.
      • Replace <projectId> with the ID of the Amazon Pinpoint project that the metrics are associated with.
      • Replace <bucketName> with the name of the Amazon S3 bucket that you want to use to store the data. For more information about creating S3 buckets, see Create a Bucket in the Amazon S3 Getting Started Guide.
      • Optionally, modify the lambda_handler function so that it calls the get_kpi function for the specific metrics that you want to retrieve.

      When you finish, save the file as retrieve_pinpoint_kpis.py.

    2. Use pip to download the latest versions of the boto3 and botocore libraries. Add these libraries to a .zip file. Also add retrieve_pinpoint_kpis.py to the .zip file. You can learn more about all of these tasks in Updating a Function with Additional Dependencies With a Virtual Environment in the AWS Lambda Developer Guide.
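
If you want to confirm that Lambda is actually using the libraries you packaged rather than its built-in SDK, a temporary handler like the following sketch (not part of the original post) returns the versions that the function loads at runtime. If the reported versions are older than the ones you added to the .zip archive, the archive isn’t being picked up.

    import boto3
    import botocore

    def lambda_handler(event, context):
        # Report the SDK versions that this function's runtime actually loaded.
        return {"boto3": boto3.__version__, "botocore": botocore.__version__}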

Step 2: Set up the Lambda function

In this section, you upload the package that you created in the previous section to Lambda.

To set up the Lambda function

  1. In the Lambda console, create a new function from scratch. Choose the Python 3.7 runtime.
  2. Choose a Lambda execution role that contains the following permissions:
    • Allows the action mobiletargeting:GetApplicationDateRangeKpi for the resource arn:aws:mobiletargeting:<awsRegion>:<yourAwsAccountId>:apps/*/kpis/*/*, where <awsRegion> is the Region where you use Amazon Pinpoint, and <yourAwsAccountId> is your AWS account number.
    • Allows the action s3:PutObject for the resource arn:aws:s3:::<my_bucket>/*, where <my_bucket> is the name of the S3 bucket where you want to store the metrics.
  3. Upload the .zip file that you created in the previous section.
  4. Change the Handler value to retrieve_pinpoint_kpis.lambda_handler.
  5. Save your changes.
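
Steps 3 and 4 can also be scripted. The following minimal sketch (not part of the original post) uploads the archive and sets the handler by using the Lambda update_function_code and update_function_configuration operations. The function name and archive file name are placeholders, and the Region should match the one you use for Amazon Pinpoint.

    import boto3

    lambda_client = boto3.client('lambda', region_name='us-east-1')

    # Upload the deployment package that contains the SDK libraries and
    # retrieve_pinpoint_kpis.py.
    with open('retrieve_pinpoint_kpis.zip', 'rb') as archive:
        lambda_client.update_function_code(
            FunctionName='<functionName>',
            ZipFile=archive.read()
        )

    # Point the function at the lambda_handler function in retrieve_pinpoint_kpis.py.
    lambda_client.update_function_configuration(
        FunctionName='<functionName>',
        Handler='retrieve_pinpoint_kpis.lambda_handler'
    )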

Step 3: Schedule the execution of the function

At this point, the Lambda function is ready to run. The next step is to set up the trigger that will cause it to run. In this case, since we’re retrieving an entire day’s worth of data, we’ll set up a scheduled trigger that runs every day at 11:59 PM.

To set up the trigger

  1. In the Lambda console, in the Designer section, choose Add trigger.
  2. Create a new CloudWatch Events rule that uses the Schedule expression rule type.
  3. For the schedule expression, enter cron(59 23 ? * * *).
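
The same schedule can be created programmatically. The sketch below (not part of the original post) creates the rule, grants CloudWatch Events permission to invoke the function, and attaches the function as the rule’s target. The rule name, statement ID, account ID, and function name are all placeholder values; note that CloudWatch Events cron expressions are evaluated in UTC.

    import boto3

    events = boto3.client('events', region_name='us-east-1')
    lambda_client = boto3.client('lambda', region_name='us-east-1')

    # Create (or update) a rule that fires at 11:59 PM UTC every day.
    rule_arn = events.put_rule(
        Name='retrieve-pinpoint-kpis-daily',
        ScheduleExpression='cron(59 23 ? * * *)'
    )['RuleArn']

    # Allow CloudWatch Events to invoke the Lambda function.
    lambda_client.add_permission(
        FunctionName='<functionName>',
        StatementId='retrieve-pinpoint-kpis-daily-trigger',
        Action='lambda:InvokeFunction',
        Principal='events.amazonaws.com',
        SourceArn=rule_arn
    )

    # Attach the function as the rule's target.
    events.put_targets(
        Rule='retrieve-pinpoint-kpis-daily',
        Targets=[
            {
                'Id': 'retrieve-pinpoint-kpis',
                'Arn': 'arn:aws:lambda:us-east-1:<accountId>:function:<functionName>'
            }
        ]
    )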

Step 4: Create QuickSight Analyses

Once the data is populated in S3, you can start creating analyses in Amazon QuickSight. The process of creating new analyses involves a couple of tasks: creating a new data set, and creating your visualizations.

To create analyses in QuickSight
    1. In a text editor, create a new file. Paste the following code:
      {
          "fileLocations": [
              {
                  "URIPrefixes": [
                      "s3://<bucketName>/quicksight-data/"
                  ]
              }
          ],
          "globalUploadSettings": {
              "format": "JSON"
          }
      }

      In the preceding code, replace <bucketName> with the name of the S3 bucket that you’re using to store the metrics data. Save the file as manifest.json.

    2. Sign in to the QuickSight console at https://quicksight.aws.amazon.com.
    3. Create a new S3 data set. When prompted, choose the manifest file that you created in step 1. For more information about creating S3 data sets, see Creating a Data Set Using Amazon S3 Files in the Amazon QuickSight User Guide.
    4. Create a new analysis. From here, you can start creating visualizations of your data. To learn more, see Creating an Analysis in the Amazon QuickSight User Guide.