Tag Archives: AWS

Sending Push Notifications to iOS 13 and watchOS 6 Devices

Post Syndicated from Brent Meyer original https://aws.amazon.com/blogs/messaging-and-targeting/sending-push-notifications-to-ios-13-and-watchos-6-devices/

Last week, we made some changes to the way that Amazon Pinpoint sends Apple Push Notification service (APNs) push notifications.

In June, Apple announced that push notifications sent to iOS 13 and watchOS 6 devices would require the new apns-push-type header. APNs uses this header to determine whether a notification should be shown on the display of the recipient’s device, or whether it should be delivered in the background.

When you use Amazon Pinpoint to send an APNs message, you can choose whether to send the message as a standard message or as a silent notification. Amazon Pinpoint uses your selection to determine which value to apply to the apns-push-type header: if you send the message as a standard message, Amazon Pinpoint automatically sets the value of the apns-push-type header to alert; if you send it as a silent notification, it sets the apns-push-type header to background. Amazon Pinpoint applies these settings automatically, so you don’t have to do any additional work to send messages to recipients with iOS 13 and watchOS 6 devices.

One last thing to keep in mind: if you specify the raw content of an APNs push notification, the message payload has to include the content-available key. The value of the content-available key has to be an integer, and can only be 0 or 1. If you’re sending an alert, set the value of content-available to 0. If you’re sending a background notification, set the value of content-available to 1. Additionally, background notification payloads can’t include the alert, badge, or sound keys.
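
For reference, here is a minimal sketch of what each type of raw payload might look like. The alert text and the custom data key are placeholder values, not part of the Amazon Pinpoint documentation. A standard notification:

    {
      "aps": {
        "alert": "Your package has shipped!",
        "content-available": 0
      }
    }

And a silent (background) notification, which omits the alert, badge, and sound keys:

    {
      "aps": {
        "content-available": 1
      },
      "order-status": "shipped"
    }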

To learn more about sending APNs notifications, see Generating a Remote Notification and Pushing Background Updates to Your App on the Apple Developer website.

Create an SMS Chatbot with Amazon Pinpoint and Lex

Post Syndicated from Brent Meyer original https://aws.amazon.com/blogs/messaging-and-targeting/create-an-sms-chatbox-with-amazon-pinpoint-and-lex/

Note: This post was written by Ilya Pupko, Senior Consultant for the AWS Digital User Engagement team, and by Gopinath Srinivasan, an AWS Enterprise Solution Architect.


A major advantage of using Amazon Pinpoint for your customer engagement workflows is its ability to tightly integrate with other AWS services. These integrations make it possible to engage in deeper conversations with your customers, as opposed to simply sending one-directional, one-size-fits-all messages.

In this tutorial, we look at the process of creating an SMS-based chatbot using Amazon Lex. This chatbot will help our customers schedule appointments. We’ll use Amazon Pinpoint to send responses from the chatbot over the SMS channel, and we’ll use AWS Lambda to connect the two services together. The following image illustrates the architecture that we’ll create in this tutorial.

The steps in this post are intended to provide general guidance, rather than specific procedures. If you’ve used other AWS services in the past, most of the concepts here will be familiar. If not, don’t worry—we’ve included links to the documentation to make things easier.

Step 1: Set up a project in Amazon Pinpoint

The first step in setting up the chatbot is to create a new Amazon Pinpoint project that can send and receive SMS messages. To be able to receive incoming SMS messages, you also need to obtain a dedicated phone number.

To create a new SMS project

  1. Sign in to the Amazon Pinpoint console at https://console.aws.amazon.com/pinpoint.
  2. Create a new Amazon Pinpoint project, and enable the SMS channel for the project. For more information, see https://docs.aws.amazon.com/pinpoint/latest/userguide/channels-sms-setup.html.
  3. Request a long code for your country. For more information, see https://docs.aws.amazon.com/pinpoint/latest/userguide/channels-voice-manage.html#channels-voice-manage-request-phone-numbers.
  4. Enable the two-way SMS feature for the dedicated long code that you just purchased. Under Incoming message destination, choose Create a new SNS topic, and name it LexPinpointIntegrationDemo. For more information about setting up two-way SMS, see https://docs.aws.amazon.com/pinpoint/latest/userguide/channels-sms-two-way.html.

Step 2: Create a Lex chatbot

Now it’s time to create your bot. For the purposes of this example, we’ll use a bot that’s pre-configured to handle appointment requests. Later, you can customize this bot to fit your needs by specifying additional intents.

To create your bot

  1. Sign in to the Lex console at https://console.aws.amazon.com/lex.
  2. Create a new bot. On the Create your bot page, choose the ScheduleAppointment sample. Use the default IAM role. For COPPA, choose No. Note the name that you specified for the bot—you need to refer to this name in the Lambda function that you create later. For more information about creating bots in Lex, see https://docs.aws.amazon.com/lex/latest/dg/gs-console.html.
  3. When the bot finishes building, choose Publish. For Create an alias, enter Latest. Choose Publish.

Step 3: Set up the Lambda backend

After you create your Lex bot, you have to create a Lambda function that allows your Lex bot to send messages through Amazon Pinpoint.

To create the Lambda function

  1. Sign in to the Lambda console at https://console.aws.amazon.com/lambda.
  2. Create a new Node.js 10.x function from scratch. Create a new IAM role with the default permissions. For more information about creating functions, see https://docs.aws.amazon.com/lambda/latest/dg/getting-started-create-function.html.
  3. In the Designer section, choose Add trigger. Add a new SNS trigger, and then choose the LexPinpointIntegrationDemo topic that you created earlier.
  4. In the Lambda code editor, paste the following code:
    "use strict";
    
    const AWS = require('aws-sdk');
    AWS.config.update({
        region: process.env.Region
    });
    const pinpoint = new AWS.Pinpoint();
    const lex = new AWS.LexRuntime();
    
    var AppId = process.env.PinpointApplicationId;
    var BotName = process.env.BotName;
    var BotAlias = process.env.BotAlias;
    
    exports.handler = (event, context)  => {
        /*
        * Event info is sent via the SNS subscription: https://console.aws.amazon.com/sns/home
        * 
        * - PinpointApplicationId is your Pinpoint Project ID.
        * - BotName is your Lex Bot name.
        * - BotAlias is your Lex Bot alias (aka Lex function/flow).
        */
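        /*
        * A hypothetical example of the two-way SMS payload that Amazon Pinpoint
        * publishes to the SNS topic (values are placeholders; the fields below are
        * the ones this function reads):
        *
        * {
        *   "originationNumber": "+15551234567",
        *   "destinationNumber": "+15557654321",
        *   "messageKeyword": "KEYWORD_example",
        *   "messageBody": "Schedule an appointment",
        *   "inboundMessageId": "<message ID>",
        *   "previousPublishedMessageId": "<message ID>"
        * }
        */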
        console.log('Received event: ' + event.Records[0].Sns.Message);
        var message = JSON.parse(event.Records[0].Sns.Message);
        var customerPhoneNumber = message.originationNumber;
        var chatbotPhoneNumber = message.destinationNumber;
        var response = message.messageBody.toLowerCase();
        var userId = customerPhoneNumber.replace("+1", "");
    
        var params = {
            botName: BotName,
            botAlias: BotAlias,
            inputText: response,
            userId: userId
        };
        // Send the customer's message to the Lex bot, then relay the bot's reply over SMS.
        lex.postText(params, function (err, data) {
            if (err) {
                console.log(err, err.stack);
            }
            else if (data != null && data.message != null) {
                console.log("Lex response: " + data.message);
                sendResponse(customerPhoneNumber, chatbotPhoneNumber, data.message);
            }
            else {
                console.log("Lex did not send a message back!");
            }
        });
    }
    
    function sendResponse(custPhone, botPhone, response) {
        var paramsSMS = {
            ApplicationId: AppId,
            MessageRequest: {
                Addresses: {
                    [custPhone]: {
                        ChannelType: 'SMS'
                    }
                },
                MessageConfiguration: {
                    SMSMessage: {
                        Body: response,
                        MessageType: "TRANSACTIONAL",
                        OriginationNumber: botPhone
                    }
                }
            }
        };
        pinpoint.sendMessages(paramsSMS, function (err, data) {
            if (err) {
                console.log("An error occurred.\n");
                console.log(err, err.stack);
            }
            else if (data['MessageResponse']['Result'][custPhone]['DeliveryStatus'] != "SUCCESSFUL") {
                console.log("Failed to send SMS response:");
                console.log(data['MessageResponse']['Result']);
            }
            else {
                console.log("Successfully sent response via SMS from " + botPhone + " to " + custPhone);
            }
        });
    }
  5. In the Environment variables section, add the following variables:

    Key                      Value
    PinpointApplicationId    The ID of the Amazon Pinpoint project that you created earlier.
    BotName                  The name of the Lex bot that you created earlier.
    BotAlias                 Latest
    Region                   The AWS Region that you created the Amazon Pinpoint project and Lex bot in.
  6. Under Execution role, choose View the LexPinpointIntegrationDemoLambda role.
  7. In the IAM console, add an inline policy. Paste the following into the policy editor:
    {
       "Version":"2012-10-17",
       "Statement":[
          {
             "Sid":"Logs",
             "Effect":"Allow",
             "Action":[
                "logs:CreateLogStream",
                "logs:CreateLogGroup",
                "logs:PutLogEvents"
             ],
             "Resource":[
                "arn:aws:logs:*:*:*"
             ]
          },
          {
             "Sid":"Pinpoint",
             "Effect":"Allow",
             "Action":[
                "mobiletargeting:SendMessages"
             ],
             "Resource":[
                "arn:aws:mobiletargeting:<REGION>:<ACCOUNTID>:apps/*"
             ]
          },
          {
             "Sid":"Lex",
             "Effect":"Allow",
             "Action":[
                "lex:PostContent",
                "lex:PostText"
             ],
             "Resource":[
                "arn:aws:lex:<REGION>:<ACCOUNTID>:bot/<BOTNAME>"
             ]
          }
       ]
    }

    In the preceding code, make the following changes:

    • Replace <REGION> with the name of the AWS Region that you created the Amazon Pinpoint project and the Lex bot in (such as us-east-1).
    • Replace <ACCOUNTID> with your AWS account ID.
    • Replace <BOTNAME> with the name of your Lex bot.
  8. When you finish, save the policy as PinpointLexFunctionRole.

Step 4: Test the chatbot

Your SMS chatbot is now set up and ready to use! You can test it by sending a message (such as “Schedule an appointment”) to the long code that you obtained earlier. The chatbot responds, asking what type of appointment you want to book, and at what time.

Conclusion

Now that you’ve created your chatbot, you can start to customize it to fit your specific use case. For example, you can enhance the chatbot’s conversational abilities by adding intents, or you could expand on the Lambda function to integrate it with a third-party scheduling tool.

To learn more about configuring Amazon Lex, see the Amazon Lex Developer Guide.

Finally, you can find the latest updates to the code that’s associated with this tutorial in the amazon-pinpoint-lex-bot repository on GitHub.

Building progressive web apps that use the analytics features of Amazon Pinpoint

Post Syndicated from Brent Meyer original https://aws.amazon.com/blogs/messaging-and-targeting/building-progressive-web-apps-that-use-the-analytics-features-of-amazon-pinpoint/

Last week, our colleague Ed Lima published a post on the AWS Mobile blog about building apps with Amplify. This post shows how to use a variety of AWS services to build a progressive web app that collects survey information from customers. After collecting the customer information, the application sends usage data to Amazon Pinpoint for additional analysis.

We thought that several of our customers would find this post to be helpful as they develop their own apps. To learn more, visit https://aws.amazon.com/blogs/mobile/building-progressive-web-apps-with-the-amplify-framework-and-aws-appsync/.

Applying Netflix DevOps Patterns to Windows

Post Syndicated from Netflix Technology Blog original https://medium.com/netflix-techblog/applying-netflix-devops-patterns-to-windows-2a57f2dbbf79?source=rss----2615bd06b42e---4

Baking Windows with Packer

By Justin Phelps and Manuel Correa

Customizing Windows images at Netflix was a manual, error-prone, and time-consuming process. In this blog post, we describe how we improved the methodology, which technologies we leveraged, and how this has improved service deployment and consistency.

Artisan Crafted Images

In the Netflix full cycle DevOps culture, the team responsible for building a service is also responsible for the deployment, testing, infrastructure, and operation of that service. A key responsibility of Netflix engineers is identifying gaps and pain points in the development and operation of services. Though the majority of our services run on Linux Amazon Machine Images (AMIs), there are still many services critical to the Netflix Playback Experience running on Windows Elastic Compute Cloud (EC2) instances at scale.

We looked at our process for creating a Windows AMI and discovered it was error-prone and full of toil. First, an engineer would launch an EC2 instance and wait for the instance to come online. Once the instance was available, the engineer would use a remote administration tool like RDP to log in to the instance to install software and customize settings. This image was then saved as an AMI and used in an Auto Scale Group to deploy a cluster of instances. Because this process was time-consuming and painful, our Windows instances were usually missing the latest security updates from Microsoft.

Last year, we decided to improve the AMI baking process. The challenges with service management included:

  • Stale documentation
  • OS Updates
  • High cognitive overhead
  • A lack of continuous testing

Scaling Image Creation

Our existing AMI baking tool, Aminator, does not support Windows, so we had to leverage other tools. We had several goals in mind when trying to improve the baking methodology:

Configuration as Code

The first part of our new Windows baking solution is Packer. Packer allows you to describe your image customization process as a JSON file. We make use of the amazon-ebs Packer builder to launch an EC2 instance. Once online, Packer uses WinRM to copy files and run PowerShell scripts against the instance. If all of the configuration steps are successful then Packer saves a new AMI. The configuration file, referenced scripts, and artifact dependency definitions all live in an internal git repository. We now have the software and instance configuration as code. This means changes can be tracked and reviewed like any other code change.
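
To give a rough sense of what this looks like, here is a heavily simplified sketch of a Packer template that uses the amazon-ebs builder, WinRM, and a PowerShell provisioner. The AMI ID, script names, and other values are placeholders rather than Netflix’s actual configuration, and a real Windows build also needs user data that enables WinRM on the instance.

    {
      "builders": [
        {
          "type": "amazon-ebs",
          "region": "us-west-2",
          "instance_type": "m5.large",
          "source_ami": "ami-0123456789abcdef0",
          "communicator": "winrm",
          "winrm_username": "Administrator",
          "user_data_file": "./enable-winrm.ps1",
          "ami_name": "windows-service-base-{{timestamp}}"
        }
      ],
      "provisioners": [
        {
          "type": "powershell",
          "scripts": ["./install-software.ps1", "./configure-settings.ps1"]
        }
      ]
    }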

Packer requires specific information for your baking environment and extensive AWS IAM permissions. In order to simplify the use of Packer for our software developers, we bundled Netflix-specific AWS environment information and helper scripts. Initially, we did this with a git repository and Packer variable files. There was also a special EC2 instance where Packer was executed as Jenkins jobs. This setup was better than manually baking images but we still had some ergonomic challenges. For example, it became cumbersome to ensure users of Packer received updates.

The last piece of the puzzle was finding a way to package our software for installation on Windows. This would allow for reuse of helper scripts and infrastructure tools without requiring every user to copy that solution into their Packer scripts. Ideally, this would work similarly to how applications are packaged in the Aminator process. We solved this by leveraging Chocolatey, the package manager for Windows. Chocolatey packages are created and then stored in an internal artifact repository. This repository is added as a source for the choco install command. This means we can create and reuse packages that help integrate Windows into the Netflix ecosystem.

Leverage Spinnaker for Continuous Delivery

Flow chart showing how Docker image inheritance is used in the creation of a Windows AMI.
The Base Dockerfile allows updates of Packer, helper scripts, and environment configuration to propagate through the entire Windows Baking process.

To make the baking process more robust we decided to create a Docker image that contains Packer, our environment configuration, and helper scripts. Downstream users create their own Docker images based on this base image. This means we can update the base image with new environment information and helper scripts, and users get these updates automatically. With their new Docker image, users launch their Packer baking jobs using Titus, our container management system. The Titus job produces a property file as part of a Spinnaker pipeline. The resulting property file contains the AMI ID and is consumed by later pipeline stages for deployment. Running the bake in Titus removed the single EC2 instance limitation, allowing for parallel execution of the jobs.

Now each change in the infrastructure is tested, canaried, and deployed like any other code change. This process is automated via a Spinnaker pipeline:

Screenshot of an example Spinnaker pipeline showing Docker image, Windows AMI, Canary Analysis, and Deployment stages.
Example Spinnaker pipeline showing the bake, canary, and deployment stages.

In the canary stage, Kayenta is used to compare metrics between a baseline (current AMI) and the canary (new AMI). The canary stage determines a score based on metrics such as CPU, threads, latency, and GC pauses. If this score is within a healthy threshold, the AMI is deployed to each environment. Running a canary for each change and testing the AMI in production allows us to capture insights into the impact of Windows updates, script changes, web server configuration tuning, and other modifications.

Eliminate Toil

Automating these tedious operational tasks allows teams to move faster. Our engineers no longer have to manually update Windows, Java, Tomcat, IIS, and other services. We can easily test server tuning changes, software upgrades, and other modifications to the runtime environment. Every code and infrastructure change goes through the same testing and deployment pipeline.

Reaping the Benefits

Changes that used to require hours of manual work are now easy to modify, test, and deploy. Other teams can quickly deploy secure and reproducible instances in an automated fashion. Services are more reliable, testable, and documented. Changes to the infrastructure are now reviewed like any other code change. This removes unnecessary cognitive load and documents tribal knowledge. Removing toil has allowed the team to focus on other features and bug fixes. All of these benefits reduce the risk of a customer-affecting outage. Adopting the Immutable Server pattern for Windows using Packer and Chocolatey has paid big dividends.


Applying Netflix DevOps Patterns to Windows was originally published in Netflix TechBlog on Medium, where people are continuing the conversation by highlighting and responding to this story.

Creating custom Pinpoint dashboards using Amazon QuickSight, part 3

Post Syndicated from Brent Meyer original https://aws.amazon.com/blogs/messaging-and-targeting/creating-custom-pinpoint-dashboards-using-amazon-quicksight-part-3/

Note: This post was written by Manan Nayar and Aprajita Arora, Software Development Engineers on the AWS Digital User Engagement team.


This is the third and final post in our series about creating custom visualizations of your Amazon Pinpoint metrics using Amazon QuickSight.

In our first post, we used the Metrics APIs to retrieve specific Key Performance Indicators (KPIs), and then created visualizations using QuickSight. In the second post, we used the event stream feature in Amazon Pinpoint to enable more in-depth analyses.

The examples in the first two posts used Amazon S3 to store the metrics that we retrieved from Amazon Pinpoint. This post takes a different approach, using Amazon Redshift to store the data. By using Redshift to store this data, you gain the ability to create visualizations on large data sets. This example is useful in situations where you have a large volume of event data, and situations where you need to store your data for long periods of time.

Step 1: Provision the storage

The first step in setting up this solution is to create the destinations where you’ll store the Amazon Pinpoint event data. Since we’ll be storing the data in Amazon Redshift, we need to create a new Redshift cluster. We’ll also create an S3 bucket, which will house the original event data that’s streamed from Amazon Pinpoint.

To create the Redshift cluster and the S3 bucket

  1. Create a new Redshift cluster. To learn more, see the Amazon Redshift Getting Started Guide.
  2. Create a new table in the Redshift cluster that contains the appropriate columns. Use the following query to create the table:
    create table if not exists pinpoint_events_table(
      rowid varchar(255) not null,
      project_key varchar(100) not null,
      event_type varchar(100) not null,
      event_timestamp timestamp not null,
      campaign_id varchar(100),
      campaign_activity_id varchar(100),
      treatment_id varchar(100),
      PRIMARY KEY (rowid)
    );
  3. Create a new Amazon S3 bucket. For complete procedures, see Create a Bucket in the Amazon S3 Getting Started Guide.

Step 2: Set up the event stream

This example uses the event stream feature of Amazon Pinpoint to send event data to S3. Later, we’ll create a Lambda function that sends the event data to your Redshift cluster when new event data is added to the S3 bucket. This method lets us store the original event data in S3, and transfer a subset of that data to Redshift for analysis.

To set up the event stream

  1. Sign in to the Amazon Pinpoint console at http://console.aws.amazon.com/pinpoint. In the list of projects, choose the project that you want to enable event streaming for.
  2. Under Settings, choose Event stream.
  3. Choose Edit, and then configure the event stream to use Amazon Kinesis Data Firehose. If you don’t already have a Kinesis Data Firehose stream, follow the link to create one in the Kinesis console. Configure the stream to send data to an S3 bucket. For more information about creating streams, see Creating an Amazon Kinesis Data Firehose Delivery Stream.
  4. Under IAM role, choose Automatically create a role. Choose Save.

Step 3: Create the Lambda function

In this section, you create a Lambda function that processes the raw event stream data, and then writes it to a table in your Redshift cluster.

To create the Lambda function

  1. Download the psycopg2 binary from https://github.com/jkehler/awslambda-psycopg2. This Python library lets you interact with PostgreSQL databases, such as Amazon Redshift. It contains certain libraries that aren’t included in Lambda.
    • Note: This GitHub repository is not an official AWS-managed repository.
  2. Within the awslambda-psycopg2-master folder, you’ll find a folder called psycopg2-37. Rename the folder to psycopg2 (you may need to delete the existing folder with that name), and then compress the entire folder to a .zip file.
  3. Create a new Lambda function from scratch, using the Python 3.7 runtime.
  4. Upload the .zip file that you created in step 2 to Lambda.
  5. In Lambda, create a new function called lambda_function.py. Paste the following code into the function:
    import datetime
    import json
    import re
    import uuid
    import os
    import boto3
    import psycopg2
    from psycopg2 import Error
    
    cluster_redshift = "<clustername>"
    dbname_redshift = "<dbname>"
    user_redshift = "<username>"
    password_redshift = "<password>"
    endpoint_redshift = "<endpoint>"
    port_redshift = "5439"
    table_redshift = "pinpoint_events_table"
    
    # Get the file that contains the event data from the appropriate S3 bucket.
    def get_file_from_s3(bucket, key):
        s3 = boto3.client('s3')
        obj = s3.get_object(Bucket=bucket, Key=key)
        text = obj["Body"].read().decode()
    
        return text
    
    # If the object that we retrieve contains newline-delineated JSON, split it into
    # multiple objects.
    def clean_and_split(json_raw):
        json_delimited = re.sub(r'}\s{', '}---X-DELIMITER---{', json_raw)
        json_clean = re.sub(r'\s+', '', json_delimited)
        data = json_clean.split("---X-DELIMITER---")
    
        return data
    
    # Set all of the variables that we'll use to create the new row in Redshift.
    def set_variables(in_json):
    
        for line in in_json:
            content = json.loads(line)
            app_id = content['application']['app_id']
            event_type = content['event_type']
            event_timestamp = datetime.datetime.fromtimestamp(content['event_timestamp'] / 1e3).strftime('%Y-%m-%d %H:%M:%S')
    
            if (content['attributes'].get('campaign_id') is None):
                campaign_id = ""
            else:
                campaign_id = content['attributes']['campaign_id']
    
            if (content['attributes'].get('campaign_activity_id') is None):
                campaign_activity_id = ""
            else:
                campaign_activity_id = content['attributes']['campaign_activity_id']
    
            if (content['attributes'].get('treatment_id') is None):
                treatment_id = ""
            else:
                treatment_id = content['attributes']['treatment_id']
    
            write_to_redshift(app_id, event_type, event_timestamp, campaign_id, campaign_activity_id, treatment_id)
                
    # Write the event stream data to the Redshift table.
    def write_to_redshift(app_id, event_type, event_timestamp, campaign_id, campaign_activity_id, treatment_id):
        row_id = str(uuid.uuid4())
    
        query = ("INSERT INTO " + table_redshift + "(rowid, project_key, event_type, "
                + "event_timestamp, campaign_id, campaign_activity_id, treatment_id) "
                + "VALUES ('" + row_id + "', '"
                + app_id + "', '"
                + event_type + "', '"
                + event_timestamp + "', '"
                + campaign_id + "', '"
                + campaign_activity_id + "', '"
                + treatment_id + "');")
    
        # Initialize the connection and cursor so that the cleanup in the
        # finally block works even if the connection attempt fails.
        conn = None
        cur = None

        try:
            conn = psycopg2.connect(user = user_redshift,
                                    password = password_redshift,
                                    host = endpoint_redshift,
                                    port = port_redshift,
                                    database = dbname_redshift)

            cur = conn.cursor()
            cur.execute(query)
            conn.commit()
            print("Updated table.")

        except (Exception, psycopg2.DatabaseError) as error:
            print("Database error: ", error)
        finally:
            if cur is not None:
                cur.close()
            if conn is not None:
                conn.close()
                print("Connection closed.")
    
    # Handle the event notification that we receive when a new item is sent to the 
    # S3 bucket.
    def lambda_handler(event,context):
        print("Received event: \n" + str(event))
    
        bucket = event['Records'][0]['s3']['bucket']['name']
        key = event['Records'][0]['s3']['object']['key']
        data = get_file_from_s3(bucket, key)
    
        in_json = clean_and_split(data)
    
        set_variables(in_json)

    In the preceding code, make the following changes:

    • Replace <clustername> with the name of the cluster.
    • Replace <dbname> with the name of the database.
    • Replace <username> with the user name that you specified when you created the Redshift cluster.
    • Replace <password> with the password that you specified when you created the Redshift cluster.
    • Replace <endpoint> with the endpoint address of the Redshift cluster.
  6. In IAM, update the execution role that’s associated with the Lambda function to include the GetObject permission for the S3 bucket that contains the event data. For more information, see Editing IAM Policies in the AWS IAM User Guide.

Step 4: Set up notifications on the S3 bucket

Now that we’ve created the Lambda function, we’ll set up a notification on the S3 bucket. In this case, the notification will refer to the Lambda function that we created in the previous section. Every time a new file is added to the bucket, the notification will cause the Lambda function to run.

To create the event notification

  1. In S3, create a new bucket notification. The notification should be triggered when PUT events occur, and should trigger the Lambda function that you created in the previous section. (A sample notification configuration appears after these steps.) For more information about creating notifications, see Configuring Amazon S3 Event Notifications in the Amazon S3 Developer Guide.
  2. Test the event notification by sending a test campaign. If you send an email campaign, your Redshift database should contain events such as _campaign.send, _email.send, _email.delivered, and others. You can check the contents of the Redshift table by running the following query in the Query Editor in the Redshift console:
    select * from pinpoint_events_table;
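
If you prefer to configure the notification programmatically rather than in the console, the following is a minimal sketch of a bucket notification configuration for step 1. The function ARN uses placeholder values.

    {
      "LambdaFunctionConfigurations": [
        {
          "Id": "PinpointEventToRedshift",
          "LambdaFunctionArn": "arn:aws:lambda:<REGION>:<ACCOUNTID>:function:<FUNCTIONNAME>",
          "Events": ["s3:ObjectCreated:Put"]
        }
      ]
    }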

Step 5: Add the data set in Amazon QuickSight

If your Lambda function is sending event data to Redshift as expected, you can use your Redshift database to create a new data set in Amazon QuickSight. QuickSight includes an automatic database discovery feature that helps you add your Redshift database as a data set with only a few clicks. For more information, see Creating a Data Set from a Database in the Amazon QuickSight User Guide.

Step 6: Create your visualizations

Now that QuickSight is retrieving information from your Redshift database, you can use that data to create visualizations. To learn more about creating visualizations in QuickSight, see Creating an Analysis in the Amazon QuickSight User Guide.

This brings us to the end of our series. While these posts focused on using Amazon QuickSight to visualize your analytics data, you can also use these same techniques to create visualizations using third-party applications. We hope you enjoyed this series, and we can’t wait to see what you build using these examples!

Creating custom Pinpoint dashboards using Amazon QuickSight, part 2

Post Syndicated from Brent Meyer original https://aws.amazon.com/blogs/messaging-and-targeting/creating-custom-pinpoint-dashboards-using-amazon-quicksight-part-2/

Note: This post was written by Manan Nayar and Aprajita Arora, Software Development Engineers on the AWS Digital User Engagement team.


In our previous post, we discussed the process of visualizing specific, pre-aggregated Amazon Pinpoint metrics—such as delivery rate or open rate—using the Amazon Pinpoint Metrics APIs. In that example, we showed how to create a Lambda function that retrieves your metrics, and then make those metrics available for creating visualizations in Amazon QuickSight.

This post shows a different approach to exporting data from Amazon Pinpoint and using it to build visualizations. Rather than retrieve specific metrics, you can use the event stream feature in Amazon Pinpoint to export raw event data. You can use this data in Amazon QuickSight to create in-depth analyses of your data, as opposed to visualizing pre-calculated metrics. As an added benefit, when you use this solution, you don’t have to modify any code, and the underlying data is updated every few minutes.

Step 1: Configure the event stream in Amazon Pinpoint

The Amazon Pinpoint event stream includes information about campaign events (such as _campaign.send) and application events (such as _session.start). It also includes response information related to all of the emails and SMS messages that you send from your Amazon Pinpoint project, regardless of whether they were sent from campaigns or on a transactional basis. When you enable event streams, Amazon Pinpoint automatically sends this data to your S3 bucket (via Amazon Kinesis Data Firehose) every few minutes.

To set up the event stream

  1. Sign in to the Amazon Pinpoint console at http://console.aws.amazon.com/pinpoint. In the list of projects, choose the project that you want to enable event streaming for.
  2. Under Settings, choose Event stream.
  3. Choose Edit, and then configure the event stream to use Amazon Kinesis Data Firehose. If you don’t already have a Kinesis Data Firehose stream, follow the link to create one in the Kinesis console. Configure the stream to send data to an S3 bucket. For more information about creating streams, see Creating an Amazon Kinesis Data Firehose Delivery Stream.
  4. Under IAM role, choose Automatically create a role. Choose Save.

Step 2: Add a data set in Amazon QuickSight

Now that you’ve started streaming your Amazon Pinpoint data to S3, you can set Amazon QuickSight to look for data in the S3 bucket. You connect QuickSight to sources of data by creating data sets.

To create a data set

    1. In a text editor, create a new file. Paste the following code:
      {
          "fileLocations": [
              {
                  "URIPrefixes": [ 
                      "s3://<bucketName>/"          
                  ]
              }
          ],
          "globalUploadSettings": {
              "format": "JSON"
          }
      }

      In the preceding code, replace <bucketName> with the name of the S3 bucket that you’re using to store the event stream data. Save the file as manifest.json.

    2. Sign in to the QuickSight console at https://quicksight.aws.amazon.com.
    3. Create a new S3 data set. When prompted, choose the manifest file that you created in step 1. For more information about creating S3 data sets, see Creating a Data Set Using Amazon S3 Files in the Amazon QuickSight User Guide.
    4. Create a new analysis. From here, you can start creating visualizations of your data. To learn more, see Creating an Analysis in the Amazon QuickSight User Guide.

Step 3: Set the refresh rate for the data set

You can configure your data sets in Amazon QuickSight to automatically refresh on a certain schedule. In this section, you configure the data set to refresh every day, one minute before midnight.

To set the refresh schedule

  1. Go to the QuickSight start page at https://quicksight.aws.amazon.com/sn/start. Choose Manage data.
  2. Choose the data set that you created in the previous section.
  3. Choose Schedule refresh. Follow the prompts to set up a daily refresh schedule.

Step 4: Create your visualizations

From this point, you can start creating visualizations of your data. To learn more about creating visualizations, see Creating an Analysis in the Amazon QuickSight User Guide.

Creating custom Pinpoint dashboards using Amazon QuickSight, part 1

Post Syndicated from Brent Meyer original https://aws.amazon.com/blogs/messaging-and-targeting/creating-custom-pinpoint-dashboards-using-amazon-quicksight-part-1/

Note: This post was written by Manan Nayar and Aprajita Arora, Software Development Engineers on the AWS Digital User Engagement team.


Amazon Pinpoint helps you create customer-centric engagement experiences across mobile, web, and other messaging channels. It also provides a variety of Key Performance Indicators (KPIs) that you can use to track the performance of your messaging programs.

You can access these KPIs through the console, or by using the Amazon Pinpoint API. In some cases, you might want to create custom dashboards that aren’t included by default, or even combine these metrics with other data. Over the next few days, we’ll discuss several different methods that you can use to create your own custom dashboards.

In this post, you’ll learn how to use the Amazon Pinpoint API to retrieve metrics, and then display them in visualizations that you create in Amazon QuickSight. This option is ideal for creating custom dashboards that highlight a specific set of metrics, or for embedding these metrics in your existing application or website.

In the next post (which we’ll post on Monday, August 19), you’ll learn how to export raw event data to an S3 bucket, and use that data to create dashboards by using QuickSight’s Super-fast, Parallel, In-memory Calculation Engine (SPICE). This option enables you to perform in-depth analyses and quickly update visualizations. It’s also cost-effective, because all of the event data is stored in an S3 bucket.

The final post (which we’ll post on Wednesday, August 21) will also discuss the process of creating visualizations from event stream data. However, in this solution, the data will be sent from Amazon Kinesis to a Redshift cluster. This option is ideal if you need to process very large volumes of event data.

Creating a QuickSight dashboard that uses specific metrics

You can use the Amazon Pinpoint API to programmatically access many of the metrics that are shown on the Analytics pages of the Amazon Pinpoint console. You can learn more about using the API to obtain specific KPIs in our recent blog post, Tracking Campaign Performance Using the Metrics APIs.

The following sections show you how to parse and store those results in Amazon S3, and then create custom dashboards by using Amazon QuickSight. The steps below are meant to provide general guidance, rather than specific procedures. If you’ve used other AWS services in the past, most of the concepts here will be familiar. If not, don’t worry—we’ve included links to the documentation to make things easier.

Step 1: Package the Dependencies

The version of the AWS SDK that’s included with Lambda is a few releases behind the latest version, and the ability to retrieve Pinpoint metrics programmatically is a relatively new feature. For this reason, you have to download the latest version of the SDK libraries to your computer, create a .zip archive, and then upload that archive to Lambda.

To package the dependencies

    1. Paste the following code into a text editor:
      from datetime import datetime
      import boto3
      import json
      
      AWS_REGION = "<us-east-1>"
      PROJECT_ID = "<projectId>"
      BUCKET_NAME = "<bucketName>"
      BUCKET_PREFIX = "quicksight-data"
      DATE = datetime.now()
      
      # Get today's push open rate KPI values.
      def get_kpi(kpi_name):
      
          client = boto3.client('pinpoint',region_name=AWS_REGION)
      
          response = client.get_application_date_range_kpi(
              ApplicationId=PROJECT_ID,
              EndTime=DATE.strftime("%Y-%m-%d"),
              KpiName=kpi_name,
              StartTime=DATE.strftime("%Y-%m-%d")
          )
          rows = response['ApplicationDateRangeKpiResponse']['KpiResult']['Rows'][0]['Values']
      
          # Create a JSON object that contains the values we'll use to build QuickSight visualizations.
          data = construct_json_object(rows[0]['Key'], rows[0]['Value'])
      
          # Send the data to the S3 bucket.
          write_results_to_s3(kpi_name, json.dumps(data).encode('UTF-8'))
      
      # Create the JSON object that we'll send to S3.
      def construct_json_object(kpi_name, value):
          data = {
              "applicationId": PROJECT_ID,
              "kpiName": kpi_name,
              "date": str(DATE),
              "value": value
          }
      
          return data
      
      # Send the data to the designated S3 bucket.
      def write_results_to_s3(kpi_name, data):
          # Create a file path with folders for year, month, date, and hour.
          path = (
              BUCKET_PREFIX + "/"
              + DATE.strftime("%Y") + "/"
              + DATE.strftime("%m") + "/"
              + DATE.strftime("%d") + "/"
              + DATE.strftime("%H") + "/"
              + kpi_name
          )
      
          client = boto3.client('s3')
      
          # Send the data to the S3 bucket.
          response = client.put_object(
              Bucket=BUCKET_NAME,
              Key=path,
              Body=bytes(data)
          )
      
      def lambda_handler(event, context):
          get_kpi('email-open-rate')
          get_kpi('successful-delivery-rate')
          get_kpi('unique-deliveries')

      In the preceding code, make the following changes:

      • Replace <us-east-1> with the name of the AWS Region that you use Amazon Pinpoint in.
      • Replace <projectId> with the ID of the Amazon Pinpoint project that the metrics are associated with.
      • Replace <bucketName> with the name of the Amazon S3 bucket that you want to use to store the data. For more information about creating S3 buckets, see Create a Bucket in the Amazon S3 Getting Started Guide.
      • Optionally, modify the lambda_handler function so that it calls the get_kpi function for the specific metrics that you want to retrieve.

      When you finish, save the file as retrieve_pinpoint_kpis.py.

  2. Use pip to download the latest versions of the boto3 and botocore libraries. Add these libraries to a .zip file. Also add retrieve_pinpoint_kpis.py to the .zip file. You can learn more about all of these tasks in Updating a Function with Additional Dependencies With a Virtual Environment in the AWS Lambda Developer Guide.

Step 2: Set up the Lambda function

In this section, you upload the package that you created in the previous section to Lambda.

To set up the Lambda function

  1. In the Lambda console, create a new function from scratch. Choose the Python 3.7 runtime.
  2. Choose a Lambda execution role that contains the following permissions (a sample policy appears after this list):
    • Allows the action mobiletargeting:GetApplicationDateRangeKpi for the resource arn:aws:mobiletargeting:<awsRegion>:<yourAwsAccountId>:apps/*/kpis/*/*, where <awsRegion> is the Region where you use Amazon Pinpoint, and <yourAwsAccountId> is your AWS account number.
    • Allows the action s3:PutObject for the resource arn:aws:s3:::<my_bucket>/*, where <my_bucket> is the name of the S3 bucket where you want to store the metrics.
  3. Upload the .zip file that you created in the previous section.
  4. Change the Handler value to retrieve_pinpoint_kpis.lambda_handler.
  5. Save your changes.
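
The following is a sketch of an IAM policy that grants the permissions described in step 2. Replace the placeholder values with your own Region, account ID, and bucket name, as described above.

    {
       "Version":"2012-10-17",
       "Statement":[
          {
             "Sid":"PinpointKpis",
             "Effect":"Allow",
             "Action":[
                "mobiletargeting:GetApplicationDateRangeKpi"
             ],
             "Resource":[
                "arn:aws:mobiletargeting:<awsRegion>:<yourAwsAccountId>:apps/*/kpis/*/*"
             ]
          },
          {
             "Sid":"S3Put",
             "Effect":"Allow",
             "Action":[
                "s3:PutObject"
             ],
             "Resource":[
                "arn:aws:s3:::<my_bucket>/*"
             ]
          }
       ]
    }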

Step 3: Schedule the execution of the function

At this point, the Lambda function is ready to run. The next step is to set up the trigger that will cause it to run. In this case, since we’re retrieving an entire day’s worth of data, we’ll set up a scheduled trigger that runs every day at 11:59 PM.

To set up the trigger

  1. In the Lambda console, in the Designer section, choose Add trigger.
  2. Create a new CloudWatch Events rule that uses the Schedule expression rule type.
  3. For the schedule expression, enter cron(59 23 ? * * *).

Step 4: Create QuickSight Analyses

Once the data is populated in S3, you can start creating analyses in Amazon QuickSight. The process of creating new analyses involves a couple of tasks: creating a new data set, and creating your visualizations.

To create analyses in QuickSight
  1. In a text editor, create a new file. Paste the following code:
    {
        "fileLocations": [
            {
                "URIPrefixes": [
                    "s3://<bucketName>/quicksight-data/"
                ]
            }
        ],
        "globalUploadSettings": {
            "format": "JSON"
        }
    }

    In the preceding code, replace <bucketName> with the name of the S3 bucket that you’re using to store the metrics data. Save the file as manifest.json.
  2. Sign in to the QuickSight console at https://quicksight.aws.amazon.com.
  3. Create a new S3 data set. When prompted, choose the manifest file that you created in step 1. For more information about creating S3 data sets, see Creating a Data Set Using Amazon S3 Files in the Amazon QuickSight User Guide.
  4. Create a new analysis. From here, you can start creating visualizations of your data. To learn more, see Creating an Analysis in the Amazon QuickSight User Guide.

Track Campaign Performance Using the Metrics APIs in Amazon Pinpoint

Post Syndicated from Brent Meyer original https://aws.amazon.com/blogs/messaging-and-targeting/track-campaign-performance-using-the-metrics-apis-in-amazon-pinpoint/

Note: This post was written by Siddhanth Deshpande, a Software Development Engineer on the AWS Digital User Engagement team.


Today, we added Campaign Metrics and Application Metrics APIs to Amazon Pinpoint. You can use these APIs to programmatically access many of the metrics that are currently shown on the Analytics pages of the Amazon Pinpoint console. You can use these new APIs to analyze Amazon Pinpoint metrics in the reporting tool of your choice. For example, you can use these APIs to build a live custom dashboard to display your weekly campaign results, or to perform in-depth analytics on delivery rates for your campaigns. In this post, we’ll look at how to use these APIs to query key performance indicators (KPIs), as well as how to parse the response and pass the data to another service or application for further analysis.

Sending your request

The following Java sample code describes how to make a request to the Amazon Pinpoint Application Metrics API.

GetApplicationDateRangeKpiRequest request = new GetApplicationDateRangeKpiRequest()
        .withApplicationId(<YOUR_PINPOINT_PROJECT_ID>)
        .withKpiName(<KPI_NAME>)
        .withStartTime(Date.from(Instant.parse(<START_TIMESTAMP>)))
        .withEndTime(Date.from(Instant.parse(<END_TIMESTAMP>)));

These semantics also apply to the Amazon Pinpoint Campaign Metrics API. You can find a full list of metrics that are supported by these APIs at https://docs.aws.amazon.com/pinpoint/latest/developerguide/analytics.html.
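
The request is sent with an Amazon Pinpoint client from the AWS SDK for Java. As a quick sketch (the Region shown is just an example, and credentials are assumed to come from the default provider chain), you could construct the client like this:

// Build an Amazon Pinpoint client for the Region where your project lives.
AmazonPinpoint amazonPinpoint = AmazonPinpointClientBuilder.standard()
        .withRegion(Regions.US_EAST_1)
        .build();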

Parsing the response

When you send your query, Amazon Pinpoint returns a JSON response that contains the data that you requested. The API groups the results by date, campaign ID, or another relevant field, depending on the metric. Amazon Pinpoint supports both single-value KPIs, such as the count of unique message deliveries (unique-deliveries), and grouped-by KPIs, such as the rate of successful deliveries grouped by date (successful-delivery-rate-grouped-by-date), through the same API call. Depending on the KPI that you queried, the shape of the result can vary. All KPI result rows include the result values. However, grouped-by KPI result rows include an additional field that indicates the keys used to group the result values; single-value KPI result rows do not.

You can use the following Java code example to display the data that’s contained in the response:

GetApplicationDateRangeKpiResult result = amazonPinpoint.getApplicationDateRangeKpi(request);
List<ResultRow> rows = result
        .getApplicationDateRangeKpiResponse()
        .getKpiResult()
        .getRows();

// Understanding the result

rows.forEach(row -> {
    System.out.print(
            String.format(
                    "Found values: %s",
                    row.getValues().stream().map(value -> String.format(
                            "Name:%s,Type:%s,Value:%s",
                            value.getKey(),
                            value.getType(),
                            value.getValue()
                    )).collect(Collectors.joining(";"))
            )
    );
    if (row.getGroupedBys() != null) {
        System.out.println(
                String.format(
                        " for keys: %s.",
                        row.getGroupedBys().stream().map(groupedBy -> String.format(
                                "Name:%s,Type:%s,Value:%s",
                                groupedBy.getKey(),
                                groupedBy.getType(),
                                groupedBy.getValue()
                        )).collect(Collectors.joining(";"))
                )
        );
    } else {
        System.out.println(".");
    }
});

For example, for the unique-deliveries KPI, you see a result that resembles the following example:

Found values: Name:UniqueDeliveries,Type:Double,Value:30.0.

For the successful-delivery-rate-grouped-by-campaign-activity KPI, you see a result that resembles the following example:

Found values: Name:SuccessfulDeliveryRate,Type:Double,Value:1.0 for keys: Name:CampaignActivityId,Type:String,Value:DATA_API_CAMPAIGN_ACTIVITY_ID_1.
Found values: Name:SuccessfulDeliveryRate,Type:Double,Value:1.0 for keys: Name:CampaignActivityId,Type:String,Value:DATA_API_CAMPAIGN_ACTIVITY_ID_2.

For the successful-delivery-rate-grouped-by-date KPI, you see a result that resembles the following example:

Found values: Name:SuccessfulDeliveryRate,Type:Double,Value:1.0 for keys: Name:Date,Type:String,Value:2019-01-01.
Found values: Name:SuccessfulDeliveryRate,Type:Double,Value:1.0 for keys: Name:Date,Type:String,Value:2019-01-02.
Found values: Name:SuccessfulDeliveryRate,Type:Double,Value:1.0 for keys: Name:Date,Type:String,Value:2019-01-03.

You can also define parsing logic that’s specific to the kind of KPI you are querying. For example, from the first of the preceding examples, we know that the unique-deliveries KPI returns a single value without any grouped by fields. You can then reduce the amount of logic that’s required to parse the result. You can use the following code example to parse the unique-deliveries KPI data:

if(!rows.isEmpty()) {
    ResultRowValue value = rows.get(0).getValues().get(0);
    System.out.print(
            String.format(
                    "Found value: Name:%s,Type:%s,Value:%s", 
                    value.getKey(),
                    value.getType(),
                    value.getValue()
            )
    );
}

Conclusion

Using these APIs enables you to monitor, assess, and share the performance of your campaigns without having to analyze raw event data. They also let you analyze metrics without having to sign in to the Amazon Pinpoint console. Sharing these metrics can help you and your team better understand your customers, and can help you find exciting new ways to use Amazon Pinpoint to engage with your customers.

Send text messages in Amazon Connect by integrating Amazon Pinpoint

Post Syndicated from Brent Meyer original https://aws.amazon.com/blogs/messaging-and-targeting/send-text-messages-in-amazon-connect-by-integrating-amazon-pinpoint/

Because Amazon Pinpoint is a member of the AWS family, you can integrate it seamlessly with other AWS services. In the past, this blog has looked at the process of integrating Amazon Pinpoint with Amazon Comprehend and Amazon Redshift.

Earlier this week, Michael Woodward, a Solution Architect here at AWS, published a blog post about integrating Amazon Pinpoint with Amazon Connect, our cloud-based contact center service.

Integrating Amazon Pinpoint into Amazon Connect lets you expand the capabilities of your call center systems in several interesting ways. For example, you can use Amazon Pinpoint to send more information after a call ends, or to send a link to an after-call survey.

To learn more about this solution, see Michael’s post on the AWS Contact Center blog.

New Regions, New Features, and a New Web Site

Post Syndicated from Brent Meyer original https://aws.amazon.com/blogs/messaging-and-targeting/new-regions-new-features-and-a-new-web-site/

It’s a busy time here on the Digital User Engagement Team at AWS!

Last week, we made Amazon Pinpoint available in the Asia Pacific (Mumbai) and Asia Pacific (Sydney) AWS Regions. This is great news for new Pinpoint customers in these areas of the globe who were previously concerned with issues related to latency and data residency. Existing Amazon Pinpoint customers can also use these new Regions to increase availability and create geographical redundancy.

On Tuesday of this week, we also launched two exciting improvements to the Amazon Pinpoint console. The first improvement is a tool that you can use to import customer segments in just a few clicks. Previously, if you wanted to import customer data into Pinpoint, you had to save the data in a CSV or JSON file, upload it to an S3 bucket, create a segment in Pinpoint, and enter the full path to the S3 bucket. Now, you can drag and drop files right into the segment importer. To learn more, see the Pinpoint User Guide.

The other new feature that we released this week is an improved email editor. Our previous email editor only allowed you to include a limited set of HTML tags in your emails. With our new editor, however, you can include any HTML tags that you want. The new editor also includes a helpful side-by-side view that renders your message in real-time, as shown in the following image.

Users who don’t want to work with HTML code can also use the Design view to create and modify emails in an intuitive, WYSIWYG interface. For more information, see the Pinpoint User Guide.

Finally, we launched a new website for Amazon Pinpoint at https://aws.amazon.com/pinpoint. On our new site, you can learn more about the capabilities of Amazon Pinpoint. You’ll find in-depth information about all of the features, channels, and use cases that Amazon Pinpoint supports.

Every day, we’re amazed by the things that our customers do with Amazon Pinpoint. We hope these changes help you do even more incredible things!

Learn About Amazon Pinpoint at Upcoming Events Around the World

Post Syndicated from Hannah Nilsson original https://aws.amazon.com/blogs/messaging-and-targeting/learn-about-amazon-pinpoint-at-upcoming-events/

Connect with the AWS Customer Engagement team at events around the world to learn how our technology can help you better engage with your customers. Get demos on recent feature releases, discover how you can use Pinpoint for your specific use case, and attend informative sessions to hear how companies around the world are using AWS Customer Engagement solutions to deliver better experiences for their customers. Plus, read below to find out how Amazon Pinpoint and Amazon SES both enable you to create innovative email experiences with the recent AMP Project launch.

AWS Customer Engagement in the news: Amazon SES and Amazon Pinpoint support build the future of email with AMP

The AMP Project’s mission is to enable more user-first experiences on the web, including web-based technology like email. On March 26, the AMP Project announced that they are bringing AMP technology to email in order to give users an interactive, real-time experience that also keeps inboxes safe.

Amazon Pinpoint and Amazon SES both provide out-of-the-box support for AMP for email with no additional configuration. This allows you to easily create experiences for your customers such as submitting RSVPs to events, filling out questionnaires, browsing catalogs, or responding to comments right within the email.

Read the AMP announcement for more information about these new capabilities. To learn how to use the AMP format with Amazon SES, visit the SES Developer Guide. To learn how to use the AMP format with Amazon Pinpoint, read this Amazon Pinpoint API Reference. View these instructions for more information on how to add AMP to an existing email.

Amazon Pinpoint has been busy building. You can now:

  • Learn how to set up an email preference management web page that enables customers to manage their email subscription preferences. Read now.
  • Learn how to set up a web form that collects information from new customers, and then sends them an SMS message to confirm that they want to receive content from you. Read now.
  • Use Amazon Pinpoint in the US West (Oregon), EU (Frankfurt), and EU (Ireland) regions in addition to the US East (Virginia) region. Learn more.
  • Deliver voice messages to your users with Amazon Pinpoint Voice. Learn more.
  • Set up campaigns that auto-send messages to your customers when they take specific actions. Learn more.
  • Detect and understand issues impacting your email deliverability with the Amazon Pinpoint Deliverability Dashboard. Learn more. 

Meet an Amazon Pinpoint expert at these upcoming events. We will teach you how to take advantage of recent updates so that you can create better engagement experiences for your customers. Plus, we can give you an inside look at what’s on our roadmap, and we’ll be giving out custom Pinpoint swag!

AWS Summit, Singapore 

April 10, 2019
Singapore Expo Convention & Exhibition Centre
Amazon Pinpoint will host an informative session about our Customer Engagement solutions at the AWS Singapore Summit. In this session, we will describe how AWS enables companies to better understand and engage their customers with personalized, timely, and relevant communications on multiple channels. You will also learn how Disney Streaming Services is using Amazon Pinpoint to engage their users.
Register for the Summit here.

“Mobile Days” at the AWS San Francisco Loft   

April 24, 2019 
AWS San Francisco Loft
Join us for an engaging day of discussion and education. Amazon Pinpoint experts will host the following sessions:

  • 2:30pm – 3:30pm: How Do You Measure Customer Success? Featuring Amazon Pinpoint. 
  • 3:30pm – 4:30pm: Using ML to Enhance Your Marketing. Featuring Amazon Pinpoint and Amazon Personalize.

Space for this event is limited, please reserve your seat here.

AWS Summit, Sydney

May 1-2, 2019
International Convention Centre (ICC), Darling Harbour, Sydney
Don’t miss the customer engagement session on April 30th. This session, part of Amazon’s Innovation Day event, features a keynote address by Neil Lindsay, Vice President of Global Marketing at Amazon. The session explores how AWS technologies power organizations that deliver customer-centric innovations. Learn about how Australia’s largest brands and digital agencies use AWS technologies to engage customers, build new business models, and transform customer experiences.
Register for the Summit here.

AWS Summit, Mumbai

May 15, 2019
Bombay Exhibition Center, Mumbai
The Amazon Pinpoint team will be at the “Ask an Expert” booth. Stop by to meet the team, ask questions, and pick up Amazon Pinpoint swag!
Register for the Summit here.

This Is My Architecture: Mobile Cryptocurrency Mining

Post Syndicated from Annik Stahl original https://aws.amazon.com/blogs/architecture/this-is-my-architecture-mobile-cryptocurrency-mining/

In North America, approximately 95% of adults over the age of 25 have a bank account. In the developing world, that number is only about 52%. Cryptocurrencies can provide a platform for millions of unbanked people in the world to achieve financial freedom on a more level financial playing field.

Electroneum, a cryptocurrency company located in England, built its cryptocurrency mobile back end on AWS and is using the power of blockchain to unlock the global digital economy for millions of people in the developing world.

Electroneum’s cryptocurrency mobile app allows Electroneum customers in developing countries to transfer ETNs (exchange-traded notes) and pay for goods using their smartphones. Listen in to the discussion between AWS Solutions Architect Toby Knight and Electroneum CTO Barry Last as they explain how the company built its solution. Electroneum’s app is a web application that uses a feedback loop between its web servers and AWS WAF (a web application firewall) to automatically block malicious actors. The system then uses Athena, with a gamified approach, to provide an additional layer of blocking to prevent DDoS attacks. Finally, Electroneum built a serverless, instant payments system using AWS API Gateway, AWS Lambda, and Amazon DynamoDB to help its customers avoid the usual delays in confirming cryptocurrency transactions.

 

New Whitepaper: Active Directory Domain Services on AWS

Post Syndicated from Vinod Madabushi original https://aws.amazon.com/blogs/architecture/new-whitepaper-active-directory-domain-services-on-aws/

The cloud is now at the center of most Enterprise IT strategies. As such, a well-planned move to the cloud can result in immediate business payoff. To achieve such success, it’s important that you deploy Microsoft Active Directory (AD), the foundation of many large enterprise Windows and .NET applications, in a secure, scalable, and highly available manner within the AWS Cloud.

AWS offers flexible options for running AD, so as a customer it’s essential to select an architecture well-suited to support your applications. AWS offers a fully managed option, AWS Managed Microsoft AD, which enables your directory-aware workloads to use a managed Active Directory in the AWS Cloud. You can also run Active Directory on Amazon Elastic Compute Cloud (Amazon EC2) and manage both the EC2 instances and Active Directory yourself, which provides the flexibility needed to extend an existing Active Directory domain to the AWS infrastructure.

In this regard, we are very excited to release the Active Directory Domain Services on AWS whitepaper. This whitepaper describes best practices for running Active Directory on AWS, including different architectural approaches for running AWS Managed Microsoft AD and Active Directory on EC2 instances. In addition, this document discusses the design considerations, security, network connectivity, and multi-region deployment of Active Directory for both scenarios.

Read the whitepaper: Active Directory on AWS.

About the author

Vinod Madabushi is an Enterprise Solutions Architect and subject matter expert in Microsoft technologies, including Active Directory. He works with customers on building highly available, scalable, and resilient applications on the AWS Cloud. He’s passionate about solving technology challenges and helping customers with their cloud journey.

 

The latest news, content, and helpful tips for AWS Digital User Engagement

Post Syndicated from Hannah Nilsson original https://aws.amazon.com/blogs/messaging-and-targeting/the-latest-news-content-and-helpful-tips-for-aws-digital-user-engagement/

The AWS Digital User Engagement team hit the ground running this year. From speaking in front of crowds of digital marketers and developers, to developing new tutorials to help make it easier to get started building solutions to common use cases, here’s the latest on what we’ve been up to and our latest updates to Amazon Pinpoint.

How To Achieve Customer-Obsessed Digital User Engagement

Simon Poile, GM of AWS Digital User Engagement, had the pleasure of speaking to hundreds of digital marketers at the Digital Summit conference in Seattle, WA on February 26th. Digital Summit attendees are the movers and shakers influencing the growth and success of their company’s digital marketing — and the future landscape of the digital economy. Simon provided insights on how marketers can embody the Amazon culture of customer obsession to gain a deeper understanding of their customers, strengthen trust between brands and their users, and create a personalized digital engagement experience that is timely, contextually relevant, and reaches the right user at the right time through the right medium. He discussed how marketers can embrace technology such as machine learning and IoT to accomplish transformative engagement, and provided insights about how brands around the world are using AWS Digital User Engagement solutions to transform their engagement efforts.

View The Presentation Deck.

Learn to implement two-way SMS messaging for a simple approach that results in higher levels of customer engagement

In a recent article posted on A Cloud Guru, Dennis Hills explains what two-way SMS is and how you can quickly and easily start sending personalized, timely, and relevant text messages to your customers with Amazon Pinpoint. He then shows how you can implement a practical solution for setting up an SMS long code so you can start sending and receiving text messages.

Read Now.

New Amazon Pinpoint Getting Started Guide: How to Create an SMS Registration System

On Wednesday the 27th, we launched the first Amazon Pinpoint Getting Started Guide. This guide, located in the Tutorials section of the Pinpoint Developer Guide, shows you the entire process of creating a customer registration solution for SMS messaging. A common way to capture customers’ mobile phone numbers is to use a web-based form. After you verify the customer’s phone number and confirm the customer’s subscription, you can start sending promotional, transactional, and informational SMS messages to that customer.

In the tutorial, you’ll learn how to set up two-way SMS messaging in Pinpoint, create a web form to capture customers’ contact information, send registration information from your own website to a Lambda function using API Gateway, implement a double opt-in strategy, and more.

The tutorial is intended for users of all skill levels. While there is some coding involved, all of the necessary code is included. You can use this tutorial to create a complete solution, or as a starting point for your own use case.
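As a rough illustration of just the registration step, the sketch below shows a Lambda handler that API Gateway could invoke when the web form is submitted: it records the phone number as a Pinpoint SMS endpoint (opted out until confirmed) and sends the first message of the double opt-in flow. The project ID, long code, attribute names, and event shape are hypothetical placeholders rather than values from the tutorial.

```python
import json
import os
import uuid

import boto3

pinpoint = boto3.client("pinpoint")

# Hypothetical configuration -- supply your own project ID and long code.
APPLICATION_ID = os.environ["PINPOINT_PROJECT_ID"]
ORIGINATION_NUMBER = os.environ["SMS_LONG_CODE"]


def handler(event, context):
    """Called by API Gateway when the registration web form is submitted."""
    payload = json.loads(event.get("body", "{}"))   # assumes a proxy integration
    phone_number = payload["phoneNumber"]           # e.g. "+12065550142"
    endpoint_id = str(uuid.uuid4())

    # Register the number as an SMS endpoint, opted out until confirmed.
    pinpoint.update_endpoint(
        ApplicationId=APPLICATION_ID,
        EndpointId=endpoint_id,
        EndpointRequest={
            "ChannelType": "SMS",
            "Address": phone_number,
            "OptOut": "ALL",
            "Attributes": {"Source": ["RegistrationForm"]},
        },
    )

    # Send the first message of the double opt-in flow; the customer
    # replies (two-way SMS) to confirm their subscription.
    pinpoint.send_messages(
        ApplicationId=APPLICATION_ID,
        MessageRequest={
            "Addresses": {phone_number: {"ChannelType": "SMS"}},
            "MessageConfiguration": {
                "SMSMessage": {
                    "Body": "Reply YES to confirm your subscription.",
                    "MessageType": "TRANSACTIONAL",
                    "OriginationNumber": ORIGINATION_NUMBER,
                }
            },
        },
    )
    return {"statusCode": 200, "body": "Confirmation message sent"}
```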

Get started now.

Recent Amazon Pinpoint Launches

Amazon Pinpoint is now available in the US West (Oregon), EU (Frankfurt), and EU (Ireland) regions in addition to the US East (Virginia) region. You can now use Amazon Pinpoint to power your digital user engagement without having to transfer your customer data across regions.

This regional expansion is particularly useful for organizations in certain regions of the EU, where data residency considerations previously made it difficult for many customers to use Amazon Pinpoint. It also creates a global infrastructure that helps to improve availability and redundancy while reducing latency.

Learn more.

ICYMI, you can now:

Deliver voice messages to your users with Amazon Pinpoint Voice.

Learn more.

Set up campaigns that auto-send messages to your customers when they take specific actions.

Learn more.

Detect and understand issues impacting your email deliverability with the Amazon Pinpoint Deliverability Dashboard.

Learn more.

Customer Spotlight

How Hulu uses Amazon Pinpoint for their real-time notification platform.

At Hulu, notifying their viewers when their favorite teams are playing helps them drive growth and improve viewer engagement. However, building this feature was a complex process. Managing their live TV metadata, while generating audiences in real time in high-scalability scenarios, posed unique challenges for the engineering team. In this video, Hulu discusses the challenges in building their real-time notification platform, how Amazon Pinpoint helped them meet their goals, and how they architected their solution for global scale and deliverability.
Watch to learn how they built their solution.

View the presentation deck.

Meet us at Shoptalk, March 3-6

The AWS Digital User Engagement team will be at the AWS Booth #2617 at Shoptalk, March 3-6 at the Venetian in Las Vegas. Stop by to view our demo of the integration of Amazon Pinpoint and Amazon Personalize, which will show how a customer’s interaction with products in a retail setting can be tracked with smart-devices connected to AWS, resulting in real-time inferences and predictions on a customer’s affinity for products they haven’t yet interacted with. This information can be used to send push notifications with Amazon Pinpoint to a customer’s mobile device, making them aware of the products and possible deals that Amazon Personalize has predicted they will appreciate.

Introducing AWS Solutions: Expert architectures on demand

Post Syndicated from AWS Admin original https://aws.amazon.com/blogs/architecture/introducing-aws-solutions-expert-architectures-on-demand/

AWS Solutions Architects are on the front line of helping customers succeed using our technologies. Our team members leverage their deep knowledge of AWS technologies to build custom solutions that solve specific problems for clients. But many customers want to solve common technical problems that don’t require custom solutions, or they want a general solution they can use as a reference to build their own custom solution. For these customers, we offer AWS Solutions: vetted, technical reference implementations built by AWS Solutions Architects and AWS Partner Network partners. AWS Solutions are designed to help customers solve common business and technical problems, or they can be customized for specific use cases.

AWS Solutions are built to be operationally effective, performant, reliable, secure, and cost-effective; and incorporate architectural frameworks such as the Well-Architected Framework. Every AWS Solution comes with a detailed architecture diagram, a deployment guide, and instructions for both manual and automated deployment.

Here are some Solutions we are particularly excited about.

Media2Cloud

We released the Media2Cloud solution in January 2019. This solution helps customers migrate their existing video archives to the cloud. Media2Cloud sets up a serverless end-to-end workflow to ingest your videos and establish metadata, proxy videos, and image thumbnails.

Migrating your existing video archives to the cloud can be a challenging and slow process. To address this, the Media2Cloud solution builds the following architecture.

Media2Cloud architecture

The solution leverages the Media Analysis Solution to analyze and extract valuable metadata from your video archives using Amazon Rekognition, Amazon Transcribe, and Amazon Comprehend.

The solution also includes a simple web interface that helps make it easier to get started ingesting your videos to the AWS Cloud. This solution is set up to integrate with AWS Partner Network partners to help customers migrate their video archives to the cloud.

AWS Instance Scheduler

In October 2018, we updated the AWS Instance Scheduler, a solution that enables customers to easily configure custom start and stop schedules for their Amazon EC2 and Amazon RDS instances.

When you deploy the solution’s template, the solution builds the following architecture.

AWS Instance Scheduler

 

For customers who leave all of their instances running at full utilization, this solution can result in up to 70% cost savings for those instances that are only necessary during regular business hours.

The Instance Scheduler solution gives you the flexibility to automatically manage multiple schedules as necessary, configure multiple start and stop schedules by either deploying multiple Instance Schedulers or modifying individual resource tags, and review Instance Scheduler metrics to better assess your Instance capacity and usage, and to calculate your cost savings.
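As a small illustration of the tag-based configuration, the snippet below applies a schedule tag to an EC2 instance and an RDS instance. The Schedule tag key is the solution's default (unless you changed it during deployment), and the office-hours schedule name, instance ID, and ARN are example values only.

```python
import boto3

ec2 = boto3.client("ec2")
rds = boto3.client("rds")

# "Schedule" is the solution's default tag key; "office-hours" is a schedule
# you would define in the Instance Scheduler configuration (example name only).
schedule_tag = {"Key": "Schedule", "Value": "office-hours"}

# Tag an EC2 instance so the Instance Scheduler starts and stops it on schedule.
ec2.create_tags(Resources=["i-0123456789abcdef0"], Tags=[schedule_tag])

# Tag an RDS instance the same way (RDS tagging uses the resource ARN).
rds.add_tags_to_resource(
    ResourceName="arn:aws:rds:us-east-1:111122223333:db:example-db",
    Tags=[schedule_tag],
)
```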

AWS Connected Vehicle Solution

In January 2018, we updated the AWS Connected Vehicle Solution, a solution that provides secure vehicle connectivity to the AWS Cloud. This solution includes capabilities for local computing within vehicles, sophisticated event rules, and data processing and storage. The solution also allows you to implement a core framework for connected vehicle services that allows you to focus on developing new functionality rather than managing infrastructure.

When you deploy the solution’s template, the solution builds the following architecture.

Connected Vehicle solution

You can build upon this framework to address a variety of use cases such as voice interaction, navigation and other location-based services, remote vehicle diagnostics and health monitoring, predictive analytics and required maintenance alerts, media streaming services, vehicle safety and security services, head unit applications, and mobile applications.

These are just some of our current offerings. Other notable Solutions include AWS WAF Security Automations, Machine Learning for Telecommunication, and AWS Landing Zone. In the coming months, we plan to continue expanding our portfolio of AWS Solutions to address common business and technical problems that our customers face. Visit our homepage to keep up to date with the latest AWS Solutions.

Two-Way SMS with Amazon Pinpoint

Post Syndicated from Hannah Nilsson original https://aws.amazon.com/blogs/messaging-and-targeting/two-way-sms-with-amazon-pinpoint/

pinpoint-2way-sms

Learn to implement two-way SMS messaging for a simple approach that results in higher levels of customer engagement

SMS, or text messaging, is the simplest way to reach your users outside of normal customer-facing web or mobile applications. Compared to other communication channels, such as email and push notifications, text messaging results in higher engagement.

SMS messaging is extremely convenient — users don’t have to authenticate, download your app, or go to your website. They simply receive your message on their device. When it comes to customer acquisition and retention, it doesn’t get any easier than this.

In this article posted on A Cloud Guru, Dennis Hills explains what two-way SMS is and how you can quickly and easily start sending personalized, timely, and relevant text messages to your customers with Amazon Pinpoint. He then shows how you can implement a practical solution for setting up an SMS long code so you can start sending and receiving text messages.

Read the article now, and be sure to let us know in the comments what types of advanced topics for SMS messaging you’d like to see us or Dennis write about in the future.

AWS Ops Automator v2 features vertical scaling (Preview)

Post Syndicated from AWS Admin original https://aws.amazon.com/blogs/architecture/aws-ops-automator-v2-features-vertical-scaling-preview/

The new version of the AWS Ops Automator, a solution that enables you to automatically manage your AWS resources, features vertical scaling for Amazon EC2 instances. With vertical scaling, the solution automatically adjusts capacity to maintain steady, predictable performance at the lowest possible cost. The solution can resize your instances by restarting your existing instance with a new size. Or, the solution can resize your instances by replacing your existing instance with a new, resized instance.

With this update, the AWS Ops Automator can help make setting up vertical scaling easier. All you have to do is define the time-based or event-based trigger that determines when the solution scales your instances, and choose whether you want to change the size of your existing instances or replace your instances with new, resized instances. The trigger then invokes an AWS Lambda function that scales your instances.
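The solution orchestrates this for you; purely to illustrate what the restart-with-a-new-size approach amounts to, here is a minimal boto3 sketch (the instance ID and target size are placeholders).

```python
import boto3

ec2 = boto3.client("ec2")


def resize_instance(instance_id: str, new_instance_type: str) -> None:
    """Stop an instance, change its instance type, and start it again."""
    ec2.stop_instances(InstanceIds=[instance_id])
    ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])

    # The instance type can only be changed while the instance is stopped.
    ec2.modify_instance_attribute(
        InstanceId=instance_id,
        InstanceType={"Value": new_instance_type},
    )

    ec2.start_instances(InstanceIds=[instance_id])
    ec2.get_waiter("instance_running").wait(InstanceIds=[instance_id])


# Example: scale one size up in response to a scheduled or event-based trigger.
resize_instance("i-0123456789abcdef0", "m5.xlarge")
```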

Ops Automator Vertical Scaling

Restarting with a new size

When you choose to resize your instances by restarting the instance with a new size, the solution increases or decreases the size of your existing instances in response to changes in demand or at a specified point in time. The solution automatically changes the instance size to the next defined size up or down.

Replacing with a new, resized instance

Alternatively, you can choose to have the Ops Automator replace your instance with a new, resized instance instead of restarting your existing instance. When the solution determines that your instances need to be scaled, it launches new instances with the next defined instance size up or down. The solution is also integrated with Elastic Load Balancing to automatically register the new instances with your load balancers.

Getting Started

To learn more, visit the solution webpage and request access to the private preview.

Samsung Builds a Secure Developer Portal with Fargate and ECR

Post Syndicated from AWS Admin original https://aws.amazon.com/blogs/architecture/samsung-builds-a-secure-developer-portal-with-fargate-and-ecr/

This post was provided by Samsung.

The Samsung developer portal (Samsung Developers) is Samsung’s online portal built to serve technical documents, the Developer blog, and API guides to developers, IT managers, and students interested in building applications with Samsung products. Samsung Developers consists of three different portals:

  • SmartThings portal, which serves IoT developers, is our oldest portal. We developed it on Amazon Elastic Container Service (ECS) but have now migrated it to AWS Fargate
  • Bixby portal, which serves Bixby capsule developers, was developed using AWS Fargate
  • Rich Communication Services (RCS), which serves the new standard of mobile messaging, was also developed using AWS Fargate

Samsung Electronics Cloud Operation Group (SECOG) unveiled these three portals at Samsung Developer Conference 2017 and 2018.

Samsung developed the SmartThings portal on ECS and had an overall good experience using it. We found that ECS provided the appropriate level of abstraction while also offering control of our underlying instances. However, when we learned about AWS Fargate at re:Invent 2017, we wanted to try it out. As an Amazon ECS customer, we found a lot to like about Fargate. It provided significant operational efficiency while also eliminating the need to manage servers and clusters, meaning we could just focus on running containers to release new features.

In 2018, our engineering team began migrating all of our systems to Fargate. Because Fargate exposed the same APIs and endpoints that ECS did, the migration experience was extremely smooth and we immediately experienced improvements in operational efficiency. Before Fargate, Samsung typically had administrators and operators dedicated to managing their web services for the portal. However, as we migrated to Fargate, we were able to easily eliminate the need for an administrator, saving operational cost while improving development efficiency. Now, our operations and administration teams are focused more on elaborate logging and monitoring activities, further improving overall service reliability, security, and performance.

The Samsung developer portal is built using a microservice-based architecture, and provides technical documents, API docs, and support channels to our customers. To serve these features, the portal requires frequent updates to a number of different Fargate services. Technical writers who publish new content every day initiate these updates. To meet these business requirements, Samsung Electronics Cloud Operation Group (SECOG) and Technology Partner (TecAce) researched services that were agile and efficient and could be run with minimal operational overhead. When they learned about Fargate, they were interested in doing a proof of concept and, based on its results, were convinced that Fargate could meet their needs.

Service Key Requirements

As we began our migration to Fargate, we realized that the portal had to comply with the various key requirements standardized with SECOG and InfoSec. These requirements are:

  • Security: the service operations team should have the ability to control every security factor.
  • Scalability: the service focuses on Samsung developers who use Samsung products in public, so it must be capable of handling traffic surges.
  • Easy to deploy: technical documents can be pushed to the live environment easily, giving technical writers the ability to make quick edits.
  • Controllability: the service should be able to control container options such as port mapping, memory size, etc.

As we dove deeper into AWS Fargate, the SECOG and InfoSec teams were satisfied that Fargate could deliver on all these requirements.

Build and Deploy Process

SECOG and TecAce decided to use AWS Fargate and Amazon Elastic Container Registry (ECR) service to meet the key requirements of the developer portal.

Figure 1: Architecture drawing

The system architecture is very simple. When we release new features or update documents, we upload new container images to ECR and then publish our code to production. Each business application is designed with a combination of an Application Load Balancer (ALB), Fargate, and Route 53.
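As a rough sketch of that release flow (not Samsung's actual pipeline), the snippet below registers a new Fargate task definition revision pointing at a freshly pushed ECR image and rolls the service behind the ALB over to it. All names, the image URI, and the role ARN are placeholders.

```python
import boto3

ecs = boto3.client("ecs")

# Placeholder image URI -- substitute the tag your build just pushed to ECR.
NEW_IMAGE = "111122223333.dkr.ecr.us-east-1.amazonaws.com/developer-portal:2019-04-01"

# Register a new task definition revision that references the new image.
task_def = ecs.register_task_definition(
    family="developer-portal",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="512",
    memory="1024",
    executionRoleArn="arn:aws:iam::111122223333:role/ecsTaskExecutionRole",
    containerDefinitions=[{
        "name": "portal",
        "image": NEW_IMAGE,
        "portMappings": [{"containerPort": 8080, "protocol": "tcp"}],
        "essential": True,
    }],
)

# Point the Fargate service (behind the ALB) at the new revision.
ecs.update_service(
    cluster="developer-portal-cluster",
    service="developer-portal-service",
    taskDefinition=task_def["taskDefinition"]["taskDefinitionArn"],
)
```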

Easy Fargate

After using Fargate, Samsung’s business owners were extremely satisfied with the choice. The Samsung Developers portal is operated and configured by multiple globally distributed teams with development, operations, and QA roles and responsibilities. Each team needs to deploy an individual environment for testing. Before Fargate, we needed considerable engineer and developer bandwidth to operate the web services infrastructure. However, Fargate simplified this process. Each team only needs to create new container images and deploy them to ECR. The images are then deployed to the test environment on Fargate. With this process, we were able to greatly reduce the time our developers and operators spent managing and configuring this infrastructure.

With Fargate, we are able to deploy more often to production, and teams are able to handle additional Samsung products within Samsung Developers. Additionally, we don’t have to worry about the infrastructure behind deploying new images. We simply create a new revision, setting the container’s memory and port, and then select our Fargate cluster after determining the compute capacity needed.

The compute capacity of the Fargate services can be easily scaled out using Auto Scaling. Therefore, all deployment tasks take only a few minutes to complete. Additionally, there is no cluster managed by a system administrator or operator, and there are no EC2 instances and no Docker swarm to maintain for these services. This ensures that we can focus on the features of Samsung Developers and improve end-customer experiences.
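For reference, scaling out a Fargate service in this way is configured through Application Auto Scaling. The sketch below registers a service's desired task count as a scalable target and attaches a CPU-based target tracking policy; the resource names and limits are illustrative, not Samsung's configuration.

```python
import boto3

autoscaling = boto3.client("application-autoscaling")

# Placeholder cluster and service names.
resource_id = "service/developer-portal-cluster/developer-portal-service"

# Allow the service's desired task count to scale between 2 and 10.
autoscaling.register_scalable_target(
    ServiceNamespace="ecs",
    ResourceId=resource_id,
    ScalableDimension="ecs:service:DesiredCount",
    MinCapacity=2,
    MaxCapacity=10,
)

# Keep average CPU utilization around 60% by adding or removing tasks.
autoscaling.put_scaling_policy(
    PolicyName="portal-cpu-target-tracking",
    ServiceNamespace="ecs",
    ResourceId=resource_id,
    ScalableDimension="ecs:service:DesiredCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 60.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
        },
    },
)
```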

Currently, when an environment is deployed and served at Samsung Developers, Samsung monitors its health with alarms based on Amazon CloudWatch metrics. In addition, we have easily achieved the required availability and reliability for our portal while reducing monthly costs by approximately 44.5% (compute cost only).

Because of Samsung’s experience with Fargate, we have decided to migrate additional services from ECS to Fargate. Overall, our teams have had a great experience working with Fargate. The level of automation Fargate provides helps us move faster while also helping us become more economical with our development and operations resources. We felt that getting started with Fargate can take some time; however, once the environment is set up, we were able to achieve high levels of agility and scalability with Fargate.

About Samsung

Samsung is a South Korean multinational conglomerate headquartered in Samsung Town, Seoul. It comprises numerous affiliated businesses, most of them united under the Samsung brand, and is the largest South Korean business conglomerate.

Handling AWS Chargebacks for Enterprise Customers

Post Syndicated from Varad Ram original https://aws.amazon.com/blogs/architecture/handling-aws-chargebacks-for-enterprise-customers/

As AWS product portfolios and feature sets grow, as an enterprise customer you are likely to migrate your existing workloads to AWS and build new products on it. To help you keep your cloud charges simple, you can use consolidated billing. This can, however, create complexity for your internal chargebacks, especially if some of your resources and services are not tagged correctly. To help your individual teams and business units normalize and reduce their costs as your AWS implementation grows, you can implement chargebacks transparently and automate billing.

This blog post includes a walkthrough of an end-to-end mechanism that you can use to automate your consolidated billing charges for either your existing AWS accounts, or for newly created accounts.

Walkthrough

Prerequisites for implementation:

  • One account that is the payer account, which consolidates billing and links all other accounts (including admin accounts)
  • An understanding of billing, Detailed Billing Report (DBR), Cost and Usage Report (CUR), and blended and unblended costs
  • Activation of the necessary cost allocation tags so that they propagate to consolidated billing
  • Access to reservations across the linked accounts
  • Read permission on the source bucket and write permission to the transformed bucket
  • An automated method (such as database access or an API) to verify the cost centers tagged to AWS resources
  • Permissions to get access to the services described in this solution on the account targeted for this automation

Before you begin, it is important to understand the blended costs and unblended costs in consolidated billing. Blended costs are calculated based on the blended rate (the average rates for the reserved and on-demand instances that are used by your member accounts) for each service your accounts used, multiplied by the account usage of those services. Unblended costs are the charges for those services broken out for each linked account.
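To make the distinction concrete, consider a hypothetical example: two linked accounts each run the same instance type for 100 hours in a month. Account A’s usage is covered by a Reserved Instance at an effective rate of $0.05 per hour, while account B runs on demand at $0.10 per hour. The blended rate is the usage-weighted average, ($0.05 × 100 + $0.10 × 100) ÷ 200 hours = $0.075 per hour, so each account shows a blended cost of $7.50. The unblended view instead shows the charges as incurred: $5.00 for account A and $10.00 for account B.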

Based on your organization’s strategy for savings (centralized or not), you could consider either the blended or unblended costs. The consolidated billing files that include the information for the chargeback are the Detailed Billing Report (DBR) and Cost and Usage Report (CUR). Both of these reports provide both the blended and unblended rates as separate columns.

To help you create and maintain your AWS accounts, you can use AWS Account Vending Machine (AVM). You can launch AVM from either the AWS Landing Zone or with a custom solution. AVM keeps all your account information in a DynamoDB table (such as the account number, root mail ID, default cost center, name of the owner, etc.) and maintains reservation-related data (such as invoice ID, instance type, region, amount, cost center, etc.) in another table. To enable your account administrator to add invoice details for all your reservations, you can use a web page hosted on AWS Lambda, Amazon Simple Storage Service (Amazon S3), or a web server.

To begin the billing transformation process, add a trigger on the S3 bucket that contains the raw AWS billing files so that each PutObject event pushes a message into Amazon Simple Queue Service (SQS). Your billing transformation program (written in Python, Node.js, Java, .NET, and so on, using the AWS SDK) then consumes those messages; it can run on an Amazon Elastic Compute Cloud (Amazon EC2) instance, in containers, or on Lambda (if the bill can be processed within 15 minutes and within Lambda’s file size restrictions).

The billing transformation program must do the following:

  • Cache the Account details and reservation DynamoDB tables
  • Verify if there are any messages in SQS
  • Ignore if the file is not a DBR or CUR file (process either of them, not both)
  • Download the file, unzip, and read row-by-row; for a DBR file, consider only the “LineItem” RecordType
  • Add two new columns: Bill_CostCenter and Bill_Notes
    • If there is a valid value in the CostCenter tag (verified with internal automation processes), add the same value to the Bill_CostCenter column and any notes to the Bill_Notes column
    • If the CostCenter is invalid, get the default Cost Center from the cached account details and add the information to the Bill_CostCenter and Bill_Notes columns
    • If the row is a reservation invoice, the cost center information comes from the reservation table and is added to the correct column
  • Cache consolidation of cost centers with the blended or unblended cost of each row
  • Write each of these processed line items into a new file
  • Handle exceptions by the normal organization practices (for example, email the owner of the cost center or the finance team)
  • Push the new file into the transformed Amazon S3 bucket
  • Write the consolidated lines into a different file and upload to Transformed Amazon S3 bucket
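A minimal Python sketch of such a transformation worker is shown below. It polls SQS for the S3 event, downloads the billing file, appends the Bill_CostCenter and Bill_Notes columns, aggregates costs per cost center, and writes both outputs to the transformed bucket. The queue URL, bucket and table names, file-name check, and validation logic are hypothetical placeholders; an SQS-triggered Lambda function would read the same payload from its event instead of polling, and reservation handling and exception notifications are omitted for brevity.

```python
import csv
import io
import json

import boto3

# Hypothetical resource names -- replace with your own.
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/111122223333/billing-files"
TRANSFORMED_BUCKET = "example-transformed-billing"

s3 = boto3.client("s3")
sqs = boto3.client("sqs")
dynamodb = boto3.resource("dynamodb")

# Cache the account-details table (account ID -> default cost center) up front.
account_table = dynamodb.Table("account-details")
default_cost_centers = {
    item["AccountId"]: item["DefaultCostCenter"]
    for item in account_table.scan()["Items"]
}


def cost_center_is_valid(cost_center: str) -> bool:
    """Placeholder for the internal (database or API) cost center check."""
    return bool(cost_center)


def process_one_message() -> None:
    messages = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=1).get("Messages", [])
    for message in messages:
        record = json.loads(message["Body"])["Records"][0]["s3"]
        bucket, key = record["bucket"]["name"], record["object"]["key"]
        if "aws-billing-detailed-line-items" not in key:
            continue  # this sketch processes only the DBR

        # The real DBR/CUR files are compressed; decompression is omitted here.
        raw = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode("utf-8")
        reader = csv.DictReader(io.StringIO(raw))

        out = io.StringIO()
        writer = csv.DictWriter(out, fieldnames=list(reader.fieldnames) + ["Bill_CostCenter", "Bill_Notes"])
        writer.writeheader()
        totals = {}

        for row in reader:
            if row.get("RecordType") != "LineItem":
                continue
            tagged = row.get("user:CostCenter", "")
            if cost_center_is_valid(tagged):
                row["Bill_CostCenter"], row["Bill_Notes"] = tagged, "from resource tag"
            else:
                row["Bill_CostCenter"] = default_cost_centers.get(row.get("LinkedAccountId"), "UNALLOCATED")
                row["Bill_Notes"] = "defaulted from account record"
            totals[row["Bill_CostCenter"]] = totals.get(row["Bill_CostCenter"], 0.0) + float(row.get("UnBlendedCost") or 0)
            writer.writerow(row)

        # Upload the transformed line items and the per-cost-center summary.
        s3.put_object(Bucket=TRANSFORMED_BUCKET, Key=f"line-items/{key}", Body=out.getvalue())
        s3.put_object(Bucket=TRANSFORMED_BUCKET, Key=f"by-cost-center/{key}.json", Body=json.dumps(totals))
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=message["ReceiptHandle"])
```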

Figure 1 – Architecture of processing a billing chargeback

 

Figure 2 – Validating the Cost Center process

After you have the consolidated billing file aggregated by cost center, you can easily see and handle your internal chargebacks. To further simplify your chargeback model, you can get help from AWS Technical Account Managers and Billing Concierge, if your organization would like AWS to provide custom invoices from the consolidated billing file.

Because the cost centers in your organization can expire over time, it’s important to validate them frequently with automation, such as a Lambda function.

Improvements

If your organization has a more complex chargeback structure, you can extend the logic described above to support deeper and broader chargeback codes, or implement hierarchical chargeback structure.

You can also extend the transformation logic to support several chargeback codes (such as comma-separated values or additional tags) if you have multiple teams or projects that want to share a resource.

Summary

As enterprise organizations grow and consume more cloud services, the cost optimization process grows and evolves with them. Sophisticated chargeback models enable the teams and business units in the organization to be accountable and to take the steps necessary to normalize their usage and costs of AWS services.

About the Author

Varad Ram likes to help customers adopt cloud technologies, and he is particularly interested in artificial intelligence. He believes deep learning will power future technology growth. In his spare time, his daughter and toddler son keep him busy biking and hiking.

How Disney Streaming Services Uses Amazon Pinpoint to Send Personalized Messages to Millions of Users in Real Time

Post Syndicated from Hannah Nilsson original https://aws.amazon.com/blogs/messaging-and-targeting/how-disney-streaming-services-uses-amazon-pinpoint-to-send-personalized-messages-to-millions-of-users-in-real-time/

At AWS re:Invent 2018, Billy Liu and Jimmy Tam from Disney Streaming Services took the stage to talk about how they use Amazon Pinpoint to meet some of their unique digital user engagement needs. Disney Streaming Services supports several mobile apps, including MLB At Bat, Ballpark and Beat The Streak from Major League Baseball, along with the NHL mobile app. Billy and Jimmy shared their story on the re:Invent Launchpad stage, as well as during the Digital User Engagement Leadership Session with Simon Poile from AWS. In these sessions, they discussed how they use Amazon Pinpoint—along with other AWS services including Amazon Kinesis, AWS Lambda, Amazon S3, and AWS Glue—to target customers, monitor the performance of their campaigns in real time, and gain a deeper understanding of their users’ needs and desires.

Targeting the right customer at the right time 
When you consider the use cases for MLB’s suite of apps, you can quickly see why sending the right message to the right customer is a more complicated task than it might seem at first glance. For each of the 30 Major League Baseball teams, users can opt to receive eight different types of messages. Each of these eight message types is available in both English and Spanish. And on top of all that, each push notification sent has to target combinations of these segments when two teams play each other. There are thousands of possible segments and combinations of segments to consider with each message sent.

To address this issue, Disney Streaming Services uses Amazon Pinpoint to dynamically create unique segments and campaigns for every event in milliseconds. In the most demanding usage scenarios, Amazon Pinpoint scales to create over 300 segments and campaigns per hour, and over 20 segments and campaigns per minute. To learn more about how they solved this challenge, take a look at the recording of their session.
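The implementation details are in the session recording; as a generic illustration of the pattern (not Disney Streaming’s code), creating a segment and campaign on the fly through the Amazon Pinpoint API looks roughly like the sketch below. The project ID and the Team, AlertType, and Language endpoint attributes are hypothetical.

```python
import boto3

pinpoint = boto3.client("pinpoint")
APPLICATION_ID = "example-pinpoint-project-id"  # placeholder


def notify_game_start(team: str, alert_type: str, language: str, title: str, body: str) -> None:
    """Create a one-off segment and campaign for a single game event."""
    # Segment: endpoints whose custom attributes match this team, alert type, and language.
    segment = pinpoint.create_segment(
        ApplicationId=APPLICATION_ID,
        WriteSegmentRequest={
            "Name": f"{team}-{alert_type}-{language}",
            "Dimensions": {
                "Attributes": {
                    "Team": {"AttributeType": "INCLUSIVE", "Values": [team]},
                    "AlertType": {"AttributeType": "INCLUSIVE", "Values": [alert_type]},
                    "Language": {"AttributeType": "INCLUSIVE", "Values": [language]},
                }
            },
        },
    )["SegmentResponse"]

    # Campaign: send a push notification to that segment immediately.
    pinpoint.create_campaign(
        ApplicationId=APPLICATION_ID,
        WriteCampaignRequest={
            "Name": f"{team}-{alert_type}-{language}-game-start",
            "SegmentId": segment["Id"],
            "Schedule": {"StartTime": "IMMEDIATE", "Frequency": "ONCE"},
            "MessageConfiguration": {
                "DefaultMessage": {"Title": title, "Body": body, "Action": "OPEN_APP"}
            },
        },
    )
```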

How Disney Streaming Services Targets The Right Customer
Monitoring campaign performance in real time
With the fast-paced nature of Disney Streaming’s notifications and the sheer number of campaigns and segments they are targeting, monitoring their performance directly in the Amazon Pinpoint console is not scalable for their use case. However, they must have real-time notifications to let them know if their campaigns are lagging or not reaching the expected number of recipients.

To meet this unique need, Disney Streaming developed a solution that uses AWS Step Functions, Amazon CloudWatch, AWS Lambda, and Amazon Pinpoint. This solution monitors each campaign that is created. When a campaign is executed, their solution streams data about the execution and delivery of that campaign, and sends alerts when the team needs to take a closer look at how their campaigns are performing. You can learn more about the specifics of their monitoring solution in the recording of their session.
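As a generic illustration of that pattern (again, not their implementation), a Lambda function consuming the Amazon Pinpoint event stream from Kinesis could emit a per-campaign CloudWatch metric like the sketch below; a CloudWatch alarm on that metric would then flag campaigns that lag behind the expected send volume. The metric namespace and the assumed event fields are placeholders.

```python
import base64
import json

import boto3

cloudwatch = boto3.client("cloudwatch")


def handler(event, context):
    # Count campaign send events per campaign from the Pinpoint event stream.
    counts = {}
    for record in event["Records"]:
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
        if payload.get("event_type") != "_campaign.send":
            continue
        campaign_id = payload.get("attributes", {}).get("campaign_id", "unknown")
        counts[campaign_id] = counts.get(campaign_id, 0) + 1

    # Publish one custom metric per campaign; an alarm on this metric can
    # detect campaigns that are lagging or under-delivering.
    for campaign_id, count in counts.items():
        cloudwatch.put_metric_data(
            Namespace="PinpointCampaignMonitoring",   # hypothetical namespace
            MetricData=[{
                "MetricName": "MessagesSent",
                "Dimensions": [{"Name": "CampaignId", "Value": campaign_id}],
                "Value": count,
                "Unit": "Count",
            }],
        )
```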

How Disney Streaming Services Monitors Campaign Performance

Understanding fans
After a campaign has been sent, Disney Streaming analyzes the performance of campaigns. By performing this analysis, they can better understand how customers engaged with notifications, and ensure fans are receiving a compelling experience.

To achieve this, Disney Streaming uses the event streaming and exporting features of Amazon Pinpoint. They stream engagement events by using Amazon Kinesis. These events let them know how fans interacted with the application, and allow them to drill down into various performance metrics on a per-team basis. They then store these metrics in Amazon S3, where the data lake team picks them up for further processing. By using this solution, they can create near-real-time reports for their unique audiences.
They also use the Amazon Pinpoint API to export all of the details about the users to an S3 bucket using Lambda Triggers. An AWS Glue job processes the exported data and outputs the results to another S3 bucket. The data lake team then uses this data to glean additional insights about their audience.
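An endpoint export of this kind is started with the Amazon Pinpoint CreateExportJob API. A minimal sketch, with a placeholder project ID, IAM role, and destination bucket, looks like this:

```python
import time

import boto3

pinpoint = boto3.client("pinpoint")
APPLICATION_ID = "example-pinpoint-project-id"  # placeholder

# Kick off an export of all endpoint definitions to S3; the IAM role must
# allow Pinpoint to write to the destination bucket.
job = pinpoint.create_export_job(
    ApplicationId=APPLICATION_ID,
    ExportJobRequest={
        "RoleArn": "arn:aws:iam::111122223333:role/pinpoint-export-role",
        "S3UrlPrefix": "s3://example-endpoint-exports/daily/",
    },
)["ExportJobResponse"]

# Poll until the export finishes; the objects in S3 can then feed a Glue job.
while True:
    status = pinpoint.get_export_job(
        ApplicationId=APPLICATION_ID, JobId=job["Id"]
    )["ExportJobResponse"]["JobStatus"]
    if status in ("COMPLETED", "FAILED"):
        break
    time.sleep(30)
```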

How Disney Streaming Services Understands their Fans

Removing unengaged customers 
Disney Streaming also uses a custom solution to ensure that they only attempt to engage customers who are still able to receive messages, and that reports only include engaged users. For example, if a customer uninstalls the MLB or NHL apps, re-installs an app but doesn’t set their messaging preferences, or starts using a device on a different platform, that customer might not be reachable. Disney Streaming needs to remove these unreachable customers from campaigns so that they can maintain accurate reports on audience sizes, keep costs low, and reduce campaign latency.

To delete unreachable customers in real time, the Disney Streaming team uses Amazon Pinpoint to detect when they attempt to send a push notification to an unreachable customer. Their Amazon Kinesis Data Firehose stream then outputs campaign data to an S3 bucket, and an AWS Glue job filters out the customers who are unreachable. Finally, a Lambda function removes the endpoint by making a call to the Amazon Pinpoint API. You can find more details about how Disney Streaming Services implemented this solution in the recording of their session.
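The final step of that pipeline boils down to one DeleteEndpoint call per filtered endpoint. A minimal sketch of such a cleanup Lambda function, assuming the Glue job writes one endpoint ID per line to an S3 object (a hypothetical layout), could look like this:

```python
import boto3

s3 = boto3.client("s3")
pinpoint = boto3.client("pinpoint")
APPLICATION_ID = "example-pinpoint-project-id"  # placeholder


def handler(event, context):
    """Triggered when the Glue job writes its list of unreachable endpoints."""
    record = event["Records"][0]["s3"]
    body = s3.get_object(
        Bucket=record["bucket"]["name"], Key=record["object"]["key"]
    )["Body"].read().decode("utf-8")

    # One endpoint ID per line, as produced by the (hypothetical) Glue filter job.
    for endpoint_id in filter(None, (line.strip() for line in body.splitlines())):
        pinpoint.delete_endpoint(
            ApplicationId=APPLICATION_ID,
            EndpointId=endpoint_id,
        )
```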

How Disney Streaming Services removes unengaged customers

You can learn more about the needs that Disney Streaming Service considered when they chose a Digital User Engagement solution by watching the recording of their discussion on the re:Invent 2018 Launchpad stage. You can also watch the Digital User Engagement Leadership Session to learn more about AWS’ Digital User Engagement solutions, information on recent feature launches, and to learn more about how Disney Streaming created solutions to their engagement challenges.