Tag Archives: AWS

Retry delivering failed SMS using Amazon Pinpoint

Post Syndicated from satyaso original https://aws.amazon.com/blogs/messaging-and-targeting/how-to-utilise-amazon-pinpoint-to-retry-unsuccessful-sms-delivery/

Organizations in many sectors and verticals have user bases to whom they send transactional SMS messages such as OTPs (one-time passwords), notices, or transaction and purchase confirmations. Amazon Pinpoint enables customers to send transactional SMS messages to a global audience through a single API endpoint, and the service routes the messages to the recipients. Amazon Pinpoint relies on downstream SMS providers and telecom operators to deliver the messages to the end user’s device. Most of the time the SMS messages get delivered, but sometimes delivery fails due to carrier or telecom issues that are transient in nature, and this reflects poorly on the customer’s brand. As a result, customers need a solution that allows them to retry the transmission of SMS messages that fail due to transient problems at downstream SMS providers or telecom operators.

In this blog post, you will discover how to retry sending SMS messages whose delivery fails because of transient problems on the downstream SMS provider or telecom operator side.

Prerequisites

For this post, you should be familiar with the following:

Managing an AWS account
Amazon Pinpoint
Amazon Pinpoint SMS events
AWS Lambda
AWS CloudFormation
Amazon Kinesis Firehose
Kinesis Streams
Amazon DynamoDB read and write capacity units (RCUs and WCUs)

Architecture Overview

The architecture depicted below is a potential architecture for resending unsuccessful SMS messages in real time. The application sends the SMS message to Amazon Pinpoint for delivery using the SendMessages API. Pinpoint receives the message and returns a receipt notification with the message ID; the application records the message content and ID to a datastore, in this case DynamoDB. Amazon Pinpoint delivers messages to users and then receives SMS engagement events. The same SMS engagement events are provided to Amazon Kinesis Data Streams, which acts as an event source for a Lambda function that validates the event type. If the event type indicates that the SMS message could not be sent and a retry makes sense, the Lambda function retrieves the respective message ID from the SMS event and then retrieves the message body from the database. It then sends the SMS message to Amazon Pinpoint for redelivery; you can choose the same or an alternative origination number as the origination identity when resending the SMS to end users. We recommend configuring the number of retries and adding a retry message tag within Pinpoint to analyze retries and avoid infinite loops. All events are also sent to Amazon Kinesis Data Firehose and then saved to your S3 data lake for later audit and analytics purposes.
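To make the first two steps concrete, here is a minimal Python (boto3) sketch of sending a transactional SMS through the SendMessages API and recording the returned message ID together with the message body in DynamoDB. The table name, item attribute names, and function name are illustrative assumptions, not the exact code from the solution’s template.

import boto3

pinpoint = boto3.client('pinpoint')
table = boto3.resource('dynamodb').Table('sms-message-state')  # hypothetical table name

def send_and_record(app_id, destination_number, body):
    # Send the transactional SMS through Amazon Pinpoint
    response = pinpoint.send_messages(
        ApplicationId=app_id,
        MessageRequest={
            'Addresses': {destination_number: {'ChannelType': 'SMS'}},
            'MessageConfiguration': {
                'SMSMessage': {'Body': body, 'MessageType': 'TRANSACTIONAL'}
            }
        }
    )
    # Persist the message ID and content so the retry Lambda can look them up later
    message_id = response['MessageResponse']['Result'][destination_number]['MessageId']
    table.put_item(Item={'message_id': message_id,
                         'destination': destination_number,
                         'message_body': body,
                         'retry_count': 0})
    return message_id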

Note: The Lambda concurrency and DynamoDB WCUs/RCUs need to be provisioned accordingly. The AWS CloudFormation template provided in this post automatically sets up the different architecture components required to retry unsuccessful SMS messages.

Retry delivering failed SMS using Amazon Pinpoint

Alternatively, if you use an Amazon Kinesis Data Firehose delivery stream instead of a Kinesis data stream to stream data to a storage location, you can use a transformation Lambda as part of the Firehose delivery stream to retry unsuccessful messages. The architecture is as follows: the application sends the SMS payload to Amazon Pinpoint using the SendMessages API/SDK while also writing the message body to a persistent data store, in this case a DynamoDB table. The SMS-related events are then sent to Amazon Kinesis Data Firehose, where a transformation Lambda is set up. In essence, if the SMS event type indicates no error, the event is returned to Firehose as-is. However, if the event type indicates a failure and a retry makes sense, the Lambda logic makes another SendMessages call until the retry count (set to 5 within the code) is reached. If a retry attempt is made, the event is not loaded into S3 storage (hence result=Dropped). Since Pinpoint events do not contain the actual SMS content, a call to DynamoDB is made to get the message body for the new SendMessages call.

Retry SMS diagram

Amazon Pinpoint provides an event response for each transactional SMS communication. For retrying unsuccessful SMS deliveries, there are primarily two factors to consider in this architecture: 1/ the type of event (event_type) and 2/ the record status (record_status). Whenever the event_type is “_SMS.FAILURE” and the record_status is any of “UNREACHABLE”, “UNKNOWN”, “CARRIER_UNREACHABLE”, or “TTL_EXPIRED”, the customer application needs to retry the SMS message delivery. The following pseudocode snippet explains the conditional flow for the failed-SMS resending logic within the Lambda function.

Code Sample:
import base64

RETRYABLE = {'UNREACHABLE', 'UNKNOWN', 'CARRIER_UNREACHABLE', 'TTL_EXPIRED'}

if event['event_type'] == '_SMS.FAILURE' and event['record_status'] in RETRYABLE:
    # Resend the SMS message, then drop the event so it is not written to S3
    send_message(message_content, destination)
    output_record = {'recordId': record['recordId'], 'result': 'Dropped',
                     'data': base64.b64encode(payload.encode('utf-8'))}
else:
    # Pass healthy events through to the delivery stream unchanged
    output_record = {'recordId': record['recordId'], 'result': 'Ok',
                     'data': base64.b64encode(payload.encode('utf-8'))}
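Putting that logic into context, a Kinesis Data Firehose transformation Lambda along these lines would iterate over the incoming record batch, resend failed messages, and return the processed records in the format Firehose expects. This is a sketch under assumptions: send_failed_sms is a hypothetical helper, and the exact field layout of Pinpoint SMS events (record_status typically sits under the event’s attributes) should be checked against the Amazon Pinpoint event stream documentation.

import base64
import json

RETRYABLE = {'UNREACHABLE', 'UNKNOWN', 'CARRIER_UNREACHABLE', 'TTL_EXPIRED'}

def send_failed_sms(sms_event):
    # Hypothetical helper: look up the message body by message ID in DynamoDB,
    # then resend it via pinpoint.send_messages, honoring the retry limit.
    pass

def handler(event, context):
    output = []
    for record in event['records']:
        payload = base64.b64decode(record['data']).decode('utf-8')
        sms_event = json.loads(payload)
        status = sms_event.get('attributes', {}).get('record_status')
        if sms_event.get('event_type') == '_SMS.FAILURE' and status in RETRYABLE:
            send_failed_sms(sms_event)
            result = 'Dropped'  # retried events are not written to S3
        else:
            result = 'Ok'       # healthy events pass through unchanged
        output.append({'recordId': record['recordId'],
                       'result': result,
                       'data': base64.b64encode(payload.encode('utf-8')).decode('utf-8')})
    return {'records': output}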

Getting started with solution deployment

Prerequisite tasks to be completed before deploying the logging solution

  1. Go to the CloudFormation console and click Create Stack.
  2. Select the Amazon S3 URL radio button and provide the CloudFormation template link.
AWS console creating a Pinpoint template
  3. Click Next on the Create Stack screen.
  4. Specify the stack name, for example “SMS-retry-stack”.
  5. Specify the event stream configuration option; this determines which child CloudFormation stack gets deployed. There are three event stream configurations you can choose from.
    • No existing event stream setup – Select this option if you don’t have any event stream setup for Amazon Pinpoint.
    • Event stream setup with Amazon Kinesis Stream – Select this option if your Amazon Pinpoint project already has Amazon Kinesis as the event stream destination.
    • Event stream setup with Amazon Kinesis Firehose – Select this option if you have configured a Kinesis Firehose delivery stream as the event stream destination.
AWS console specifying Pinpoint stack details
  6. Specify the Amazon Pinpoint project app ID (Pinpoint project ID), and click Next.
  7. Click Next on Configure stack options screen.
  8. Select “I acknowledge that AWS CloudFormation might create IAM resources” and click Create Stack.
  9. Wait for the CloudFormation template to complete, and then verify that the resources in the CloudFormation stack have been created. Click on individual resources and verify them.
    • Parent stack – SMS retry parent stack
    • Child stack – SMS retry child stack
  10. As described in the architecture overview section, the maxRetries configuration inside the “RetryLambdaFunction” ensures that unsuccessful SMS messages are retried repeatedly. This number is set to 3 by default. If you want to adjust the maxRetries count, go to the “RetryLambdaFunction” settings and change it to the desired number.
SMS retry lambda

Note: The CloudFormation link in the blog points to the parent CloudFormation template, which has links to the child CloudFormation stacks. These child stacks are deployed automatically as you go through the parent stack.

Testing the solution

You can test the solution using the “PinpointDDBProducerLambdaFunction” and SMS simulator numbers. PinpointDDBProducerLambdaFunction has sample code that triggers the SMS using Amazon Pinpoint.

testing SMS retry solution

Now follow the steps below to test the solution.

  1. Go to the environment variables for PinpointDDBProducerLambdaFunction.
  2. Update “destinationNumber” and “pinpointApplicationID,” where the destination number is the recipient number to which you wish to send the SMS as a failed attempt, and the Amazon Pinpoint application ID is the Pinpoint project ID for which the Pinpoint SMS channel has already been configured.
  3. Deploy and test the Lambda function.
  4. Check the “Pinpoint Message state” DynamoDB table and open the latest table item.
  5. If you observe the table item, it states retry_count=2 (the SMS send retry has been attempted two times) and all_retries_failed=true (the SMS could not be delivered on either attempt).
Notes :
  • If the existing Kinesis stream has a pre-defined destination Lambda, the current stack will not replace it but will exit gracefully.
  • If the existing Kinesis Firehose delivery stream already has a transformation Lambda, the current stack will likewise not replace it and will exit gracefully.

Remarks

This SMS retry solution is best-effort. This means that the solution is dependent on event response data from SMS aggregators. If the SMS aggregator data is incorrect, this solution may not produce the desired effect.

Cost

Assuming the retry mechanism applies to 1,000,000 unsuccessful SMS messages per month, this solution costs approximately $20 per month. Here is the AWS Pricing Calculator link for reference.

Clean up

When you’re done with this exercise, complete the following steps to delete your resources and stop incurring costs:

  • On the CloudFormation console, select your stack and choose Delete.
  • This cleans up all the resources created by the stack.

Conclusion

In this blog post, we demonstrated how customers can retry sending undelivered or failed SMS messages via Amazon Pinpoint. We explained how to leverage Amazon Kinesis Data Streams and AWS Lambda functions to assess the status of unsuccessful SMS messages and retry delivering them automatically.

Extending the solution

This blog provides a useful framework for implementing a solution to retry sending failed SMS messages. You can download the AWS CloudFormation templates, code, and scripts for this solution from our GitHub repository and modify them to fit your needs.


About the Authors
Satyasovan Tripathy works as a Senior Specialist Solutions Architect at AWS. He is based in Bengaluru, India, and focuses on the AWS Digital User Engagement product portfolio. He enjoys reading and travelling outside of work.

Nikhil Khokhar is a Solutions Architect at AWS. He specializes in building and supporting data streaming solutions that help customers analyze and get value out of their data. In his free time, he makes use of his 3D printing skills to solve everyday problems.

Target your customers with ML based on their interest in a product or product attribute.

Post Syndicated from Pavlos Ioannou Katidis original https://aws.amazon.com/blogs/messaging-and-targeting/use-machine-learning-to-target-your-customers-based-on-their-interest-in-a-product-or-product-attribute/

Customer segmentation allows marketers to better tailor their efforts to specific subgroups of their audience. Businesses that employ customer segmentation can create and communicate targeted marketing messages that resonate with specific customer groups. Segmentation increases the likelihood that customers will engage with the brand, and reduces the potential for communications fatigue—that is, the disengagement of customers who feel like they’re receiving too many messages that don’t apply to them. For example, if your business wants to launch an email campaign about business suits, the target audience should only include people who wear suits.

This blog presents a solution that uses Amazon Personalize to generate highly personalized Amazon Pinpoint customer segments. Using Amazon Pinpoint, you can send messages to those customer segments via campaigns and journeys.

Personalizing Pinpoint segments

Marketers first need to understand their customers by collecting customer data such as key characteristics, transactional data, and behavioral data. This data helps marketers form buyer personas, understand how customers spend their money, and learn what type of information they’re interested in receiving.

You can create two types of customer segments in Amazon Pinpoint: imported and dynamic. With both types of segments, you need to perform customer data analysis and identify behavioral patterns. After you identify the segment characteristics, you can build a dynamic segment that includes the appropriate criteria. You can learn more about dynamic and imported segments in the Amazon Pinpoint User Guide.

Businesses selling their products and services online could benefit from segments based on known customer preferences, such as product category, color, or delivery options. Marketers who want to promote a new product or inform customers about a sale on a product category can use these segments to launch Amazon Pinpoint campaigns and journeys, increasing the probability that customers will complete a purchase.

Building targeted segments requires you to obtain historical customer transactional data and then invest time and resources to analyze it. This is where the use of machine learning can save time and improve accuracy.

Amazon Personalize is a fully managed machine learning service that requires no prior ML knowledge to operate. It offers ready-to-use models for segment creation as well as product recommendations, called recipes. Using the Amazon Personalize USER_SEGMENTATION recipes, you can generate segments based on a product ID or a product attribute.

About this solution

The solution is based on the following reference architectures:

Both of these architectures are deployed as nested stacks alongside the main application to showcase how contextual segmentation can be implemented by integrating Amazon Personalize with Amazon Pinpoint.

High level architecture

Architecture Diagram

Once training data and the training configuration are uploaded to the Personalize data bucket (1), an AWS Step Functions state machine is executed (2). This state machine implements a training workflow that provisions all required resources within Amazon Personalize. It trains a recommendation model (3a) based on the Item-Attribute-Affinity recipe. Once the solution is created, the workflow creates a batch segment job to get user segments (3b). The job configuration focuses on providing segments of users that are interested in action genre movies:

{ "itemAttributes": "ITEMS.genres = \"Action\"" }

When the batch segment job finishes, the result is uploaded to Amazon S3 (3c). The training workflow state machine publishes Amazon Personalize state changes on a custom event bus (4). An Amazon EventBridge rule listens for events indicating that a batch segment job has finished (5). Once this event is put on the event bus, a batch segment postprocessing workflow is executed as an AWS Step Functions state machine (6). This workflow reads and transforms the segment job output from Amazon Personalize (7) into a CSV file that can be imported as a static segment into Amazon Pinpoint (8). The CSV file contains only the Amazon Pinpoint endpoint IDs that refer to the corresponding users from the Amazon Personalize recommendation segment, in the following format:

Id
hcbmnng30rbzf7wiqn48zhzzcu4
tujqyruqu2pdfqgmcgkv4ux7qlu
keul5pov8xggc0nh9sxorldmlxc
lcuxhxpqh/ytkitku2zynrqb2ce

The mechanism for resolving an Amazon Pinpoint endpoint ID relies on the user ID that is set in Amazon Personalize also being referenced in each endpoint within Amazon Pinpoint, via the user ID attribute.
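As a rough sketch of that resolution step, the postprocessing workflow could call the Pinpoint GetUserEndpoints API for each user ID returned by the batch segment job and collect the endpoint IDs for the CSV file. Function and variable names here are illustrative, not taken from the solution’s code.

import boto3

pinpoint = boto3.client('pinpoint')

def endpoint_ids_for_users(application_id, user_ids):
    # Resolve each Amazon Personalize user ID to its Amazon Pinpoint endpoint IDs
    endpoint_ids = []
    for user_id in user_ids:
        response = pinpoint.get_user_endpoints(
            ApplicationId=application_id, UserId=user_id)
        endpoint_ids.extend(item['Id'] for item in response['EndpointsResponse']['Item'])
    return endpoint_ids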

State machine for getting Amazon Pinpoint endpoints

The workflow ensures that the segment file has a unique filename so that the segments within Amazon Pinpoint can be identified independently. Once the segment CSV file is uploaded to S3 (7), the segment import workflow creates a new imported segment within Amazon Pinpoint (8).
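The import itself maps to Pinpoint’s CreateImportJob API. A minimal sketch follows; the S3 URL, role ARN, and segment name are placeholders:

import boto3

pinpoint = boto3.client('pinpoint')

def import_segment(application_id, s3_url, role_arn, segment_name):
    # Create a new imported (static) segment in Amazon Pinpoint from the CSV file
    return pinpoint.create_import_job(
        ApplicationId=application_id,
        ImportJobRequest={
            'Format': 'CSV',
            'S3Url': s3_url,      # e.g. s3://<segment-import-bucket>/pinpoint/segment.csv
            'RoleArn': role_arn,  # IAM role that allows Pinpoint to read the bucket
            'DefineSegment': True,
            'SegmentName': segment_name
        }
    )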

Datasets

The solution uses an artificially generated movie dataset called Bingewatch for demonstration purposes. The data is pre-processed to make it usable in the context of Amazon Personalize and Amazon Pinpoint. The pre-processed data consists of the following:

  • Interactions metadata, created from the Bingewatch ratings.csv
  • Items metadata, created from the Bingewatch movies.csv
  • Users metadata, created from the Bingewatch ratings.csv and enriched with invented data about e-mail addresses and ages
  • Amazon Pinpoint endpoint data

Interactions dataset

The interactions dataset describes movie ratings from Bingewatch users. Each row describes a single rating by a user, identified by a user ID.

The EVENT_VALUE describes the actual rating from 1.0 to 5.0 and the EVENT_TYPE specifies that the rating resulted because a user watched this movie at the given TIMESTAMP, as shown in the following example:

USER_ID,ITEM_ID,EVENT_VALUE,EVENT_TYPE,TIMESTAMP
1,1,4.0,Watch,964982703 
2,3,4.0,Watch,964981247
3,6,4.0,Watch,964982224
...

Items dataset

The items dataset describes each available movie using a TITLE, RELEASE_YEAR, CREATION_TIMESTAMP, and a pipe-concatenated list of GENRES, as shown in the following example:

ITEM_ID,TITLE,RELEASE_YEAR,CREATION_TIMESTAMP,GENRES
1,Toy Story,1995,788918400,Adventure|Animation|Children|Comedy|Fantasy
2,Jumanji,1995,788918400,Adventure|Children|Fantasy
3,Grumpier Old Men,1995,788918400,Comedy|Romance
...

Users dataset

The users dataset contains all known users, identified by a USER_ID. The dataset contains artificially generated metadata that describes the users’ GENDER, E_MAIL, and AGE, as shown in the following example:

USER_ID,GENDER,E_MAIL,AGE
1,Female,[email protected],21
2,Female,[email protected],35
3,Male,[email protected],37
4,Female,[email protected],47
5,Agender,[email protected],50
...

Amazon Pinpoint endpoints

To map Amazon Pinpoint endpoints to users in Amazon Personalize, it is important to have a consistent user identifier. The mechanism for resolving an Amazon Pinpoint endpoint ID relies on the user ID in Amazon Personalize also being referenced in each endpoint within Amazon Pinpoint, via the userId attribute, as shown in the following example:

User.UserId,ChannelType,User.UserAttributes.Gender,Address,User.UserAttributes.Age
1,EMAIL,Female,[email protected],21
2,EMAIL,Female,[email protected],35
3,EMAIL,Male,[email protected],37
4,EMAIL,Female,[email protected],47
5,EMAIL,Agender,[email protected],50
...

Solution implementation

Prerequisites

To deploy this solution, you must have the following:

Note: This solution creates an Amazon Pinpoint project with the name personalize. If you want to deploy this solution on an existing Amazon Pinpoint project, you will need to perform changes in the YAML template.

Deploy the solution

Step 1: Deploy the SAM solution

Clone the GitHub repository to your local machine (how to clone a GitHub repository). Navigate to the GitHub repository location on your local machine using the SAM CLI and execute the command below:

sam deploy --stack-name contextual-targeting --guided

Fill in the fields as displayed below. Change the AWS Region to an AWS Region of your preference where Amazon Pinpoint and Amazon Personalize are available. The Email parameter is used by Amazon Simple Notification Service (SNS) to send you an email notification when the Amazon Personalize job is completed.

Configuring SAM deploy
======================
        Looking for config file [samconfig.toml] :  Not found
        Setting default arguments for 'sam deploy'     =========================================
        Stack Name [sam-app]: contextual-targeting
        AWS Region [us-east-1]: eu-west-1
        Parameter Email []: [email protected]
        Parameter PEVersion [v1.2.0]:
        Parameter SegmentImportPrefix [pinpoint/]:
        #Shows you resources changes to be deployed and require a 'Y' to initiate deploy
        Confirm changes before deploy [y/N]:
        #SAM needs permission to be able to create roles to connect to the resources in your template
        Allow SAM CLI IAM role creation [Y/n]:
        #Preserves the state of previously provisioned resources when an operation fails
        Disable rollback [y/N]:
        Save arguments to configuration file [Y/n]:
        SAM configuration file [samconfig.toml]:
        SAM configuration environment [default]:
        Looking for resources needed for deployment:
        Creating the required resources...
        [...]
        Successfully created/updated stack - contextual-targeting in eu-west-1
======================

Step 2: Import the initial segment to Amazon Pinpoint

We will import some initial and artificially generated endpoints into Amazon Pinpoint.

Execute the command below using the AWS CLI on your local machine.

The command below is compatible with Linux:

SEGMENT_IMPORT_BUCKET=$(aws cloudformation describe-stacks --stack-name contextual-targeting --query 'Stacks[0].Outputs[?OutputKey==`SegmentImportBucket`].OutputValue' --output text)
aws s3 sync ./data/pinpoint s3://$SEGMENT_IMPORT_BUCKET/pinpoint

For Windows PowerShell use the command below:

$SEGMENT_IMPORT_BUCKET = (aws cloudformation describe-stacks --stack-name contextual-targeting --query 'Stacks[0].Outputs[?OutputKey==`SegmentImportBucket`].OutputValue' --output text)
aws s3 sync ./data/pinpoint s3://$SEGMENT_IMPORT_BUCKET/pinpoint

Step 3: Upload training data and configuration for Amazon Personalize

Now we are ready to train our initial recommendation model. This solution provides you with dummy training data as well as a training and inference configuration, which needs to be uploaded into the Amazon Personalize S3 bucket. Training the model can take between 45 and 60 minutes.

Execute the command below using the AWS CLI on your local machine.

The command below is compatible with Linux:

PERSONALIZE_BUCKET=$(aws cloudformation describe-stacks --stack-name contextual-targeting --query 'Stacks[0].Outputs[?OutputKey==`PersonalizeBucketName`].OutputValue' --output text)
aws s3 sync ./data/personalize s3://$PERSONALIZE_BUCKET

For Windows PowerShell use the command below:

$PERSONALIZE_BUCKET = (aws cloudformation describe-stacks --stack-name contextual-targeting --query 'Stacks[0].Outputs[?OutputKey==`PersonalizeBucketName`].OutputValue' --output text)
aws s3 sync ./data/personalize s3://$PERSONALIZE_BUCKET

Step 4: Review the inferred segments from Amazon Personalize

Once the training workflow is completed, you should receive an email at the email address you provided when deploying the stack. The email should look like the one in the screenshot below:

SNS notification for Amazon Personalize job

Navigate to the Amazon Pinpoint console > Your Project > Segments, and you should see two imported segments: one named endpoints.csv, which contains all imported endpoints from Step 2, and one named ITEMSgenresAction_<date>-<time>.csv, which contains the IDs of the endpoints that are interested in action movies, as inferred by Amazon Personalize.

Amazon Pinpoint segments created by the solution

You can engage with Amazon Pinpoint customer segments via Campaigns and Journeys. For more information on how to create and execute Amazon Pinpoint Campaigns and Journeys, visit the workshop Building Customer Experiences with Amazon Pinpoint.

Next steps

Contextual targeting is not bound to a single channel, such as email, the channel used in this solution. You can extend the batch-segmentation-postprocessing workflow to fit your engagement and targeting requirements.

For example, you could implement several branches based on the referenced endpoint channel types and create Amazon Pinpoint customer segments that can be engaged via Push Notifications, SMS, Voice Outbound and In-App.
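As a hypothetical illustration of such branching (not part of the shipped workflow), the postprocessing step could group resolved endpoints by channel type and emit one segment file per channel:

from collections import defaultdict

def group_endpoints_by_channel(endpoints):
    # endpoints: list of Pinpoint endpoint dicts, e.g. as returned by GetUserEndpoints
    by_channel = defaultdict(list)
    for endpoint in endpoints:
        by_channel[endpoint.get('ChannelType', 'UNKNOWN')].append(endpoint['Id'])
    return by_channel  # e.g. {'EMAIL': [...], 'SMS': [...], 'APNS': [...]}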

Clean-up

To delete the solution, run the following command in the AWS CLI.

The command below is compatible with Linux:

SEGMENT_IMPORT_BUCKET=$(aws cloudformation describe-stacks --stack-name contextual-targeting --query 'Stacks[0].Outputs[?OutputKey==`SegmentImportBucket`].OutputValue' --output text)
PERSONALIZE_BUCKET=$(aws cloudformation describe-stacks --stack-name contextual-targeting --query 'Stacks[0].Outputs[?OutputKey==`PersonalizeBucketName`].OutputValue' --output text)
aws s3 rm s3://$SEGMENT_IMPORT_BUCKET/ --recursive
aws s3 rm s3://$PERSONALIZE_BUCKET/ --recursive
sam delete

For Windows PowerShell use the command below:

$SEGMENT_IMPORT_BUCKET=$(aws cloudformation describe-stacks --stack-name contextual-targeting --query 'Stacks[0].Outputs[?OutputKey==`SegmentImportBucket`].OutputValue' --output text)
$PERSONALIZE_BUCKET=$(aws cloudformation describe-stacks --stack-name contextual-targeting --query 'Stacks[0].Outputs[?OutputKey==`PersonalizeBucketName`].OutputValue' --output text)
aws s3 rm s3://$SEGMENT_IMPORT_BUCKET/ --recursive
aws s3 rm s3://$PERSONALIZE_BUCKET/ --recursive
sam delete

Amazon Personalize resources like dataset groups, datasets, etc. are not created via AWS CloudFormation, so you have to delete them manually. Please follow the instructions in the official AWS documentation on how to clean up the created resources.

About the Authors

Pavlos Ioannou Katidis

Pavlos Ioannou Katidis is an Amazon Pinpoint and Amazon Simple Email Service Specialist Solutions Architect at AWS. He loves to dive deep into his customers’ technical issues and help them design communication solutions. In his spare time, he enjoys playing tennis, watching crime TV series, playing FPS PC games, and coding personal projects.

Christian Bonzelet

Christian Bonzelet is an AWS Solutions Architect at DFL Digital Sports. He loves challenges like providing highly scalable systems for millions of users and collaborating with lots of people to design systems in front of a whiteboard. He has used AWS since 2013, when he built a voting system for a big live TV show in Germany. Since then, he has become a big fan of the cloud, AWS, and domain-driven design.

Updated requirements for US toll-free phone numbers

Post Syndicated from Brent Meyer original https://aws.amazon.com/blogs/messaging-and-targeting/updated-requirements-for-us-toll-free-phone-numbers/

Many Amazon Pinpoint customers use toll-free phone numbers to send messages to their customers in the United States. A toll-free number is a 10-digit number that begins with one of the following three-digit codes: 800, 888, 877, 866, 855, 844, or 833. You can use toll-free numbers to send both SMS and voice messages to recipients in the US.

What’s changing

Historically, US toll-free numbers have been available to purchase with no registration required. To prevent spam and other types of abuse, US mobile carriers now require new toll-free numbers to be registered as well. The carriers also require all existing toll-free numbers to be registered by September 30, 2022, and will block SMS messages sent from unregistered toll-free numbers after this date.

If you currently use toll-free numbers to send SMS messages, you must complete this registration process for both new and existing toll-free numbers. We’re committed to helping you comply with these changing carrier requirements.

Information you provide as part of this registration process will be provided to the US carriers through our downstream partners. It can take up to 15 business days for your registration to be processed. To help prevent disruptions of service with your toll-free number, you should submit your registration no later than September 12th, 2022.

Requesting new toll-free numbers

Starting today, when you request a United States toll-free number in the Amazon Pinpoint console, you’ll see a new page that you can use to register your use case. Your toll-free number registration must be completed and verified before you can use it to send SMS messages. For more information about completing this registration process, see US toll-free number registration requirements and process in the Amazon Pinpoint User Guide.

Registering existing toll-free numbers

You can also use the Amazon Pinpoint console to register toll-free numbers that you already have in your account. For more information about completing the registration process for existing toll-free numbers, see US toll-free number registration requirements and process in the Amazon Pinpoint User Guide.

In closing

Change is a constant factor in the SMS and voice messaging industry. Carriers often introduce new processes in order to protect their customers. The new registration requirements for toll-free numbers are a good example of these kinds of changes. We’ll work with you to help make sure that these changes have minimal impact on your business. If you have any concerns about these changing requirements, open a ticket in the AWS Support Center.

Collaboration Drives Secure Cloud Innovation: Insights From AWS re:Inforce

Post Syndicated from Jesse Mack original https://blog.rapid7.com/2022/08/02/collaboration-drives-secure-cloud-innovation-insights-from-aws-re-inforce/

Collaboration Drives Secure Cloud Innovation: Insights From AWS re:Inforce

This year’s AWS re:Inforce conference brought together a wide range of organizations that are shaping the future of the cloud. Last week in Boston, cloud service providers (CSPs), security vendors, and other leading organizations gathered to discuss how we can go about building cloud environments that are both safe and scalable, driving innovation without sacrificing security.

This array of attendees looks a lot like the cloud landscape itself. Multicloud architectures are now the norm, and organizations have begun to search for ways to bring their lengthening lists of vendors together, so they can gain a more cohesive picture of what’s going on in their environment. It’s a challenge, to be sure — but also an opportunity.

These themes came to the forefront in one of Rapid7’s on-demand booth presentations at AWS re:Inforce, “Speeding Up Your Adoption of CSP Innovation.” In this talk, Chris DeRamus, VP of Technology – Cloud Security at Rapid7, sat down with Merritt Baer — Principal, Office of the CISO at AWS — and Nick Bialek — Lead Cloud Security Engineer at Northwestern Mutual — to discuss how organizations can create processes and partnerships that help them quickly and securely utilize new services that CSPs roll out. Here’s a closer look at what they had to say.

Building a framework

The first step in any security program is drawing a line for what is and isn’t acceptable — and for many organizations, compliance frameworks are a key part of setting that baseline. This holds true for cloud environments, especially in highly regulated industries like finance and healthcare. But as Merritt pointed out, what that framework looks like varies based on the organization.

“It depends on the shop in terms of what they embrace and how that works for them,” she said. Benchmarks like CIS and NIST can be a helpful starting point in moving toward “continuous compliance,” she noted, as you make decisions about your cloud architecture, but the journey doesn’t end there.

For example, Nick said he and his team at Northwestern Mutual use popular compliance benchmarks as a foundation, leveraging curated packs within InsightCloudSec to give them fast access to the most common compliance controls. But from there, they use multiple frameworks to craft their own rigorous internal standards, giving them the best of all worlds.

The key is to be able to leverage detective controls that can find noncompliant resources across your environment so you can take automated actions to remediate — and to be able to do all this from a single vantage point. For Nick’s team, that is InsightCloudSec, which provides them a “single engine to determine compliance with a single set of security controls, which is very powerful,” he said.

Evaluating new services

Consolidating your view of the cloud environment is critical — but when you want to bring on a new service and quickly evaluate it for risk, Merritt and Nick agreed on the importance of embracing collaboration and multiplicity. When it’s working well, a multicloud approach can allow this evaluation process to happen much more quickly and efficiently than a single organization working on their own.

“We see success when customers are embracing this deliberate multi-account architecture,” Merritt said of her experience working with AWS users.

At Northwestern Mutual, Nick and his team use a group evaluation approach when onboarding a new cloud service. They’ll start the process with the provider, such as AWS, then ask Rapid7 to evaluate the service for risks. Finally, the Northwestern Mutual team will do an assessment that pays close attention to the factors that matter most to them, like disaster recovery and identity and access management.

This model helps Nick and his team realize the benefits of the cloud. They want to be able to consume new services quickly so they can innovate at scale, but their team alone can’t keep up with the work needed to fully vet each new resource for risks. They need a partner that can help them keep pace with the speed and elasticity of the cloud.

“You need someone who can move fast with you,” Nick said.

Automating at scale

Another key component of operating quickly and at scale is automation. “Reducing toil and manual work,” as Nick put it, is essential in the context of fast-moving and complex cloud environments.

“The only way to do anything at scale is to leverage automation,” Merritt insisted. Shifting security left means weaving it into all decisions about IT architecture and application development — and that means innovation and security are no longer separate ideas, but simultaneous parts of the same process. When security needs to keep pace with development, being able to detect configuration drift and remediate it with automated actions can be the difference between success and stalling out.

Plus, who actually likes repetitive, manual tasks anyway?

“You can really put a lot of emphasis on narrowing that gray area of human decision-making down to decisions that are truly novel or high-stakes,” Merritt said.

This leveling-up of decision-making is the real opportunity for security in the age of cloud, Merritt believes. Security teams get to be freed from their former role as “the shop of no” and instead work as innovators to creatively solve next-generation problems. Instead of putting up barriers, security in the age of cloud means laying down new roads — and it’s collaboration across internal teams and with external vendors that makes this new model possible.

Additional reading:

NEVER MISS A BLOG

Get the latest stories, expertise, and news about security today.

[VIDEO] An Inside Look at AWS re:Inforce 2022 From the Rapid7 Team

Post Syndicated from Jesse Mack original https://blog.rapid7.com/2022/07/29/video-an-inside-look-at-aws-re-inforce-2022-from-the-rapid7-team/

[VIDEO] An Inside Look at AWS re:Inforce 2022 From the Rapid7 Team

The summer of conferences rolls on for the cybersecurity and tech community — and for us, the excitement of being able to gather in person after two-plus years still hasn’t worn off. RSA was the perfect kick-off to a renewed season of security together, and we couldn’t have been happier that our second big stop on the journey, AWS re:Inforce, took place right in our own backyard in Boston, Massachusetts — home not only to the Rapid7 headquarters but also a strong and vibrant community of cloud, security, and other technology pros.

We asked three of our team members who attended the event — Peter Scott, VP Strategic Enablement – Cloud Security; Ryan Blanchard, Product Marketing Manager – InsightCloudSec; and Megan Connolly, Senior Security Solutions Engineer — to answer a few questions and give us their experience from AWS re:Inforce 2022. Here’s what they had to say.

What was your most memorable moment from AWS re:Inforce this year?



[VIDEO] An Inside Look at AWS re:Inforce 2022 From the Rapid7 Team

What was your biggest takeaway from the conference? How will it shape the way you think about cloud and cloud security practices in the months to come?



[VIDEO] An Inside Look at AWS re:Inforce 2022 From the Rapid7 Team

Thanks to everyone who came to say hello and talk cloud with us at AWS re:Inforce. We hope to see the rest of you in just under two weeks at Black Hat 2022 in Las Vegas!

Additional reading:

NEVER MISS A BLOG

Get the latest stories, expertise, and news about security today.

Rapid7 at AWS re:Inforce: 2 Big Announcements

Post Syndicated from Aaron Sawitsky original https://blog.rapid7.com/2022/07/26/rapid7-at-aws-re-inforce-2-big-announcements/

Rapid7 at AWS re:Inforce: 2 Big Announcements

This year’s AWS re:Inforce conference in Boston has been jam-packed with thrilling speakers, deep insights on all things cloud, and some much-needed in-person collaboration from all walks of the technology community. It also coincides with some exciting announcements from AWS — and we’re honored to be a part of two of them. Here’s a look at how Rapid7 is building on our existing partnership with Amazon Web Services to help organizations securely advance in today’s cloud-native business landscape.

InsightIDR awarded AWS Security Competency

For seven years, AWS has issued security competencies to partners who have a proven track record of helping customers secure their AWS environments. Today at re:Inforce, AWS re-launched their Security Competency program, so that it better aligns with customers’ constantly evolving security challenges. Rapid7 is proud to be included in this re-launch, having obtained a security competency under the new criteria for its InsightIDR solution in the Threat Detection and Response category. This is Rapid7’s second AWS security competency and fourth AWS competency.

This designation recognizes that InsightIDR has demonstrated and successfully met AWS’s technical and quality requirements for providing customers with a deep level of software expertise in security information and event management (SIEM), helping them achieve their cloud security goals.

InsightIDR integrates with a number of AWS services, including CloudTrail, GuardDuty, S3, VPC Traffic Mirroring, and SQS. InsightIDR’s UEBA feature includes dedicated AWS detections. The Insight Agent can be installed on EC2 instances for continuous monitoring. InsightIDR also features an out-of-the-box honeypot purpose-built for AWS environments. Taken together, these integrations and features give AWS customers the threat detection and response capabilities they need, all in a SaaS solution that can be deployed in a matter of weeks.

Adding another competency to Rapid7’s repertoire reaffirms our commitment to giving organizations the tools they need to innovate securely in a cloud-first world.

Rapid7 named a launch partner for AWS GuardDuty Malware Protection

Malware Protection is the new malware detection capability AWS has added to their GuardDuty service — and we’re honored to join them as a launch partner, with two products that support this new GuardDuty functionality.

GuardDuty is AWS’s threat detection service. It monitors AWS environments for suspicious behavior. Malware Protection introduces a new type of detection capability to GuardDuty. When GuardDuty fires an alert that’s related to an Amazon Elastic Compute Cloud (EC2) instance or a container running on EC2, Malware Protection will automatically run a scan on the instance in question and detect malware using machine learning and threat intelligence. When trojans, worms, rootkits, crypto miners, or other forms of malware are detected, they appear as new findings in GuardDuty, so security teams can take the right remediation actions.

Rapid7 customers can ingest GuardDuty findings (including the new malware detections) into InsightIDR and InsightCloudSec. In InsightIDR, each type of GuardDuty finding can be treated as a notable behavior or as an alert which will automatically trigger a new investigation. This allows security teams to know the instant suspicious activity is detected in their AWS environment and react accordingly. Should an investigation be triggered, teams can use InsightIDR’s native automation capabilities to enrich the data from GuardDuty, quarantine a user, and more. In the case where GuardDuty detects malware, teams can pull additional data from the Insight agent and even terminate malicious processes. In addition, customers can use InsightIDR’s Dashboards capability to keep an eye on GuardDuty and spot trends in the findings.

InsightCloudSec customers can likewise build automated bots that automatically react to GuardDuty findings. When GuardDuty has detected malware, a customer might configure a bot that terminates the infected instance. Alternatively, a customer might choose to reconfigure the instance’s security group to effectively isolate it while the team investigates. The options are practically endless.

Rapid7 and AWS continue to deepen partnership to protect your cloud workloads

AWS re:Inforce 2022 provides a welcome opportunity for the community to come together and share insights about managing and securing cloud environments, and we can’t think of better timing to announce these two areas of partnership with AWS. Click here to learn more about what we’re up to at this year’s AWS re:Inforce conference in Boston.

Additional reading:

NEVER MISS A BLOG

Get the latest stories, expertise, and news about security today.

What We’re Looking Forward to at AWS re:Inforce

Post Syndicated from Jesse Mack original https://blog.rapid7.com/2022/07/25/what-were-looking-forward-to-at-aws-re-inforce/

What We’re Looking Forward to at AWS re:Inforce

AWS re:Inforce 2022 starts tomorrow — Tuesday, July 26th — and we couldn’t be more excited to gather with the tech, cloud, and security communities in our home city of Boston. Here’s a sneak peek of the highlights to come at re:Inforce and what we’re looking forward to the most this Tuesday and Wednesday.

Expert insights at the Rapid7 booth

After two and a half years of limited in-person gatherings, we have kind of a lot to say. That’s why we’re making the Rapid7 booth at AWS re:Inforce a hub for learning and sharing from our cybersecurity experts. Stop by and learn how our team members are tackling a range of topics in cloud and security overall, including:

  • Adapting Your VM Program for Cloud-Native Environments — Jimmy Green, VP of Software Engineering for Cloud, will walk through some of the key considerations when building a fully cloud-first approach to vulnerability management.
  • Speeding Up Your Adoption of CSP Innovation — Chris DeRamus, VP of Technology – Cloud, will detail how Rapid7 evaluates cloud service providers (CSPs) for risk in order to promote faster, more secure adoption.
  • Context Is King: The Future of Cloud Security Operations — Peter Scott, VP of Strategic Engagement for Cloud Security, will discuss why obtaining context around security data is key to managing complexity in cloud environments.
  • Hybrid Is Here: Is Your SOC Ready? — Megan Connolly, Senior Security Solutions Engineer, will highlight the role that extended detection and response (XDR) technology can play in helping SOCs move toward a cloud-first model.
  • InsightCloudSec Demo — Joe Brumbley, Cloud Security Solutions Engineer, and Sean Healy, Senior Domain Engineer – Enterprise Cloud Security, will show InsightCloudSec in action, taking you through the different use cases and features that enable integrated security for multi-cloud environments.

Sharing how we walk the walk

At Rapid7, we’re laser-focused on helping companies accelerate in the cloud without compromising security. Our technology and expertise help security teams bring that vision to life — and they form the foundation for how we secure our own cloud infrastructure, too.

In the AWS re:Inforce featured session, “Walking the Walk: AWS Security at Rapid7,” Ulrich Dangle (Director, Software Engineering – Platform) and Lauren Clausen Fenton (Manager, Software Engineering – Platform) will share their firsthand experiences developing, scaling, and operationalizing a cloud security program at Rapid7. They’ll talk about how they manage to reduce risk while supporting Rapid7’s business goals, as well as the needs of our fast-moving DevOps team.

Join us on Tuesday, July 26th, at 11:40 AM, or Wednesday, July 27th, at 10:05 AM to learn how our security team is working around-the-clock to keep our large cloud environment secure and compliant, with standardized configurations and a tried-and-true threat response playbook.

Conversations over cloudy beers

It’s no secret that great craft beer is an integral part of tech culture — so where better to talk about all things cloud than a Boston brewery known for the cloudy appearance of its hazy New England IPAs?

On Tuesday, July 26th, from 5:15 PM to 8 PM, we’ll be at Rapid7 Reception at Trillium Fort Point, right in the heart of the Seaport District. It’s a perfect chance to network with your fellow protectors and meet some of our Rapid7 security experts over a double dry-hopped pale ale or a nitro milk stout. (If beer’s not your thing, not to worry — we’ll have wine and seltzer, too.)

If that wasn’t enough…

Last but not least, we’re giving away a vacation of your choice valued at $5,000! The more you engage with us at re:Inforce, the more chances you have to win. You’ll be entered in the drawing when you stop by to see us at Booth 206 to receive a demo or watch a presentation, or when you attend the Rapid7 Reception at Trillium Fort Point.

Check out what we have planned and register with us today!

4 Strategies for Achieving Greater Visibility in the Cloud

Post Syndicated from Jesse Mack original https://blog.rapid7.com/2022/07/20/4-strategies-for-achieving-greater-visibility-in-the-cloud/

4 Strategies for Achieving Greater Visibility in the Cloud

The cloud giveth, and the cloud taketh away. It giveth development teams the speed and scale to get applications into production and deployment faster than ever; it taketh away security teams’ comfort that they know exactly what’s going on in their environment.

Much has been said about the inherently slippery and hard-to-pin-down nature of the cloud in recent months — who thought the word “ephemeral” would appear in as much technology content as it has in 2022? The conversation has grown more critical as high-impact open-source vulnerabilities have proliferated just as fast as multi-cloud architectures have become the standard operating model in IT.

In this context, achieving cross-environment visibility — i.e., the very thing the cloud makes difficult — has become more critical than ever. While it may seem like an uphill battle, one we’re fighting against the very nature of the cloud, there are some strategies that can help in the effort. Here are four ways to put visibility at the center of your cloud security approach and understand what’s going on in your environment with greater clarity.

1. Take an inventory

Multi-cloud environments are now the dominant model, with 89% of organizations using this approach. As distributed architectures become the norm and the number of cloud providers in play at any given organization continues to climb, it becomes difficult to understand exactly what services are in use at any given time. This is where the problem of cloud visibility really starts — “What services are actually in our environment?” becomes a complex question to answer.

Parting the clouds of confusion and gaining visibility begins with getting a complete asset inventory, so you can understand what components are in your environment and clearly evaluate the risk associated with them.

That’s why it’s critical that your cloud security solution can provide a single, standardized asset inventory across all cloud service providers. This capability provides the foundation for many of the subsequent steps that help promote visibility for security teams, including consolidating policy management and spotting cloud misconfigurations.

2. Monitor from one vantage point, not many

With a cohesive inventory of all cloud assets in place, the next step is to monitor the environment — and as you might have guessed, monitoring from a centralized hub is another key way to promote big-picture visibility. But with multiple cloud providers and SaaS solutions, each with their own data and dashboards, actually achieving that consolidated view is easier said than done.

A cloud security tool that provides centralized monitoring can let you see the full picture of activity across a multi-cloud environment. This level of clarity will help you evaluate risks not just at the level of an individual cloud service but holistically, in the environment as a whole. And with developers working in a variety of platforms to innovate and iterate as quickly as possible, centralized monitoring also helps you quickly identify and remediate any issues that arise during development, such as unwanted configurations or compliance issues.

3. Prioritize risks through analytics

Alert fatigue is one of the biggest contributors to the noisiness that inundates security teams. Security operations center (SOC) analysts know this all too well when they’re faced with huge volumes of alerts from a security information and event management (SIEM) solution. Especially when there’s a continued shortage of cybersecurity talent, there just aren’t enough hours in the day to chase down every alert.

A similar effect can take hold when monitoring cloud environments for risks and vulnerabilities. With increased complexity thanks to a growing number of services and a multitude of endpoints, how do you know what risks to prioritize and tackle first?

Analytics can help shed light on this often-cloudy picture, utilizing algorithms to set a baseline for “normal” activity, spot anomalies, and prioritize them based on severity. It’s one way to gain context into the data without actually being able to get the whole story as quickly as you need it. Some cloud security solutions provide these insights through integrations with cloud service provider (CSP) tools like Amazon GuardDuty, which continuously monitors for malicious activity in AWS environments.

4. Embrace automation

The first three steps are all about how security teams can collect and interpret data to more fully understand their cloud environments — but data is only as good as what you do with it. That’s where automation comes in: It helps standardize the remediation steps that occur after a security risk is identified.

Automation is often thought of as a means to increase speed and efficiency — and that’s certainly true. Being able to automatically set specific remediation actions in motion when a threat is detected can help reduce the time and effort it takes to mitigate the issue and reduce its potential impact. But automation can also be a key toward improving visibility.

When you’re looking back at a now-resolved security issue, understanding the timeline and sequence of events often becomes a hazy picture, especially when your team is working with increased urgency and speed. If you’ve set up automated actions as a standardized part of the remediation process, you won’t need to ask as many questions about what mitigation steps were taken, when, and who authorized them. There will surely be a large human element involved in mitigating cloud security issues, but automation can help provide structure and repeatability, streamlining the process and reducing the number of places where confusion can creep in.

How are you handling cloud visibility challenges?

How to secure cloud environments effectively is an ongoing, dynamic conversation, and new difficulties surely lie ahead — but when security practitioners face challenges, they tend (rightly) to turn to their best and most reliable resource: each other.

What kinds of challenges is your team facing when it comes to achieving visibility in the cloud? Come chat with us at AWS re:Inforce on July 26-27, 2022 — we want to hear how you’re tackling these issues as you work toward fully cloud-native security.

Deploy tCell More Easily With the New AWS AMI Agent

Post Syndicated from Tom Caiazza original https://blog.rapid7.com/2022/07/18/deploy-tcell-more-easily-with-the-new-aws-ami-agent/

Deploy tCell More Easily With the New AWS AMI Agent

Rapid7’s tCell is a powerful tool that allows you to monitor risk and protect web applications and APIs in real time. Great! It’s a fundamental part of our push to make web application security as strong and comprehensive as it needs to be in an age when web application attacks account for roughly 70% of cybersecurity incidents.

But with that power comes complexity, and we know that not every customer has the same resources available both in-house or externally to leverage tCell in all its glory right out of the box. With our newest agent addition, we’re hoping to make that experience a little bit easier.

AWS AMI Agent for tCell

We’ve introduced the AWS AMI Agent for tCell, which makes it easier to deploy tCell into your software development life cycle (SDLC) without the need to manually configure tCell. If you aren’t as familiar with deploying web apps and need help getting tCell up and running, you can now deploy tCell with ease and get runtime protection on your apps within minutes.

If you use Amazon Web Services (AWS), you can now quickly launch a tCell agent with NGINX as a reverse proxy. This is placed in front of your existing web app without having to make development or code changes. To make things even easier, the new AWS AMI Agent even comes pre-equipped with a helper utility (with the NGINX agent pre-installed) that allows you to configure your tCell agent in a single command.

Shift left seamlessly

So why is this such an important new deployment method for tCell customers? Simply put, it’s a way to better utilize and understand tCell before making a case to your team of developers. To get the most out of tCell, it’s best to get buy-in from your developers, as deployment efforts traditionally can require bringing the dev team into the fold in a significant way.

With the AWS AMI Agent, your security team can utilize tCell right away, with limited technical knowledge, and use those learnings (and security improvements) to make the case that a full deployment of the tCell agent is in your dev team’s best interest. We’ve seen this barrier with some existing customers and with the overall shift-left approach within the web application community at large.

This new deployment offering is a way for your security team to get comfortable with the benefits (and there are many) of securing your web applications with tCell. They will better understand how to secure AWS-hosted web apps and how the two products work together seamlessly.

If you’d like to give it a spin, we recommend heading over to the docs to find out more.

The AWS AMI Agent is available to all existing tCell customers right now.

Additional reading:

NEVER MISS A BLOG

Get the latest stories, expertise, and news about security today.

Zabbix 6.2 is out now!

Post Syndicated from Arturs Lontons original https://blog.zabbix.com/zabbix-6-2-is-out-now/21602/

The Zabbix team is pleased to announce the release of the latest Zabbix major version – Zabbix 6.2! The latest version delivers features aimed at improving configuration management and performance on large Zabbix instances as well as extending the flexibility of the existing Zabbix functionality.

New features

A brief overview of the major new features available with the release of Zabbix 6.2:

  • Ability to suppress individual problems
    • Suppress problems indefinitely or until a specific point in time
  • Support of CyberArk vault for secret storage
  • Official AWS EC2 template
    • Discover and monitor AWS EC2 performance statistics, alarms, and AWS EBS volumes
  • Ability to synchronize Zabbix proxy configuration directly from Zabbix frontend
    • Configuration synchronization is supported by active and passive proxies
  • Improved flexibility for hosts discovered from host prototypes
    • Link additional templates
    • Create and modify user macros
    • Populate the host with new tags
  • New items for VMware monitoring
  • The ability to further customize the hosts discovered by VMware discovery
  • Active agent check status can now be tracked from Zabbix frontend
  • Incremental configuration synchronization
    • Faster configuration synchronization
    • Reduced configuration synchronization performance impact
  • Newly created items are now checked within a minute after their creation
  • Execute now functionality is now available from the Latest data section
  • A warning message is now displayed when performing Execute now on items that do not support it
  • Templates are now grouped in template groups, instead of host groups
    • Improved host and template filtering
  • Multiple LDAP servers can now be defined and saved under Authentication – LDAP settings
  • Ability to collect Windows registry key values with the new registry monitoring items
  • New item for OS process discovery and collecting individual process statistics
  • New digital clock widget
  • The default Global view dashboard has been updated with the latest Zabbix widgets
  • The Graph widget has been further improved
    • Added stacked graph support
    • Legend now provides additional information
    • Added support of simple trigger display
  • UI forms now provide direct links to the relevant documentation sections
  • Many other improvements and features

New templates and integrations

Zabbix 6.2 comes pre-packaged with many new templates for the most popular vendors:

  • Envoy proxy
  • HashiCorp Consul
  • AWS EC2 Template
  • CockroachDB
  • TrueNAS
  • HPE MSA 2060 & 2040
  • HPE Primera
  • The S.M.A.R.T. monitoring template has received improvements

Zabbix 6.2 introduces a webhook integration for the GLPI IT Asset Management solution. This webhook can be used to forward problems created in Zabbix to the GLPI Assistance section.

Zabbix 6.2 packages and images

The official Zabbix packages and images are available for:

  • Linux distributions on different hardware platforms: RHEL, CentOS, Oracle Linux, Debian, SUSE, Ubuntu, Raspbian, Alma Linux, Rocky Linux
  • Virtualization platforms based on VMware, VirtualBox, Hyper-V, XEN
  • Docker
  • Packages and precompiled agents for most popular platforms, including macOS and MSI packages for Windows

You can find the download instructions and download the new version on the Download page: https://www.zabbix.com/download

One-click deployments for the following cloud platforms are coming soon:

  • AWS, Azure, Google Cloud, Digital Ocean, Linode, Oracle Cloud, Red Hat OpenShift

Upgrading to Zabbix 6.2

In order to upgrade to Zabbix 6.2, you need to upgrade your repository package and download and install the new Zabbix component packages (Zabbix server, proxy, frontend, and other Zabbix components). When you start the Zabbix Server, an automatic database schema upgrade will be performed. Zabbix agents are backward compatible; therefore, it is not required to install the new agent versions. You can do it at a later time if needed.

If you’re using the official Docker container images – simply deploy a new set of containers for your Zabbix components. Once the Zabbix server container connects to the backend database, the database upgrade will be performed automatically.

You can find step-by-step instructions for the upgrade process to Zabbix 6.2 in the Zabbix documentation.

Join the webinar

If you wish to learn more about the Zabbix 6.2 features and improvements, we invite you to join our What’s new in Zabbix 6.2 public webinar.

During the webinar, you will get the opportunity to:

  • Learn about the Zabbix 6.2 features and improvements
  • See the latest Zabbix templates and integrations
  • Participate in a Q&A session with Zabbix founder and CEO Alexei Vladishev
  • Discuss the latest Zabbix version with Zabbix community and Zabbix team members
Anyone can sign up and attend the webinar at absolutely no cost.

Don’t hesitate and sign up for the webinar now!

The post Zabbix 6.2 is out now! appeared first on Zabbix Blog.

Analyzing Amazon SES event data with AWS Analytics Services

Post Syndicated from Oscar Mendoza original https://aws.amazon.com/blogs/messaging-and-targeting/analyzing-amazon-ses-event-data-with-aws-analytics-services/

In this post, we will walk through using AWS services such as Amazon Kinesis Data Firehose, Amazon Athena, and Amazon QuickSight to monitor Amazon SES email sending events with the granularity and level of detail required to get insights into how your customers engage with the emails you send.

Nowadays, email marketers rely on internal applications to create their campaigns and other communications, such as newsletters or promotional content. From those activities, they need to collect as much information as possible to analyze and improve their pipeline and achieve better interaction with customers. Data such as bounces, rejections, successful receptions, delivery delays, complaints, and open rates can be a powerful tool for understanding customers. Applications usually work with high-level data points, without the detailed logging or granular information that could further improve the effectiveness of campaigns.

Amazon Simple Email Service (SES) is a smart tool for companies that want a cost-effective, flexible, and scalable email service solution that integrates easily with their own products. Amazon SES provides methods to control your sending activity through built-in integration with Amazon CloudWatch metrics, and also provides a mechanism to collect email sending event data.

In this post, we propose an architecture and step-by-step guide to track your email sending activities at a granular level, where you can configure several types of email sending events, including sends, deliveries, opens, clicks, bounces, complaints, rejections, rendering failures, and delivery delays. We will use the configuration set feature of Amazon SES to send detailed logging to our analytics services to store, query and create dashboards for a detailed view.

Overview of solution

This architecture uses Amazon SES built-in features and AWS analytics services to provide a quick and cost-effective solution to address your mail tracking requirements. The following services will be implemented or configured:

  • Amazon Simple Email Service (SES)
  • Amazon Kinesis Data Firehose
  • Amazon S3
  • AWS Glue Data Catalog
  • Amazon Athena
  • Amazon QuickSight

The following diagram shows the architecture of the solution:

Serverless Architecture to Analyze Amazon SES events

Figure 1. Serverless Architecture to Analyze Amazon SES events

The flow of events starts when a customer uses Amazon SES to send an email. Each send event is captured by the configuration set feature and forwarded to a Kinesis Data Firehose delivery stream, which buffers and stores those events in an Amazon S3 bucket.
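
As a sketch of the first step in this flow, the example below sends an email with a configuration set attached, so that all resulting events are published to the destinations configured on that set. This is a minimal sketch: the addresses and configuration set name are placeholders, and the configuration set itself is created in Step 2 of the walkthrough.

import boto3

ses = boto3.client("ses", region_name="us-east-1")

# Attaching ConfigurationSetName ties this message to the event
# destinations (Kinesis Data Firehose in this post) defined on the set.
response = ses.send_email(
    Source="sender@example.com",  # a verified identity (placeholder)
    Destination={"ToAddresses": ["recipient@example.com"]},  # placeholder
    Message={
        "Subject": {"Data": "Monthly newsletter"},
        "Body": {"Text": {"Data": "Hello from our campaign!"}},
    },
    ConfigurationSetName="ses-tracking-set",  # placeholder name
)
print("SES Message ID:", response["MessageId"])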

After storing the events, you need to create a database and table schema and store it in the AWS Glue Data Catalog so that Amazon Athena can properly query those events in S3. Finally, we will use Amazon QuickSight to create interactive dashboards to search and visualize all your sending activity at an individual email level of detail.

Prerequisites

For this walkthrough, you should have the following prerequisites:

Walkthrough

Step 1: Use AWS CloudFormation to deploy some additional prerequisites

You can get started with our sample AWS CloudFormation template that includes some prerequisites. This template creates an Amazon S3 bucket, a Kinesis Data Firehose delivery stream, and the IAM role that allows Amazon SES to deliver events to Kinesis Data Firehose.

To download the CloudFormation template, run one of the following commands, depending on your operating system:

On Windows:

curl https://raw.githubusercontent.com/aws-samples/amazon-ses-analytics-blog/main/SES-Blog-PreRequisites.yml -o SES-Blog-PreRequisites.yml

On macOS:

wget https://raw.githubusercontent.com/aws-samples/amazon-ses-analytics-blog/main/SES-Blog-PreRequisites.yml

To deploy the template, use the following AWS CLI command:

aws cloudformation deploy --template-file ./SES-Blog-PreRequisites.yml --stack-name ses-dashboard-prerequisites --capabilities CAPABILITY_NAMED_IAM

After the template finishes creating resources, you see the IAM Service role and the Delivery Stream on the stack Outputs tab. You are going to use these resources in the following steps.

IAM Service role and Delivery Stream created by CloudFormation template

Figure 2. CloudFormation template outputs

Step 2: Creating a configuration set in SES and setting the default configuration set for a verified identity

SES can track the number of send, delivery, open, click, bounce, and complaint events for each email you send. You can use event publishing to send information about these events to other AWS services. In this case, we are going to send the events to Kinesis Data Firehose. To do this, a configuration set is required.
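
The console steps that follow create the configuration set and event destination interactively. If you prefer to script this step, a roughly equivalent sketch using the SES v2 API is shown below; the configuration set name, destination name, and ARNs are placeholders, and the field names should be verified against the current boto3 documentation.

import boto3

sesv2 = boto3.client("sesv2")

# Create the configuration set, then route the selected SES event types
# to the Kinesis Data Firehose delivery stream created in Step 1.
sesv2.create_configuration_set(ConfigurationSetName="ses-tracking-set")  # placeholder

sesv2.create_configuration_set_event_destination(
    ConfigurationSetName="ses-tracking-set",
    EventDestinationName="firehose-analytics",  # placeholder name
    EventDestination={
        "Enabled": True,
        "MatchingEventTypes": [
            "SEND", "DELIVERY", "OPEN", "CLICK",
            "BOUNCE", "COMPLAINT", "REJECT", "DELIVERY_DELAY",
        ],
        "KinesisFirehoseDestination": {
            "IamRoleArn": "arn:aws:iam::123456789012:role/ses-firehose-role",  # placeholder ARN
            "DeliveryStreamArn": "arn:aws:firehose:us-east-1:123456789012:deliverystream/ses-events",  # placeholder ARN
        },
    },
)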

To create a configuration set, complete the following steps:

  1. In the AWS Management Console, navigate to Amazon Simple Email Service.
  2. Choose Configuration sets.
  3. Click on Create set.

    Create a configuration set in Amazon SES

    Figure 3. Amazon SES Create Configuration Set

  4. Set a Configuration set name.
  5. Leave the other settings at their default values.

    Write a name for your configuration set

    Figure 4. Configuration Set Name

  6. Once the configuration set is created, select Event destinations.

    Configuration set created successfully

    Figure 5. Configuration set created successfully

  7. Click on Add destination
  8. Select the event types you would like to analyze, and then click on Next.

    Sending Events to analyze

    Figure 6. Sending Events to analyze

  9. Select Amazon Kinesis Data Firehose as the destination, choose the delivery stream and the IAM role created previously, click on Next, and on the review page, click on Add destination.

    Destination for Amazon SES sending events

    Figure 7. Destination for Amazon SES sending events

  10. Once you have created the configuration set and added the event destination, you can define the Default configuration set for the verified identity (domain or email address). In the SES console, choose Verified identities.

    Amazon SES Verified Identity

    Figure 8 Amazon SES Verified Identity

  11. Choose the verified identity from which you want to collect events and select Configuration set. Click on Edit.

    Edit Configuration Set for Verified Identity

    Figure 9. Edit Configuration Set for Verified Identity

  12. Click on the checkbox Assign a default configuration set and choose the configuration set created previously.

    Assign default configuration set

    Figure 10. Assign default configuration set

  13. Once you have completed the previous steps, your events will be sent to Amazon S3. Due to the buffer configuration on the Kinesis Data Firehose delivery stream, data is loaded to Amazon S3 every 5 minutes or every 5 MiB, whichever comes first. You can check the structure created in the bucket and see the JSON logs with SES event data.

    Amazon S3 bucket structure

    Figure 11. Amazon S3 bucket structure

Step 3: Using Amazon Athena to query the SES event logs

Amazon SES publishes email sending event records to Amazon Kinesis Data Firehose in JSON format. The top-level JSON object contains an eventType string, a mail object, and either a Bounce, Complaint, Delivery, Send, Reject, Open, Click, Rendering Failure, or DeliveryDelay object, depending on the type of event.

  1. In order to simplify the analysis of email sending events, create the sesmaster table by running the following script in Amazon Athena. Don’t forget to replace the LOCATION value in the script with the bucket that contains your email sending event data.
    CREATE EXTERNAL TABLE sesmaster (
    
    eventType string,
    
    complaint struct<arrivaldate:string,
    complainedrecipients:array<struct<emailaddress:string>>,
    complaintfeedbacktype:string,
    feedbackid:string,
    `timestamp`:string,
    useragent:string>,
    
    bounce struct<bouncedrecipients:array<struct<action:string,
    diagnosticcode:string,
    emailaddress:string,
    status:string>>,
    bouncesubtype:string,
    bouncetype:string,
    feedbackid:string,
    reportingmta:string,
    `timestamp`:string>,
    
    mail struct<`timestamp`:string,
    source:string,
    sourceArn:string,
    sendingAccountId:string,
    messageId:string,
    destination:string,
    headersTruncated:boolean,
    headers:array<struct<name:string,
    value:string>>,
    commonHeaders:struct<`from`:array<string>,
    to:array<string>,
    messageId:string,
    subject:string>,
    tags:struct<ses_configurationset:string,
    ses_source_ip:string,
    ses_outgoing_ip:string,
    ses_from_domain:string,
    ses_caller_identity:string> >,
    
    send string,
    
    delivery struct<processingtimemillis:int,
    recipients:array<string>,
    reportingmta:string,
    smtpresponse:string,
    `timestamp`:string>,
    
    open struct<ipaddress:string,
    `timestamp`:string,
    userAgent:string>,
    
    reject struct<reason:string>,
    
    click struct<ipAddress:string,
    `timestamp`:string,
    userAgent:string,
    link:string>
    
    ) 
    ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'
    WITH SERDEPROPERTIES (
    "mapping.ses_configurationset"="ses:configuration-set" , "mapping.ses_source_ip"="ses:source-ip" , 
    "mapping.ses_from_domain"="ses:from-domain" , "mapping.ses_caller_identity"="ses:caller-identity" , 
    "mapping.ses_outgoing_ip"="ses:outgoing-ip" ) LOCATION 's3://aws-s3-ses-analytics-<aws-account-number>/'

    The sesmaster table uses the org.openx.data.jsonserde.JsonSerDe SerDe library to deserialize the JSON data.

    We have leveraged the support for JSON arrays and maps and for nested data structures. Those features ease the process of preparing and visualizing the data.

    In the sesmaster table, the following mappings were applied to avoid errors caused by JSON field names containing colons.

    • “mapping.ses_configurationset”=”ses:configuration-set”
    • “mapping.ses_source_ip”=”ses:source-ip”
    • “mapping.ses_from_domain”=”ses:from-domain”
    • “mapping.ses_caller_identity”=”ses:caller-identity”
    • “mapping.ses_outgoing_ip”=”ses:outgoing-ip”
  2. Once the sesmaster table is ready, it is a good strategy to create curated views of its data. The first view called vwSESMaster contains all the records of email sending events and all the fields which are unique on each event. Create the vwSESMaster view by running the following script in Amazon Athena.
    CREATE OR REPLACE VIEW vwSESMaster AS
    SELECT
    eventtype as eventtype
    , mail.messageId as mailmessageid
    , mail.timestamp as mailtimestamp
    , mail.source as mailsource
    , mail.sendingAccountId as mailsendingAccountId
    , mail.commonHeaders.subject as mailsubject
    , mail.tags.ses_configurationset as mailses_configurationset
    , mail.tags.ses_source_ip as mailses_source_ip
    , mail.tags.ses_from_domain as mailses_from_domain
    , mail.tags.ses_outgoing_ip as mailses_outgoing_ip
    , delivery.processingtimemillis as deliveryprocessingtimemillis
    , delivery.reportingmta as deliveryreportingmta
    , delivery.smtpresponse as deliverysmtpresponse
    , delivery.timestamp as deliverytimestamp
    , delivery.recipients[1] as deliveryrecipient
    , open.ipaddress as openipaddress
    , open.timestamp as opentimestamp
    , open.userAgent as openuseragent
    , bounce.bounceType as bouncebounceType
    , bounce.bouncesubtype as bouncebouncesubtype
    , bounce.feedbackid as bouncefeedbackid
    , bounce.timestamp as bouncetimestamp
    , bounce.reportingMTA as bouncereportingmta
    , click.ipAddress as clickipaddress
    , click.timestamp as clicktimestamp
    , click.userAgent as clickuseragent
    , click.link as clicklink
    , complaint.timestamp as complainttimestamp
    , complaint.userAgent as complaintuseragent
    , complaint.complaintFeedbackType as complaintcomplaintfeedbacktype
    , complaint.arrivalDate as complaintarrivaldate
    , reject.reason as rejectreason
    FROM
    sesmaster

    The sesmaster table contains some fields that are represented by nested arrays, so it is necessary to flatten them into multiple rows. Below are the event types and the fields that need to be flattened.

    • Event type SEND: field mail.commonHeaders
    • Event type BOUNCE: field bounce.bouncedrecipients
    • Event type COMPLAINT: field complaint.complainedrecipients

    To flatten those arrays into multiple rows, we used the CROSS JOIN in conjunction with the UNNEST operator using the following strategy for all the three events:

    • Create a temporal view with the mail.messageID and the field to be flattened.
    • Create another temporal view with the array flattened into multiple rows.
    • Create the final view joining the sesmaster table with the second temporal view by event type and mail.messageID.

    To create those views, follow these steps.

  3. Run the following scripts in Amazon Athena to flatten the mail.commonHeaders array in the SEND event type
    CREATE OR REPLACE VIEW vwSendMailTmpSendTo AS 
    SELECT
    mail.messageId as messageid
    , mail.commonHeaders.to as recipients
    FROM
    sesmaster
    WHERE 
    eventtype='Send'
    
    CREATE OR REPLACE VIEW vwsendmailrecipients AS 
    SELECT
    messageid
    , recipient
    FROM
    ("vwSendMailTmpSendTo"
    CROSS JOIN UNNEST(recipients) t (recipient))
    
    CREATE OR REPLACE VIEW vwSentMails AS
    SELECT 
    eventtype as eventtype
    , mail.messageId as mailmessageid
    , mail.timestamp as mailtimestamp
    , mail.source as mailsource
    , mail.sendingAccountId as mailsendingAccountId
    , mail.commonHeaders.subject as mailsubject
    , mail.tags.ses_configurationset as mailses_configurationset
    , mail.tags.ses_source_ip as mailses_source_ip
    , mail.tags.ses_from_domain as mailses_from_domain
    , mail.tags.ses_outgoing_ip as mailses_outgoing_ip
    , dest.recipient as mailto
    FROM
    sesmaster as sm
    ,vwsendmailrecipients as dest
    WHERE
    sm.eventtype = 'Send'
    and sm.mail.messageid = dest.messageid
  4. Run the following scripts in Amazon Athena to flatten the bounce.bouncedrecipients array in the BOUNCE event type
    CREATE OR REPLACE VIEW vwbouncemailtmprecipients AS 
    SELECT
    mail.messageId as messageid
    , bounce.bouncedrecipients
    FROM
    sesmaster
    WHERE (eventtype = 'Bounce')
    
    CREATE OR REPLACE VIEW vwbouncemailrecipients AS 
    SELECT
    messageid
    , recipient.action
    , recipient.diagnosticcode
    , recipient.emailaddress
    FROM
    (vwbouncemailtmprecipients
    CROSS JOIN UNNEST(bouncedrecipients) t (recipient))
    
    CREATE OR REPLACE VIEW vwBouncedMails AS
    SELECT
    eventtype as eventtype
    , mail.messageId as mailmessageid
    , mail.timestamp as mailtimestamp
    , mail.source as mailsource
    , mail.sendingAccountId as mailsendingAccountId
    , mail.commonHeaders.subject as mailsubject
    , mail.tags.ses_configurationset as mailses_configurationset
    , mail.tags.ses_source_ip as mailses_source_ip
    , mail.tags.ses_from_domain as mailses_from_domain
    , mail.tags.ses_outgoing_ip as mailses_outgoing_ip
    , bounce.bounceType as bouncebounceType
    , bounce.bouncesubtype as bouncebouncesubtype
    , bounce.feedbackid as bouncefeedbackid
    , bounce.timestamp as bouncetimestamp
    , bounce.reportingMTA as bouncereportingmta
    , bd.action as bounceaction
    , bd.diagnosticcode as bouncediagnosticcode
    , bd.emailaddress as bounceemailaddress
    FROM
    sesmaster as sm
    ,vwbouncemailrecipients as bd
    WHERE
    sm.eventtype = 'Bounce'
    and sm.mail.messageid = bd.messageid
    
  5. Run the following scripts in Amazon Athena to flatten the complaint.complainedrecipients array in the COMPLAINT event type. The final vwComplainedemails view follows the same pattern as the bounce views, joining sesmaster with the flattened recipients by event type and mail.messageId.
    CREATE OR REPLACE VIEW vwcomplainttmprecipients AS 
    SELECT
    mail.messageId as messageid
    , complaint.complainedrecipients
    FROM
    sesmaster
    WHERE (eventtype = 'Complaint')
    
    CREATE OR REPLACE VIEW vwcomplainedrecipients AS 
    SELECT
    messageid
    , recipient.emailaddress
    FROM
    (vwcomplainttmprecipients 
    CROSS JOIN UNNEST(complainedrecipients) t (recipient))
    
    CREATE OR REPLACE VIEW vwComplainedemails AS
    SELECT
    eventtype as eventtype
    , mail.messageId as mailmessageid
    , mail.timestamp as mailtimestamp
    , mail.source as mailsource
    , mail.commonHeaders.subject as mailsubject
    , complaint.timestamp as complainttimestamp
    , complaint.complaintFeedbackType as complaintcomplaintfeedbacktype
    , cd.emailaddress as complaintemailaddress
    FROM
    sesmaster as sm
    ,vwcomplainedrecipients as cd
    WHERE
    sm.eventtype = 'Complaint'
    and sm.mail.messageid = cd.messageid

    At the end, we have one table and four views that can be used in Amazon QuickSight to analyze email sending events:

    • Table sesmaster
    • View vwSESMaster
    • View vwSentMails
    • View vwBouncedMails
    • View vwComplainedemails
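
With the table and views in place, you can also query them programmatically, not just from the Athena console. Here is a minimal sketch; the database name and query results bucket are placeholders for your own values.

import boto3
import time

athena = boto3.client("athena")

# Count sending events by type using the sesmaster table.
query = athena.start_query_execution(
    QueryString="SELECT eventtype, COUNT(*) AS events FROM sesmaster GROUP BY eventtype",
    QueryExecutionContext={"Database": "ses_analytics"},  # placeholder database
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},  # placeholder bucket
)
query_id = query["QueryExecutionId"]

# Poll until the query finishes, then print the result rows.
while True:
    state = athena.get_query_execution(QueryExecutionId=query_id)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    for row in athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]:
        print([col.get("VarCharValue") for col in row["Data"]])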

Step 4: Analyze and visualize data with Amazon QuickSight

In this blog post, we use Amazon QuickSight to analyze and visualize email sending events from the sesmaster table and the four curated views created previously. Amazon QuickSight can directly access data through Athena. Its pay-per-session pricing enables you to put analytical insights into the hands of everyone in your organization.

Let’s set this up together. We first need to select our table and views to create new data sources in Athena, and then we use these data sources to populate the visualization. We are creating just one example of a visualization; feel free to create your own based on your information needs.

Before we can use the data in Amazon QuickSight, we need to first grant access to the underlying S3 bucket. If you haven’t done so already for other analyses, see our documentation on how to do so.

  1. On the Amazon QuickSight home page, choose Datasets from the menu on the left side, then choose New dataset from the upper-right corner, and pick Athena as the data source. In the following dialog box, give the data source a descriptive name and choose Create data source.

    Create New Athena Data Source

    Figure 12. Create New Athena Data Source

  2. In the following dialog box, select the Catalog and the Database containing your sesmaster table and curated views. Let’s select the sesmaster table in order to create some basic Key Performance Indicators. Select the table sesmaster and click on the Select button.

    Select Sesmaster Table

    Figure 13. Select Sesmaster Table

  3. Our sesmaster table now is a data source for Amazon QuickSight and we can turn to visualizing the data.

    QuickSight Visualize Data

    Figure 14. QuickSight Visualize Data

  4. You can see the list of fields on the left. The canvas on the right is still empty. Before we populate it with data, let’s select Key Performance Indicator from the available visual types.

    QuickSight Visual Types

    Figure 15. QuickSight Visual Types

  5. To populate the graph, drag and drop the fields from the field list on the left onto their respective destinations. In our case, we put the field send onto the value well and use count as aggregation.

    Add Send field to visualization

    Figure 16. Add Send field to visualization

  6. Add another visual from the left-upper side and select Key Performance Indicator as visual type.
    Add a new visual

    Figure 17. Add a new visual

    Key Performance Indicator Visual Type

    Figure 18. Key Performance Indicator Visual Type

  7. Put the field Delivery onto the value well and use count as aggregation.

    Add Delivery Field to visualization

    Figure 19. Add Delivery Field to visualization

  8. Repeat the same procedure (steps 1 to 4) to count the number of Open, Click, Bounce, Complaint, and Reject events. After resizing and rearranging the visuals, you should get an analysis like the one shown in the image below.

    Preview of Key Performance Indicators

    Figure 20. Preview of Key Performance Indicators

  9. Let’s add another dataset by clicking the pencil icon to the right of the current dataset.

    Add a New Dataset

    Figure 21. Add a New Dataset

  10. On the following dialog box, select Add Dataset.

    Add a New Dataset

    Figure 22. Add a New Dataset

  11. Select the view called vwsesmaster and click Select.
    Add vwsesmaster dataset

    Figure 23. Add vwsesmaster dataset

    Now you can see all the available fields of the vwsesmaster view.

    New fields from vwsesmaster dataset

    Figure 24. New fields from vwsesmaster dataset

  12. Let’s create a new visual and select the Table visual type.

    QuickSight Visual Types

    Figure 25. QuickSight Visual Types

  13. Drag and drop the fields from the field list on the left onto their respective destinations. In our case, we put the fields eventtype, mailmessageid, and mailsubject onto the Group By well, but you can add as many fields as you need.

    Add eventtype, mailmessageid and mailsubject fields

    Figure 26. Add eventtype, mailmessageid and mailsubject fields

  14. Now let’s create a filter for this visual in order to filter by type of event. Be sure you select the table and then click on Filter on the left menu.

    Add a Filter

    Figure 27. Add a Filter

  15. Click on Create One and select the field eventtype on the popup window. Now select the eventtype filter to see the following options.

    Create eventtype filter

    Figure 28. Create eventtype filter

  16. Click on the dots on the right of the eventtype filter and select Add to Sheet.

    Add filter to sheet

    Figure 29. Add filter to sheet

  17. Leave all the default values, scroll down, and select Apply.

    Apply filters with default values

    Figure 30. Apply filters with default values

  18. Now you can filter the vwsesmaster view by eventtype.

    Filter vwsesmasterview by eventtype

    Figure 31. Filter vwsesmasterview by eventtype

  19. You can continue customizing your visualization with all the available data in the sesmaster table, the vwsesmaster view and even add more datasets to include data from the vwSentMails, vwBouncedMails, and vwComplainedemails views. Below, you can see some other visualizations created from those views.
    Final visualization 1

    Figure 32. Final visualization 1

    Final visualization 2

    Figure 33. Final visualization 2

    Final visualization 3

    Figure 34. Final visualization 3

Clean up

To avoid ongoing charges, clean up the resources you created as part of this post:

  1. Delete the visualizations created in Amazon QuickSight.
  2. Unsubscribe from Amazon QuickSight if you are not using it for other projects.
  3. Delete the views and tables created in Amazon Athena.
  4. Delete the Amazon SES configuration set.
  5. Delete the Amazon SES events stored in S3.
  6. Delete the CloudFormation stack in order to delete the Amazon Kinesis Delivery Stream.

Conclusion

In this blog post, we showed how you can use AWS native services and features to quickly create an email tracking solution based on Amazon SES events, giving you a more detailed view of your sending activities. The solution uses a fully serverless architecture, so you don't have to manage any underlying infrastructure, and it gives you the flexibility to use the solution for small, medium, or intensive Amazon SES usage.

We showed you some samples of dashboards and analyses that can be built to cover most customer requirements, but of course you can evolve this solution and customize it according to your needs, adding or removing charts, filters, or events from the dashboard. Please refer to the following documentation for the available Amazon SES events, their structure, and how to create analyses and dashboards in Amazon QuickSight:

From a performance and cost-efficiency perspective, there are still several configurations that can improve the solution, for example using a columnar file format like Parquet, compressing with Snappy, or setting your S3 partition strategy according to your email sending usage. Another improvement could be importing data into SPICE to read data in Amazon QuickSight. Using SPICE results in the data being loaded from Athena only once, until it is either manually refreshed or automatically refreshed using a schedule.

You can use this walkthrough to configure your first SES dashboard and start visualizing event details. You can adjust the services described in this blog according to your company's requirements.

About the authors

Oscar Mendoza AWS Solutions Architect Oscar Mendoza is a Solutions Architect at AWS based in Bogotá, Colombia. Oscar works with our customers to provide guidance in architectural best practices and to build Well Architected solutions on the AWS platform. He enjoys spending time with his family and his dog and playing music.
Luis Eduardo Torres AWS Solutions Architect Luis Eduardo Torres is a Solutions Architect at AWS based in Bogotá, Colombia. He helps companies to build their business using the AWS cloud platform. He has a great interest in Analytics and has been leading the Analytics track of AWS Podcast in Spanish.
Santiago Benavidez AWS Solutions Architect Santiago Benavídez is a Solutions Architect at AWS based in Buenos Aires, Argentina, with more than 13 years of experience in IT, currently helping DNB/ISV customers to achieve their business goals using the breadth and depth of AWS services, designing highly available, resilient and cost-effective architectures.

Correlate IAM Access Analyzer findings with Amazon Macie

Post Syndicated from Nihar Das original https://aws.amazon.com/blogs/security/correlate-iam-access-analyzer-findings-with-amazon-macie/

In this blog post, you’ll learn how to detect when unintended access has been granted to sensitive data in Amazon Simple Storage Service (Amazon S3) buckets in your Amazon Web Services (AWS) accounts.

It’s critical for your enterprise to understand where sensitive data is stored in your organization and how and why it is shared. The ability to efficiently find data that is shared with entities outside your account and the contents of that data is paramount. You need a process to quickly detect and report which accounts have access to sensitive data. Amazon Macie is an AWS service that can detect many sensitive data types. Macie is a fully managed data security and data privacy service that uses machine learning and pattern matching to discover and help protect your sensitive data in AWS.

AWS Identity and Access Management (IAM) Access Analyzer helps to identify resources in your organization and accounts, such as S3 buckets or IAM roles, that are shared with an external entity. When you enable IAM Access Analyzer, you create an analyzer for your entire organization or your account. The organization or account you choose is known as the zone of trust for the analyzer. The analyzer monitors the supported resources within your zone of trust. This analyzer enables IAM Access Analyzer to detect each instance of a resource shared outside the zone of trust and generates a finding about the resource and the external principals that have access to it.

Currently, you can use IAM Access Analyzer and Macie to detect external access and discover sensitive data as separate processes. You can join the findings from both to best evaluate the risk. The solution in this post integrates IAM Access Analyzer, Macie, and AWS Security Hub to automate the process of correlating findings between the services and presenting them in Security Hub.

How does the solution work?

First, IAM Access Analyzer discovers S3 buckets that are shared outside the zone of trust. Next, the solution schedules a Macie sensitive data discovery job for each of these buckets to determine whether the bucket contains sensitive data. Upon discovery of shared sensitive data in S3, a custom high-severity finding is created in Security Hub for review and incident response.
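
As an illustration of the scheduling step, a one-time Macie discovery job for a flagged bucket boils down to a call like the following simplified sketch. The account ID, bucket, and job name are placeholders; the actual Lambda function in the solution also handles de-duplication and tag-based exclusions.

import uuid
import boto3

macie = boto3.client("macie2")

# One-time sensitive data discovery job for a bucket that
# IAM Access Analyzer reported as shared outside the zone of trust.
macie.create_classification_job(
    jobType="ONE_TIME",
    name="sda-scan-my-shared-bucket",  # placeholder job name
    clientToken=str(uuid.uuid4()),     # idempotency token
    s3JobDefinition={
        "bucketDefinitions": [
            {"accountId": "123456789012", "buckets": ["my-shared-bucket"]}  # placeholders
        ]
    },
)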

Solution architecture

This solution is based on a serverless architecture and uses the following services:

  • AWS IAM Access Analyzer
  • Amazon Macie
  • AWS Security Hub
  • Amazon EventBridge
  • AWS Lambda
  • AWS Step Functions
  • Amazon DynamoDB

Figure 1: Architecture diagram

Figure 1: Architecture diagram

Figure 1 depicts the following process flow:

  1. IAM Access Analyzer detects shared S3 buckets outside of the zone of trust—the organization or account you choose is known as a zone of trust for the analyzer—and creates the event Access Analyzer Finding in EventBridge.
  2. EventBridge triggers the Lambda function sda-aa-save-findings.
  3. The sda-aa-save-findings function records each finding in DynamoDB.
  4. An EventBridge scheduled event periodically starts a new cycle of the Step Function state machine, which immediately runs the Lambda function sda-macie-submit-scan. The template sets a 15-minute interval, but this is configurable.
  5. The sda-macie-submit-scan function reads the IAM Access Analyzer findings that were created by sda-aa-save-findings from DynamoDB.
  6. sda-macie-submit-scan launches a Macie classification job for each distinct S3 bucket that is related to one or more recent IAM Access Analyzer findings.
  7. Macie performs a sensitive data discovery scan on each requested S3 bucket.
  8. The sda-macie-submit-scan function initiates the Lambda function sda-macie-check-status.
  9. sda-macie-check-status periodically checks the status of each Macie classification job, waiting for all the Macie jobs initiated by this solution to complete.
  10. Upon completion of the sda-macie-check-status function, the step function runs the Lambda function sda-sh-create-findings.
  11. sda-sh-create-findings joins the resulting IAM Access Analyzer and Macie datasets for each S3 bucket.
  12. sda-sh-create-findings publishes a finding to Security Hub for each bucket that has both external access and sensitive data.

    Note: The Macie scan is skipped if the S3 bucket is tagged to be excluded or if it was recently scanned by Macie. See the Cost considerations section for more information on custom configurations.

  13. Information security can review and act on the findings shown in Security Hub.

Sample Security Hub output

Figure 2 shows the sample findings that Security Hub will present. Each finding includes:

  • Severity
  • Workflow status
  • Record state
  • Company
  • Product
  • Title
  • Resource
Figure 2: Sample Security Hub findings

Figure 2: Sample Security Hub findings

The output to Security Hub will display a severity of HIGH with workflow NEW, because this is the first time the event has been observed. The record state is ACTIVE because the workflow state is NEW. The title explains the reason for the event.

For example, if potentially sensitive data is discovered in a bucket that is shared outside a zone of trust, selecting an event will display the resources involved in the finding so you can investigate. For more information, see the Security Hub User Guide.

Notes:

  • Detection of public S3 buckets by IAM Access Analyzer will still occur through Security Hub and will be marked as critical severity. This solution does not add to or augment this finding in Security Hub.
  • If a finding in IAM Access Analyzer is archived, the solution does not update the related finding in Security Hub.

Prerequisites

To use this solution, you need the following:

  • Permission to run AWS CloudFormation
  • Permission to create Lambda functions
  • Permission to create DynamoDB tables
  • Permission to create Step Function state machines
  • Permission to create EventBridge event rules
  • Permission to enable IAM Access Analyzer on the account where sensitive discovery is required
  • Permission to enable Macie on the account
  • Permission to enable Security Hub on the account

Deploy the solution

The solution is deployed through AWS CloudFormation, and you can review the template for options to best suit your specific needs.

  1. Sign in to your AWS account located at https://aws.amazon.com/console/.
  2. In the AWS Management Console, navigate to the AWS CloudFormation service, and then choose Create stack.
  3. Under Prerequisite – Prepare template, choose Template is ready.
  4. Under Specify template, choose Amazon S3 URL and provide the following URL:
    https://awsiammedia.s3.amazonaws.com/public/sample/936-correlating-aa-findings-macie/sda-cfn.yml
  5. Choose Next.
  6. Enter the stack name.
  7. The Application code location, S3 Bucket and S3 Key fields will be pre-filled.
  8. Under Service Activations, modify the activations based on the services you presently have running in your account.
  9. Modify the Logging and Monitoring settings if required.
  10. (Optional) Set an alert email address for errors.
  11. Choose Next, then choose Next again.
  12. Under Capabilities, select the check box.
  13. Choose Create Stack. The solution will begin deploying; watch for the CREATE_COMPLETE message.
Figure 3: Sample CloudFormation deployment status

Figure 3: Sample CloudFormation deployment status

The solution is now deployed and will start monitoring for sensitive data that is being shared. It will send the findings to Security Hub for your teams to investigate.
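
If you prefer to launch the stack programmatically instead of through the console, a minimal boto3 sketch follows. The stack name is a placeholder, and any template parameters would be passed via the Parameters argument.

import boto3

cfn = boto3.client("cloudformation")

# Launch the solution stack from the hosted template.
cfn.create_stack(
    StackName="sda-access-analyzer-macie",  # placeholder stack name
    TemplateURL="https://awsiammedia.s3.amazonaws.com/public/sample/936-correlating-aa-findings-macie/sda-cfn.yml",
    Capabilities=["CAPABILITY_NAMED_IAM"],
)

# Block until the stack reaches CREATE_COMPLETE (or raise on failure).
cfn.get_waiter("stack_create_complete").wait(StackName="sda-access-analyzer-macie")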

Cost considerations

When you scan large S3 buckets with sensitive data, remember that Macie cost is based on the amount of data scanned. For more information on Macie costs, see Amazon Macie pricing.

This solution allows the following options, which you can use to help manage costs:

  • Use environment variables in Lambda to skip specific tagged buckets
  • Skip recently scanned S3 buckets and reuse prior findings
Figure 4: Screen shot of configurable environment variable

Figure 4: Screen shot of configurable environment variable

Conclusion

In this post, we discussed how the solution uses Lambda, Step Functions and EventBridge to integrate IAM Access Analyzer with Macie discovery jobs. We reviewed the components of the application, deployed it by using CloudFormation, and reviewed the output a security team would use to take the appropriate actions. We also provided two ways that you can manage the costs associated with the solution.

After you deploy this project, you can modify it to meet your organization’s needs. For example, you can modify the tags to skip specific S3 buckets your organization has already classified to hold sensitive data. Customers who use multiple AWS accounts can designate a centralized Security Hub administrator account to receive the solution alerts from each member account. For more information on this option, see Designating a Security Hub administrator account.

If you have feedback about this post, please submit it in the Comments section below. If you have questions about this post, please start a new thread on the AWS Identity and Access Management forum.

Other resources

For more information on correlating security findings with AWS Security Hub and Amazon EventBridge, refer to this blog post.

Want more AWS Security news? Follow us on Twitter.

Nihar Das

Nihar Das

Nihar has over 20 years of experience in various business domains, including financial services. As an AWS Senior Solutions Architect, he is passionate about solving challenges in the cloud and helps financial services customers migrate to AWS and support their continued innovation.

Joe Dunn

Joe Dunn

Joe is an AWS Senior Solutions Architect in Financial Services with over 20 years of experience in infrastructure architecture and migration of business-critical loads to AWS. He helps financial services customers to innovate on the AWS Cloud by providing solutions using AWS products and services.

Armand Aquino

Armand Aquino

Armand is a solutions architect helping financial services organizations design their critical workloads on AWS. In his spare time, he enjoys exploring outdoors and learning Korean.

Registering SMS Sender IDs in Singapore

Post Syndicated from Brent Meyer original https://aws.amazon.com/blogs/messaging-and-targeting/registering-sms-sender-ids-in-singapore/

A few weeks ago, we published a blog post about the process of registering alphanumeric Sender IDs. Today, we’re announcing support for registering Sender IDs in Singapore.

About Sender ID registration in Singapore

Singapore’s Infocomm Media Development Authority (IMDA) has created a Sender ID registry to protect consumers from fraudulent and malicious SMS messages. This registry is called the Singapore SMS Sender ID Registry (SSIR).

The government of Singapore encourages all government agencies and financial institutions to register with SSIR. Organizations and businesses outside of these industries can also register with SSIR.

Currently, there is no requirement to register your Sender ID. However, when you register with the SSIR, your Sender ID becomes a “Protected Sender ID.” Protected Sender IDs help to protect you and your customers by preventing other senders from using your Sender ID.

Note that in order to complete this registration process, your business or organization must have a Unique Entity Number (UEN). Businesses and other organizations receive a UEN when they register with Singapore’s Accounting and Corporate Regulatory Authority.

Registering your Sender ID

The first step in the registration process is to create a Protected Sender ID through the Singapore Network Information Centre (SGNIC). To initiate the registration process, send an email to [email protected]. In your message, include the name of your business, the Sender IDs that you want to register, and a description of your use case. SGNIC may contact you for additional information.

After you register with SGNIC, open a ticket in the AWS Support Center. You can find the procedure for opening a case in the Amazon Pinpoint User Guide. The AWS Support team will respond to your case within 24 hours. Their response includes a template for a letter that shows your intent to register a Sender ID.

The next step is to modify the contents of this letter. The regulatory groups in Singapore require a copy of this letter in order to allow AWS to send messages using your Sender ID. Begin by placing the contents of the letter on your company’s letterhead. Next, modify the fields that are highlighted in yellow. These fields include the following:

  • <Place>: The address of your company or organization.
  • <Brand Owner Company Name>: The name of your company or organization.
  • <Number>: Your Unique Entity Number.
  • <Signature>, <Name>, <Title>: The personal signature, name, and job title of the person who is submitting the request on behalf of your company or organization.
  • <ExampleSenderId1>, <ExampleSenderId2>: The Sender IDs that you intend to register with SGNIC. You can add or remove lines here depending on how many Sender IDs you plan to register.

Once you finish modifying the letter, submit it by attaching it to your existing case in the AWS Support Center.

What happens next?

IMDA regularly sends us lists of new Sender ID registrations. When we receive confirmation that your Sender ID has been registered, we update your account to allow it to send SMS messages through your Sender ID. We will also comment on your Support case to indicate that the process is complete.

Wrapping up

We continue to monitor changes to Sender ID registration requirements around the world. We’re working closely with carriers and organizations around the world to make the registration processes as straightforward as possible for our customers. Check in on this blog regularly to learn more about future regulatory changes.

For more information about registering Sender IDs in Singapore, see Special requirements for Singapore in the Amazon Pinpoint User Guide.

Registering Sender IDs for Sending SMS Messages

Post Syndicated from Brent Meyer original https://aws.amazon.com/blogs/messaging-and-targeting/registering-sender-ids-for-sending-sms-messages/

With Amazon Pinpoint, you can use Sender IDs to send text messages to recipients in various countries around the world. A Sender ID is a short, alphanumeric identifier (such as “AMAZON”) that appears on a recipient’s device when they receive a message from you. A Sender ID is one type of origination identity—that is, an identity that’s used to send text messages. Other types of origination identities include short codes and long codes. Sender IDs are great for branding purposes, because recipients can easily determine who the sender of the message is.
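
To make the mechanics concrete, here is a minimal sketch of sending an SMS message through Amazon Pinpoint with a Sender ID as the origination identity. The application ID, phone number, and Sender ID shown are placeholders.

import boto3

pinpoint = boto3.client("pinpoint")

# Send a transactional SMS using an alphanumeric Sender ID.
pinpoint.send_messages(
    ApplicationId="exampleAppId1234567890",  # placeholder Pinpoint project ID
    MessageRequest={
        "Addresses": {"+6512345678": {"ChannelType": "SMS"}},  # placeholder number
        "MessageConfiguration": {
            "SMSMessage": {
                "Body": "Your order has shipped.",
                "MessageType": "TRANSACTIONAL",
                "SenderId": "EXAMPLECO",  # placeholder registered Sender ID
            }
        },
    },
)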

SMS senders who send messages to some countries (such as India or the Philippines) are required to register their SMS use cases and message templates before they can send messages to those countries using a Sender ID. On the Amazon Pinpoint team, we listen to our customers when they tell us which countries they need to send messages to. We regularly add support for registration processes to help our customers reach their end users. In this post, I’ll discuss the purpose of Sender ID registration and provide information about registering Sender IDs.

Why is Sender ID registration required?

The rise of fraudulent and malicious SMS activity around the world means that it’s more important than ever for recipients of SMS messages to trust the Sender ID that is contacting them. To reduce the volume of fraudulent SMS messages reaching their customers, mobile carriers have systems in place to identify and prevent abuse.

Registering Sender IDs helps mobile carriers trace abuse and other issues back to a specific SMS sender. When you register a Sender ID, your messages can bypass filters that throttle or block unregistered traffic. This not only improves deliverability rates, but also helps earn trust, because the sender’s name is consistent and identifiable. AWS has processes for registering your dedicated Sender ID with regulatory agencies and industry groups in several countries.

The future of Sender ID registration

In the months and years ahead, we expect more countries to add Sender ID registration requirements. AWS will continue to work with local network operators to expand the services that we offer to our customers. We carefully monitor the global SMS industry and create new processes when needs arise. Regardless of changes to the regulatory landscape, we strive to offer consistently high, reliable SMS message deliverability rates.

How can I register a Sender ID?

You can find a list of countries that support Sender IDs in Supported countries and regions in the Amazon Pinpoint User Guide. That document also lists the countries that require pre-registration of Sender IDs.

If you plan to send messages to a country that requires Sender ID registration, you must complete the registration process. The registration process can be complicated, with many specific requirements and with different processes in each country. The AWS Support team can work with you to complete your registration. The first step in registering your Sender ID is to create a case with AWS Support. You can find more information about creating a case in Requesting Sender IDs for SMS messaging in the Amazon Pinpoint User Guide.

When you request a Sender ID, we provide you with an estimate of how long the request will take to complete. This estimate is based on the completion times that we’ve seen from other customers. Because each country has its own process, completion times for registration vary by destination country. For example, Sender ID registration in India can be complete in one week or less, whereas it can take six weeks or more in Vietnam. These requests can’t be expedited, because they involve the carriers themselves making changes to the ways that their networks are configured. We suggest that you start your registration process early so that you can start sending messages as soon as you launch your product or service.

When you create a case, it’s important that you check on it regularly. The AWS Support team will provide you with registration materials, such as the forms and cover letters that you must submit to begin the registration process. We recommend that you provide all of the requested information with as much detail as you can; too much information is better than too little. We also recommend that you don’t skip any fields in the registration forms that we send you. The carriers require responses in all of the fields on these forms, even if you believe that a field doesn’t apply to your use case. For example, if you’re registering a One-Time Password (OTP) use case, the carriers still require a response to the keyword “STOP.” Although it doesn’t seem logical that customers would want to opt out of receiving one-time passwords, the carriers in most countries require you to provide recipients with a way to completely opt out of receiving messages from you.

After you submit your application, it’s also possible that the mobile carriers will have feedback about your application. In this situation, you have to address their concerns before the registration process can continue. Addressing these concerns quickly can help reduce delays in completing your request.

Sender IDs are a great tool for reaching your customers by SMS. You can learn more about sender IDs and the other types of origination identities that Amazon Pinpoint supports in Originating identities for SMS messaging in the Amazon Pinpoint User Guide. Happy sending!

Automate the Creation of On-Demand Capacity Reservations for running EC2 instances

Post Syndicated from sbbusser original https://aws.amazon.com/blogs/compute/automate-the-creation-of-on-demand-capacity-reservations-for-running-ec2-instances/

This post is written by Ballu Singh, a Principal Solutions Architect at AWS; Neha Joshi, a Senior Solutions Architect at AWS; and Naveen Jagathesan, a Technical Account Manager at AWS.

Customers have asked how they can “create On-Demand Capacity Reservations (ODCRs) for their existing instances during events, such as the holiday season, Black Friday, marketing campaigns, or others?”

ODCRs let you reserve compute capacity for your Amazon Elastic Compute Cloud (Amazon EC2) instances. ODCRs also make sure that you always have access to EC2 capacity when you need it, for as long as you need it. Customers who want to make sure that any instances that are stopped and started during a critical event are available when needed should cover those instances with ODCRs.

With ODCRs, you reserve compute capacity for your Amazon EC2 instances in a specific Availability Zone for any duration. This means that you can create and manage capacity reservations independently from the billing discounts offered by Savings Plans or Regional Reserved Instances. You can create an ODCR at any time, without entering into a one-year or three-year term commitment, and the capacity is available immediately. Billing starts as soon as the ODCR enters the active state. When you no longer need it, cancel the ODCR to stop incurring charges.
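
For reference, creating a single ODCR through the EC2 API looks roughly like the sketch below. The instance attributes here are placeholders; the scripts in this post discover these attributes from your running instances automatically.

import boto3
from datetime import datetime, timezone

ec2 = boto3.client("ec2")

# Reserve capacity for two c5.large Linux instances in one AZ,
# expiring automatically at the given UTC date and time.
ec2.create_capacity_reservation(
    InstanceType="c5.large",        # placeholder attributes
    InstancePlatform="Linux/UNIX",
    AvailabilityZone="us-east-1a",
    InstanceCount=2,
    EndDateType="limited",
    EndDate=datetime(2022, 1, 31, 14, 30, tzinfo=timezone.utc),
)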

At the time of this blog's publication, if you need to create ODCRs for existing running instances, you must manually identify your running instance configurations with matching attributes, such as instance type, platform, and Availability Zone. This is a time- and resource-consuming process.

In this post, we provide an automated way to manage ODCR operations. This includes creating, modifying, and cancelling ODCRs for the running instances across regions in an account, all without requiring any manual intervention of specifying instance configuration attributes. Additionally, it creates an Amazon CloudWatch Alarm for InstanceUtilization and an Amazon Simple Notification Service (Amazon SNS) topic with topic name ODCRAlarmNotificationTopic to notify when the threshold breaches.

Note: This will not create cluster placement group ODCRs. For details on capacity reservations in cluster placement groups, refer here.

Getting started

Before you create Capacity Reservations, note the limitations and restrictions here.

To get started, download the scripts for registering, modifying, and canceling ODCRs and associated requirements.txt, as well as AWS Identity and Access Management (IAM) policy from the GitHub link here.

Pre-requisites

To implement these scripts, you need the following prerequisites:

  1. Access to the AWS Management Console, AWS Command Line Interface (CLI), or AWS SDK for ODCR.
  2. The following IAM role permissions for IAM users using the solution as provided in ODCR_IAM.json.
  3. Amazon EC2 instance having supported platform for capacity reservation. Capacity Reservations support the following platforms listed here for Linux and Windows.
  4. Refer to the above GitHub link for the code, and save the requirements.txt file in the same directory as the other Python scripts. If you don't already have the dependencies needed to run the Python scripts, install them from requirements.txt using the following command:
pip3 install -r requirements.txt

Implementation Details

To create ODCR capacity reservation

The following instructions will guide you through creating a capacity reservation of running instances across all of the Regions within an AWS account.
Input variables needed from users:

  • EndDateType (String) – Indicates how the Capacity Reservation ends. A Capacity Reservation can have one of the following end types:
      • unlimited – The Capacity Reservation remains active until you explicitly cancel it. Don’t provide an EndDate if the EndDateType is unlimited.
      • limited – The Capacity Reservation expires automatically at a specified date and time. You must provide an EndDate value if the EndDateType value is limited.
  • EndDate (datetime) – The date and time when the Capacity Reservation expires. When a Capacity Reservation expires, the reserved capacity is released and you can no longer launch instances into it. The Capacity Reservation’s state changes to expired when it reaches its end date and time.

You must provide EndDateType as ‘limited’ and the EndDate in standard UTC format to secure instances for a limited period. Command to execute the register ODCR script with a limited period:

registerODCR.py '<EndDateType>' '<EndDate>'
    Example- registerODCR.py 'limited' '2022-01-31 14:30:00'

You must provide EndDateType as ‘unlimited’ to secure instances for an unlimited period. Command to execute the register ODCR script with an unlimited period:

registerODCR.py 'EndDateType'
    Example- registerODCR.py 'unlimited'

This registerODCR.py script does the following four things:

1. Describes instances across Regions in an account. It checks for instances that have:

    • No Capacity reservation
    • State of the instance is running
    • Tenancy is default
    • InstanceLifecycle of None, indicating that the instance is neither a Spot Instance nor a Scheduled Instance

Note: DescribeInstances API calls count toward your account API limits. Therefore, it is advisable to run the script during non-peak hours or before the short-term scaling event begins. Work with the AWS Support team if you run into API throttling.

2. Aggregates instances with similar attributes, such as InstanceType, AvailabilityZone, Tenancy, and Platform.

3. Describes Reserved Instances across Regions in an account. It checks for Zonal Reserved Instances (ZRIs) and compares them with the aggregated instances that have similar attributes.

4. Finally,

    • Reserves ODCR(s) for existing running instances with matching attributes for which ZRIs do not exist.

Note: If you have one or more ZRIs in an account, then the script compares them with the existing instances with matching characteristics – Instance Type, AZ, and Platform – and does NOT create ODCR for the ZRIs to avoid incurring redundant charges. If there are more running instances than ZRIs, then the script creates an ODCR for just the delta.

    • Creates an SNS topic with the topic name – ODCRAlarmNotificationTopic in the region where you’re registering ODCR, if it doesn’t already exist.
    • Creates CloudWatch alarm for InstanceUtilization using the best practices, which can be found here.

Note: You must subscribe and confirm to the SNS topic, if you haven’t already, to receive notifications.

The CloudWatch alarm is also created on your behalf in the region for each ODCR. This alarm monitors your ODCR metric- InstanceUtilization. Whenever it breaches threshold (50% in this case), it enters the alarm state and sends an SNS notification using the topic that was created for you if you subscribed to it.

Note: You can change the alarm threshold based on your specific needs.

  • You will receive an email notification when the CloudWatch alarm state changes to ALARM, with:
    • SNS subject (assuming the CloudWatch alarm triggers in the US East Region):
ALARM: "ODCRAlarm-cr-009969c7abf4daxxx" in US East (N. Virginia)
    • SNS body with the details:
      • CloudWatch alarm, Region, a link to view the alarm, alarm details, and state change actions.

With this, if your ODCR InstanceUtilization drops, then you will be notified in near-real time to help you optimize the capacity and stop unnecessary payments for unused capacity.
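
A condensed sketch of the discovery and aggregation logic described above might look like the following. This is not the full registerODCR.py, which also reconciles ZRIs and creates the SNS topic and CloudWatch alarms; platform handling here is simplified to Linux and Windows.

import boto3
from collections import Counter

ec2 = boto3.client("ec2")

# Count running, default-tenancy, non-Spot/non-Scheduled instances
# without an existing capacity reservation, grouped by attributes.
groups = Counter()
paginator = ec2.get_paginator("describe_instances")
for page in paginator.paginate(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
):
    for reservation in page["Reservations"]:
        for inst in reservation["Instances"]:
            if inst.get("InstanceLifecycle"):      # skip Spot/Scheduled
                continue
            if inst["Placement"]["Tenancy"] != "default":
                continue
            if inst.get("CapacityReservationId"):  # already covered by a CR
                continue
            platform = "Windows" if inst.get("Platform") == "windows" else "Linux/UNIX"
            key = (inst["InstanceType"], inst["Placement"]["AvailabilityZone"], platform)
            groups[key] += 1

# One ODCR per distinct (type, AZ, platform) group.
for (itype, az, platform), count in groups.items():
    ec2.create_capacity_reservation(
        InstanceType=itype, InstancePlatform=platform,
        AvailabilityZone=az, InstanceCount=count, EndDateType="unlimited",
    )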

To modify ODCR capacity reservation

To modify the attributes of an active capacity reservation after you have created it, adhere to the following instructions.

Note: When modifying a Capacity Reservation, you can only increase or decrease the quantity and change how it is released. You can’t change the instance type, EBS optimization, instance store settings, platform, Availability Zone, or instance eligibility of a Capacity Reservation. If you must modify any of these attributes, then we recommend that you cancel the reservation, and then create a new one with the required attributes. You can’t modify a Capacity Reservation after it has expired or after you have explicitly canceled it.

  • Input variables needed from users:
    • CapacityReservationID – The ID of the Capacity Reservation that you want to modify.
    • InstanceCount (integer) – The number of instances for which to reserve capacity. The number of instances can’t be increased or decreased by more than 1000 in a single request.
    • EndDateType (String) – Indicates how the Capacity Reservation ends. A Capacity Reservation can have one of the following end types:
      • unlimited – The Capacity Reservation remains active until you explicitly cancel it. Don’t provide an EndDate if the EndDateType is unlimited.
      • limited – The Capacity Reservation expires automatically at a specified date and time. You must provide an EndDate value if the EndDateType value is limited.
    • EndDate (datetime) – The date and time at which the Capacity Reservation expires. When a Capacity Reservation expires, the reserved capacity is released, and you can no longer launch instances into it. The Capacity Reservation’s state changes to expired when it reaches its end date and time.
  • Command to execute the modify ODCR script:
    modifyODCR.py <CapacityReservationId> <InstanceCount> <EndDateType> <EndDate>
  • Example to execute the modify ODCR script for a limited period:
modifyODCR.py 'cr-05e6a94b99915xxxx' '1' 'limited' '2022-01-31 14:30:00'

Note: EndDate is specified in UTC.

  • You must provide EndDateType as ‘unlimited’ to modify instances for an unlimited period. Command to execute the modify ODCR script with an unlimited period:
modifyODCR.py <CapacityReservationId> <InstanceCount> <EndDateType>
  • Example to execute the modify ODCR script for an unlimited period:
modifyODCR.py 'cr-05e6a94b99915xxxx' '1' 'unlimited'
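
Under the hood, the script presumably wraps the ModifyCapacityReservation API. A hedged Boto3 equivalent of the two examples above, assuming the same parameters, is:

import boto3

ec2 = boto3.client("ec2")

# Limited period: EndDate is required and interpreted as UTC.
ec2.modify_capacity_reservation(
    CapacityReservationId="cr-05e6a94b99915xxxx",
    InstanceCount=1,
    EndDateType="limited",
    EndDate="2022-01-31T14:30:00Z",
)

# Unlimited period: omit EndDate.
ec2.modify_capacity_reservation(
    CapacityReservationId="cr-05e6a94b99915xxxx",
    InstanceCount=1,
    EndDateType="unlimited",
)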

To cancel ODCR capacity reservation

To cancel an ODCR that is in the Active state, follow these instructions:

Note: Once the cancellation request succeeds, the reservation status will be marked as “cancelled”.

  • Input variables needed from users:
    • CapacityReservationID – The ID of the Capacity Reservation to cancel.
  • You must provide one parameter while executing the cancellation script.
  • Command to execute the cancel ODCR script:
cancelODCR.py <CapacityReservationId>
  • Example to execute the cancel ODCR script:
cancelODCR.py 'cr-05e6a94b99915xxxx'
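
For reference, a minimal Boto3 sketch of the underlying call might look like this:

import boto3

ec2 = boto3.client("ec2")
ec2.cancel_capacity_reservation(CapacityReservationId="cr-05e6a94b99915xxxx")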

Monitoring

CloudWatch metrics let you monitor the unused capacity in your Capacity Reservations so that you can optimize your ODCRs. ODCRs send metric data to CloudWatch every five minutes. The Capacity Reservation usage metrics are UsedInstanceCount, AvailableInstanceCount, TotalInstanceCount, and InstanceUtilization; this solution uses the InstanceUtilization metric, which shows the percentage of reserved capacity instances that are currently in use. This is useful for monitoring and optimizing ODCR consumption.

For example, if your On-Demand Capacity Reservation is for four instances and only one EC2 instance with matching criteria is currently running, then the InstanceUtilization metric will be 25% for that capacity reservation.

Let’s look at the steps to create the CloudWatch monitoring dashboard for your On-Demand Capacity Reservation solution:

  1. Open the CloudWatch console at https://console.aws.amazon.com/cloudwatch/.
  2. If necessary, change the Region. From the navigation bar, select the Region where your Capacity Reservation resides. For more information, see Regions and Endpoints.
  3. In the navigation pane, choose Metrics.

Amazon CloudWatch Dashboard

For All metrics, choose EC2 Capacity Reservations.

Amazon CloudWatch Dashboard: Metrics

4. Choose the metric dimension By Capacity Reservation. Metrics will be grouped by Capacity Reservation ID.

Amazon CloudWatch Metrics: Capacity Reservation Ids

5. Select the dropdown arrow for InstanceUtilization, and select Search for this only.

Amazon CloudWatch Metrics Filter

Once you see the InstanceUtilization metric in the filter list, select Graph Search.

Amazon CloudWatch Metrics: Graph Search

This displays the InstanceUtilization metrics for the selected period.

Amazon CloudWatch Metrics Duration

OPTIONAL: To display the Capacity Reservation IDs for active metrics only:

    • Navigate to Graphed metrics.

Amazon CloudWatch: Graphed Metrics

    • Under Details column, select Edit math expression.

Amazon CloudWatch Metrics: Math Expression

    • Edit the math expression with the following, and select Apply:
REMOVE_EMPTY(SEARCH('{AWS/EC2CapacityReservations,CapacityReservationId} MetricName="InstanceUtilization"', 'Average', 300))

Amazon CloudWatch Graphed Metrics: Math Expression Apply

This displays the Capacity Reservation IDs for active metrics only.

Amazon CloudWatch Metrics: Active Capacity Reservation Ids

With this configuration, whenever new Capacity Reservations are created, the InstanceUtilization metric for respective Capacity Reservation IDs will be populated.

6. From the Actions drop-down menu, select Add to dashboard.

Amazon CloudWatch Metrics: Add to Dashboard

Select Create new to create a new dashboard for monitoring your ODCR metrics.

Amazon CloudWatch: Create New Dashboard

Specify the new dashboard name, and select Add to dashboard.

Amazon CloudWatch: Create New Dashboard

7. These configuration steps will navigate you to your newly created CloudWatch dashboard under Dashboards.

Amazon CloudWatch Dashboard: ODCR Metrics

Once this is created, if you create new Capacity Reservations or new instances are added to existing reservations, those metrics will automatically be added to your CloudWatch dashboard.

Note: You may see a delay of approximately 5-10 minutes from the point when changes are made to your environment (ODCR operations or instance launch/termination activities) to those changes being reflected in your CloudWatch dashboard metrics.

Conclusion

In this post, we discussed a solution for automating ODCR operations for existing EC2 instances: creating, modifying, and cancelling capacity reservations that inherit attribute details from your existing EC2 instances. We also discussed how to monitor ODCR metrics using CloudWatch. This solution allows you to automate some of the ODCR operations for existing instances, thereby optimizing and speeding up the entire process.

For more information, see the Target a group of Amazon EC2 On-Demand Capacity Reservations blog post and the Capacity Reservations documentation.

If you have feedback or questions about this post, please submit your comments in the comments section or contact AWS Support.

Incident notification mechanism using Amazon Pinpoint two-way SMS

Post Syndicated from Pavlos Ioannou Katidis original https://aws.amazon.com/blogs/messaging-and-targeting/incident-notification-mechanism-using-amazon-pinpoint-two-way-sms/

Unexpected situations that require immediate attention can occur in any industry. A key part of resolving these incidents is delivering notifications. For example, utility companies that have installed gas sensors need to immediately notify the available engineer if a leak occurs.

The goal of an incident management process is to restore a normal service operation as quickly as possible and to minimize the impact on business operations, thus ensuring that the best possible levels of service quality and availability are maintained. A key element of incident management is sending timely notifications to the assigned or available resource(s) who can rectify the issue.

An incident can take place at any time, and the resource(s) assigned to it might not have internet access; even if they receive the message, they might not be equipped to work on it. This creates five key requirements for an incident notification mechanism:

  1. Notify the resources via a communication channel that ensures message delivery even without internet access
  2. Enable assigned resources to respond to a request via a communication channel that doesn’t require internet access
  3. Send reminder(s) in case there is no response from the assigned resource(s)
  4. Escalate to another resource in case the first one doesn’t reply or declines the incident
  5. Store the incident details & status for reporting and data analysis

In this blog post, I share a solution for automating the delivery of incident notifications. This solution utilizes the Amazon Pinpoint SMS channel to contact the designated resources, who might not have access to the internet. Furthermore, the recipient of the SMS is able to reply with an acknowledgement. AWS Step Functions orchestrates the user journey using AWS Lambda functions to evaluate the recipients’ responses and trigger the next best action. You will use AWS CloudFormation to deploy this solution.

Use Cases

An incident notification mechanism can vary depending on the organization’s requirements and third-party system integrations. The solution in this blog covers all five points listed above, but it might require further modifications depending on your use case.

With minor modifications this solution can also be used in the following use cases:

  1. Medicine intake notification: It will notify the patient via SMS that it is time to take their medicine. If the patient doesn’t acknowledge the SMS by replying, then this can be escalated to their assigned doctor
  2. Assignment submission: It will notify the student that their assignment is due. If the student doesn’t acknowledge the SMS by replying, then this can be escalated to their teacher

High-level Architecture

The solution requires the country of your SMS recipients to support two-way SMS. To check which countries support two-way SMS, visit this page. If two-way SMS is supported, then you will need to request a dedicated originating identity. You can also use a Toll-Free Number or 10DLC if your recipients are in the US.

Note: Sender ID doesn’t support two-way SMS.

A new incident is represented as an item in an Amazon DynamoDB table containing information such as description, URL, and incident_id, as well as the contact numbers for two resources. A resource is someone who has been assigned to work on this incident. The second resource is for escalation purposes, in case the first one doesn’t acknowledge or declines the incident notification.

The Amazon DynamoDB table covers three functions for this solution:

  1. A way to add new incidents, using either the AWS console or programmatic calls
  2. Storage for variables that indicate the incident’s status and that are used by the solution to determine the next action(s)
  3. Historical data storage for all incidents that have been created, for data analysis purposes

The solution utilizes Amazon DynamoDB Streams to invoke an AWS Lambda function every time a new incident is created. The AWS Lambda function triggers an AWS Step Functions state machine, which orchestrates three AWS Lambda functions:

  1. Send_First_SMS: Sends the first SMS
  2. Reminder_SMS: Sends a reminder SMS if the resource does not acknowledge the first SMS
  3. Incident_State_Review: Assesses the status of the incident and either goes back to the first AWS Lambda function or finishes the AWS Step Function State machine execution

The AWS Step Functions state machine uses the Choice state, which evaluates the response of the previous AWS Lambda function and decides on the next state. This is a very useful feature that can reduce custom code and, potentially, the number of AWS Lambda invocations, resulting in cost savings.

Additionally, the waiting between steps is managed by the AWS Step Functions state machine using the Wait state. This can be configured to wait for seconds, for days, or until a specific point in the future.
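
The following illustrative Amazon States Language fragment shows how a Choice state and a Wait state could be combined for this flow. The state names, the incident_stat values, and the 40-second wait (matching the WaitingBetweenSteps parameter used later in the deployment) are assumptions, not the exact definition shipped in the CloudFormation template.

{
  "EvaluateResponse": {
    "Type": "Choice",
    "Choices": [
      {
        "Variable": "$.incident_stat",
        "StringEquals": "not_acknowledged",
        "Next": "WaitBeforeReminder"
      }
    ],
    "Default": "Done"
  },
  "WaitBeforeReminder": {
    "Type": "Wait",
    "Seconds": 40,
    "Next": "Reminder_SMS"
  },
  "Done": {
    "Type": "Succeed"
  }
}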

To be able to receive SMS, this solution uses Amazon Pinpoint’s two-way SMS feature. When it receives an SMS, Amazon Pinpoint sends a payload to an Amazon SNS topic, which needs to be created separately. An AWS Lambda function that is subscribed to the Amazon SNS topic processes the SMS content and performs one or both of the following actions:

  1. Update the incident status in the DynamoDB table
  2. Create a new Step Function State machine execution

In this solution SMS recipients can reply by typing either yes or no. The SMS response is not case sensitive.

An inbound SMS payload contains the originationNumber, destinationNumber, messageKeyword, messageBody, inboundMessageId, and previousPublishedMessageId fields. Notably, there isn’t a direct way to associate an inbound SMS with an incident. To overcome this challenge, this solution uses a second DynamoDB table, which stores the message_id and incident_id every time an SMS is sent to either of the two resources. This allows the solution to use the previousPublishedMessageId from the inbound SMS payload to fetch the respective incident_id from the second DynamoDB table, as sketched below.
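
A hedged Python sketch of that lookup inside the SNS-subscribed Lambda function follows; the table name and key schema are assumptions for illustration.

import json
import boto3

dynamodb = boto3.resource("dynamodb")
# Hypothetical name for the second (message_id -> incident_id) table.
lookup_table = dynamodb.Table("IncidentMessageLookup")

def handler(event, context):
    for record in event["Records"]:
        sms = json.loads(record["Sns"]["Message"])
        reply = sms["messageBody"].strip().lower()  # replies are case insensitive
        # Map the inbound SMS back to its incident via the original message id.
        item = lookup_table.get_item(
            Key={"message_id": sms["previousPublishedMessageId"]}
        ).get("Item")
        if item and reply in ("yes", "no"):
            incident_id = item["incident_id"]
            # ... update incident_stat in the incidents table and/or start a
            # new Step Functions state machine execution for escalation.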

The code in this solution uses AWS SDK for Python (Boto3).

Prerequisites

  1. An Amazon Pinpoint project with the SMS channel enabled – Guide on how to enable Amazon Pinpoint SMS channel
  2. Check if the country you want to send SMS to, supports two-way SMS – List with countries that support two-way SMS
  3. An originating identity that supports two-way SMS – Guide on how to request a phone number
  4. Increase your monthly SMS spending quota for Amazon Pinpoint – Guide on how to increase the monthly SMS spending quota

Deploy the solution

Step 1: Create an S3 bucket

  1. Navigate to the Amazon S3 console
  2. Select Create bucket
  3. Enter a unique name for Bucket name
  4. Select the AWS Region to be the same as the one of your Amazon Pinpoint project
  5. Scroll to the bottom of the page and select Create bucket
  6. Follow this link to download the GitHub repository. Once the repository is downloaded, unzip it and navigate to \amazon-pinpoint-incident-notifications-mechanism-main\src
  7. Access the S3 bucket created above and upload the five .zip files

Step 2: Create a stack

  1. The application is deployed using an AWS CloudFormation template.
  2. Navigate to the AWS CloudFormation console and select Create stack > With new resources (standard)
  3. Select Template is ready as Prerequisite – Prepare template and choose Upload a template file as Template source
  4. Select Choose file and, from the GitHub repository downloaded in step 1.6, navigate to amazon-pinpoint-incident-notifications-mechanism-main\cfn, upload CloudFormation_template.yaml, and select Next
  5. Type Pinpoint-Incident-Notifications-Mechanism as Stack name, paste the S3 bucket name created in step 1.5 as the LambdaCodeS3BucketName, type the Amazon Pinpoint Originating Number in E.164 format as OriginatingIdenity, paste the Amazon Pinpoint project ID as PinpointProjectId and type 40 for WaitingBetweenSteps
  6. Select Next until you reach Step 4 Review, where you will need to check the box I acknowledge that AWS CloudFormation might create IAM resources, and then select Create Stack
  7. The stack creation process takes approximately 2 minutes. Click on the refresh button to get the latest event regarding the deployment status. Once the stack has been deployed successfully you should see the last Event with Logical ID Pinpoint-Incident-Notifications-Mechanism and with Status CREATE_COMPLETE

Step 3: Configure two-way SMS SNS topic

  1. Navigate to the Amazon Pinpoint console > SMS and voice > Phone numbers. Select the originating identity that supports two-way SMS. Scroll to the bottom of the page, click to expand the Two-way SMS section, and check the box to enable it.

    For SNS topic, select Choose an existing SNS topic, then using the drop-down choose the one that contains the name of the AWS CloudFormation stack from step 2.5 as well as the name TwoWaySMSSNSTopic, and click Save.

Step 4: Create a new incident

To create a new incident, navigate to the Amazon DynamoDB console > Tables and select the table containing the name of the AWS CloudFormation stack from step 2.5 as well as the name IncidentInfoDynamoDB. Select View items and then Create item.

On the Create item page, choose JSON, copy and paste the JSON below into the text box, and replace the values of the first_contact and second_contact fields with a valid mobile number that you have access to.

Note: If you don’t have two different mobile numbers, enter the same for both first_contact and second_contact fields. The mobile numbers must follow E.164 format +<country code><number>.

{
   "incident_id":{
      "S":"123"
   },
   "incident_stat":{
      "S":"not_acknowledged"
   },
   "double_escalation":{
      "S":"no"
   },
   "description":{
      "S":"Error 111, unit 1 malfunctioned. Urgent assistance is required."
   },
   "url":{
      "S":"https://example.com/incident/111/overview"
   },
   "first_contact":{
      "S":"+4479083---"
   },
   "second_contact":{
      "S":"+4479083---"
   }
}

Incident fields description:

  • incident_id: Needs to be unique
  • incident_stat: This is used by the application to store the incident status. When creating the incident, this value should always be not_acknowledged
  • double_escalation: This is used by the application as a flag for recipients who try to escalate an incident that has already been escalated. When creating the incident, this value should always be no
  • description: You can type a description that best describes the incident. Be aware that depending on the number of characters, the number of SMS parts will increase. For more information on SMS character limits, visit this page
  • url: You can add a URL that resources can access to resolve the issue. If this field is not pertinent to your use case, then type no url
  • first_contact: This should contain the mobile number in E.164 format for the first resource
  • second_contact: This should contain the mobile number in E.164 format for the second resource. The second resource will be contacted only if the first one does not acknowledge the SMS or declines the incident

Once the above is ready, select Create item. This will execute the AWS Step Functions state machine, and you should receive an SMS. You can reply with yes to acknowledge the incident or with no to decline it. Depending on your response, the incident status in the DynamoDB table will be updated, and if you reply no, the incident will be escalated by sending an SMS to the second_contact.

Note: The SMS response is not case sensitive.

Clean-up

To remove the solution:

  1. Delete the AWS CloudFormation stack by following the steps listed in this guide
  2. Delete the dedicated originating identity that you used to send the SMS by following the steps listed in this guide
  3. Delete the Amazon Pinpoint project by navigating to the Amazon Pinpoint console, selecting your Amazon Pinpoint project, and choosing Settings > General settings > Delete Project

Next Steps

This solution currently works only if your SMS recipients are in one country. If your use case requires sending SMS to multiple countries, you will need to:

  • Check this page to ensure that these countries support two-way SMS
  • Follow the instructions in this page to obtain a number that supports two-way SMS for each country
  • Expand the solution to identify the country of the SMS recipient and to choose the correct number accordingly. To identify the country of the SMS recipient, you can use Amazon Pinpoint’s phone number validation service via the Amazon Pinpoint API or SDKs. The phone validation service returns a list of data points per mobile number, one of them being the Country

Incidents that are not acknowledged by any of the assigned resources have their status updated to unacknowledged, but they don’t escalate further. Depending on your requirements, you can expand the solution to send an email using Amazon Pinpoint APIs or perform an outbound call using Amazon Connect APIs.

Conclusion

In this blog post, I have demonstrated how your organization can use Amazon Pinpoint two-way SMS and Step Functions to automate incident notifications. Furthermore, the solution highlights the synergy of AWS services and how you can build a custom solution with little effort that meets your requirements.

About the Author

Pavlos Ioannou Katidis

Pavlos Ioannou Katidis is an Amazon Pinpoint and Amazon Simple Email Service Specialist Solutions Architect at AWS. He loves to dive deep into his customer’s technical issues and help them design communication solutions. In his spare time, he enjoys playing tennis, watching crime TV series, playing FPS PC games, and coding personal projects.

Queueing Amazon Pinpoint API calls to distribute SMS spikes

Post Syndicated from satyaso original https://aws.amazon.com/blogs/messaging-and-targeting/queueing-amazon-pinpoint-api-calls-to-distribute-sms-spikes/

Organizations across industries and verticals have user bases spread around the globe. Amazon Pinpoint enables them to send SMS messages to a global audience through a single API endpoint, and the messages are routed to destination countries by the service. Amazon Pinpoint utilizes downstream SMS providers to deliver the messages, and these SMS providers impose limited country-specific thresholds for sending SMS (referred to as transactions per second, or TPS). These thresholds are imposed by telecom regulators in each country to prohibit spamming. If customer applications send more messages than the threshold for a country, downstream SMS providers may reject the delivery.

Such scenarios can be avoided by ensuring that upstream systems do not send more than the permitted number of messages per second. This can be achieved using one of the following mechanisms:

  • Implement rate-limiting on upstream systems which call Amazon Pinpoint APIs.
  • Implement queueing and consume jobs at a pre-configured rate.

While rate limiting and exponential backoff are regarded as best practices for many use cases, they can cause significant delays in message delivery when message throughput is very high. Furthermore, relying solely on a rate-limiting technique eliminates the potential to maximize throughput per country and to prioritize communications accordingly. In this blog post, we evaluate a solution based on Amazon SQS queues and how they can be leveraged to ensure that messages are sent with predictable delays.

Solution Overview

The solution consists of an Amazon SNS topic that filters and fans out incoming messages to a set of Amazon SQS queues, based on a country parameter in the incoming JSON payload. The messages from the queues are then processed by AWS Lambda functions that, in turn, invoke the Amazon Pinpoint APIs across one or more Amazon Pinpoint projects or accounts. The following diagram illustrates the architecture:

Step 1: Ingesting message requests

Upstream applications post messages to a pre-configured SNS topic instead of calling the Amazon Pinpoint APIs directly. This allows applications to post messages at a rate that is higher than Amazon Pinpoint’s TPS limits per country. For applications that are hosted externally, an Amazon API Gateway can also be configured to receive the requests and publish them to the SNS topic – allowing features such as routing and authentication.

Step 2: Queueing and prioritization

The SNS topic implements message filtering based on the country parameter and sends incoming JSON messages to separate SQS queues. This allows configuring downstream consumers based on the priority of individual queues and processing these messages at different rates.

The algorithm and attribute used for implementing message filtering can vary based on requirements. Similarly, filtering can be enabled based on business use cases such as “REMINDERS”, “VERIFICATION”, “OFFERS”, or “EVENT NOTIFICATIONS” as well. In this example, the messages are filtered based on a country attribute, as in the sample filter policy shown below:
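
The following is a hedged example of an SNS subscription filter policy for a queue that handles India-bound messages; the attribute name country and its values are assumptions based on the description above, and publishers would need to set a matching message attribute on every message published to the topic.

{
  "country": ["IN"]
}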

Based on the filtering logic implemented, the messages are delivered to the corresponding SQS queues for further processing. Delivery failures are handled through a Dead Letter Queue (DLQ), enabling messages to be retried and pushed back into the queues.

Step 3: Consuming queue messages at fixed-rate

The messages from the SQS queues are consumed by AWS Lambda functions that are configured per queue. These are lightweight functions that read messages in pre-configured batch sizes and call the Amazon Pinpoint SendMessages API. API call failures are handled through 1/ exponential backoff within the AWS SDK calls and 2/ DLQs set up as destination configs on the Lambda functions. The Amazon Pinpoint SendMessages API is a batch API that allows sending messages to up to 100 recipients at a time. As such, it is possible for requests to succeed partially; messages within a single API call that fail or are throttled should also be sent to the DLQ and retried.
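
A minimal Python sketch of such a consumer function follows, assuming the SQS message body carries the destination number and message text; the field names and application id are illustrative.

import json
import boto3

pinpoint = boto3.client("pinpoint")
APPLICATION_ID = "your-pinpoint-project-id"  # placeholder

def handler(event, context):
    for record in event["Records"]:
        payload = json.loads(record["body"])
        response = pinpoint.send_messages(
            ApplicationId=APPLICATION_ID,
            MessageRequest={
                "Addresses": {
                    payload["destination_number"]: {"ChannelType": "SMS"}
                },
                "MessageConfiguration": {
                    "SMSMessage": {
                        "Body": payload["message"],
                        "MessageType": "TRANSACTIONAL",
                    }
                },
            },
        )
        result = response["MessageResponse"]["Result"][payload["destination_number"]]
        # Raise on failed or throttled deliveries so the message is retried
        # and, eventually, routed to the DLQ by the destination config.
        if result["DeliveryStatus"] != "SUCCESSFUL":
            raise RuntimeError("Delivery failed: " + result["DeliveryStatus"])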

The Lambda functions are configured to run at a fixed reserved concurrency value. This ensures that a fixed rate of messages is fetched from the queue and processed at any point in time. For example, suppose a Lambda function that receives messages from an SQS queue and calls the Amazon Pinpoint APIs has a reserved concurrency of 10 and a batch size of 10 items. If the SQS queue rapidly receives 1,000 messages, the Lambda function scales up to 10 concurrent instances, each processing 10 messages from the queue. While it takes longer to process the entire queue, this results in a consistent rate of API invocations for Amazon Pinpoint.

Step 4: Monitoring and observability

Monitoring tools record performance statistics over time so that usage patterns can be identified. Timely detection of a problem (ideally before it affects end users) is the first step in observability. Detection should be proactive and multi-faceted, including alarms when performance thresholds are breached. For the architecture proposed in this blog, observability is enabled by using Amazon CloudWatch and AWS X-Ray.

Some of the key metrics that are monitored using Amazon CloudWatch are as follows:

  • Amazon Pinpoint
    • DirectSendMessagePermanentFailure
    • DirectSendMessageTemporaryFailure
    • DirectSendMessageThrottled
  • AWS Lambda
    • Invocations
    • Errors
    • Throttles
    • Duration
    • ConcurrentExecutions
  • Amazon SQS
    • ApproximateAgeOfOldestMessage
    • NumberOfMessagesSent
    • NumberOfMessagesReceived
  • Amazon SNS
    • NumberOfMessagesPublished
    • NumberOfNotificationsDelivered
    • NumberOfNotificationsFailed
    • NumberOfNotificationsRedrivenToDlq

AWS X-Ray helps developers analyze and debug production, distributed applications, such as those built using a microservices architecture. With X-Ray, you can understand how the application and its underlying services are performing, to identify and troubleshoot the root cause of performance issues and errors. X-Ray provides an end-to-end view of requests as they travel through your application, and shows a map of your application’s underlying components.

Notes:

  1. If you are using Amazon Pinpoint’s Campaign or Journey feature to deliver SMS to recipients in various countries, you do not need to implement this solution. Amazon Pinpoint will drain messages depending on the MessagesPerSecond configuration pre-defined in the campaign/journey settings.
  2. If you need to send transactional SMS to a small number of countries (one or two), you should instead work with AWS Support to define your SMS sending throughput for those countries to accommodate spiky SMS message traffic.

Conclusion

This post shows how customers can leverage Amazon Pinpoint along with Amazon SQS and AWS Lambda to build, regulate and prioritize SMS deliveries across multiple countries or business use-cases. This leads to predictable delays in message deliveries and provides customers with the ability to control the rate and priority of messages sent using Amazon Pinpoint.


About the Authors

Satyasovan Tripathy works as a Senior Specialist Solution Architect at AWS. He is situated in Bengaluru, India, and focuses on the AWS Digital User Engagement product portfolio. He enjoys reading and travelling outside of work.

Rajdeep Tarat is a Senior Solutions Architect at AWS. He lives in Bengaluru, India and helps customers architect and optimize applications on AWS. In his spare time, he enjoys music, programming, and reading.

Handy Tips #22: Deploying Zabbix in the AWS cloud platform

Post Syndicated from Arturs Lontons original https://blog.zabbix.com/handy-tips-22-deploying-zabbix-in-the-aws-cloud-platform/19343/

Deploy a production-ready Zabbix instance in the AWS cloud platform with just a few clicks.

With a major paradigm shift toward cloud IT infrastructures, many organizations opt to migrate their on-prem systems to the cloud. Zabbix provides official cloud images for the most popular cloud vendors, including the AWS cloud platform.

Deploy the complete Zabbix infrastructure in AWS:

  • Deploying a fully functional environment takes less than 5 minutes
  • Select between multiple geographical regions
  • Select the EC2 instance type best fit for your Zabbix workloads
  • Perfect for both Q/A and production environments

Check out the video to learn how to deploy Zabbix in AWS.

How to deploy a Zabbix instance in AWS:
 
  1. Open the Zabbix Cloud Images page and select the AWS Zabbix server image
  2. Click Continue to Subscribe and subscribe to use the image
  3. Read the terms and conditions and click Continue to Configuration
  4. Select the Region in which you wish to deploy a Zabbix instance
  5. Select the launch options and the EC2 instance Type
  6. Select a VPC, a subnet, a Security group, and a key pair
  7. Make sure that the selected security group allows traffic through ports 10051, 22 and 443
  8. Press Launch to launch the instance
  9. Check the instance address and connect to the instance 
  10. Copy the initial frontend username and password
  11. Sign in to the frontend with your credentials

Tips and best practices:
  • The initial frontend password can be obtained by connecting to the instance terminal
  • By default, the Zabbix frontend uses the UTC timezone
  • The frontend timezone can be changed by editing the php_value[date.timezone] variable in /etc/php-fpm.d/zabbix.conf and restarting the php-fpm process (see the example after this list)
  • The MySQL root password is stored in /root/.my.cnf configuration file
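
As an illustration, changing the frontend timezone could look like the following; the timezone value is an example, so substitute your own.

# In /etc/php-fpm.d/zabbix.conf:
php_value[date.timezone] = Europe/Riga

# Then restart the php-fpm process:
systemctl restart php-fpm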

The post Handy Tips #22: Deploying Zabbix in the AWS cloud platform appeared first on Zabbix Blog.

Dynamically personalize your in-product user experience using Amazon Pinpoint in-app messaging

Post Syndicated from Pavlos Ioannou Katidis original https://aws.amazon.com/blogs/messaging-and-targeting/dynamically-personalize-your-in-product-user-experience-using-amazon-pinpoint-in-app-messaging/

Many businesses today struggle to align out-of-product messaging, sent through channels such as email and SMS, with the in-product messaging shown when a user is within a mobile or web application. Customers will present one message to a user through a targeted email, but once the user visits the application they are presented with different messaging. This creates confusion for the user and reduces the chances of them performing a high-value action, such as purchasing a discounted product. Customers can get around this by hard coding certain messages into their application; however, this is time consuming for development teams and slower to implement, as it requires a new release of the mobile or web client.

Amazon Pinpoint in-app messaging allows customers to create, target, and display in-product messages to users dynamically, without the need to update client-side code after the initial implementation. This allows a non-technical persona, such as a marketer, to modify the application experience and target user messaging independently of a development team. It also allows the in-product messaging to share the same user targeting as the out-of-product messaging. This creates consistent user messaging and increases the chance that a user performs a high-value action.

This blog outlines how to create in-app endpoints, segments, and campaigns, and then how to fetch in-app messages, implement simple logic to control message prioritization and message caps, and listen for events in order to show the message at the desired moment.

Solution Overview

Assume you are a retailer and want to display a banner with a promotion to all customers with a recent purchase over $500 when they launch the application. To deliver the above experience using the in-app messaging channel, you will need to create a dynamic customer segment where User.UserAttribute.LastPurchaseValue > $500, design an in-app message template with a call to action to claim the promotion, and create an in-app campaign. The in-app campaign will be triggered based on the customer event app_launch, and only for customers who belong to the dynamic segment created above. To render the message and send in-app message engagement events back to Amazon Pinpoint, you will need to go through a one-time setup that is explained in a later section of this blog. You can monitor your in-app campaign performance across different metrics using the Amazon Pinpoint campaign analytics dashboard.

In-app channel implementation can differ depending on the use case and requirements. The creation of customer segments, message templates, and campaigns can be done either via the Amazon Pinpoint console or programmatically using the Amazon Pinpoint APIs. The in-app message retrieval, rendering, and recording of engagement events can either be built and managed by you or handled by AWS Amplify.

In the following sections you will be introduced to the seven components of the in-app channel and how they operate together:

  • Step 1: Creating in-app endpoints & segment
  • Step 2: Creating an in-app message template
  • Step 3: Creating an in-app campaign
  • Step 4: Querying available in-app messages for an Amazon Pinpoint customer
  • Step 5: Rendering in-app messages
  • Step 6: Recording in-app events
  • Step 7: In-app message display logic using SessionCap, DailyCap, TotalCap

Prerequisites

For this blog post, you should have the following prerequisites:

Step 1: Creating an Amazon Pinpoint customer segment

In Amazon Pinpoint, users can have one or more endpoints. An endpoint describes a unique address, such as an email or mobile number. Similar to other Amazon Pinpoint channels, you need to create or import in-app endpoints with Channel = IN_APP. To retrieve in-app messages for a user, you have to use their IN_APP endpoint id. Note that the Address is not a required field for in-app and can be left blank.

  1. Copy the text below and save it as CSV in your computer
    Id,ChannelType,Address,User.UserId,OptOut
    111,IN_APP,123,userid1,NONE
  2. Navigate to the Amazon Pinpoint console
  3. Select the Amazon Pinpoint project that you want to set up the in-app channel
  4. Navigate to the Segments’ section
  5. Choose Import a segment
  6. Select Upload files from your computer as Import method
  7. Select Choose files and find the CSV file you created in step 1
  8. Choose Create segment
  9. Navigate to the AWS CloudShell console and wait till the terminal loads
  10. Replace <Application id> with your Amazon Pinpoint application id in the following command: aws pinpoint get-endpoint --application-id <Application id> --endpoint-id 111
  11. Execute the command in step 10 by pasting it in the AWS CloudShell terminal and press Enter. You should be able to see a response similar to the one below


Step 2: Creating an in-app Message Template

In-app message templates contain a variety of fields with some of them offering the option to choose from pre-defined values such as Header alignment and others in a form of free text such as Message. The end result is a banner that includes a Header, Message, Image, Button(s) and Custom data with all of them being fully customizable. While building an in-app template, you can preview the banner across Phone, Tablet and Browser. This preview is for reference purposes only as the rendering can vary according to the end user’s device as well as your preference on how to render it.

Note: The message template for in-app currently doesn’t support message helpers for personalization but it is a feature the Amazon Pinpoint product team is exploring.

  1. Navigate to Message templates
  2. Select Create template and choose In-app messaging as Channel
  3. Type my_first_in-app_message_template as Template name
  4. Complete the template sections as per your message requirements
  5. Select Create

Step 3: Creating an in-app Campaign

A campaign is a messaging initiative that engages a specific audience segment. A campaign sends tailored messages according to a schedule or customer event that you define.

  1. Navigate to your Amazon Pinpoint project and select Campaigns and Create a campaign
  2. Type my_first_in-app_campaign as Campaign name
  3. Select Standard campaign as Campaign type and In-app messaging as Channel
  4. Select Very important for Set prioritization. This configuration is specific to the in-app channel and it helps you identify the most important in-app message for an endpoint
  5. Select Next and choose the segment in-app-segment from the dropdown. This should be the imported segment that you created in Step 1: Creating an Amazon Pinpoint customer segment. The Segment estimate should show 1 endpoint
  6. Select Next and choose the in-app message template with the name my_first_in-app_message_template, then select Next
  7. An in-app campaign needs to have a Trigger event, which determines when the in-app message will be displayed. You can add event Attributes and/or Metrics to make it more specific. To learn how to record events with Amazon Pinpoint, visit Reporting events in your application. If you currently do not record any events in your Amazon Pinpoint project, type test_event as the Trigger event
  8. Select a Start and End date and time for Campaign dates. Note that in-app campaigns need to start at least 15 minutes after the time of publishing
  9. In the Edit campaign settings section you will find the fields that specify the Maximum number of session messages viewed per endpoint (SessionCap), Maximum number of daily messages viewed per endpoint (DailyCap), and Maximum number of messages viewed per endpoint (TotalCap). These values indicate how many times the in-app message can be displayed to the customer for that in-app campaign within a session, within a day, and in total, respectively. In all three campaign setting fields, enter the number 10 and select Override project-level setting where applicable. Set prioritization, Trigger events, and Caps are part of the in-app message payload that you receive when calling Amazon Pinpoint’s In-app messages REST API operation. You will use this information to decide whether or not to render that in-app message.
  10. Select Next, scroll down, and select Launch campaign

Step 4: Querying available in-app messages for an Amazon Pinpoint customer

To retrieve in-app messages for an Amazon Pinpoint customer, you will need to have their IN_APP endpoint id and either use the In-app messages REST API operation, one of the AWS SDKs that support Amazon Pinpoint, AWS Command Line Interface or AWS Amplify.

Note: AWS Amplify manages on your behalf the in-app messages request, rendering and tracking, thus if you are using AWS Amplify for Amazon Pinpoint in-app channel the steps below are not required.

In the request body, you need to specify the IN_APP endpoint id. If there are any available in-app messages for that endpoint id, the response will contain a JSON object with the top ten active in-app messages based on their priority (ten in-app messages per response is a hard limit). Loop through the in-app messages and identify the one that meets the criteria based on the Trigger event and Prioritization.

  1. Navigate to the AWS CloudShell console
  2. Replace <Application id> with your Amazon Pinpoint application id in the following command: aws pinpoint get-in-app-messages --application-id <Application id> --endpoint-id 111
  3. Execute the command in step 2 by pasting it in the AWS CloudShell terminal and press Enter. You should be able to see a response similar to the one below

The response should contain only one in-app campaign. You can see all the in-app message template data and campaign configuration are present in the response.

Note: Campaigns that have passed their end date, or have reached their daily or total cap limit, won’t show in the response. In case the response contains more than one in-app message with the same priority and none of them has exceeded its caps, you can use the in-app campaign start date to evaluate which one to display; a hedged sketch of this selection logic follows.
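
The sketch below is a minimal Python example of selecting the message to render; the response field names (InAppMessagesResponse, InAppMessageCampaigns, Priority, Schedule, and so on) reflect the GetInAppMessages response as described above, but treat them as assumptions and verify against the actual API response.

import boto3

pinpoint = boto3.client("pinpoint")

def best_message_for(application_id, endpoint_id, trigger_event):
    response = pinpoint.get_in_app_messages(
        ApplicationId=application_id,
        EndpointId=endpoint_id,
    )
    campaigns = response["InAppMessagesResponse"]["InAppMessageCampaigns"]
    # Keep only campaigns whose trigger event matches the event that just fired.
    candidates = [
        c for c in campaigns
        if trigger_event
        in c["Schedule"]["EventFilter"]["Dimensions"]["EventType"]["Values"]
    ]
    # A lower Priority value means more important ("Very important" = 1).
    # Per the note above, break ties using the campaign start date.
    candidates.sort(key=lambda c: c["Priority"])
    return candidates[0] if candidates else None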

It is recommended to retrieve the in-app messages once per session and store them locally. That way, on every event the customer triggers in your mobile/web app, you check against the locally stored in-app messages instead of performing additional calls to Amazon Pinpoint. This approach decreases the in-app channel cost, as you pay per request.

You can perform the operation of retrieving in-app messages for an Amazon Pinpoint customer either client side or server side. The server side can be implemented using the architecture illustrated below, which utilizes Amazon API Gateway and AWS Lambda, creating a development-framework-agnostic approach. Furthermore, Amazon API Gateway offers a great variety of authentication and authorization mechanisms.

The server-side architecture depicted below doesn’t cover the use case for offline customers. If this is a requirement, then it is recommended to store in-app messages on the device and fetch them locally when the device doesn’t have internet connectivity. Once the device is connected back to the internet, you can retrospectively send any in-app related events.

Note: If you are using AWS Amplify, it will retry publishing customer offline events once the device gets back online.

Step 5: Rendering in-app messages

Render the in-app messages yourself based on the in-app message API response or use AWS Amplify which will render it on your behalf. AWS Amplify allows you to provide your own In-App Messaging UI component to override the default Amplify provided UI.

Step 6: Recording in-app events

Measuring in-app campaigns’ performance is based on four metrics:

  • Message displayed: a message has been displayed to an end user
  • Message dismissed: a user has dismissed a message
  • Message clicked: a user has clicked through a message
  • Any event type: Any event that a user can trigger on the mobile or web app

Fire the above events from either the client or server side as Amazon Pinpoint custom events. Amazon Pinpoint custom events can be recorded using the put_events REST API operation or the AWS SDKs that support Amazon Pinpoint.

Note: If you are using AWS Amplify, the in-app events will be recorded automatically

To have these events recorded under Amazon Pinpoint Campaign deliver metrics dashboard, you have to use the following EventType names:

  • Message displayed: _inapp.message_displayed
  • Message dismissed: _inapp.message_dismissed
  • Message clicked: _inapp.message_clicked
  • Any event type: No specific name is required

In addition to the EventType, a few other fields are required in order to attribute these events to the correct in-app campaign. Within the event attributes object of the request payload, the fields campaign_id and delivery_type must be provided. The campaign_id should match the in-app campaign’s id, while the delivery_type should be IN_APP_MESSAGE. Additionally, the treatment_id is necessary if you are running an A/B test. A hedged sketch of such a call follows the note below.

Note: If you do not use the above event names and attributes, you won’t see any events under Campaign delivery metrics and Campaign engagement rates on the Amazon Pinpoint console.
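
For illustration, recording a “message displayed” event with Boto3 could look like the following; the application id, endpoint id, and campaign id are placeholders.

import datetime
import boto3

pinpoint = boto3.client("pinpoint")

pinpoint.put_events(
    ApplicationId="your-pinpoint-project-id",  # placeholder
    EventsRequest={
        "BatchItem": {
            "111": {  # the IN_APP endpoint id used earlier in this post
                "Endpoint": {},
                "Events": {
                    "displayed-1": {
                        "EventType": "_inapp.message_displayed",
                        "Timestamp": datetime.datetime.utcnow().isoformat(),
                        "Attributes": {
                            "campaign_id": "your-in-app-campaign-id",  # placeholder
                            "delivery_type": "IN_APP_MESSAGE",
                        },
                    }
                },
            }
        }
    },
)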

Step 7: In-app message display logic using SessionCap, DailyCap and TotalCap

Message display logic refers to the logic that stores and assesses the number of times a user has seen or interacted with the in-app message. Amazon Pinpoint calculates the DailyCap and TotalCap as long as you record the _inapp.message_displayed event, or automatically if you use AWS Amplify. For the SessionCap, you need to count the _inapp.message_displayed events locally in your mobile/web application, unless you are using AWS Amplify.

Note: When retrieving the in-app messages from Amazon Pinpoint, the payload contains the remaining number of times you can display each in-app message, both daily and in total.

Conclusion

This post walks you through how to configure Amazon Pinpoint to send in-app messages to your customers while they browse your mobile/web application. Using this Amazon Pinpoint channel, you can now:

  • Create in-app segments, message templates and campaigns
  • Retrieve in-app messages per user
  • Render in-app messages
  • Record customer engagement data with the in-app message

Related links

To learn more about the technologies or features used to create this solution, explore the following pages: