Tag Archives: AWS

Send text messages in Amazon Connect by integrating Amazon Pinpoint

Post Syndicated from Brent Meyer original https://aws.amazon.com/blogs/messaging-and-targeting/send-text-messages-in-amazon-connect-by-integrating-amazon-pinpoint/

Because Amazon Pinpoint is a member of the AWS family, you can integrate it seamlessly with other AWS services. In the past, this blog has looked at the process of integrating Amazon Pinpoint with Amazon Comprehend and Amazon Redshift.

Earlier this week, Michael Woodward, a Solution Architect here at AWS, published a blog post about integrating Amazon Pinpoint with Amazon Connect, our cloud-based contact center service.

Integrating Amazon Pinpoint into Amazon Connect lets you expand the capabilities of your call center systems in several interesting ways. For example, you can use Amazon Pinpoint to send more information after a call ends, or to send a link to an after-call survey.

To learn more about this solution, see Michael’s post on the AWS Contact Center blog.

New Regions, New Features, and a New Web Site

Post Syndicated from Brent Meyer original https://aws.amazon.com/blogs/messaging-and-targeting/new-regions-new-features-and-a-new-web-site/

It’s a busy time here on the Digital User Engagement Team at AWS!

Last week, we made Amazon Pinpoint available in the Asia Pacific (Mumbai) and Asia Pacific (Sydney) AWS Regions. This is great news for new Pinpoint customers in these areas of the globe who were previously concerned with issues related to latency and data residency. Existing Amazon Pinpoint customers can also use these new Regions to increase availability and create geographical redundancy.

On Tuesday of this week, we also launched two exciting improvements to the Amazon Pinpoint console. The first improvement is a tool that you can use to import customer segments in just a few clicks. Previously, if you wanted to import customer data into Pinpoint, you had to save the data in a CSV or JSON file, upload it to an S3 bucket, create a segment in Pinpoint, and enter the full path to the S3 bucket. Now, you can drag and drop files right into the segment importer. To learn more, see the Pinpoint User Guide.
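
If you prefer to script the original S3-based import flow instead of using the console, the process maps to a single call to the Amazon Pinpoint CreateImportJob API. Here’s a minimal sketch using the AWS SDK for Python (Boto3); the project ID, bucket path, and IAM role below are placeholders, not real values.

```python
import boto3

pinpoint = boto3.client("pinpoint", region_name="us-east-1")

# Placeholder project ID, S3 location, and IAM role -- replace with your own values.
response = pinpoint.create_import_job(
    ApplicationId="YOUR_PINPOINT_PROJECT_ID",
    ImportJobRequest={
        "Format": "CSV",                                  # or "JSON"
        "S3Url": "s3://my-segment-bucket/customers.csv",  # file uploaded beforehand
        "RoleArn": "arn:aws:iam::123456789012:role/PinpointSegmentImport",
        "DefineSegment": True,
        "SegmentName": "imported-customers",
    },
)
print(response["ImportJobResponse"]["Id"])
```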

The other new feature that we released this week is an improved email editor. Our previous email editor only allowed you to include a limited set of HTML tags in your emails. With our new editor, however, you can include any HTML tags that you want. The new editor also includes a helpful side-by-side view that renders your message in real-time, as shown in the following image.

Users who don’t want to work with HTML code can also use the Design view to create and modify emails in an intuitive, WYSIWYG interface. For more information, see the Pinpoint User Guide.

Finally, we launched a new website for Amazon Pinpoint at https://aws.amazon.com/pinpoint. On our new site, you can learn more about the capabilities of Amazon Pinpoint. You’ll find in-depth information about all of the features, channels, and use cases that Amazon Pinpoint supports.

Every day, we’re amazed by the things that our customers do with Amazon Pinpoint. We hope these changes help you do even more incredible things!

Learn About Amazon Pinpoint at Upcoming Events Around the World

Post Syndicated from Hannah Nilsson original https://aws.amazon.com/blogs/messaging-and-targeting/learn-about-amazon-pinpoint-at-upcoming-events/

Connect with the AWS Customer Engagement team at events around the world to learn how our technology can help you better engage with your customers. Get demos on recent feature releases, discover how you can use Pinpoint for your specific use case, and attend informative sessions to hear how companies around the world are using AWS Customer Engagement solutions to deliver better experiences for their customers. Plus, read below to find out how Amazon Pinpoint and Amazon SES both enable you to create innovative email experiences with the recent AMP Project launch.

AWS Customer Engagement in the news: Amazon SES and Amazon Pinpoint help build the future of email with AMP

The AMP Project’s mission is to enable more user-first experiences on the web, including web-based technology like email. On March 26, the AMP Project announced that they are bringing AMP technology to email in order to give users an interactive, real-time experience that also keeps inboxes safe.

Amazon Pinpoint and Amazon SES both provide out-of-the-box support for AMP for email with no additional configuration. This allows you to easily create experiences for your customers such as submitting RSVPs to events, filling out questionnaires, browsing catalogs, or responding to comments right within the email.

Read the AMP announcement for more information about these new capabilities. To learn how to use the AMP format with Amazon SES, visit the SES Developer Guide. To learn how to use the AMP format with Amazon Pinpoint, read this Amazon Pinpoint API Reference. View these instructions for more information on how to add AMP to an existing email.
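
If you’re curious what an AMP email looks like on the wire, the sketch below builds a raw MIME message with an additional text/x-amp-html part and sends it through Amazon SES. It’s a minimal illustration rather than production code: the addresses and AMP markup are placeholders, and the sending identity must already be verified in SES.

```python
import boto3
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

# Placeholder addresses and markup -- adjust for your own verified SES identities.
msg = MIMEMultipart("alternative")
msg["Subject"] = "Interactive AMP email"
msg["From"] = "sender@example.com"
msg["To"] = "recipient@example.com"

msg.attach(MIMEText("Plain-text fallback", "plain"))
# The AMP version is simply another MIME part with the x-amp-html subtype.
amp_html = """<!doctype html>
<html amp4email>
<head><meta charset="utf-8">
<script async src="https://cdn.ampproject.org/v0.js"></script>
<style amp4email-boilerplate>body{visibility:hidden}</style></head>
<body>Hello from AMP for Email!</body>
</html>"""
msg.attach(MIMEText(amp_html, "x-amp-html"))
msg.attach(MIMEText("<p>HTML fallback</p>", "html"))

ses = boto3.client("ses", region_name="us-east-1")
ses.send_raw_email(
    Source=msg["From"],
    Destinations=[msg["To"]],
    RawMessage={"Data": msg.as_string()},
)
```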

Amazon Pinpoint has been busy building. You can now:

  • Learn how to set up an email preference management web page that enables customers to manage their email subscription preferences. Read now.
  • Learn how to set up a web form that collects information from new customers, and then sends them an SMS message to confirm that they want to receive content from you. Read now.
  • Use Amazon Pinpoint in the US West (Oregon), EU (Frankfurt), and EU (Ireland) regions in addition to the US East (Virginia) region. Learn more.
  • Deliver voice messages to your users with Amazon Pinpoint Voice. Learn more.
  • Set up campaigns that auto-send messages to your customers when they take specific actions. Learn more.
  • Detect and understand issues impacting your email deliverability with the Amazon Pinpoint Deliverability Dashboard. Learn more. 

Meet an Amazon Pinpoint expert at these upcoming events. We will teach you how to take advantage of recent updates so that you can create better engagement experiences for your customers. Plus, we can give you an inside look on what’s on our roadmap, and we’ll be giving out custom Pinpoint swag!

AWS Summit, Singapore 

April 10, 2019
Singapore Expo Convention & Exhibition Centre
Amazon Pinpoint will host an informative session about our Customer Engagement solutions at the AWS Singapore Summit. In this session, we will describe how AWS enables companies to better understand and engage their customers with personalized, timely, and relevant communications on multiple channels. You will also learn how Disney Streaming Services is using Amazon Pinpoint to engage their users.
Register for the Summit here.

“Mobile Days” at the AWS San Francisco Loft   

April 24, 2019 
AWS San Francisco Loft
Join us for an engaging day of discussion and education. Amazon Pinpoint experts will host the following sessions:

  • 2:30pm – 3:30pm: How Do You Measure Customer Success? Featuring Amazon Pinpoint. 
  • 3:30pm – 4:30pm: Using ML to Enhance Your Marketing. Featuring Amazon Pinpoint and Amazon Personalize. 

Space for this event is limited, so please reserve your seat here.

AWS Summit, Sydney

May 1-2, 2019
International Convention Centre (ICC), Darling Harbour, Sydney
Don’t miss the customer engagement session on April 30th. This session, part of Amazon’s Innovation Day event, features a keynote address by Neil Lindsay, Vice President of Global Marketing at Amazon. The session explores how AWS technologies power organizations that deliver customer-centric innovations. Learn about how Australia’s largest brands and digital agencies use AWS technologies to engage customers, build new business models, and transform customer experiences.
Register for the Summit here

AWS Summit, Mumbai

May 15, 2019
Bombay Exhibition Center, Mumbai
The Amazon Pinpoint team will be at the “Ask an Expert” booth. Stop by to meet the team, ask questions, and pick up Amazon Pinpoint swag!
Register for the summit here

This Is My Architecture: Mobile Cryptocurrency Mining

Post Syndicated from Annik Stahl original https://aws.amazon.com/blogs/architecture/this-is-my-architecture-mobile-cryptocurrency-mining/

In North America, approximately 95% of adults over the age of 25 have a bank account. In the developing world, that number is only about 52%. Cryptocurrencies can provide a platform for millions of unbanked people in the world to achieve financial freedom on a more level financial playing field.

Electroneum, a cryptocurrency company located in England, built its cryptocurrency mobile back end on AWS and is using the power of blockchain to unlock the global digital economy for millions of people in the developing world.

Electroneum’s cryptocurrency mobile app allows Electroneum customers in developing countries to transfer ETN, Electroneum’s cryptocurrency, and pay for goods using their smartphones. Listen in to the discussion between AWS Solutions Architect Toby Knight and Electroneum CTO Barry Last as they explain how the company built its solution. Electroneum’s app is a web application that uses a feedback loop between its web servers and AWS WAF (a web application firewall) to automatically block malicious actors. The system then uses Athena, with a gamified approach, to provide an additional layer of blocking to prevent DDoS attacks. Finally, Electroneum built a serverless, instant payments system using Amazon API Gateway, AWS Lambda, and Amazon DynamoDB to help its customers avoid the usual delays in confirming cryptocurrency transactions.
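
Electroneum’s exact implementation isn’t shown in the video, but the instant-payments piece follows a familiar API Gateway, Lambda, and DynamoDB shape. As a rough sketch under that assumption (the table name and request fields below are hypothetical), the Lambda handler might look something like this:

```python
import json
import uuid
import boto3

# Hypothetical table name for illustration; Electroneum's actual schema is not public.
table = boto3.resource("dynamodb").Table("instant-payments")

def handler(event, context):
    """API Gateway proxy handler that records a pending payment in DynamoDB."""
    body = json.loads(event["body"])
    payment = {
        "payment_id": str(uuid.uuid4()),
        "sender": body["sender"],
        "recipient": body["recipient"],
        "amount_etn": str(body["amount"]),  # stored as a string to avoid float precision issues
        "status": "PENDING_CONFIRMATION",
    }
    table.put_item(Item=payment)
    return {"statusCode": 201, "body": json.dumps({"payment_id": payment["payment_id"]})}
```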

 

New Whitepaper: Active Directory Domain Services on AWS

Post Syndicated from Vinod Madabushi original https://aws.amazon.com/blogs/architecture/new-whitepaper-active-directory-domain-services-on-aws/

The cloud is now at the center of most enterprise IT strategies. As such, a well-planned move to the cloud can result in immediate business payoff. To achieve such success, it’s important that you adopt Microsoft Active Directory (AD), the foundation of many large enterprise Windows and .NET applications, in a secure, scalable, and highly available manner within the AWS Cloud.

AWS offers flexible options for running AD, so as a customer it’s essential to select an architecture well-suited to support your applications. AWS offers a fully managed option called AWS Managed Active Directory, which enables your directory-aware workloads to use Managed Active Directory in AWS. You can also run Active Directory on Amazon Elastic Compute Cloud (Amazon EC2) and manage both the EC2 Instances and Active Directory, which provides the flexibility needed to extend an existing Active Directory domain to the AWS infrastructure.

In this regard, we are very excited to release the Active Directory Domain Services on AWS whitepaper. This Active Directory whitepaper describes best practices for running Active Directory on AWS, including different architectural approaches for running AWS Managed AD and Active Directory on EC2 instances. In addition, this document discusses the design considerations, security, network connectivity, and multi-region deployment of Active Directory for both scenarios.

Read the whitepaper: Active Directory on AWS.

About the author

Vinod Madabushi is an Enterprise Solutions Architect and subject matter expert in Microsoft technologies, including Active Directory. He works with customers on building highly available, scalable, and resilient applications on the AWS Cloud. He’s passionate about solving technology challenges and helping customers with their cloud journey.

 

The latest news, content, and helpful tips for AWS Digital User Engagement

Post Syndicated from Hannah Nilsson original https://aws.amazon.com/blogs/messaging-and-targeting/the-latest-news-content-and-helpful-tips-for-aws-digital-user-engagement/

The AWS Digital User Engagement team hit the ground running this year. From speaking in front of crowds of digital marketers and developers, to developing new tutorials to help make it easier to get started building solutions to common use cases, here’s the latest on what we’ve been up to and our latest updates to Amazon Pinpoint.

How To Achieve Customer-Obsessed Digital User Engagement

Simon Poile, GM of AWS Digital User Engagement, had the pleasure of speaking to hundreds of digital marketers at the Digital Summit conference in Seattle, WA on February 26th. Digital Summit attendees are the movers and shakers influencing the growth and success of their company’s digital marketing — and the future landscape of the digital economy. Simon provided insights on how marketers can embody the Amazon culture of customer obsession to gain a deeper understanding of their customers, strengthen trust between brands and their users, and create a personalized digital engagement experience that is timely, contextually relevant, and reaches the right user at the right time through the right medium. He discussed how marketers can embrace technology such as machine learning and IoT to accomplish transformative engagement, and provided insights about how brands around the world are using AWS Digital User Engagement solutions to transform their engagement efforts.

View The Presentation Deck.

Learn to implement two-way SMS messaging for a simple approach that results in higher levels of customer engagement

In a recent article posted on A Cloud Guru, Dennis Hills explains what two-way SMS is and how you can quickly and easily start sending personalized, timely, and relevant text messages to your customers with Amazon Pinpoint. He then shows how you can implement a practical solution for setting up an SMS long code so you can start sending and receiving text messages.

Read Now.

New Amazon Pinpoint Getting Started Guide: How to Create an SMS Registration System

On Wednesday the 27th, we launched the first Amazon Pinpoint Getting Started Guide. This guide, located in the Tutorials section of the Pinpoint Developer Guide, shows you the entire process of creating a customer registration solution for SMS messaging. A common way to capture customers’ mobile phone numbers is to use a web-based form. After you verify the customer’s phone number and confirm the customer’s subscription, you can start sending promotional, transactional, and informational SMS messages to that customer.

In the tutorial, you’ll learn how to set up two-way SMS messaging in Pinpoint, create a web form to capture customers’ contact information, send registration information from your own website to a Lambda function through API Gateway, implement a double opt-in strategy, and more.

The tutorial is intended for users of all skill levels. While there is some coding involved, all of the necessary code is included. You can use this tutorial to create a complete solution, or as a starting point for your own use case.
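
To give you a feel for one of the steps, here’s a minimal sketch of sending the double opt-in confirmation message through the Amazon Pinpoint API with the AWS SDK for Python (Boto3). The project ID and long code are placeholders, and the tutorial’s own code handles the full web form and API Gateway wiring.

```python
import boto3

pinpoint = boto3.client("pinpoint", region_name="us-east-1")

APP_ID = "YOUR_PINPOINT_PROJECT_ID"   # placeholder project ID
ORIGINATION_NUMBER = "+12065550100"   # placeholder dedicated long code

def send_verification_sms(destination_number: str) -> None:
    """Send the double opt-in confirmation SMS to a newly registered phone number."""
    pinpoint.send_messages(
        ApplicationId=APP_ID,
        MessageRequest={
            "Addresses": {destination_number: {"ChannelType": "SMS"}},
            "MessageConfiguration": {
                "SMSMessage": {
                    "Body": "Reply YES to confirm your subscription.",
                    "MessageType": "TRANSACTIONAL",
                    "OriginationNumber": ORIGINATION_NUMBER,
                }
            },
        },
    )
```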

Get started now.

Recent Amazon Pinpoint Launches

Amazon Pinpoint is now available in the US West (Oregon), EU (Frankfurt), and EU (Ireland) regions in addition to the US East (Virginia) region. You can now use Amazon Pinpoint to power your digital user engagement without having to transfer your customer data across regions.

This regional expansion is particularly useful for organizations in certain regions of the EU, where data residency considerations previously made it difficult for many customers to use Amazon Pinpoint. It also creates a global infrastructure that helps to improve availability and redundancy while reducing latency.

Learn more.

ICYMI, you can now:

Deliver voice messages to your users with Amazon Pinpoint Voice.

Learn more.

Set up campaigns that auto-send messages to your customers when they take specific actions.

Learn more.

Detect and understand issues impacting your email deliverability with the Amazon Pinpoint Deliverability Dashboard.

Learn more.

Customer Spotlight

How Hulu uses Amazon Pinpoint for their real-time notification platform.

At Hulu, notifying their viewers when their favorite teams are playing helps them drive growth and improve viewer engagement. However, building this feature was a complex process. Managing their live TV metadata, while generating audiences in real time in high-scalability scenarios, posed unique challenges for the engineering team. In this video, Hulu discusses the challenges of building their real-time notification platform, how Amazon Pinpoint helped them meet their goals, and how they architected their solution for global scale and deliverability.
Watch to learn how they built their solution.

View the presentation deck.

Meet us at Shoptalk, March 3-6

The AWS Digital User Engagement team will be at the AWS Booth #2617 at Shoptalk, March 3-6 at the Venetian in Las Vegas. Stop by to view our demo of the integration of Amazon Pinpoint and Amazon Personalize, which will show how a customer’s interaction with products in a retail setting can be tracked with smart-devices connected to AWS, resulting in real-time inferences and predictions on a customer’s affinity for products they haven’t yet interacted with. This information can be used to send push notifications with Amazon Pinpoint to a customer’s mobile device, making them aware of the products and possible deals that Amazon Personalize has predicted they will appreciate.

Introducing AWS Solutions: Expert architectures on demand

Post Syndicated from AWS Admin original https://aws.amazon.com/blogs/architecture/introducing-aws-solutions-expert-architectures-on-demand/

AWS Solutions Architects are on the front line of helping customers succeed using our technologies. Our team members leverage their deep knowledge of AWS technologies to build custom solutions that solve specific problems for clients. But many customers want to solve common technical problems that don’t require custom solutions, or they want a general solution they can use as a reference to build their own custom solution. For these customers, we offer AWS Solutions: vetted, technical reference implementations built by AWS Solutions Architects and AWS Partner Network partners. AWS Solutions are designed to help customers solve common business and technical problems, or they can be customized for specific use cases.

AWS Solutions are built to be operationally effective, performant, reliable, secure, and cost-effective; and incorporate architectural frameworks such as the Well-Architected Framework. Every AWS Solution comes with a detailed architecture diagram, a deployment guide, and instructions for both manual and automated deployment.

Here are some Solutions we are particularly excited about.

Media2Cloud

We released the Media2Cloud solution in January 2019. This solution helps customers migrate their existing video archives to the cloud. Media2Cloud sets up a serverless end-to-end workflow to ingest your videos and establish metadata, proxy videos, and image thumbnails.

Migrating existing video archives to the cloud can be a challenging and slow process. To address this, the Media2Cloud solution builds the following architecture.

Media2Cloud architecture

The solution leverages the Media Analysis Solution to analyze and extract valuable metadata from your video archives using Amazon Rekognition, Amazon Transcribe, and Amazon Comprehend.

The solution also includes a simple web interface that helps make it easier to get started ingesting your videos to the AWS Cloud. This solution is set up to integrate with AWS Partner Network partners to help customers migrate their video archives to the cloud.

AWS Instance Scheduler

In October 2018, we updated the AWS Instance Scheduler, a solution that enables customers to easily configure custom start and stop schedules for their Amazon EC2 and Amazon RDS instances.

When you deploy the solution’s template, the solution builds the following architecture.

AWS Instance Scheduler

 

For customers who leave all of their instances running at full utilization, this solution can result in up to 70% cost savings for those instances that are only necessary during regular business hours.

The Instance Scheduler solution gives you the flexibility to automatically manage multiple schedules as necessary, configure multiple start and stop schedules by either deploying multiple Instance Schedulers or modifying individual resource tags, and review Instance Scheduler metrics to better assess your instance capacity and usage and calculate your cost savings.
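
The solution itself is configured through CloudFormation and DynamoDB, but the core idea (stop or start instances that carry a schedule tag) can be sketched in a few lines of Boto3. The tag key and schedule name below are illustrative, not the solution’s actual configuration.

```python
import boto3

ec2 = boto3.client("ec2")

def set_instances_state(schedule_name: str, action: str) -> None:
    """Stop or start EC2 instances tagged with a given schedule (illustrative sketch only)."""
    reservations = ec2.describe_instances(
        Filters=[{"Name": "tag:Schedule", "Values": [schedule_name]}]
    )["Reservations"]
    instance_ids = [i["InstanceId"] for r in reservations for i in r["Instances"]]
    if not instance_ids:
        return
    if action == "stop":
        ec2.stop_instances(InstanceIds=instance_ids)
    elif action == "start":
        ec2.start_instances(InstanceIds=instance_ids)

# Example: a scheduled Lambda function could call this at the close of business.
# set_instances_state("office-hours", "stop")
```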

AWS Connected Vehicle Solution

In January 2018, we updated the AWS Connected Vehicle Solution, a solution that provides secure vehicle connectivity to the AWS Cloud. This solution includes capabilities for local computing within vehicles, sophisticated event rules, and data processing and storage. The solution also allows you to implement a core framework for connected vehicle services that allows you to focus on developing new functionality rather than managing infrastructure.

When you deploy the solution’s template, the solution builds the following architecture.

Connected Vehicle solution

You can build upon this framework to address a variety of use cases such as voice interaction, navigation and other location-based services, remote vehicle diagnostics and health monitoring, predictive analytics and required maintenance alerts, media streaming services, vehicle safety and security services, head unit applications, and mobile applications.

These are just some of our current offerings. Other notable Solutions include AWS WAF Security Automations, Machine Learning for Telecommunication, and AWS Landing Zone. In the coming months, we plan to continue expanding our portfolio of AWS Solutions to address common business and technical problems that our customers face. Visit our homepage to keep up to date with the latest AWS Solutions.

Two-Way SMS with Amazon Pinpoint

Post Syndicated from Hannah Nilsson original https://aws.amazon.com/blogs/messaging-and-targeting/two-way-sms-with-amazon-pinpoint/

Learn to implement two-way SMS messaging for a simple approach that results in higher levels of customer engagement

SMS, or text messaging, is the simplest way to reach your users outside of normal customer-facing web or mobile applications. Compared to other communication channels, such as email and push notifications, text messaging results in higher engagement.

SMS messaging is extremely convenient — users don’t have to authenticate, download your app, or go to your website. They simply receive your message on their device. When it comes to customer acquisition and retention, it doesn’t get any easier than this.

In this article posted on A Cloud Guru, Dennis Hills explains what two-way SMS is and how you can quickly and easily start sending personalized, timely, and relevant text messages to your customers with Amazon Pinpoint. He then shows how you can implement a practical solution for setting up an SMS long code so you can start sending and receiving text messages.

Read the article now, and be sure to let us know in the comments what types of advanced topics for SMS messaging you’d like to see us or Dennis write about in the future.

AWS Ops Automator v2 features vertical scaling (Preview)

Post Syndicated from AWS Admin original https://aws.amazon.com/blogs/architecture/aws-ops-automator-v2-features-vertical-scaling-preview/

The new version of the AWS Ops Automator, a solution that enables you to automatically manage your AWS resources, features vertical scaling for Amazon EC2 instances. With vertical scaling, the solution automatically adjusts capacity to maintain steady, predictable performance at the lowest possible cost. The solution can resize your instances by restarting your existing instance with a new size. Or, the solution can resize your instances by replacing your existing instance with a new, resized instance.

With this update, the AWS Ops Automator can help make setting up vertical scaling easier. All you have to do is define the time-based or event-based trigger that determines when the solution scales your instances, and choose whether you want to change the size of your existing instances or replace your instances with new, resized instances. The time-based or event-based trigger invokes an AWS Lambda function that scales your instances.

Ops Automator Vertical Scaling

Restarting with a new size

When you choose to resize your instances by restarting the instance with a new size, the solution increases or decreases the size of your existing instances in response to changes in demand or at a specified point in time. The solution automatically changes the instance size to the next defined size up or down.
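
Under the hood, restarting with a new size boils down to the stop, modify, start sequence that EC2 requires. The Ops Automator manages this for you; the Boto3 sketch below is only meant to illustrate the mechanics, with a placeholder instance ID and target size.

```python
import boto3

ec2 = boto3.client("ec2")

def resize_instance(instance_id: str, new_type: str) -> None:
    """Stop an instance, change its instance type, and start it again (simplified sketch)."""
    ec2.stop_instances(InstanceIds=[instance_id])
    ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])

    # The instance type can only be changed while the instance is stopped.
    ec2.modify_instance_attribute(
        InstanceId=instance_id, InstanceType={"Value": new_type}
    )

    ec2.start_instances(InstanceIds=[instance_id])
    ec2.get_waiter("instance_running").wait(InstanceIds=[instance_id])

# resize_instance("i-0123456789abcdef0", "m5.large")  # placeholder values
```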

Replacing with a new, resized instance

Alternatively, you can choose to have the Ops Automator replace your instance with a new, resized instance instead of restarting your existing instance. When the solution determines that your instances need to be scaled, the solution launches new instances with the next defined instance size up or down. The solution is also integrated with Elastic Load Balancing to automatically register the new instance with your load balancers.

Getting Started

To learn more, visit the solution webpage and request access to the private preview.

Samsung Builds a Secure Developer Portal with Fargate and ECR

Post Syndicated from AWS Admin original https://aws.amazon.com/blogs/architecture/samsung-builds-a-secure-developer-portal-with-fargate-and-ecr/

This post was provided by Samsung.

The Samsung developer portal (Samsung Developers) is Samsung’s online portal built to serve technical documents, the Developer blog, and API guides to developers, IT managers, and students interested in building applications with Samsung products. The Samsung Developers consists of three different portals:

  • SmartThings portal, which serves IoT developers, is our oldest portal. We developed it on Amazon Elastic Container Service (ECS) but have since migrated it to AWS Fargate
  • Bixby portal, which serves Bixby capsule developers, was developed using AWS Fargate
  • Rich Communication Services (RCS), which serves the new standard of mobile messaging, was also developed using AWS Fargate

Samsung Electronics Cloud Operation Group (SECOG) unveiled these three portals at Samsung Developer Conference 2017 and 2018.

Samsung developed the SmartThings portal on ECS and had an overall good experience using it. We found that ECS provided the appropriate level of abstraction while also offering control of the underlying instances. However, when we learned about AWS Fargate at re:Invent 2017, we wanted to try it out. As an Amazon ECS customer, we found a lot to like about Fargate. It provided significant operational efficiency while also eliminating the need to manage servers and clusters, meaning we could focus on running containers to release new features.

In 2018, our engineering team began migrating all of our systems to Fargate. Because Fargate exposed the same APIs and endpoints that ECS did, the migration experience was extremely smooth and we immediately experienced improvements in operational efficiency. Before Fargate, Samsung typically had administrators and operators dedicated to managing their web services for the portal. However, as we migrated to Fargate, we were able to easily eliminate the need for an administrator, saving operational cost while improving development efficiency. Now, our operations and administration teams are focused more on elaborate logging and monitoring activities, further improving overall service reliability, security, and performance.

The Samsung developer portal is built using a microservice-based architecture and provides technical documents, API docs, and support channels to our customers. To serve these features, the portal requires frequent updates to a number of different Fargate services. Technical writers, who publish new content every day, initiate these updates. To meet these business requirements, Samsung Electronics Cloud Operation Group (SECOG) and Technology Partner (TecAce) researched services that were agile and efficient and could be run with minimal operational overhead. When they learned about Fargate, they were interested in doing a proof of concept and, based on its results, were convinced that Fargate could meet their needs.

Service Key Requirements

As we began our migration to Fargate, we realized that the portal had to comply with several key requirements standardized by SECOG and InfoSec. These requirements are:

  • Security: the service operations team should have the ability to control every security factor.
  • Scalability: the service focuses on Samsung developers who use Samsung products publicly, so it should be capable of handling traffic surges.
  • Easy to deploy: technical documents can be pushed to the live environment easily, giving technical writers the ability to make quick edits.
  • Controllability: the service should be able to control container options such as port mapping, memory size, and so on.

As we dove deeper into AWS Fargate, the SECOG and InfoSec teams were satisfied that Fargate could deliver on all of these requirements.

Build and Deploy Process

SECOG and TecAce decided to use AWS Fargate and Amazon Elastic Container Registry (ECR) service to meet the key requirements of the developer portal.

Figure 1: Architecture drawing

The System Architecture is very simple. When we release new features or update documents, we upload new container images to ECR then we publish our code to production. Each business application is designed with the combination of Application Load Balancer (ALB), Fargate, and Route 53.
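
Samsung’s actual pipeline isn’t shown here, but a release in this architecture comes down to registering a new Fargate task definition revision that points at the freshly pushed ECR image and then updating the service behind the ALB. Here’s a rough Boto3 sketch, with placeholder names, image URIs, and ARNs.

```python
import boto3

ecs = boto3.client("ecs", region_name="ap-northeast-2")

# All names, image URIs, and ARNs below are placeholders for illustration.
task_def = ecs.register_task_definition(
    family="developer-portal",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="512",
    memory="1024",
    executionRoleArn="arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
    containerDefinitions=[{
        "name": "portal-web",
        "image": "123456789012.dkr.ecr.ap-northeast-2.amazonaws.com/portal-web:latest",
        "portMappings": [{"containerPort": 8080, "protocol": "tcp"}],
        "essential": True,
    }],
)

# Point the Fargate service (fronted by the ALB) at the new revision.
ecs.update_service(
    cluster="developer-portal-cluster",
    service="portal-web-service",
    taskDefinition=task_def["taskDefinition"]["taskDefinitionArn"],
)
```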

Easy Fargate

After using Fargate, Samsung’s business owners were extremely satisfied with the choice. The Samsung Developers is operated and configured by multiple globally distributed teams with development, operations, and QA roles and responsibilities. Each team needs to deploy an individual environment for testing. Before Fargate, we needed considerable engineer and developer bandwidth to operate the web services infrastructure. However, Fargate simplified this process. Each team only needs to create a new container image and deploy it to ECR. The image is then deployed to the test environment on Fargate. With this process, we were able to greatly reduce the time our developers and operators spent managing and configuring this infrastructure.

With Fargate, we are able to deploy more often to production and teams are able to handle additional Samsung products within the Samsung Developers. Additionally, we don’t have to worry about deploying and creating new images. We simply create a new revision, setting the container’s memory and port. Then, we select our Fargate cluster after determining the compute capacity needed.

The compute capacity of the Fargate services can be easily scaled out using Auto Scaling, so all deployment tasks take only a few minutes to serve. Additionally, there is no cluster managed by a system administrator or operator, and there are no EC2 instances and no Docker swarm to maintain for these services. This ensures that we can focus on the features of Samsung Developers and improve end-customer experiences.

Currently, when an environment is deployed and served at Samsung Developers, Samsung monitors the health with alarms based on Amazon CloudWatch metrics. In addition, we have easily achieved the required availability and reliability for our portal while reducing monthly costs by approximately 44.5% (compute cost only).

Because of Samsung’s experience with Fargate, we have decided to migrate additional services from ECS to Fargate. Overall, our teams have had a great experience working with Fargate. The level of automation Fargate provides helps us move faster while also helping us become more economical with our development and operations resources. We felt that getting started with Fargate can take some time; however, once the environment is set up, we were able to achieve high levels of agility and scalability with Fargate.

About Samsung

Samsung is a South Korean multinational conglomerate headquartered in Samsung Town, Seoul. It comprises numerous affiliated businesses, most of them united under the Samsung brand, and is the largest South Korean business conglomerate.

Handling AWS Chargebacks for Enterprise Customers

Post Syndicated from Varad Ram original https://aws.amazon.com/blogs/architecture/handling-aws-chargebacks-for-enterprise-customers/

As AWS product portfolios and feature sets grow, as an enterprise customer, you are likely to migrate your existing workloads and innovate your new products on AWS. To help you keep your cloud charges simple, you can use consolidated billing. This can, however, create complexity for your internal chargebacks, especially if some of your resources and services are not tagged correctly. To help your individual teams and business units normalize and reduce their costs as your AWS implementation grows, you can implement chargebacks transparently and automate billing.

This blog post includes a walkthrough of an end-to-end mechanism that you can use to automate your consolidated billing charges for either your existing AWS accounts, or for newly created accounts.

Walkthrough

Prerequisites for implementation:

  • One account that is the payer account, which consolidates billing and links all other accounts (including admin accounts)
  • An understanding of billing, Detailed Billing Report (DBR), Cost and Usage Report (CUR), and blended and unblended costs
  • Activate propagation of necessary cost allocation tags to consolidated billing
  • Access to reservations across the linked accounts
  • Read permission on the source bucket and write permission to the transformed bucket
  • An automated method (such as database access or an API) to verify the cost centers tagged to AWS resources
  • Permissions to get access to the services described in this solution on the account targeted for this automation

Before you begin, it is important to understand the blended costs and unblended costs in consolidated billing. Blended costs are calculated based on the blended rate (the average rates for the reserved and on-demand instances that are used by your member accounts) for each service your accounts used, multiplied by the account usage of those services. Unblended costs are the charges for those services broken out for each linked account.

Based on your organization’s strategy for savings (centralized or not), you could consider either the blended or unblended costs. The consolidated billing files that include the information for the chargeback are the Detailed Billing Report (DBR) and Cost and Usage Report (CUR). Both of these reports provide both the blended and unblended rates as separate columns.

To help you create and maintain your AWS accounts, you can use AWS Account Vending Machine (AVM). You can launch AVM from either the AWS Landing Zone or with a custom solution. AVM keeps all your account information in a DynamoDB table (such as the account number, root mail ID, default cost center, name of the owner, etc.) and maintains reservation-related data (such as invoice ID, instance type, region, amount, cost center, etc.) in another table. To enable your account administrator to add invoice details for all your reservations, you can use a web page hosted on AWS Lambda, Amazon Simple Storage Service (Amazon S3), or a web server.

To begin the process of billing transformation, you must add a trigger on an S3 bucket (which contains raw AWS billing files) that pushes messages (PutObject) into Amazon Simple Queue Service (SQS), and a billing transformation program (written in Python, Node.js, Java, .NET, etc., using the AWS SDK) that runs on an Amazon Elastic Compute Cloud (Amazon EC2) instance, containers, or Lambda (if the bill can be processed within 15 minutes, given the file size restrictions).

The billing transformation program must do the following (a minimal sketch follows this list):

  • Cache the Account details and reservation DynamoDB tables
  • Verify if there are any messages in SQS
  • Ignore if the file is not a DBR or CUR file (process either of them, not both)
  • Download the file, unzip, and read row-by-row; for a DBR file, consider only the “LineItem” RecordType
  • Add two new columns: Bill_CostCenter and Bill_Notes
    • If there is a valid value in the CostCenter tag (verified with internal automation processes), add the same value to the Bill_CostCenter column and any notes to the Bill_Notes column
    • If the CostCenter is invalid, get the default Cost Center from the cached account details and add the information to the Bill_CostCenter and Bill_Notes columns
    • If the row is a reservation invoice, the cost center information comes from the reservation table and is added to the correct column
  • Cache consolidation of cost centers with the blended or unblended cost of each row
  • Write each of these processed line items into a new file
  • Handle exceptions by the normal organization practices (for example, email the owner of the cost center or the finance team)
  • Push the new file into the transformed Amazon S3 bucket
  • Write the consolidated lines into a different file and upload to Transformed Amazon S3 bucket
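
As a minimal sketch of the row processing described above (the real program would also handle SQS polling, unzipping, reservation lookups, consolidation, and exception handling), the cost-center logic for a DBR-style CSV might look like this. The column names and lookup structures are illustrative.

```python
import csv

def transform_billing_file(src_path, dst_path, account_defaults, valid_cost_centers):
    """Append Bill_CostCenter and Bill_Notes columns to a DBR-style CSV (simplified sketch)."""
    with open(src_path, newline="") as src, open(dst_path, "w", newline="") as dst:
        reader = csv.DictReader(src)
        writer = csv.DictWriter(
            dst, fieldnames=reader.fieldnames + ["Bill_CostCenter", "Bill_Notes"]
        )
        writer.writeheader()
        for row in reader:
            if row.get("RecordType") != "LineItem":  # DBR: process line items only
                continue
            tagged = row.get("user:CostCenter", "")
            if tagged in valid_cost_centers:
                row["Bill_CostCenter"], row["Bill_Notes"] = tagged, "tagged cost center"
            else:
                # Fall back to the account's default cost center from the cached AVM table.
                default = account_defaults.get(row.get("LinkedAccountId"), "UNALLOCATED")
                row["Bill_CostCenter"], row["Bill_Notes"] = default, "defaulted from account"
            writer.writerow(row)
```
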
Figure 1 – Architecture of processing a billing chargeback

 

Figure 2 – Validating the Cost Center process

After you have the consolidated billing file aggregated by cost center, you can easily see and handle your internal chargebacks. To further simplify your chargeback model, you can get help from AWS Technical Account Managers and Billing Concierge, if your organization would like AWS to provide custom invoices from the consolidated billing file.

Because the cost centers in your organization can expire over time, it’s important to validate them frequently with automation, such as a Lambda function.

Improvements

If your organization has a more complex chargeback structure, you can extend the logic described above to support deeper and broader chargeback codes, or implement hierarchical chargeback structure.

You can also extend the transformation logic to support several chargeback codes (such as comma-separated values or additional tags) if you have multiple teams or projects that want to share a resource.

Summary

As enterprise organizations grow and consume more cloud services, the cost optimization process grows and evolves with them. Sophisticated chargeback models enable the teams and business units in the organization to be accountable and to take the steps necessary to normalize their usage and costs of AWS services.

About the Author

Varad Ram likes to help customers adopt cloud technologies, and he is particularly interested in artificial intelligence. He believes deep learning will power future technology growth. In his spare time, his daughter and toddler son keep him busy biking and hiking.

How Disney Streaming Services Uses Amazon Pinpoint to Send Personalized Messages to Millions of Users in Real Time

Post Syndicated from Hannah Nilsson original https://aws.amazon.com/blogs/messaging-and-targeting/how-disney-streaming-services-uses-amazon-pinpoint-to-send-personalized-messages-to-millions-of-users-in-real-time/

At AWS re:Invent 2018, Billy Liu and Jimmy Tam from Disney Streaming Services took the stage to talk about how they use Amazon Pinpoint to meet some of their unique digital user engagement needs. Disney Streaming Services supports several mobile apps, including MLB At Bat, Ballpark and Beat The Streak from Major League Baseball, along with the NHL mobile app. Billy and Jimmy shared their story on the re:Invent Launchpad stage, as well as during the Digital User Engagement Leadership Session with Simon Poile from AWS. In these sessions, they discussed how they use Amazon Pinpoint—along with other AWS services including Amazon Kinesis, AWS Lambda, Amazon S3, and AWS Glue—to target customers, monitor the performance of their campaigns in real time, and gain a deeper understanding of their users’ needs and desires.

Targeting the right customer at the right time 
When you consider the use cases for MLB’s suite of apps, you can quickly see why sending the right message to the right customer is a more complicated task than it might seem at first glance. For each of the 30 Major League Baseball teams, users can opt to receive eight different types of messages. Each of these eight message types is available in both English and Spanish. And on top of all that, each push notification sent has to target combinations of these segments when two teams play each other. There are thousands of possible segments and combinations of segments to consider with each message sent.

To address this issue, Disney Streaming Services uses Amazon Pinpoint to dynamically create unique segments and campaigns for every event in milliseconds. In the most demanding usage scenarios, Amazon Pinpoint scales to create over 300 segments and campaigns per hour, and over 20 segments and campaigns per minute. To learn more about how they solved this challenge, take a look at the recording of their session.
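
Disney Streaming’s implementation isn’t public, but creating a segment and an immediate campaign programmatically comes down to two Amazon Pinpoint API calls. Here’s a generic Boto3 sketch; the project ID, attribute names, and message are placeholders rather than their actual schema.

```python
import boto3

pinpoint = boto3.client("pinpoint", region_name="us-east-1")
APP_ID = "YOUR_PINPOINT_PROJECT_ID"  # placeholder

# Segment fans of one team who opted in to game-start alerts (attribute names are illustrative).
segment = pinpoint.create_segment(
    ApplicationId=APP_ID,
    WriteSegmentRequest={
        "Name": "mariners-game-start-en",
        "Dimensions": {
            "Attributes": {
                "FavoriteTeam": {"AttributeType": "INCLUSIVE", "Values": ["Mariners"]},
                "AlertType": {"AttributeType": "INCLUSIVE", "Values": ["GameStart"]},
            }
        },
    },
)

# Launch a one-off campaign against that segment immediately.
pinpoint.create_campaign(
    ApplicationId=APP_ID,
    WriteCampaignRequest={
        "Name": "mariners-game-start-notification",
        "SegmentId": segment["SegmentResponse"]["Id"],
        "Schedule": {"StartTime": "IMMEDIATE"},
        "MessageConfiguration": {
            "DefaultMessage": {"Body": "First pitch! Your game is starting now."}
        },
    },
)
```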

How Disney Streaming Services Targets The Right Customer
Monitoring campaign performance in real time
With the fast-paced nature of Disney Streaming’s notifications and the sheer number of campaigns and segments they are targeting, monitoring their performance directly in the Amazon Pinpoint console is not scalable for their use case. However, they must have real-time notifications to let them know if their campaigns are lagging or not reaching the expected number of recipients.

To meet this unique need, Disney Streaming developed a solution that uses AWS Step Functions, Amazon CloudWatch, AWS Lambda, and Amazon Pinpoint. This solution monitors each campaign that is created. When a campaign is executed, their solution streams data about the execution and delivery of that campaign, and sends alerts when the team needs to take a closer look at how their campaigns are performing. You can learn more about the specifics of their monitoring solution in the recording of their session.

How Disney Streaming Services Monitors Campaign Performance

Understanding fans
After a campaign has been sent, Disney Streaming analyzes the performance of campaigns. By performing this analysis, they can better understand how customers engaged with notifications, and ensure fans are receiving a compelling experience.

To achieve this, Disney Streaming uses the event streaming and exporting features of Amazon Pinpoint. They stream engagement events by using Amazon Kinesis. These events let them know how fans interacted with the application, and allow them to drill down into various performance metrics on a per-team basis. They then store these metrics in S3, where they are picked up by their data lake team for further processing. By using this solution, they can create near-real-time reports for their unique audiences.
They also use the Amazon Pinpoint API to export all of the details about the users to an S3 bucket using Lambda Triggers. An AWS Glue job processes the exported data and outputs the results to another S3 bucket. The data lake team then uses this data to glean additional insights about their audience.
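
The export step corresponds to the Pinpoint CreateExportJob API. A minimal sketch of the call (which in a setup like this would typically run inside a Lambda function) looks like the following, with a placeholder project ID, IAM role, and S3 prefix.

```python
import boto3

pinpoint = boto3.client("pinpoint", region_name="us-east-1")

# Placeholder project ID, IAM role, and S3 prefix for illustration.
export_job = pinpoint.create_export_job(
    ApplicationId="YOUR_PINPOINT_PROJECT_ID",
    ExportJobRequest={
        "RoleArn": "arn:aws:iam::123456789012:role/PinpointEndpointExport",
        "S3UrlPrefix": "s3://my-endpoint-export-bucket/exports/",
    },
)
print(export_job["ExportJobResponse"]["Id"])
```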

How Disney Streaming Services Understands their Fans

Removing unengaged customers 
Disney Streaming also uses a custom solution to engage with customers who are still able to receive messages, and to ensure that reports only include engaged users. For example, if a customer uninstalls the MLB or NHL apps, re-installs an app but doesn’t set their messaging preferences, or starts using a device on a different platform, that customer might not be able to be contacted. Disney Streaming needs to remove these unreachable customers from campaigns so that they can maintain accurate reports on audience sizes, keep costs low, and reduce campaign latency.

To delete unreachable customers in real time, the Disney Streaming team uses Amazon Pinpoint to detect when they attempt to send a push notification to an unreachable customer. Their Kinesis Firehose stream then outputs campaign data to an S3 bucket, and an AWS Glue job filters out the customers who are unreachable. Finally, a Lambda function removes the endpoint by making a call to the Amazon Pinpoint API. You can find more details about how Disney Streaming Services implemented this solution in the recording of their session.
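
The final deletion step is a straightforward call to the Pinpoint DeleteEndpoint API. A stripped-down sketch of such a Lambda function follows; the event shape and project ID are hypothetical, since the real function would receive the endpoint IDs extracted from the filtered campaign data in S3.

```python
import boto3

pinpoint = boto3.client("pinpoint", region_name="us-east-1")
APP_ID = "YOUR_PINPOINT_PROJECT_ID"  # placeholder

def handler(event, context):
    """Delete endpoints that the Glue job has flagged as unreachable (simplified sketch)."""
    for endpoint_id in event.get("unreachable_endpoint_ids", []):
        pinpoint.delete_endpoint(ApplicationId=APP_ID, EndpointId=endpoint_id)
```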

How Disney Streaming Services removes unengaged customers

You can learn more about the needs that Disney Streaming Service considered when they chose a Digital User Engagement solution by watching the recording of their discussion on the re:Invent 2018 Launchpad stage. You can also watch the Digital User Engagement Leadership Session to learn more about AWS’ Digital User Engagement solutions, information on recent feature launches, and to learn more about how Disney Streaming created solutions to their engagement challenges.

Optimizing a Lift-and-Shift for Security

Post Syndicated from Jonathan Shapiro-Ward original https://aws.amazon.com/blogs/architecture/optimizing-a-lift-and-shift-for-security/

This is the third and final blog within a three-part series that examines how to optimize lift-and-shift workloads. A lift-and-shift is a common approach for migrating to AWS, whereby you move a workload from on-prem with little or no modification. This third blog examines how lift-and-shift workloads can benefit from an improved security posture with no modification to the application codebase. (Read about optimizing a lift-and-shift for performance and for cost effectiveness.)

Moving to AWS can help to strengthen your security posture by eliminating many of the risks present in on-premises deployments. It is still essential to consider how to best use AWS security controls and mechanisms to ensure the security of your workload. Security can often be a significant concern in lift-and-shift workloads, especially for legacy workloads where modern encryption and security features may not be present. By making use of AWS security features, you can significantly improve the security posture of a lift-and-shift workload, even if it lacks native support for modern security best practices.

Adding TLS with Application Load Balancers

Legacy applications are often the subject of a lift-and-shift. Such migrations can help reduce risks by moving away from out-of-date hardware, but security risks are often harder to manage. Many legacy applications leverage HTTP or other plaintext protocols that are vulnerable to all manner of attacks. Often, modifying a legacy application’s codebase to implement TLS is untenable, necessitating other options.

One comparatively simple approach is to leverage an Application Load Balancer or a Classic Load Balancer to provide SSL offloading. In this scenario, the load balancer would be exposed to users, while the application servers that only support plaintext protocols would reside within a subnet that can only be accessed by the load balancer. The load balancer would perform the decryption of all traffic destined for the application instances, forwarding the plaintext traffic to the instances. This allows you to use encryption on traffic between the client and the load balancer, leaving only internal communication between the load balancer and the application in plaintext. Often this approach is sufficient to meet security requirements; however, in more stringent scenarios it is never acceptable for traffic to be transmitted in plaintext, even if within a secured subnet. In this scenario, a sidecar can be used to eliminate plaintext traffic ever traversing the network.
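
Configuring the offloading itself is a one-time setup on the load balancer. As a rough sketch, the HTTPS listener below terminates TLS with an ACM certificate and forwards plaintext HTTP to a target group of application instances in the private subnet; all ARNs are placeholders.

```python
import boto3

elbv2 = boto3.client("elbv2")

# Placeholder ARNs; the certificate would come from AWS Certificate Manager (ACM).
elbv2.create_listener(
    LoadBalancerArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/legacy-app/abc123",
    Protocol="HTTPS",
    Port=443,
    SslPolicy="ELBSecurityPolicy-TLS-1-2-2017-01",
    Certificates=[{"CertificateArn": "arn:aws:acm:us-east-1:123456789012:certificate/example"}],
    DefaultActions=[{
        "Type": "forward",
        # Target group of plaintext (HTTP) application instances in the private subnet.
        "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/legacy-app/def456",
    }],
)
```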

Improving Security and Configuration Management with Sidecars

One approach to providing encryption to legacy applications is to leverage what’s often termed the “sidecar pattern.” The sidecar pattern entails a second process acting as a proxy to the legacy application. The legacy application only exposes its services via the local loopback adapter and is thus accessible only to the sidecar. In turn, the sidecar acts as an encrypted proxy, exposing the legacy application’s API to external consumers via TLS. As unencrypted traffic between the sidecar and the legacy application traverses the loopback adapter, it never traverses the network. This approach can help add encryption (or stronger encryption) to legacy applications when it’s not feasible to modify the original codebase. A common approach to implementing sidecars is through container groups, such as a pod in Amazon EKS or a task in Amazon ECS.

Figure 1: Implementing the Sidecar Pattern With Containers
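
The arrangement in Figure 1 could be expressed as a single ECS task definition with two containers: in awsvpc network mode they share a network namespace, so the sidecar reaches the legacy application over 127.0.0.1 while only the sidecar’s TLS port is exposed. The image names, port, and role ARN in this sketch are placeholders.

```python
import boto3

ecs = boto3.client("ecs")

# Image URIs, port, and role ARN are placeholders for illustration.
ecs.register_task_definition(
    family="legacy-app-with-tls-sidecar",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",  # both containers share a network namespace (localhost)
    cpu="512",
    memory="1024",
    executionRoleArn="arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
    containerDefinitions=[
        {
            # Legacy app listens on plaintext HTTP, reachable only via 127.0.0.1.
            "name": "legacy-app",
            "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/legacy-app:latest",
            "essential": True,
        },
        {
            # TLS-terminating proxy (for example, an nginx or Envoy image) exposed to the VPC.
            "name": "tls-sidecar",
            "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/tls-proxy:latest",
            "portMappings": [{"containerPort": 8443, "protocol": "tcp"}],
            "essential": True,
        },
    ],
)
```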

Another use of the sidecar pattern is to help legacy applications leverage modern cloud services. A common example of this is using a sidecar to manage files pertaining to the legacy application. This could entail a number of options including:

  • Having the sidecar dynamically modify the configuration for a legacy application based upon some external factor, such as the output of a Lambda function, an SNS event, or a DynamoDB write.
  • Having the sidecar write application state to a cache or database. Often applications will write state to the local disk. This can be problematic for autoscaling or disaster recovery, where having the state easily accessible to other instances is advantageous. To facilitate this, the sidecar can write state to Amazon S3, Amazon DynamoDB, Amazon ElastiCache, or Amazon RDS.

A sidecar requires custom development, but it doesn’t require any modification of the lift-and-shifted application. A sidecar treats the application as a black box and interacts with it via its API, configuration file, or other standard mechanism.

Automating Security

A lift-and-shift can achieve a significantly stronger security posture by incorporating elements of DevSecOps. DevSecOps is a philosophy that argues that everyone is responsible for security and advocates for automating all parts of the security process. AWS has a number of services that can help implement a DevSecOps strategy. These services include:

  • Amazon GuardDuty: a continuous monitoring service that analyzes AWS CloudTrail events, Amazon VPC Flow Logs, and DNS logs. GuardDuty can detect threats and trigger an automated response.
  • AWS Shield: a managed DDoS protection service
  • AWS WAF: a managed Web Application Firewall
  • AWS Config: a service for assessing, tracking, and auditing changes to AWS configuration

These services can help detect security problems and implement a response in real time, achieving a significantly stronger posture than traditional security strategies. You can build a DevSecOps strategy around a lift-and-shift workload using these services, without having to modify the lift-and-shift application.

Conclusion

There are many opportunities for taking advantage of AWS services and features to improve a lift-and-shift workload. Without any alteration to the application you can strengthen your security posture by utilizing AWS security services and by making small environmental and architectural changes that can help alleviate the challenges of legacy workloads.

About the author

Dr. Jonathan Shapiro-Ward is an AWS Solutions Architect based in Toronto. He helps customers across Canada to transform their businesses and build industry leading cloud solutions. He has a background in distributed systems and big data and holds a PhD from the University of St Andrews.

Optimizing a Lift-and-Shift for Cost Effectiveness and Ease of Management

Post Syndicated from Jonathan Shapiro-Ward original https://aws.amazon.com/blogs/architecture/optimizing-a-lift-and-shift-for-cost/

Lift-and-shift is the process of migrating a workload from on premise to AWS with little or no modification. A lift-and-shift is a common route for enterprises to move to the cloud, and can be a transitionary state to a more cloud native approach. This is the second blog post in a three-part series which investigates how to optimize a lift-and-shift workload. The first post is about performance.

A key concern that many customers have with a lift-and-shift is cost. If you move an application as is from on-prem to AWS, is there any possibility for meaningful cost savings? By employing AWS services, in lieu of self-managed EC2 instances, and by leveraging cloud capabilities such as auto scaling, there is potential for significant cost savings. In this blog post, we will discuss a number of AWS services and solutions that you can leverage with minimal or no change to your application codebase in order to significantly reduce management costs and overall Total Cost of Ownership (TCO).

Automate

Even if you can’t modify your application, you can change the way you deploy your application. Adopting an infrastructure-as-code approach can vastly improve the ease of management of your application, thereby reducing cost. By templating your application through AWS CloudFormation, AWS OpsWorks, or open source tools, you can make deploying and managing your workloads a simple and repeatable process.

As part of the lift-and-shift process, rationalizing the workload into a set of templates means less time spent deploying and modifying the workload in the future. It enables the easy creation of dev/test environments, facilitates blue-green testing, opens up options for DR, and gives you the option to roll back in the event of an error. Automation is the single step that is most conducive to improving ease of management.

Reserved Instances and Spot Instances

A first initial consideration around cost should be the purchasing model for any EC2 instances. Reserved Instances (RIs) represent a 1-year or 3-year commitment to EC2 instances and can enable up to 75% cost reduction (over on demand) for steady state EC2 workloads. They are ideal for 24/7 workloads that must be continually in operation. An application requires no modification to make use of RIs.

An alternative purchasing model is EC2 spot. Spot instances offer unused capacity available at a significant discount – up to 90%. Spot instances receive a two-minute warning when the capacity is required back by EC2 and can be suspended and resumed. Workloads which are architected for batch runs – such as analytics and big data workloads – often require little or no modification to make use of spot instances. Other burstable workloads such as web apps may require some modification around how they are deployed.

A final alternative is on-demand. For workloads that are not running in perpetuity, on-demand is ideal. Workloads can be deployed, used for as long as required, and then terminated. By leveraging some simple automation (such as AWS Lambda and CloudWatch alarms), you can schedule workloads to start and stop at the open and close of business (or at other meaningful intervals). This typically requires no modification to the application itself. For workloads that are not 24/7 steady state, this can provide greater cost effectiveness compared to RIs and more certainty and ease of use when compared to spot.
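
The scheduling half of that automation is just a pair of CloudWatch Events rules that invoke stop and start Lambda functions on a cron schedule. A minimal sketch of the "stop" side follows; the rule name, cron expression, and function ARN are placeholders, and the Lambda function also needs a resource-based permission allowing events.amazonaws.com to invoke it (not shown).

```python
import boto3

events = boto3.client("events")

STOP_FN_ARN = "arn:aws:lambda:us-east-1:123456789012:function:stop-dev-instances"  # placeholder

# Stop development instances at 19:00 UTC on weekdays; a matching "start" rule
# would mirror this at the open of business.
events.put_rule(
    Name="stop-dev-instances-nightly",
    ScheduleExpression="cron(0 19 ? * MON-FRI *)",
    State="ENABLED",
)
events.put_targets(
    Rule="stop-dev-instances-nightly",
    Targets=[{"Id": "stop-dev-instances", "Arn": STOP_FN_ARN}],
)
```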

Amazon FSx for Windows File Server

Amazon FSx for Windows File Server provides a fully managed Windows filesystem that has full compatibility with SMB and DFS and full AD integration. Amazon FSx is an ideal choice for lift-and-shift architectures as it requires no modification to the application codebase in order to enable compatibility. Windows based applications can continue to leverage standard, Windows-native protocols to access storage with Amazon FSx. It enables users to avoid having to deploy and manage their own fileservers – eliminating the need for patching, automating, and managing EC2 instances. Moreover, it’s easy to scale and minimize costs, since Amazon FSx offers a pay-as-you-go pricing model.

Amazon EFS

Amazon Elastic File System (EFS) provides high-performance, highly available multi-attach storage via NFS. EFS offers a drop-in replacement for existing NFS deployments. This is ideal for a range of Linux and Unix use cases, as well as cross-platform solutions such as enterprise Java applications. EFS eliminates the need to manage NFS infrastructure and simplifies storage concerns. Moreover, EFS provides high availability out of the box, which helps to reduce single points of failure and avoids the need to manually configure storage replication. Much like Amazon FSx, EFS enables customers to realize cost improvements by moving to a pay-as-you-go pricing model and requires no modification of the application.

Amazon MQ

Amazon MQ is a managed message broker service that provides compatibility with JMS, AMQP, MQTT, OpenWire, and STOMP. These are amongst the most extensively used middleware and messaging protocols and are a key foundation of enterprise applications. Rather than having to manually maintain a message broker, Amazon MQ provides a performant, highly available managed message broker service that is compatible with existing applications.

Applications that leverage a standard messaging protocol can typically adopt Amazon MQ without any modification. In most cases, all you need to do is update the application’s broker endpoint in its configuration. The Amazon MQ service then handles the heavy lifting of operating a message broker, configuring HA, fault detection, failure recovery, software updates, and so forth. This offers a simple option for reducing management overhead and improving the reliability of a lift-and-shift architecture. What’s more, applications can migrate to Amazon MQ without any downtime, making this an easy and effective way to improve a lift-and-shift.
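
In practice the change is usually confined to configuration. As a purely hypothetical sketch (the endpoint, environment variable names, and credentials are placeholders, not values from this post), an application’s broker settings might change like this:

import os

# Hypothetical broker configuration: only the endpoint (and credentials) change
# when moving from a self-managed broker to Amazon MQ.
BROKER_CONFIG = {
    # Old self-managed broker:
    # "url": "ssl://activemq.corp.internal:61617",
    # New Amazon MQ broker endpoint, copied from the Amazon MQ console:
    "url": os.environ.get(
        "BROKER_URL",
        "ssl://b-1234abcd-56ef-78ab-90cd-example-1.mq.us-east-1.amazonaws.com:61617",
    ),
    "username": os.environ["BROKER_USER"],
    "password": os.environ["BROKER_PASSWORD"],
}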

You can also use Amazon MQ to integrate legacy applications with modern serverless applications. Lambda functions can subscribe to MQ topics and trigger serverless workflows, enabling compatibility between legacy and new workloads.

Figure 1: Integrating Lift-and-Shift Workloads with Lambda via Amazon MQ

Amazon Managed Streaming for Kafka

Lift-and-shift workloads that include a streaming data component are often built around Apache Kafka. There is a certain amount of complexity involved in operating a Kafka cluster, which incurs management and operational expense. Amazon Kinesis is a managed alternative to Apache Kafka, but it is not a drop-in replacement. At re:Invent 2018, we announced the launch of Amazon Managed Streaming for Kafka (MSK) in public preview. MSK provides a managed Kafka deployment with pay-as-you-go pricing and acts as a drop-in replacement for existing Kafka workloads. MSK can help reduce management costs and improve cost efficiency, and it is ideal for lift-and-shift workloads.

Leveraging S3 for Static Web Hosting

A significant portion of any web application is static content. This includes videos, images, text, and other content that changes seldom, if ever. In many lift-and-shifted applications, web servers are migrated to EC2 instances and host all content – static and dynamic. Hosting static content from an EC2 instance incurs a number of costs, including the instance, EBS volumes, and likely a load balancer. By moving static content to S3, you can significantly reduce the amount of compute required to host your web applications. In many cases, this change is non-disruptive and can be made at the DNS or CDN layer, requiring no change to your application.
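
As a minimal sketch of the change (the bucket name, file paths, and error document are assumptions), the static assets can be copied to S3 and the bucket configured for website hosting:

import boto3

# Hypothetical sketch: upload a static asset and enable website hosting on the
# bucket. The bucket name, file paths, and error document are placeholders.
s3 = boto3.client("s3")
bucket = "my-static-assets-bucket"

s3.upload_file(
    "dist/index.html", bucket, "index.html",
    ExtraArgs={"ContentType": "text/html"},
)

s3.put_bucket_website(
    Bucket=bucket,
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        "ErrorDocument": {"Key": "error.html"},
    },
)

A CDN or DNS record can then point at the bucket website endpoint, leaving the dynamic portion of the application untouched.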

Figure 2: Reducing Web Hosting Costs with S3 Static Web Hosting

Conclusion

There are numerous opportunities for reducing the cost of a lift-and-shift. Without any modification to the application, lift-and-shift workloads can benefit from cloud-native features. By using AWS services and features, you can significantly reduce the undifferentiated heavy lifting inherent in on-prem workloads and reduce resources and management overheads.

About the author

Dr. Jonathan Shapiro-Ward is an AWS Solutions Architect based in Toronto. He helps customers across Canada to transform their businesses and build industry leading cloud solutions. He has a background in distributed systems and big data and holds a PhD from the University of St Andrews.

Optimizing a Lift-and-Shift for Performance

Post Syndicated from Jonathan Shapiro-Ward original https://aws.amazon.com/blogs/architecture/optimizing-a-lift-and-shift-for-performance/

Many organizations begin their cloud journey with a lift-and-shift of applications from on-premise to AWS. This approach involves migrating software deployments with little, or no, modification. A lift-and-shift avoids a potentially expensive application rewrite, but it can result in a less optimal workload than a cloud-native solution. For many organizations, a lift-and-shift is a transitional stage toward an eventual cloud-native solution, but there are some applications that can’t feasibly be made cloud-native, such as legacy systems or proprietary third-party solutions. There are still clear benefits of moving these workloads to AWS, but how can they be best optimized?

In this blog series, we’ll look at different approaches for optimizing a black box lift-and-shift. We’ll consider how we can significantly improve a lift-and-shift application across three perspectives: performance, cost, and security. We’ll show that, without modifying the application, we can integrate services and features that will make a lift-and-shift workload cheaper, faster, more secure, and more reliable. In this first blog, we’ll investigate how a lift-and-shift workload can achieve improved performance by leveraging AWS features and services.

Performance gains are often a motivating factor behind a cloud migration. On-premise systems may suffer from performance bottlenecks owing to legacy infrastructure or capacity issues. When performing a lift-and-shift, how can you improve performance? Cloud computing is famous for enabling horizontally scalable architectures, but many legacy applications don’t support this mode of operation. Traditional business applications are often architected around a fixed number of servers and are unable to take advantage of horizontal scalability. Even if a lift-and-shift can’t make use of Auto Scaling groups and horizontal scalability, you can achieve significant performance gains by moving to AWS.

Scaling Up

The simplest way to scale up compute is vertical scaling. AWS provides the widest selection of virtual machine types and the largest machine sizes. Instances range from the small, burstable T3 series all the way to the memory-optimized X1 series. By leveraging the appropriate instance, lift-and-shifts can see significant performance gains. Depending on your workload, you can also swap out the instances used to power your workload to better meet demand. For example, on days when you anticipate high load, you could move to more powerful instances. This could easily be automated via a Lambda function.
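
As a hedged sketch of such a swap (the instance ID and target type are placeholders, and the instance must be EBS-backed since it is briefly stopped during the change):

import boto3

# Hypothetical sketch: resize an instance ahead of an anticipated load spike.
ec2 = boto3.client("ec2")
instance_id = "i-0123456789abcdef0"   # placeholder instance ID

ec2.stop_instances(InstanceIds=[instance_id])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])

# Change the instance type while the instance is stopped.
ec2.modify_instance_attribute(
    InstanceId=instance_id,
    InstanceType={"Value": "x1e.32xlarge"},
)

ec2.start_instances(InstanceIds=[instance_id])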

The X1 family of instances offers considerable CPU, memory, storage, and network performance and can be used to accelerate applications that are designed to maximize single-machine performance. The x1e.32xlarge instance, for example, offers 128 vCPUs, 4 TB of RAM, and 14,000 Mbps of EBS bandwidth. This instance is ideal for high-performance in-memory workloads such as real-time financial risk processing or SAP HANA.

By selecting the appropriate instance type and scaling that instance up and down to meet demand, you can achieve superior performance and cost effectiveness compared to running a single static instance. This affords lift-and-shift workloads far greater efficiency than their on-prem counterparts.

Placement Groups and C5n Instances

EC2 Placement groups determine how you deploy instances to underlying hardware. One can either choose to cluster instances into a low latency group within a single AZ or spread instances across distinct underlying hardware. Both types of placement groups are useful for optimizing lift-and-shifts.

The spread placement group is valuable for applications that rely on a small number of critical instances. If you can’t modify your application to leverage auto scaling, liveness probes, or failover, then spread placement groups can help reduce the risk of simultaneous failure while improving the overall reliability of the application.

Cluster placement groups help improve network QoS between instances. When used in conjunction with enhanced networking, cluster placement groups help to ensure low latency, high throughput, and high network packets per second. This is beneficial for chatty applications and any application that leveraged physical co-location for performance on-prem.

There is no additional charge for using placement groups.

You can extend this approach further with C5n instances. These instances offer 100 Gbps networking and can be used in a placement group for the most demanding network-intensive workloads. Using both placement groups and C5n instances requires no modification to your application, only to how it is deployed – making this a strong solution for providing network performance to lift-and-shift workloads.
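
A minimal, hypothetical sketch of deploying into a cluster placement group (the AMI ID, group name, and instance counts are assumptions):

import boto3

ec2 = boto3.client("ec2")

# Create a cluster placement group for low-latency, high-throughput networking.
ec2.create_placement_group(GroupName="low-latency-tier", Strategy="cluster")

# Launch network-intensive instances into the group.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI ID
    InstanceType="c5n.18xlarge",
    MinCount=2,
    MaxCount=2,
    Placement={"GroupName": "low-latency-tier"},
)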

Leverage Tiered Storage to Optimize for Price and Performance

AWS offers a range of storage options, each with its own performance characteristics and price point. By leveraging a combination of storage types, lift-and-shifts can meet their performance and availability requirements in a cost-effective manner. The range of storage options includes:

Amazon EBS is the most common storage service involved with lift-and-shifts. EBS provides block storage that can be attached to EC2 instances and formatted with a typical file system such as NTFS or ext4. There are several different EBS volume types, ranging from inexpensive magnetic storage to highly performant Provisioned IOPS SSDs. There are also storage-optimized instances that offer high-performance EBS access and NVMe storage. By utilizing the appropriate type of EBS volume and instance, you can strike a balance between performance and price. RAID offers a further option for optimizing EBS: EBS volumes are already replicated within their Availability Zone at no additional cost, and an EC2 instance can apply additional RAID levels on top of them. For instance, you can apply RAID 0 over a number of EBS volumes in order to improve storage performance.

In addition to EBS, EC2 instances can utilize the EC2 instance store. The instance store provides ephemeral, directly attached storage to EC2 instances. The instance store is included with the EC2 instance and provides a facility to store non-persistent data. This makes it ideal for temporary files that an application produces and that require performant storage. Both EBS and the instance store are exposed to the EC2 instance as block-level devices, and the OS can use its native management tools to format and mount these volumes as it would a traditional disk – requiring no significant departure from the on-prem configuration. Several instance types, including the C5d and P3dn, are equipped with local NVMe storage, which can support extremely IO-intensive workloads.

Not all workloads require high-performance storage. In many cases, finding a compromise between price and performance is the top priority. Amazon S3 provides highly durable object storage at a significantly lower price point than block storage. S3 is ideal for a large number of use cases, including content distribution, data ingestion, analytics, and backup. S3, however, is accessible via a RESTful API and does not provide the conventional file system semantics of EBS. This may make S3 less viable for applications that you can’t easily modify, but there are still options for using S3 in such a scenario.

One option for leveraging S3 is AWS Storage Gateway. Storage Gateway is a virtual appliance that can be run on-prem or on EC2. The Storage Gateway appliance can operate in three configurations: File Gateway, Volume Gateway, and Tape Gateway. File Gateway provides an NFS interface, Volume Gateway provides an iSCSI interface, and Tape Gateway provides an iSCSI virtual tape library interface. This allows files, volumes, and tapes to be exposed to an application host through conventional protocols, with the Storage Gateway appliance persisting data to S3. This allows an application to remain agnostic to S3 while leveraging typical enterprise storage protocols.

Figure 1: Using S3 Storage via Storage Gateway

Conclusion

A lift-and-shift can achieve significant performance gains on AWS by making use of a range of instance types, storage services, and other features. Even without any modification to the application, lift-and-shift workloads can benefit from cutting-edge compute, network, and IO, which can help realize significant, meaningful performance gains.

About the author

Dr. Jonathan Shapiro-Ward is an AWS Solutions Architect based in Toronto. He helps customers across Canada to transform their businesses and build industry leading cloud solutions. He has a background in distributed systems and big data and holds a PhD from the University of St Andrews.

Stream Amazon CloudWatch Logs to a Centralized Account for Audit and Analysis

Post Syndicated from David Bailey original https://aws.amazon.com/blogs/architecture/stream-amazon-cloudwatch-logs-to-a-centralized-account-for-audit-and-analysis/

A key component of enterprise multi-account environments is logging. Centralized logging provides a single point of access to all salient logs generated across accounts and regions, and is critical for auditing, security and compliance. While some customers use the built-in ability to push Amazon CloudWatch Logs directly into Amazon Elasticsearch Service for analysis, others would prefer to move all logs into a centralized Amazon Simple Storage Service (Amazon S3) bucket location for access by several custom and third-party tools. In this blog post, I will show you how to forward existing and any new CloudWatch Logs log groups created in the future to a cross-account centralized logging Amazon S3 bucket.

The streaming architecture I use in the destination logging account is a streamlined version of the architecture and AWS CloudFormation templates from the Central logging in Multi-Account Environments blog post by Mahmoud Matouk. This blog post assumes some knowledge of CloudFormation, Python3 and the boto3 AWS SDK. You will need to have or configure an AWS working account and logging account, an IAM access and secret key for those accounts, and a working environment containing Python and the boto3 SDK. (For assistance, see the Getting Started Resource Center and Start Building with SDKs and Tools.) All CloudFormation templates and Python code used in this article can be found in this GitHub Repository.

Setting Up the Solution

You need to create or use an existing S3 bucket for storing CloudFormation templates and Python code for an AWS Lambda function. This S3 bucket is referred to throughout the blog post as the <S3 infrastructure-bucket>. Ensure that the bucket does not block new bucket policies or cross-account access by checking the bucket’s Permissions tab and the Public access settings button.

You also need a bucket policy that allows each account that will stream logs to access the bucket when we create the AWS Lambda function below. To do so, update your bucket policy to include each new account you create and the <S3 infrastructure-bucket> ARN (shown at the top of the Bucket policy editor page) in the following template:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": [
                  "03XXXXXXXX85",
                  "29XXXXXXXX02",
                  "13XXXXXXXX96",
                  "37XXXXXXXX30",
                  "86XXXXXXXX95"
                ]
            },
            "Action": [
                "s3:Get*",
                "s3:List*"
            ],
            "Resource": [
                "arn:aws:s3:::<S3 infrastructure-bucket>",
                "arn:aws:s3:::<S3 infrastructure-bucket>/*"
            ]
        }
    ]
}

Clone a local copy of the CloudFormation templates and Python code from the GitHub repository. Compress the CentralLogging.py and lambda.py files into a .zip file for the Lambda function we create below, and name it AddSubscriptionFilter.zip. Load these local files into the <S3 infrastructure-bucket>. I recommend using folders called /python for the .py files, /lambdas for the AddSubscriptionFilter.zip file, and /cfn for the CloudFormation templates.

Multi-Account Configuration and the Central Logging Account

One form of multi-account configuration is the Landing Zone offering, which provides a core logging account for storing all logs for auditing. I use this account configuration as an example in this blog post. Initially, the Landing Zone setup creates several stack sets and resources, including roles, security groups, alarms, Lambda functions, a CloudTrail trail, and an S3 bucket.

If you are not using a Landing Zone, create an appropriately named S3 bucket in the account you have chosen as a logging account. This S3 bucket will be referred to later as the <LoggingS3Bucket>. To mimic what the Landing Zone calls its logging bucket, you can use the format aws-landing-zone-logs-<Account Number><Region>, or simply pick an appropriate name for the centralized logging location. In a production environment, remember that it is critical to lock down the access to logging resources and the permissions allowed within the account to prevent deletion or tampering with the logs.

Figure 1 – Initial Landing Zone logging account resources

The S3 bucket – aws-landing-zone-logs-<Account Number><Region> – is the most important resource created by the stack sets for logging purposes. It contains all of the logs streamed to it from all of the accounts. Initially, the Landing Zone only sends the AWS CloudTrail and AWS Config logs to this S3 bucket.

In order to send all of the other CloudWatch Logs that are necessary for auditing, we need to add a destination and streaming mechanism to the logging account.

Logging Account Infrastructure

The additional infrastructure required in the central logging account provides a destination for the log group subscription filters and a stream that receives the log events sent from all accounts and appropriate regions and loads them into the <LoggingS3Bucket> repository. The selection of these particular AWS resources is important, because Kinesis Data Streams is currently the only resource supported as a destination for cross-account CloudWatch Logs subscription filters.

The centralLogging.yml CloudFormation template automates the creation of the entire required infrastructure in the core logging account. Make sure to run it in each of the regions in which you need to centralize logs. The log group subscription filter and destination regions must match in order to successfully stream the logs.

Installation Instructions:

  1. Modify the centralLogging.yml template to add your account numbers for all of the accounts you want to stream logs from into the DestinationPolicy where you see the <AccountNumberHere> placeholders. Remove any unused placeholders.
  2. In the same DestinationPolicy, modify the final arn statement, replacing <region> with the region it will be run in (e.g., us-east-1), and the <logging account number> with the account number of the logging account where this template is to be run.
  3. Log in to the core logging account and access the AWS management console using administrator credentials.
  4. Navigate to CloudFormation and click the Create Stack button.
  5. Select Specify an Amazon S3 template URL and enter the Link for the centralLogging.yml template found in the <S3 infrastructure-bucket>.
  6. Enter a stack name, such as CentralizedLogging, and the one parameter called LoggingS3Bucket. Enter the ARN of the logging bucket: arn:aws:s3:::<LoggingS3Bucket>. This can be obtained by opening the S3 console, clicking on the bucket icon next to this bucket, and then clicking the Copy Bucket ARN button.
  7. Skip the next page, acknowledge the creation of IAM resources, and Create the stack.
  8. When the stack completes, select the stack name to go to stack details and open the Outputs. Copy the value of the DestinationArnExport, which will be needed as a parameter for the script in the next section.

Upon successful creation of this CloudFormation stack, the following new resources will be created:

  • Amazon CloudWatch Logs Destination
  • Amazon Kinesis Stream
  • Amazon Kinesis Firehose Stream
  • Two AWS Identity and Access Management (IAM) Roles

Figure 2 – New infrastructure required in the centralized logging account

Because the Landing Zone is a multi-account offering, the Log Destination is required to be the destination for all subscription filters. The key feature of the destination is its DestinationPolicy. Whenever a new account is added to the environment, its account number needs to be added to this DestinationPolicy in order for logs to be sent to it from the new account. Add the new account number in the centralLogging.yml CloudFormation template, and run an update in CloudFormation to complete the addition. A sample Destination Policy looks like this:

{
  "Version" : "2012-10-17",
  "Statement" : [
    {
      "Effect" : "Allow",
      "Principal" : {
        "AWS" : [
          "03XXXXXXXX85",
          "29XXXXXXXX02",
          "13XXXXXXXX96",
          "37XXXXXXXX30",
          "86XXXXXXXX95"
        ]
      },
      "Action" : "logs:PutSubscriptionFilter",
      "Resource" : "arn:aws:logs:<Region>:<LoggingAccountNumber>:destination:CentralLogDestination"
    }
  ]
}

The Kinesis Stream gets records from the Logs Destination and holds them for 48 hours. Kinesis Streams scale by adding shards. The CloudFormation template starts the stream with two shards. You need to monitor this as instances and applications are deployed into the accounts, however, because all CloudWatch log objects will flow through this stream, and it will need to be scaled up at some point. To scale, change the number of shards (ShardCount) in the Kinesis Stream resource (KinesisLoggingStream) to the required number. See the Amazon Kinesis Data Streams FAQ documentation to confirm the capacity and throughput of each shard.
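
You can change the shard count in the template and update the stack, or scale the stream directly; the following is a hypothetical example (the stream name and target count are placeholders) using the UpdateShardCount API:

import boto3

kinesis = boto3.client("kinesis")

# Hypothetical: double the central logging stream from two to four shards.
kinesis.update_shard_count(
    StreamName="KinesisLoggingStream",   # placeholder stream name
    TargetShardCount=4,
    ScalingType="UNIFORM_SCALING",
)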

Kinesis Firehose provides a simple and efficient mechanism to retrieve the records from the Kinesis Stream and load them into the <LoggingS3Bucket> repository. It uses the CloudFormation template parameter to know where to load the logs. All of the CloudWatch logs loaded by Firehose will be under the prefix /CentralizedAccountsLog. The buffering hints for Firehose suggest that the logs be loaded every 5 minutes or 50 MB. Leave the CompressionFormat UNCOMPRESSED, since the logs are already compressed.

There are two AWS Identity and Access Management (IAM) roles created for this infrastructure. The first, CWLtoKinesisRole, is used by the destination to allow CloudWatch Logs from all regions to use the destination to put the log object records into the Kinesis Stream, as well as to pass the role. The second, FirehoseDeliveryRole, allows Firehose to get the log object records from the Kinesis Stream and then load them into the S3 logging bucket.

Once you have successfully created this infrastructure, the next step is to add the subscription filters to existing log groups.

Adding Subscription Filters to Existing Log Groups

The next step in the process is to add subscription filters for the Log Destination in the core logging account to all existing log groups. Several log groups are created by the Landing Zone, or you may have created them by using various AWS services or by logging application events. For every new AWS account, you will need to run the init_account_central_logging.py Python script to add the subscription filters to all the existing log groups.

The init_account_central_logging.py script takes one parameter, which is the Log Destination ARN. Use the Destination ARN you copied from the stack details output in the previous section as the parameter to the script.

The init_account_central_logging.py script first adds this Destination ARN to the AWS Systems Manager Parameter Store so that the core logic that creates the subscription filter can use it. The script then gets a list of all existing log groups, iterates over them, deletes any existing subscription filters (because there can only be one subscription filter per log group, and attempting to create another would cause an error), and then adds a new subscription filter that points to the Log Destination in the centralized logging account.
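
The authoritative code lives in the GitHub repository; the following is only a simplified, hypothetical sketch of that core loop (the destination ARN stands in for the value passed with -d):

import boto3

logs = boto3.client("logs")
destination_arn = "<LogDestinationArn>"  # placeholder for the -d parameter

paginator = logs.get_paginator("describe_log_groups")
for page in paginator.paginate():
    for group in page["logGroups"]:
        name = group["logGroupName"]
        # Only one subscription filter is allowed per log group, so remove any
        # existing filter before adding the centralized one.
        existing = logs.describe_subscription_filters(logGroupName=name)["subscriptionFilters"]
        for f in existing:
            logs.delete_subscription_filter(logGroupName=name, filterName=f["filterName"])
        logs.put_subscription_filter(
            logGroupName=name,
            filterName="Logs (CentralLogDestination)",
            filterPattern="",  # forward all log events
            destinationArn=destination_arn,
        )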

Figure 3 – Run script to add subscription filters to existing log groups

Installation Instructions:

  1. Make sure that Python and boto3 are installed and accessible in the client computer – consider loading into a virtual environment to keep dependencies separate.
  2. Set the AWS_PROFILE environment variable to the appropriate AWS account profile.
  3. Log in to the proper account, and obtain administrator or other credentials with appropriate permissions, and add the account access key and secret key to the AWS credentials file.
  4. Set the region and output in the AWS config file.
  5. Download and place two python files into a working directory: init_account_central_logging.py and CentralLogging.py.
  6. Run the script using the command python3 ./init_account_central_logging.py -d <LogDestinationArn>.

Use the AWS Management Console to validate the results. Navigate to CloudWatch Logs and view all of the log groups. Each one should now have a subscription filter named “Logs (CentralLogDestination).”

Automatically Adding Subscription Filters to New Log Groups

The final step in setting up the centralized log streaming capability is to run a CloudFormation script to create resources that automatically add subscription filters to new log groups. New log groups are created in accounts by resources (e.g., Lambda functions) and by applications. A subscription filter must be added to every new log group in order to deliver its log events to the logging account.

The AddSubscriptionFilter.yml CloudFormation template contains resources to automatically add subscription filters.

First, it creates a role that allows it to access the lambda code that is stored in a centralized location – the <S3 infrastructure-bucket>. (Remember that its S3 bucket policy must contain this account number in order to access the lambda code.)

Second, the template creates the AddSubscriptionLambda, which reuses the core logic shared by the script in the last section. It retrieves the proper destination from the Parameter Store, deletes any existing subscription filter from the log group, and adds the new subscription filter to the newly created log group. This lambda function is triggered by a CloudWatch event rule.

Third, the CloudFormation creates a Lambda Permission, which allows the event trigger to invoke this particular lambda.

Finally, the CloudFormation template creates an Amazon CloudWatch Events Rule that acts as a trigger for the lambda. This rule looks for an event coming from CloudTrail that signals the creation of a new log group. For each create log group event found, it invokes the AddSubscriptionLambda.
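
The CloudFormation template is the authoritative definition of this rule; as a hypothetical illustration of the same trigger expressed with boto3 (the rule name and Lambda ARN are placeholders):

import json
import boto3

events = boto3.client("events")

# Fire on CloudTrail CreateLogGroup events.
events.put_rule(
    Name="NewLogGroupRule",
    EventPattern=json.dumps({
        "source": ["aws.logs"],
        "detail-type": ["AWS API Call via CloudTrail"],
        "detail": {
            "eventSource": ["logs.amazonaws.com"],
            "eventName": ["CreateLogGroup"],
        },
    }),
)

# Invoke the AddSubscriptionLambda function for each matching event.
events.put_targets(
    Rule="NewLogGroupRule",
    Targets=[{
        "Id": "AddSubscriptionLambda",
        "Arn": "arn:aws:lambda:us-east-1:123456789012:function:AddSubscriptionLambda",
    }],
)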

Figure 4 – Infrastructure to automatically add a subscription filter to a new log group and the log flow to the centralized account

Installation Instructions:

(Important note: This functionality requires that the LogDestination parameter be properly set to the LogDestinationArn in the Parameter Store before the Lambda will run successfully. The script in the previous step sets this parameter, or it can be done manually. Make certain that the destination specified is in this same region.)

  1. Ensure that the <S3 infrastructure-bucket> has the AddSubscriptionFilter.zip file containing the Python code files lambda.py and CentralLogging.py.
  2. Log in to the appropriate account, and access using administrator credentials. Make sure that the region is set properly.
  3. Navigate to Cloudformation and click the Create Stack button.
  4. Select Specify an Amazon S3 template URL and enter the Link for the AddSubscriptionFilter.yml template found in <S3 infrastructure-bucket>
  5. Enter a stack name, such as AddSubscription.
  6. Enter the two parameters, the <S3 infrastructure-bucket> name (not ARN) and the folder and file name (e.g., lambdas/AddSubscriptionFilter.zip)
  7. Skip the next page, acknowledge the creation of IAM resources, and Create the stack.

In order to test that the automated addition of subscription filters is working properly, use the AWS Management Console to navigate to CloudWatch Logs and click the Actions button. Select Create New Log Group and enter a random log group name, such as “testLogGroup.” When first created, the log group will not have a subscription filter. After a few minutes, refresh the display and you should see the new subscription filter on the log group. At this point, you can delete the test log group.

New Account Setup

As a reminder, when you add new accounts that you want to have stream log events to the central logging account, you will need to configure the new accounts in two places in order for this functionality to work properly.

First, add the account number to the LoggingDestination property DestinationPolicy in the centralLogging.yml template. Then, update the CloudFormation stack.

Second, modify the bucket policy for the <S3 infrastructure-bucket>. Select the Permissions tab, then the Bucket Policy button. Add the new account to allow cross-account access to the lambda code by adding the line “arn:aws:iam::<new account number>:root” to the Principal.AWS list.

Conclusion

Centralized logging is a key component in enterprise multi-account architectures. In this blog post, I have built on the central logging in multi-account environments streaming architecture to automatically subscribe all CloudWatch Logs log groups to send all log events to an S3 bucket in a designated logging account. The solution uses a script to add subscription filters to existing log groups, and a lambda function to automatically place a subscription filter on all new log groups created within the account. This can be used to forward application logs, security logs, VPC flow logs, or any other important logs that are required for audit, security, or compliance purposes.

About the author

David Bailey is a Cloud Infrastructure Architect with AWS Professional Services specializing in serverless application architecture, IoT, and artificial intelligence. He has spent decades architecting and developing complex custom software applications, as well as teaching internationally on object-oriented design, expert systems, and neural networks.

New: Application integration with AWS Cloud Map for service discovery

Post Syndicated from AWS Admin original https://aws.amazon.com/blogs/architecture/new-application-integration-with-aws-cloud-map-for-service-discovery/

By: Alexandr Moroz, Sr. Product Manager, Amazon Route 53; Madhuri Peri, Sr. IoT Architect, AWS Professional Services; Aaron Molitor, Sr. Infrastructure Architect, AWS Professional Services; and Sarma Palli, Sr. DevOps Architect, AWS Professional Services

AWS Cloud Map enables you to map your cloud. You can define friendly names for any resource, such as Amazon S3 buckets, Amazon DynamoDB tables, Amazon SQS queues, or custom cloud services built on Amazon EC2, Amazon ECS, Amazon EKS, or AWS Lambda. Your applications can then discover resource location and metadata by friendly name using the AWS SDK and authenticated API queries. Resources can be further filtered and discovered by custom attributes such as deployment stage or version.

What’s new with API service discovery

If you want an enterprise application component, such as a database hosted on Amazon EC2 instances, to provide an endpoint to your database service, you have to register the application’s EC2 IP address with AWS Cloud Map. You can also register additional metadata attributes, like INSTANCE_STATUS, and then use this attribute with AWS Cloud Map to identify when the service is READY, so that querying applications only attempt a connection when they see a READY status in AWS Cloud Map. In cases where different microservices or enterprise applications have endpoints that have to be discovered, you can use AWS Cloud Map to register those as well. Examples of such endpoints include ELB load balancers – ELB Classic, Application Load Balancers (ALB), and Network Load Balancers (NLB) – with Auto Scaling groups.

Compute stack choices

Modern application architectures require a way to expose and advertise the service endpoint, register and de-register the endpoints, and query them. Applications are expected to handle their own dependencies, which is where a service registry becomes critical.

These microservices can follow different architecture patterns, which lend themselves to using:

  1. Traditional workloads running on Amazon EC2 fronted by Auto Scaling groups or an ELB load balancer such as ELB Classic, Application Load Balancer, or Network Load Balancer.
  2. Amazon API Gateway and AWS Lambda for event-driven workflows.
  3. Container-based workloads on Amazon Elastic Container Service (ECS) using EC2 or Fargate launch types and Amazon Elastic Container Service for Kubernetes (EKS) for workloads that run as services (long-running) or daemons or run to completion (Batch / cron type).

This image shows a typical enterprise application composed of components that run different architectures. There is a web server running on Amazon EKS, a backend on Amazon ECS, a serverless event registration service, and payments running on EC2 Auto Scaling groups (ASG) while leveraging databases on Amazon Relational Database Service (RDS).

 

From a service discovery perspective, this is how the applications would want to be discovered and queried:

Let’s see how we can register each of these microservices (which are running on different cloud compute products) with AWS Cloud Map using both DNS-based and API-based service discovery and leveraging attributes for discovery when components are ready for traffic.

Microservice endpoints and discovery

AWS Cloud Map is a managed solution that lets you map logical names to the components/resources for an application. It allows applications to discover the resources using one of the AWS SDKs, RESTful API calls, or DNS queries. AWS Cloud Map serves registered resources, which can be Amazon DynamoDB tables, Amazon Simple Queue Service (SQS) queues, or any higher-level application services built using EC2 instances, ECS tasks, or a serverless stack.

When you register a resource, you can specify attributes and clients that can use the attributes to filter which resources are to be returned. For example, an application can request resources in a particular deployment stage, like Gamma or Prod. Additionally, you can choose to enable health checking for your IP-based resources, ensuring that AWS Cloud Map returns only healthy endpoints. Each API call is authenticated, and developers can control access to service locations and configuration using AWS Identity and Access Management (IAM).  This ensures that clients always discover the services that they’re authorized to use.

Let’s cover fundamentals

There are two aspects to service discovery:

  • The microservices themselves that register/de-register
  • Other microservices that discover/query those microservices

To register a microservice, follow these steps:

  1. Create a namespace.
  2. Create a service.
  3. Register instances with the service.

Steps 1 and 2 are performed once for each service. You have to create a utility function for registering and de-registering a microservice; this utility function can be invoked for microservices regardless of the compute stack choice and deployed through your CI/CD/DevOps processes.

Step 3 is an ongoing operation that has to be repeated each time the underlying EC2 compute that powers the service changes. Examples include: EC2 Amazon Machine Image (AMI) changes, code changes for the service, and version changes.

Creating a namespace

A namespace is a logical group of services that share the same domain name, such as example1.example.com or example2.example.com. If you want these namespaces to be queried only from within your VPC, opt for a private namespace. If you want them to be accessible over the Internet, create a public namespace. In our example, example1 could be a public namespace, while a tracker/reporting service that counts usage of items in example1 could be an internal service in a private namespace.

Microservice using DNS-based service discovery:
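
The original screenshot is not reproduced here; a hypothetical boto3 equivalent (the namespace name and VPC ID are placeholders) is:

import boto3

servicediscovery = boto3.client("servicediscovery")

# Create a private DNS namespace that is only resolvable from inside the VPC.
servicediscovery.create_private_dns_namespace(
    Name="example1.example.com",      # placeholder namespace name
    Vpc="vpc-0123456789abcdef0",      # placeholder VPC ID
    Description="Private namespace for internal microservices",
)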

Microservice using API-based service discovery:
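
Again, in place of the original screenshot, a hypothetical equivalent (the namespace name is a placeholder):

import boto3

servicediscovery = boto3.client("servicediscovery")

# Create an HTTP (API-only) namespace; no DNS records are created for it.
servicediscovery.create_http_namespace(
    Name="example1-api",              # placeholder namespace name
    Description="API-only namespace for internal microservices",
)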

Creating a service

When you create a service, AWS Cloud Map creates a record in the hosted zone – a combination of the name of the service and the name of the namespace. You can optionally define a health check for the service, too.

If the service you are creating is meant for DNS-based discovery using one of the A, AAAA, or SRV records, then you can create your service using the following syntax. Examples of this could be your application code running on an EC2 instance or as a container (ECS/EKS).
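
The original code sample is not reproduced here; a hypothetical boto3 equivalent (service name, namespace ID, and TTL are placeholders) looks like this:

import boto3

servicediscovery = boto3.client("servicediscovery")

# Create a service that publishes an A record in the DNS namespace.
servicediscovery.create_service(
    Name="payments",                       # placeholder service name
    NamespaceId="ns-0123456789abcdef0",    # placeholder namespace ID
    DnsConfig={
        "RoutingPolicy": "MULTIVALUE",
        "DnsRecords": [{"Type": "A", "TTL": 60}],
    },
    HealthCheckCustomConfig={"FailureThreshold": 1},
)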

For services that are meant to be used only in an API-only namespace, the API call would look like this:
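
In place of the original sample, a hypothetical equivalent for an API-only namespace (the names and IDs are placeholders); note that no DnsConfig is supplied:

import boto3

servicediscovery = boto3.client("servicediscovery")

# Create a service in the HTTP (API-only) namespace; instances are found
# only through DiscoverInstances.
servicediscovery.create_service(
    Name="event-registration",             # placeholder service name
    NamespaceId="ns-0fedcba9876543210",     # placeholder HTTP namespace ID
)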

Register the compute backend with the service

Container IP address register/de-register

Amazon ECS is tightly integrated with AWS Cloud Map to enable service discovery for compute workloads running in ECS. When you enable service discovery for ECS services, it automatically keeps track of all task instances in AWS Cloud Map. Your applications can then discover them using DNS queries and AWS Cloud Map DiscoverInstances API calls. The ECS control plane issues the calls to register the IP address of the task (and containers) with AWS Cloud Map.

When the task goes away – either because a newer version has been deployed or there is a crash or a restart – the ECS control plane handles the de-registration process as well.

If you are using ECS for running containers, this is done seamlessly with ECS and AWS Cloud Map API integration.

API Gateway URL and AWS Lambda

When you create a microservice with an API namespace, you could use any attributes you prefer, without providing the IP/port information.

EC2 instance IP address registration and de-registration

As the EC2 instances come online, the user data section of the bootstrap configuration issues commands to register the EC2 instance’s IP address with the service. An alternate approach is to run a Lambda function against a microservice’s Auto Scaling group that lists the IP addresses and registers the instances with the service.

If the EC2 instance is part of an Auto Scaling group, lifecycle hooks can also be used to run the de-registration scripts. Another approach is to use a Lambda function that runs periodically against an Auto Scaling group, or even one that fires on Auto Scaling group notification events.
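
A hedged sketch of the register/de-register calls themselves (the service ID, instance ID, IP address, and the INSTANCE_STATUS attribute are placeholder assumptions):

import boto3

servicediscovery = boto3.client("servicediscovery")
service_id = "srv-0123456789abcdef0"   # placeholder service ID
instance_id = "i-0123456789abcdef0"    # placeholder EC2 instance ID

# On boot (for example, from user data): register the instance's private IP.
servicediscovery.register_instance(
    ServiceId=service_id,
    InstanceId=instance_id,
    Attributes={
        "AWS_INSTANCE_IPV4": "10.0.1.23",   # placeholder private IP
        "INSTANCE_STATUS": "READY",          # custom attribute used for filtering
    },
)

# On termination (for example, from a lifecycle-hook Lambda): de-register it.
servicediscovery.deregister_instance(ServiceId=service_id, InstanceId=instance_id)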

Query/Discovery

AWS Cloud Map now supports both DNS-based and API-based service discovery. Supported DNS record types are A, AAAA, SRV, and CNAME.

It is typical in a microservices architecture for a service to be able to discover other services. We recommend that you query only by name and/or endpoint, and do not use the IP address of the compute stack (AWS Lambda, container, or EC2) that is backing the service.

The API commands list_services and get_service provide information on which services are available and their corresponding details.

DNS clients also cache responses, so make sure that you account for caching behavior. AWS Cloud Map uses regional endpoints here. Any A records created will use either a WEIGHTED response or MULTIVALUE answer policy. If you are using a Java-based compute stack, you might not want to choose DNS-based service discovery, because the JVM caches DNS name lookups. When the JVM resolves a hostname to an IP address, it caches the IP address for a specified period of time, known as the TTL. In such cases, you can use API-based service discovery and leverage the same approach as your other microservices that use AWS Cloud Map.

DiscoverInstances API

The DiscoverInstances API discovers registered instances for a specified namespace and service using regional endpoints. Updates to your services, such as new instances registered or existing instances removed, are available through the API faster than via DNS. The API also provides the ability to decorate resources with additional metadata (service attributes) that can be used during discovery – for example, getting only the services with an attribute of blue or green, or other application attributes. These attributes can be used to complement health checks while performing discovery (such as finding out whether an instance is ready or not).
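
A hypothetical DiscoverInstances query that filters on such an attribute (the namespace, service, and attribute values are placeholders):

import boto3

servicediscovery = boto3.client("servicediscovery")

response = servicediscovery.discover_instances(
    NamespaceName="example1.example.com",    # placeholder namespace
    ServiceName="payments",                  # placeholder service
    QueryParameters={"INSTANCE_STATUS": "READY"},
)

for instance in response["Instances"]:
    print(instance["InstanceId"], instance["Attributes"].get("AWS_INSTANCE_IPV4"))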

Here is a screenshot that shows how the registered ECS task instances appear in the AWS Cloud Map console:

The idea is that as the container or EC2 instance comes online or goes offline, it needs to issue a call to the AWS Cloud Map API to register or de-register the compute IP address.

Get started by visiting the AWS Cloud Map page. To learn more, take a look at the demo code in the GitHub repo here. If your compute workloads use EKS, please refer to this blog post that shows how to make EKS automatically publish all services in AWS Cloud Map.

AWS Storage Update: Amazon S3 & Amazon S3 Glacier Launch Announcements for Archival Workloads

Post Syndicated from AWS Admin original https://aws.amazon.com/blogs/architecture/amazon-s3-amazon-s3-glacier-launch-announcements-for-archival-workloads/

By Matt Sidley, Senior Product Manager for S3

Customers have built archival workloads for several years using a combination of S3 storage classes, including S3 Standard, S3 Standard-Infrequent Access, and S3 Glacier. For example, many media companies are using the S3 Glacier storage class to store their core media archives. Most of this data is rarely accessed, but when they need data back (for example, because of breaking news), they need it within minutes. These customers have found S3 Glacier to be a great fit because they can retrieve data in 1-5 minutes and save up to 82% on their storage costs. Other customers in the financial services industry use S3 Standard to store recently generated data, and lifecycle older data to S3 Glacier.

We launched Glacier in 2012 as a secure, durable, and low-cost service to archive data. Customers can use Glacier either as an S3 storage class or through its direct API. Using the S3 Glacier storage class is popular because many applications are built to use the S3 API and with a simple lifecycle policy, older data can be easily shifted to S3 Glacier. S3 Glacier continues to be the lowest-cost storage from any major cloud provider that durably stores data across three Availability Zones or more and allows customers to retrieve their data in minutes.

We’re constantly listening to customer feedback and looking for ways to make it easier to build applications in the cloud. Today we’re announcing six new features across Amazon S3 and S3 Glacier.

Amazon S3 Object Lock

S3 Object Lock is a new feature that prevents data from being deleted during a customer-defined retention period. You can use Object Lock with any S3 storage class, including S3 Glacier. There are many use cases for S3 Object Lock, including customers who want additional safeguards for data that must be retained, and for customers migrating from existing write-once-read-many (WORM) systems to AWS. You can also use S3 Lifecycle policies to transition data and S3 Object Lock will maintain WORM protection as your data is tiered.

S3 Object Lock can be configured in one of two modes: Governance or Compliance. When deployed in Governance mode, only AWS accounts with specific IAM permissions are able to remove the lock. If you require stronger immutability to comply with regulations, you can use Compliance mode. In Compliance mode, the lock cannot be removed by any user, including the root account. Take a look here:
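
The console walkthrough from the original screenshot is not reproduced here; as a hypothetical illustration (the bucket, key, and retention date are placeholders, and the bucket must have been created with Object Lock enabled), an object can be written under a Compliance-mode lock like this:

import boto3
from datetime import datetime, timezone

s3 = boto3.client("s3")

# Write an object under a Compliance-mode retention lock.
with open("account-42.pdf", "rb") as f:
    s3.put_object(
        Bucket="my-regulated-records",
        Key="statements/2018/account-42.pdf",
        Body=f,
        ObjectLockMode="COMPLIANCE",
        ObjectLockRetainUntilDate=datetime(2025, 12, 31, tzinfo=timezone.utc),
    )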

S3 Object Lock is helpful in industries where long-term records retention is mandated by regulations or compliance rules. S3 Object Lock has been assessed for SEC Rule 17a-4(f), FINRA Rule 4511, and CFTC Regulation 1.31 by Cohasset Associates. Cohasset Associates is a management consulting firm specializing in records management and information governance. Read more and find a copy of the Cohasset Associates Assessment report in our documentation here.

New S3 Glacier Features

One of the things we hear from customers about using S3 Glacier is that they prefer to use the most common S3 APIs to operate directly on S3 Glacier objects. Today we’re announcing the availability of S3 PUT to Glacier, which enables you to use the standard S3 “PUT” API and select any storage class, including S3 Glacier, to store the data. Data can be stored directly in S3 Glacier, eliminating the need to upload to S3 Standard and immediately transition to S3 Glacier with a zero-day lifecycle policy. You can “PUT” to S3 Glacier like any other S3 storage class:
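
In place of the original screenshot, a hypothetical example of a direct PUT to the S3 Glacier storage class (the bucket and key names are placeholders):

import boto3

s3 = boto3.client("s3")

# Store an object directly in the S3 Glacier storage class with a standard PUT.
with open("episode-042.mxf", "rb") as f:
    s3.put_object(
        Bucket="my-media-archive",               # placeholder bucket
        Key="raw-footage/2018/episode-042.mxf",  # placeholder key
        Body=f,
        StorageClass="GLACIER",
    )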

Many customers also want to keep a low-cost durable copy of their data in a second region for disaster recovery. We’re also announcing the launch of S3 Cross-Region Replication to S3 Glacier. You can now directly replicate data into the S3 Glacier storage class in a different AWS region.

Restoring Data from S3 Glacier

S3 Glacier provides three restore speeds for you to access your data: expedited (to retrieve data in 1-5 minutes), standard (3-5 hours), or bulk (5-12 hours). With S3 Restore Speed Upgrade, you can now issue a second restore request at a faster restore speed and get your data back sooner. This is useful if you originally requested standard or bulk speed, but later determine that you need a faster restore speed.
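
A hedged example of issuing that second, faster restore request (the bucket, key, and retention days are placeholders):

import boto3

s3 = boto3.client("s3")

# Issue a second restore request at Expedited speed for an object whose
# original restore was requested at Standard or Bulk speed.
s3.restore_object(
    Bucket="my-media-archive",               # placeholder bucket
    Key="raw-footage/2018/episode-042.mxf",  # placeholder key
    RestoreRequest={
        "Days": 2,
        "GlacierJobParameters": {"Tier": "Expedited"},
    },
)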

After a restore from S3 Glacier has been requested, you likely want to know when the restore completes. Now, with S3 Restore Notifications, you’ll receive a notification when the restoration has completed and the data is available. Many applications today are being built using AWS Lambda and event-driven actions, and you can now use the restore notification to automatically trigger the next step in your application as soon as S3 Glacier data is restored. For example, you can use notifications and Lambda functions to package and fulfill digital orders using archives restored from S3 Glacier.

Here, I’ve set up notifications to fire when my restores complete so I can use Lambda to kick off a piece of analysis I need to run:
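
The console configuration shown in the original screenshot is not reproduced here; a roughly equivalent, hypothetical API call (the bucket name and Lambda ARN are placeholders) would be:

import boto3

s3 = boto3.client("s3")

# Send an event to a Lambda function whenever a restore completes.
s3.put_bucket_notification_configuration(
    Bucket="my-media-archive",   # placeholder bucket
    NotificationConfiguration={
        "LambdaFunctionConfigurations": [{
            "Id": "restore-complete-trigger",
            "LambdaFunctionArn": "arn:aws:lambda:us-east-1:123456789012:function:run-analysis",
            "Events": ["s3:ObjectRestore:Completed"],
        }]
    },
)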

You might need to restore many objects from S3 Glacier; for example, to pull all of your log files within a given time range. Using the new S3 Batch Operations feature, now in preview, you can provide a manifest of those log files and, with one request, initiate a restore on millions or even trillions of objects as easily as you can on just a few. S3 Batch Operations automatically manages retries, tracks progress, sends notifications, generates completion reports, and delivers events to AWS CloudTrail for all changes made and tasks executed.

To get started with the new features on Amazon S3, visit https://aws.amazon.com/s3/. We’re excited about these improvements and think they’ll make it even easier to build archival applications using Amazon S3 and S3 Glacier. And we’re not yet done. Stay tuned, as we have even more coming!

Your guide to AWS Digital User Engagement at re:Invent 2018

Post Syndicated from Hannah Nilsson original https://aws.amazon.com/blogs/messaging-and-targeting/your-guide-to-aws-digital-user-engagement-at-reinvent-2018/

Plan your Digital User Engagement agenda at re:Invent 2018

With roughly 50,000 attendees expected to descend upon Las Vegas next week for the seventh AWS re:Invent, our re:Invent 2018 agenda has unsurprisingly grown along with the crowd size. This year, we have added a full day of additional content, expanded our campus, added expo hours, and included more than 1,000 breakout sessions, hackathons, bootcamps, workshops, and other ways to get hands-on experience with AWS. With so much going on, we wanted to create a quick guide to how you can connect with the AWS Digital User Engagement team to learn what’s new with the technology and tactics for effectively engaging your customers.

Meet us in the Expo Hall

Stop by the Mobile booth in the Venetian on Level 2 to learn more about our latest features (psst—did you know we just released a voice channel and event-based campaigns?), pick up some Amazon Pinpoint swag, and discuss strategies for transformative digital user engagement with our team of experts.

Expo hours are:

  • Monday, 11/26 from 4pm-7pm PT
  • Tuesday, 11/27 from 8am-8pm PT
  • Wednesday, 11/28 from 10am-6pm PT
  • Thursday, 11/29 from 10am-4pm PT

Livestream our Launchpad Session with Disney Streaming Services:

Unable to attend re:Invent in person? Check out our session with Disney Streaming Services live on Twitch. William Liu and Jimmy Tam from Disney Streaming will join us to discuss how migrating to Amazon Pinpoint from their internal digital engagement platform has allowed them to send billions of messages to their end users in real time.

  • Tuesday, 11/27, 5:15 PM – 5:35 PM  PT
  • Watch it live on Twitch, or stop by the Launchpad stage in the Expo Hall

Attend the Digital User Engagement Sessions:

Must-attend sessions:

(DIG302-L) Leadership Session: Overview of Amazon Digital User Engagement Solutions
Wednesday, 11/27, 3:15 PM – 4:15 PM PT – Venetian, Level 4, Delfino 4002
In this session, Disney Streaming Services will share how they utilize AWS Digital User Engagement platform to send billions of users relevant content in real time. Simon Poile, GM of AWS Digital User Engagement, will also describe how AWS provides the Amazon customer-centric culture of innovation, key technology building blocks, and a user engagement platform to help companies better engage their users. Plus, session attendees can claim a special piece of Amazon Pinpoint swag.

Enable Your Marketing Teams to Engage Users with Relevant & Personalized Content (DIG204)
Monday, Nov 26, 4:45 PM – 5:45 PM– Aria East, Level 2, Mariposa 8
This chalk talk is delivered by the innovation team from Claro, a major Brazilian telecommunications company that serves over 50 million customers with broadband, mobile, cable TV, and video-on-demand services. Members of the Claro innovation team describe how they enable their marketing departments to engage customers through campaigns that send personalized messages, such as billing reminders, updates on internet credits, and upcoming program alerts. They share how they then measure customer engagement and user behaviors, all using Amazon Pinpoint.

Game On! Building Hulu’s Real-Time Notification Platform for Live TV with Amazon Pinpoint (MOB304)
Wednesday, Nov 28, 1:45 PM – 2:45 PM– Venetian, Level 4, Marcello 4505
Notifying their viewers when their favorite teams are playing helps Hulu drive growth and improve viewer engagement, but building this feature was a complex process. Managing their live TV metadata while generating audiences in real-time in high scalability scenarios posed unique challenges for the engineering team at Hulu. In this session, Hulu will talk about their challenges in building their real-time notification platform, how Pinpoint helped them with their goals, and how they architected their solution for global scale and deliverability.

Full session list for re:Invent attendees:

Monday, November 26, 2018

Perform Social Media Sentiment Analysis with Amazon Pinpoint & Amazon Comprehend (MOB314)
Monday, 11/26, 10:00 AM – 12:15 PM PT – MGM, Level 1, Grand Ballroom 124
In this workshop, we show you how to easily deploy an AWS solution that ingests all Tweets from any Twitter handle, uses Amazon Comprehend to generate a sentiment score, and then automatically engages customers with a dynamic, personalized message. The intended audience is developers and marketers who want to leverage AWS to create powerful user engagement scenarios. We highlight how quickly you can deploy a machine learning marketing solution. We cover Amazon Pinpoint, the AWS user engagement service, and Amazon Comprehend, the AWS natural language processing service that uses artificial intelligence and machine learning to find insights and relationships in text.

Delight your customers through natural language conversational experiences – powered by Amazon Lex and Amazon Pinpoint (AIM360)
Monday, 11/26, 11:30 AM – 12:30 PM PT – Mirage, St. Croix A
In this chalk talk, we first describe various use cases to engage customers through natural language conversations. We then showcase how to prepare, implement, and continuously improve such solutions. We use Amazon Lex to drive the chatbot interactions and capture user events in Amazon Pinpoint analytics. The event data is then used to measure user engagement and sentiment within conversations.

Understand & Remediate Message Deliverability Issues to Improve Customer Reach (DIGF202)
Monday, 11/26, 12:15 PM – 1:15 PM PT – Mirage, St. Thomas B
Several factors determine whether your messages reach your recipients. In this chalk talk, learn some of the critical factors that affect the deliverability of your messages across messaging channels, such as email and SMS. We walk you through some of the best practices, and we introduce you to deliverability tools, such as Phone Number Validate. With these tools, you can improve your customer reach, increasing the effectiveness of your campaigns and ultimately drive more revenue. The intended audience is developers and marketers who send outbound messages. We cover the AWS services Amazon Pinpoint and Amazon Simple Email Service (Amazon SES).

Effectively Engage Millions of Users in Seconds (MOB321 -R)
Monday, 11/26, 1:45 PM -2:45 PM PT – Aria East, Level 1, Joshua 4
In this session, we describe the challenges that companies face with high volume messaging-based engagement and the solutions AWS provides. Learn from current customers about how AWS can be used to message millions of users in seconds, while being cost effective and maintaining high deliverability rates. The intended audience is developers who are supporting their company’s growth or marketing activities using AWS services.

The repeat of this session (MOB321-R1) will take place: Tuesday, 11/27, 5:30 PM – 6:30 PM PT – Bellagio, Level 1, Grand Ballroom 1.

How to Use Amazon Pinpoint and Amazon Kinesis for Events-Based Messaging (MOB408-R)  
Monday, Nov 26, 2:30 PM – 3:30 PM – Aria West, Level 3, Starvine 10, Table 7
In this session, we demonstrate when and how to engage users in real time based on events and user behaviors to drive contextual and relevant user interactions. The intended audience is developers who are supporting marketing activities using AWS services.

The repeats of this session will take place:

Enable Your Marketing Teams to Engage Users with Relevant & Personalized Content (DIG204)
Monday, Nov 26, 4:45 PM – 5:45 PM– Aria East, Level 2, Mariposa 8
This chalk talk is delivered by the innovation team from Claro, a major Brazilian telecommunications company that serves over 50 million customers with broadband, mobile, cable TV, and video-on-demand services. Members of the Claro innovation team describe how they enable their marketing departments to engage customers through campaigns that send personalized messages, such as billing reminders, updates on internet credits, and upcoming program alerts. They share how they then measure customer engagement and user behaviors, all using Amazon Pinpoint.

Tuesday, November 27, 2018

Personalize User Targeting through ML & Measure User Engagement with Your Brand (DIG203)
Tuesday, Nov 27, 10:00 AM – 11:00 AM– Mirage, St. Croix A
In this session, we describe the critical role that multi-channel analytics plays in targeting users and how to use machine learning (ML) to predict the best content, channel, and time to engage. The intended audience is developers and marketers who need to target users and measure their engagement levels. We cover the AWS service Amazon Pinpoint.

Engage Users in Real-Time through Event-Based Messaging (MOB322-R)
Tuesday, Nov 27, 4:00 PM – 5:00 PM– Venetian, Level 4, Lando 4305
In this session, we describe when and how to engage users in real time, based on external events and user behaviors, to drive contextual and relevant user interactions. The intended audience is developers who support marketing activities using AWS services. We cover Amazon Pinpoint and Amazon Kinesis in this session.

The repeat of this session (MOB322-R1) will take place: Thursday, Nov 29, 1:00 PM – 2:00 PM – Venetian, Level 3, Murano 3202.

Wednesday, November 28, 2018

Game On! Building Hulu’s Real-Time Notification Platform for Live TV with Amazon Pinpoint (MOB304)
Wednesday, Nov 28, 1:45 PM – 2:45 PM – Venetian, Level 4, Marcello 4505
Notifying viewers when their favorite teams are playing helps Hulu drive growth and improve viewer engagement, but building this feature was a complex process. Managing live TV metadata while generating audiences in real time at high scale posed unique challenges for the engineering team at Hulu. In this session, Hulu talks about the challenges of building its real-time notification platform, how Amazon Pinpoint helped the team meet its goals, and how it architected the solution for global scale and deliverability.

Thursday, November 29, 2018:

Listen to Your Customers’ Social Voice & Engage Them with Delightful Experiences (DIG301)
Thursday, Nov 29, 12:15 PM – 1:15 PM – MGM, Level 1, Grand Ballroom 116
In this session, we demonstrate how to easily deploy an AWS solution that ingests all Tweets from any Twitter handle, uses Amazon Comprehend to generate a sentiment score, and then automatically engages customers with a personalized message. The intended audience includes developers and marketers who want to leverage AWS to create powerful user engagement scenarios. We highlight how quickly a machine-learning marketing solution can be deployed. We cover the AWS services Amazon Pinpoint, a digital user engagement service, and Amazon Comprehend, a natural language processing service that uses artificial intelligence and machine learning to find insights and relationships in text.

Please note that session times and locations are subject to change. View the session catalogue for the most up-to-date information. For more helpful hints on how to make the most of your re:Invent experience, please visit our “How to re:Invent” guides.

Which session are you most excited about? Let us know in the comments!

Event-based campaigns let you automatically send messages to your customers when they perform certain actions

Post Syndicated from Zach Barbitta original https://aws.amazon.com/blogs/messaging-and-targeting/event-based-campaigns-let-you-automatically-send-messages-to-your-customer-when-they-perform-certain-actions/

Today, we added a new campaign management feature to Amazon Pinpoint: event-based campaigns. You can now use Amazon Pinpoint to set up campaigns that send messages (such as text messages, push notifications, and emails) to your customers when they take specific actions. For example, you can set up a campaign to send a message when a customer creates a new account, or when they spend a certain dollar amount in your app, or when they add an item to their cart but don’t purchase it. Event-based campaigns help you send messages that are timely, personalized, and relevant to your customers, which ultimately increases their trust in your brand and gives them a reason to return.

You can create event-based campaigns by using the Amazon Pinpoint console, or by using the Amazon Pinpoint API. Event-based campaigns are an effective way to implement both transactional and targeted campaign use cases. For transactional workloads, imagine that you want to send an email to customers immediately after they choose the Password reset button in your app. You can create an event-based campaign that addresses this use case in as few as four clicks.
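
If you’d rather work with the API than the console, you can express the same trigger programmatically. The following is a minimal sketch that uses the AWS SDK for Python (Boto3) to create an event-based campaign for the password-reset example. The project ID, segment ID, event name (password.reset.requested), dates, and message content are all hypothetical placeholders, and the schedule fields reflect our reading of the CreateCampaign API, so check the API reference for the exact request shape before relying on it.

import boto3

pinpoint = boto3.client("pinpoint", region_name="us-east-1")

# Hypothetical identifiers; replace with your own project (application) and segment IDs.
APPLICATION_ID = "exampleProjectId"
SEGMENT_ID = "exampleDynamicSegmentId"

response = pinpoint.create_campaign(
    ApplicationId=APPLICATION_ID,
    WriteCampaignRequest={
        "Name": "Password reset follow-up",
        "SegmentId": SEGMENT_ID,  # event-based campaigns require a dynamic segment
        "Schedule": {
            # A frequency of EVENT makes this an event-based campaign.
            "Frequency": "EVENT",
            "EventFilter": {
                "FilterType": "ENDPOINT",
                "Dimensions": {
                    # Hypothetical custom event reported by the app.
                    "EventType": {
                        "DimensionType": "INCLUSIVE",
                        "Values": ["password.reset.requested"],
                    }
                },
            },
            # Event-based campaigns run during a bounded time window (hypothetical dates).
            "StartTime": "2019-08-01T00:00:00Z",
            "EndTime": "2019-08-31T00:00:00Z",
        },
        "MessageConfiguration": {
            "EmailMessage": {
                "Title": "Your password reset link",
                "Body": "Follow the link in this message to reset your password.",
            }
        },
    },
)

print(response["CampaignResponse"]["Id"])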

For targeted campaign workloads, event-based campaigns are a great way to introduce cross-sale opportunities for complementary items. For example, if a customer adds a phone to their shopping cart, you can send an in-app push notification that offers them a special price on a case that fits the device that they’re purchasing.

When you use the campaign wizard in the Amazon Pinpoint console, you now have the option to create event-based campaigns. Rather than define a time to send your message to customers, you select specific events, attributes, and metric values. As you go through the event scheduling component of the wizard, you define the event that you want Amazon Pinpoint to look for.

In order to use this feature, you have to set up your mobile and web apps to send event data to Amazon Pinpoint. To learn more about sending event data to Amazon Pinpoint, see Reporting Events in Your Application in the Amazon Pinpoint Developer Guide.
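
In mobile and web apps, this event data is usually recorded by the AWS Mobile SDK or AWS Amplify, as described in the Developer Guide. For testing, you can also submit events directly through the Amazon Pinpoint API. Here’s a rough sketch that uses the AWS SDK for Python (Boto3) to report the kind of event used later in this post; the project ID, endpoint ID, device token, and event details are hypothetical, so treat this as an illustration of the request shape rather than production code.

import boto3
from datetime import datetime, timezone

pinpoint = boto3.client("pinpoint", region_name="us-east-1")

# Hypothetical identifiers for illustration only.
APPLICATION_ID = "exampleProjectId"
ENDPOINT_ID = "user-123-ios-device"

pinpoint.put_events(
    ApplicationId=APPLICATION_ID,
    EventsRequest={
        "BatchItem": {
            ENDPOINT_ID: {
                # Create or update the endpoint that generated the event.
                "Endpoint": {
                    "ChannelType": "APNS",
                    "Address": "example-device-token",
                },
                # One or more events reported for that endpoint.
                "Events": {
                    "example-event-id-1": {
                        "EventType": "song.played",
                        "Timestamp": datetime.now(timezone.utc).isoformat(),
                        "Attributes": {"artistName": "The Alexandrians"},
                        "Metrics": {"timePlayed": 742.0},
                    }
                },
            }
        }
    },
)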

A real-world scenario

Let’s look at a scenario that explores how you can use event-based campaigns. Say you have a music streaming service such as Amazon Music. You’ve secured a deal for a popular artist to record a set of live tracks that will only be available on your service, and you want to let people who’ve previously listened to that artist know about this exciting exclusive.

To create an event-based campaign, you complete the usual steps involved in creating a campaign in Amazon Pinpoint (you can learn more about these steps in the Campaigns section of the Amazon Pinpoint User Guide). When you arrive at step 4 of the campaign creation wizard, it asks you when the campaign should be sent. At this point, you have two options: you can send the campaign at a specific time, or you can send it when an event occurs. In this example, we choose When an event occurs.

Next, you choose the event that causes the campaign to be sent. In our example, we’ll choose the event named song.played. If we stopped here and launched the campaign, Amazon Pinpoint would send our message to every customer who generated that event—in this case, every customer who played any song in our app, ever. That’s not what we want, but fortunately we can refine our criteria by choosing attributes and metrics.

In the Attributes box, we’ll choose the artistName attribute, and then choose the attribute value that corresponds with the name of the band we want to promote, The Alexandrians.

We’ve already narrowed down our list of recipients quite a bit, but we can go even further. In the Metrics box, we can specify certain quantitative values. For example, we could refine our event so that only customers who spent more than 10 minutes listening to songs by The Alexandrians are sent notifications. To complete this step, we choose the timePlayed metric and set it to look for endpoints for which this metric is greater than 600 seconds. When we finish setting up the triggering event, the campaign is configured to send only to listeners who meet all of these criteria.
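
If you defined the same trigger through the API instead of the console, the schedule portion of the campaign request would look roughly like the sketch below. The field names reflect our reading of the CampaignEventFilter structure, and the event, attribute, and metric values simply mirror the example above; confirm the exact shape against the API reference.

# Rough sketch of the Schedule section for the music-streaming example.
schedule = {
    "Frequency": "EVENT",
    "EventFilter": {
        "FilterType": "ENDPOINT",
        "Dimensions": {
            # The event that triggers the campaign.
            "EventType": {
                "DimensionType": "INCLUSIVE",
                "Values": ["song.played"],
            },
            # Only listeners of the band we want to promote.
            "Attributes": {
                "artistName": {
                    "DimensionType": "INCLUSIVE",
                    "Values": ["The Alexandrians"],
                }
            },
            # Only listeners who played their songs for more than 600 seconds.
            "Metrics": {
                "timePlayed": {
                    "ComparisonOperator": "GREATER_THAN",
                    "Value": 600.0,
                }
            },
        },
    },
    # Event-based campaigns run during a bounded time window (hypothetical dates).
    "StartTime": "2019-08-01T00:00:00Z",
    "EndTime": "2019-08-31T00:00:00Z",
}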

Now that we’ve finished setting up the event that triggers our campaign, we can finish creating the campaign as we normally would.

Transparent and affordable pricing

There are no additional charges associated with creating event-based campaigns. You pay only for the number of endpoints that you target, the number of messages that you send, and the number of analytics events that you send to Amazon Pinpoint. To learn more about the costs associated with using Amazon Pinpoint, see our Pricing page.

As an example, assume that your app has 50,000 monthly active users, and that you target each user once a day with a push notification. In this scenario, you’d pay around $55.50 per month to send messages to all 50,000 users. Note that this pricing is subject to change, and is not necessarily reflective of what your actual costs may be.

Limitations and best practices

There are a few limitations and best practices that you should consider when you create event-based campaigns:

  • You can only create an event-based campaign if the campaign uses a dynamic segment (as opposed to an imported segment).
  • Event-based campaigns only consider events that are reported by recent versions of the AWS Mobile SDK. Your apps should use the following versions of the SDK in order to work with event-based campaigns:
    • AWS Mobile SDK for Android: version 2.7.2 or later
    • AWS Mobile SDK for iOS: version 2.6.30 or later
  • Because of this restriction, we recommend that you set up your segments so that they only include customers who use a version of your app that runs a compatible version of the SDK.
  • Choose your events carefully. For example, if you send an event-based campaign every time a session.start event occurs, you might quickly overwhelm your users with messages. You can limit the number of messages that Amazon Pinpoint sends to a single endpoint in a 24-hour period; a brief sketch of this setting follows this list. For more information, see General Settings in the Amazon Pinpoint User Guide.
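
As a quick illustration of that last point, the following minimal sketch uses the AWS SDK for Python (Boto3) to set a project-level cap on how many messages a single endpoint can receive per day; the project ID is a placeholder and the limit values are only examples.

import boto3

pinpoint = boto3.client("pinpoint", region_name="us-east-1")

# Hypothetical project (application) ID.
APPLICATION_ID = "exampleProjectId"

# Allow each endpoint to receive at most 3 messages in any 24-hour period,
# and at most 10 messages from any single campaign overall.
pinpoint.update_application_settings(
    ApplicationId=APPLICATION_ID,
    WriteApplicationSettingsRequest={
        "Limits": {
            "Daily": 3,
            "Total": 10,
        }
    },
)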

Ready to get started?

You can use these features today in every AWS Region where Amazon Pinpoint is available. We hope that these additions help you to better understand your customers and find exciting new ways to use Amazon Pinpoint to connect and engage with them.