All posts by Sébastien Stormacq

Machine Learning-Powered Amazon Connect, Now With Call Summarization

Post Syndicated from Sébastien Stormacq original https://aws.amazon.com/blogs/aws/machine-learning-powered-amazon-connect-now-with-call-summarization/

At AWS our mission is to make machine learning (ML) accessible to data scientists, developers, and business users. To help businesses easily leverage the power of ML, we create purpose-built solutions that embed ML and deep learning technologies directly into a business process to address real customer needs, rather than leaving companies to sort it out on their own.

One place where we have seen ML have an impact is within the contact center—the place you receive and respond to customer inquiries and issues. Because of the growing role of customer experience (CX) and the increase in contactless commerce via phone or email, contact centers are essential to maintaining the human connections that businesses depend on. However, analog or outdated methods make it difficult to address every customer need in a way that delivers timely resolutions and great experiences, and fosters customer loyalty.

Embedding AWS ML technologies into a cloud contact center solution helps decrease the friction of calls, chats, and other engagements. It also makes it possible to automate outdated processes.

Amazon Connect is an easy-to-use, cloud-based, ML-powered contact center service that helps companies of any size deliver superior customer service at a lower cost.

Let me take three examples with Voice ID, Wisdom, and Contact Lens.

Amazon Connect Voice ID
ML capabilities might help streamline the customer experience for authentication. Instead of asking customers to repeat their email address and their mother’s maiden name several times, ML-powered voice identification can establish a digital voice print associated with each customer’s unique voice, and then recognize that voice at the beginning of each subsequent call. Voice identification provides a confidence score that may be used to automate authentication workflows.

Amazon Connect Wisdom
ML might also help search vast documentation and knowledge bases to find the most relevant answers to the questions raised by the customer, helping agents resolve customer issues faster.

Contact Lens for Amazon Connect
ML technologies also shine at analyzing the tone and content of a conversation, capturing customer sentiment in the moment, and learning from it. ML can help transcribe calls, track customer sentiment, detect common issues and customer trends, or even pinpoint discrepancies.

At just about the same time last year, I announced the addition of real-time capabilities for Contact Lens. This lets supervisors identify when to assist an agent on live calls so that they can provide guidance via chat or have the agent transfer the call. Last September, we added support for eight new languages, bringing the total to 21 languages for post-call analytics and 12 languages for both post-call and real-time analytics.

Contact Lens Adds Call Summarization
But we didn’t stop there. Today, I am pleased to announce the addition of a new capability that helps you improve customer experience and agent and supervisor productivity by automatically summarizing the important aspects of each customer call.

You told us that keeping notes of customer conversations is time consuming, especially for agents who must take notes during the call and import them manually into your CRM tool afterward. In the end, that means more time for us, the customers, waiting in the queue for an agent to become available. Likewise, automatically generated call transcripts don’t save time for supervisors: reading full transcripts to understand what happened during customer conversations is time consuming as well.

How it Works
Starting today, Contact Lens provides a summary of the key moments in a conversation. It is enabled by default, and there is no additional configuration step. You may toggle the Show transcript summary button to show or hide the summary when you don’t need it.

Contact Lens - Show Transcript Summary - Toggle button

Once a call is analyzed, the summary is available on the contact detail page.

Contact Lens identifies and summarizes the sections corresponding to Issue (e.g., lost package), Outcome (e.g., customer refund), and Action item (e.g., send a follow-up mail confirming the refund was processed). A manager can quickly see where there’s an action to send a customer a follow-up email and take action to ensure it happens.

Contact Lens Call Summary Example

The call summary is also available in JSON format. Contact Lens uploads these files to the S3 bucket of your choice. Having access to the JSON file lets you import the summaries programmatically into your CRM or other tools.

... redacted for brevity ...

"IssuesDetected": [
{
   "CharacterOffsets": {
      "BeginOffsetChar": 31,
      "EndOffsetChar": 73
   },
   "Text": "I would like to cancel my subscription"
}
]
...
"ActionItemsDetected": [
 {
   "CharacterOffsets": {
      "BeginOffsetChar": 32,
      "EndOffsetChar": 116
   },
   "Text": "I will send you an email with details"
 }
 ]
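To import these summaries into a CRM, a small script can fetch the analysis file from S3 and extract the summarized sections. The following is a minimal sketch with the AWS SDK for JavaScript (v3); the bucket name, object key, and pushToCrm() function are hypothetical placeholders, and the full analysis file contains more structure than the excerpt above.

// Minimal sketch: fetch a Contact Lens analysis file from S3 and extract the
// summarized issues and action items. Bucket, key, and pushToCrm() are
// hypothetical placeholders; the field names follow the excerpt above.
import { S3Client, GetObjectCommand } from "@aws-sdk/client-s3";

const s3 = new S3Client({ region: "us-east-1" });

async function importCallSummary(bucket, key) {
    const { Body } = await s3.send(new GetObjectCommand({ Bucket: bucket, Key: key }));
    const analysis = JSON.parse(await Body.transformToString());

    // Keep only the text of each detected issue and action item
    const issues = (analysis.IssuesDetected ?? []).map((item) => item.Text);
    const actionItems = (analysis.ActionItemsDetected ?? []).map((item) => item.Text);

    // Replace with a call to your CRM's API
    await pushToCrm({ issues, actionItems });
}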

Availability and Pricing
Call summarization by Contact Lens is available in all AWS Regions where Contact Lens is available today. We support post-call analytics in the US West (Oregon), US East (N. Virginia), Canada (Central), Europe (London), Europe (Frankfurt), Asia Pacific (Singapore), Asia Pacific (Seoul), Asia Pacific (Tokyo), and Asia Pacific (Sydney) regions. We support real-time analytics in the US West (Oregon), US East (N. Virginia), Canada (Central), Europe (London), Europe (Frankfurt), Asia Pacific (Seoul), Asia Pacific (Tokyo), and Asia Pacific (Sydney) regions.

Call summary comes at no additional cost on top of the usual charges for Contact Lens. This is why we chose to enable it by default. Contact Lens is charged $0.015 per minute of voice conversation analyzed. Most of our Contact Lens customers analyze millions of conversation minutes per month. The price is $0.0125 per minute when you analyze more than 5 million minutes per month.

If you do not have Contact Lens enabled on your call center, go ahead and start using it today.

— seb

New – Amazon EBS Snapshots Archive

Post Syndicated from Sébastien Stormacq original https://aws.amazon.com/blogs/aws/new-amazon-ebs-snapshots-archive/

I am pleased to announce the availability of Amazon EBS Snapshots Archive, a new storage tier for the long-term retention of Amazon Elastic Block Store (EBS) snapshots of your EBS volumes.

In a nutshell, EBS is an easy-to-use high-performance block storage service for your Amazon Elastic Compute Cloud (Amazon EC2) instances. An EBS volume mounted to your EC2 instances lets you boot an operating system and store data for your most performance-demanding workloads. You may use EBS snapshots to create point-in-time copies of your volume data. The first snapshot of a volume contains all of the data written into that volume. Subsequent snapshots are incremental. Snapshots are stored on Amazon Simple Storage Service (Amazon S3), and they may be shared between AWS accounts and AWS Regions.

The ability to take frequent snapshots and easily restore volumes makes EBS snapshots an obvious choice for your data management strategy, alongside other backup options. The incremental nature of snapshots makes them cost-effective for daily and weekly backups that need immediate restores. However, you were telling us that business compliance and regulatory needs mean that you must retain EBS snapshots for longer periods of time (months or years), for example, snapshots taken at the end of a project, or snapshots for test and development preserved for future project releases. The vast majority of these snapshots are taken and never read. For these snapshots, you are looking to lower your storage costs. Today, to benefit from lower storage costs, you may have written complex scripts involving temporary EC2 instances to restore snapshots, mount the corresponding volumes, and transfer the data to lower-cost storage tiers, such as Amazon Glacier.

EBS Snapshots Archive provides a low-cost storage tier to archive full, point-in-time copies of EBS Snapshots that you must retain for 90 days or more for regulatory and compliance reasons, or for future project releases. Now, you can easily archive and manage EBS Snapshots, thereby eliminating the need for custom scripts and third-party tools to manage these snapshots. This lets you move your rarely accessed snapshots to EBS Snapshots Archive to achieve up to 75% lower storage costs, and avoid licensing costs for third-party tools. Furthermore, you can retrieve an archived snapshot within 24-72 hours, and, once restored, use the snapshot to recover an EBS volume.

As per usual, let me show you how it works.

How to Get Started
I have a snapshot available in the US East (N. Virginia) Region, and I want to archive this snapshot for compliance reasons. I open the AWS Management Console, navigate to EC2, then to Snapshots. I select the snapshot I want to archive, and select the Actions menu. I select the Archive snapshot menu option.

EBS Snapshot Archive - create snapshot

I carefully read the confirmation message :-), and I select Archive snapshot.

EBS Snapshot Archive - create snapshot - confirmation

I may monitor the progress of the archive operation with the new Storage Tier tab at the bottom of the screen. After some time, depending on the size of the snapshot, the Tiering status becomes ✅ Archival completed.

EBS Snapshot Archive - create snapshot - archival completed

Archived snapshots stay visible in the console. The new Storage tier column indicates the tier used for storage (Standard or Archive).
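If you prefer to script the archive operation, it is also exposed as the ModifySnapshotTier API. Here is a minimal sketch with the AWS SDK for JavaScript (v3); the snapshot ID is a placeholder.

// Minimal sketch: move an existing snapshot to the archive tier.
// The snapshot ID is a placeholder.
import { EC2Client, ModifySnapshotTierCommand } from "@aws-sdk/client-ec2";

const ec2 = new EC2Client({ region: "us-east-1" });

const response = await ec2.send(
    new ModifySnapshotTierCommand({
        SnapshotId: "snap-0123456789abcdef0",
        StorageTier: "archive",
    })
);
console.log(response);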

How do I Restore a Volume?
Restoring a volume from EBS Snapshots Archive is a two-step process. First, I retrieve the snapshot from EBS Snapshots Archive to its original snapshot ID, using the RestoreSnapshotTier API call or the management console. It takes between 24 and 72 hours to retrieve the snapshot from the archive, depending on the snapshot size. Once retrieved, the snapshot appears as a regular snapshot on my account. At this stage, I hydrate the retrieved snapshot into an EBS volume using the default snapshot restore or Fast Snapshot Restore (FSR) for expedited restores, just like usual.

A CloudWatch event is generated when the snapshot is restored. You may listen to this event to avoid polling the status with the API.

A CreateVolume API call on an archived snapshot will fail. You must restore a snapshot from archive before you use it to create a volume.
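Here is what the retrieval step could look like with the RestoreSnapshotTier API and the AWS SDK for JavaScript (v3), as a minimal sketch; the snapshot ID is a placeholder, and I request a temporary restore of 30 days.

// Minimal sketch: retrieve an archived snapshot back to the standard tier for
// 30 days. Use PermanentRestore: true instead of TemporaryRestoreDays for a
// permanent restore. The snapshot ID is a placeholder.
import { EC2Client, RestoreSnapshotTierCommand } from "@aws-sdk/client-ec2";

const ec2 = new EC2Client({ region: "us-east-1" });

await ec2.send(
    new RestoreSnapshotTierCommand({
        SnapshotId: "snap-0123456789abcdef0",
        TemporaryRestoreDays: 30,
    })
);
// Once the snapshot is back in the standard tier (24 to 72 hours later),
// CreateVolume works on it again as usual.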

Using the AWS Management Console, I select the snapshot that I want to restore, I select the Actions menu, and then I select the Restore snapshot from archive menu option.

EBS Snapshot Archive - create snapshot - restore archive

I have the choice to restore the snapshot permanently, or just temporarily. At the end of the temporary duration, the standard tier snapshot is deleted, and only the archive is preserved.

EBS Snapshot Archive - create snapshot - restore archive - confirmation

After a while, depending on the snapshot size, the archive is restored to standard storage and may be used to recreate a volume, just like usual. I may monitor the progress of the retrieval and the lifetime for temporarily restored archives in the new Storage tier tab in the bottom half of the screen. Temporarily restored snapshots may be kept for up to 180 days.

Pricing and Availability
EBS Snapshots Archive is available for you today in 17 AWS Regions. At the time of launch, it is not available in the two Regions in China, Asia Pacific (Seoul), Asia Pacific (Osaka), Canada (Central), and South America (São Paulo).

As per usual, you pay as-you-go, with no minimum or fixed fees. There are two metrics that influence EBS Snapshots Archive billing: data storage and data retrieval. We charge you $0.0125 per GB-month of stored data and $0.03 per GB retrieved. You are charged for a 90-day period at minimum. This means that if you delete a snapshot archive or permanently restore it less than 90 days after creation, then we charge for the full 90-day period. The EBS pricing page has the details.

Go ahead and start to configure your long-term storage for EBS snapshots today.

— seb

New – Amazon CloudWatch Evidently – Experiments and Feature Management

Post Syndicated from Sébastien Stormacq original https://aws.amazon.com/blogs/aws/cloudwatch-evidently/

As a developer, I am excited to announce the availability of Amazon CloudWatch Evidently. This is a new Amazon CloudWatch capability that makes it easy for developers to introduce experiments and feature management in their application code. CloudWatch Evidently may be used for two similar but distinct use-cases: implementing dark launches, also known as feature flags, and A/B testing.

Feature flags are a software development technique that lets you enable or disable features without needing to deploy your code. It decouples the feature deployment from the release. Features in your code are deployed in advance of the actual release. They stay hidden behind if-then-else statements. At runtime, your application code queries a remote service. The service decides the percentage of users who are exposed to the new feature. You can also configure the application behavior for some specific customers, your beta testers for example.

When you use feature flags you can deploy new code in advance of your launch. Then, you can progressively introduce a new feature to a fraction of your customers. During the launch, you monitor your technical and business metrics. As long as all goes well, you may increase traffic to expose the new feature to additional users. If something goes wrong, you may modify the server-side routing with just one click or API call to present only the old (and working) experience to your customers. This lets you revert the user experience without requiring rollback deployments.

A/B Testing shares many similarities with feature flags while still serving a different purpose. A/B tests consist of a randomized experiment with multiple variations. A/B testing lets you compare multiple versions of a single feature, typically by testing the response of a subject to variation A against variation B, and determining which of the two is more effective. For example, let’s imagine an e-commerce website (a scenario we know quite well at Amazon). You might want to experiment with different shapes, sizes, or colors for the checkout button, and then measure which variation has the most impact on revenue.

The infrastructure required to conduct A/B testing is similar to the one required by feature flags. You deploy multiple scenarios in your app, and you control how to route part of the customer traffic to one scenario or the other. Then, you perform deep dive statistical analysis to compare the impacts of variations. CloudWatch Evidently assists in interpreting and acting on experimental results without the need for advanced statistical knowledge. You can use the insights provided by Evidently’s statistical engine, such as anytime p-value and confidence intervals for decision-making while an experiment is in progress.

At Amazon, we use feature flags extensively to control our launches, and A/B testing to experiment with new ideas. We’ve acquired years of experience building developer tools and libraries, and maintaining and operating experimentation services at scale. Now you can benefit from our experience.

CloudWatch Evidently uses the terms “launches” for feature flags and “experiments” for A/B testing, and so do I in the rest of this article.

Let’s see how it works from an application developer point of view.

Launches in Action
For this demo, I use a simple Guestbook web application. So far, the guest book page is read-only, and comments are entered from our back-end only. I developed a new feature to let customers enter their comments on the guestbook page. I want to launch this new feature progressively over a week and keep the ability to revert the change back if it impacts important technical or business metrics (such as p95 latency, customer engagement, page views, etc.). Users are authenticated, and I will segment users based on their user ID.

Before launch:
Evidently - experiment off
After launch:
Evidently - experiment on

Create a Project
Let’s start by configuring Evidently. I open the AWS Management Console and navigate to CloudWatch Evidently. Then, I select Create a project.

Evidently - create project

 

I enter a Project name and Description.

Evidently lets you optionally store events to CloudWatch logs or S3, so that you can move them to systems such as Amazon Redshift to perform analytical operations. For this demo, I choose not to store events. When done, I select Create project.

Evidently - create project second part

Add a Feature
Next, I create a feature for this project by selecting Add feature. I enter a Feature name and Feature description. Then, I define my Feature variations. In this example, there are two variations, and I use a Boolean type. true indicates the guestbook is editable and false indicates it is read only. Variation types might be boolean, double, long, or string.

Evidently - create feature

I may define overrides. Overrides let me pre-define the variation for selected users. I want the user “seb”, my beta tester, to always receive the editable variation.

Evidently - Create feature - overrides

The console shares the JavaScript and Java code snippets to add into my application.

Evidently - code snippet

Talking about code snippets, let’s look at the changes at the code level.

Instrument my Application Code
I use a simple web application for this demo. I coded this application using JavaScript. I use the AWS SDK for JavaScript and Webpack to package my code. I also use JQuery to manipulate the DOM to hide or show elements. I designed this application to use standard JavaScript and a minimum number of frameworks to make this example inclusive to all. Feel free to use higher level tools and frameworks, such as React or Angular for real-life projects.

I first initialize the Evidently client. Just like other AWS Services, I have to provide an access key and secret access key for authentication. Let’s leave the authentication part out for the moment. I added a note at the end of this article to discuss the options that you have. In this example, I use Amazon Cognito Identity Pools to receive temporary credentials.

// Initialize the Amazon CloudWatch Evidently client
const evidently = new AWS.Evidently({
    endpoint: EVIDENTLY_ENDPOINT,
    region: 'us-east-1',
    credentials: fromCognitoIdentityPool({
        client: new CognitoIdentityClient({ region: 'us-west-2' }),
        identityPoolId: IDENTITY_POOL_ID
    }),
});

Armed with this client, my code may invoke the EvaluateFeature API to make decisions about the variation to display to customers. The entityId is any string-based attribute to segment my customers. It might be a session ID, a customer ID, or even better, a hash of these. The featureName parameter contains the name of the feature to evaluate. In this example, I pass the value EditableGuestBook.

const evaluateFeature = async (entityId, featureName) => {

    // API request structure
    const evaluateFeatureRequest = {
        // entityId for calling evaluate feature API
        entityId: entityId,
        // Name of my feature
        feature: featureName,
        // Name of my project
        project: "AWSNewsBlog",
    };

    // Evaluate feature
    const response = await evidently.evaluateFeature(evaluateFeatureRequest).promise();
    console.log(response);
    return response;
}

The response contains the assignment decision from Evidently, based on the traffic rules defined on the server side.

{
    details: {
        launch: "EditableGuestBook",
        group: "V2"
    },
    reason: "LAUNCH_RULE_MATCH",
    value: { boolValue: false },
    variation: "readonly"
}

The last part consists of hiding or displaying part of the user interface based on the value received above. Using basic JQuery DOM manipulation, it would be something like the following:

window.aws.evaluateFeature(entityId, 'EditableGuestBook').then((response) => {
    if (response.value.boolValue) {
        console.log('Feature Flag is on, showing guest book');
        $('div#guestbook-add').show();
    } else {
        console.log('Feature Flag is off, hiding guest book');
        $('div#guestbook-add').hide();
    }
});

Create a Launch
Now that the feature is defined on the server-side, and the client code is instrumented, I deploy the code and expose it to my customers. At a later stage, I may decide to launch the feature. I navigate back to the console, select my project, and select Create Launch. I choose a Launch name and a Launch description for my launch. Then, I select the feature I want to launch.

Evidently - create launch

In the Launch Configuration section, I configure how much traffic is sent to each variation. I may also schedule the launch with multiple steps. This lets me plan different steps of routing based on a schedule. For example, on the first day, I may choose to send 10% of the traffic to the new feature, and on the second day 20%, etc. In this example, I decide to split the traffic 50/50.

Evidently - launch configuration

Finally, I may define up to three metrics to measure the performance of my variations. Metrics are defined by applying rules to data events.

Evidently - Custom Metrics

Again, I have to instrument my code to send these metrics with PutProjectEvents API from Evidently. Once my launch is created, the EvaluateFeature API returns different values for different values of entityId (users in this demo).

At any moment, I may change the routing configuration. Moreover, I also have access to a monitoring dashboard to observe the distribution of my variations and the metrics for each variation.

Evidently - launch monitoring

I am confident that your real-life launch graph will get more data than mine did, as I just created it to write this post.

A/B Testing
Doing an A/B test is similar. I create a feature to test, and I create an Experiment. I configure the experiment to route part of the traffic to variation 1, and then the other part to variation 2. When I am ready to launch the experiment, I explicitly select Start experiment.

Evidently - start experiment

In this experiment, I am interested in sending custom metrics. For example:

// timeSpendOnHomePage custom metric
const timeSpendOnHomePageData = `{
   "details": {
      "timeSpendOnHomePage": ${timeSpendOnHomePageValue}
   },
   "userDetails": { "userId": "${randomizedID}", "sessionId": "${randomizedID}" }
}`;

const putProjectEventsRequest: PutProjectEventsRequest = {
   project: 'AWSNewsBlog',
   events: [
    {
        timestamp: new Date(),
        type: 'aws.evidently.custom',
        data: JSON.parse(timeSpendOnHomePageData)
    },
   ],
};

this.evidently.putProjectEvents(putProjectEventsRequest).promise().then(res =>{})

Switching to the Results page, I see raw values and graph data for Event Count, Total Value, Average, Improvement (with 95% confidence interval), and Statistical significance. The statistical significance describes how certain we are that the variation has an effect on the metric as compared to the baseline.

These results are generated throughout the experiment and the confidence intervals and the statistical significance are guaranteed to be valid anytime you want to view them. Additionally, at the end of the experiment, Evidently also generates a Bayesian perspective of the experiment that provides information about how likely it is that a difference between the variations exists.

The following two screenshots show graphs for the average value of two metrics over time, and the improvement for a metric within a 95% confidence interval.

Evidently - experiment monitoring - average values

Evidently - experiment monitoring - improvement

Additional Thoughts
Before we wrap-up, I’d like to share some additional considerations.

First, it is important to understand that I chose to demo Evidently in the context of front-end application development. However, you may use Evidently with any application type: front-end web or mobile, back-end API, or even machine learning (ML). For example, you may use Evidently to deploy two different ML models and conduct experiments just like I showed above.

Second, just like with other AWS Services, the Evidently API is available in all of our AWS SDKs. This lets you use EvaluateFeature and other APIs from nine programming languages: C++, Go, Java, JavaScript (and TypeScript), .NET, NodeJS, PHP, Python, and Ruby. The AWS SDKs for Rust and Swift are in the making.

Third, for a front-end application as I demoed here, it is important to consider how to authenticate calls to the Evidently API. Hard-coding access keys and secret access keys is not an option. For the front-end scenario, I suggest that you use Amazon Cognito Identity Pools to exchange user identity tokens for temporary access and secret keys. User identity tokens may be obtained from Cognito User Pools, or third-party authentication systems, such as Active Directory, Login with Amazon, Login with Facebook, Login with Google, Sign in with Apple, or any system compliant with OpenID Connect or SAML. Cognito Identity Pools also allows for anonymous access. No identity token is required. Cognito Identity Pools vends temporary tokens associated with IAM roles. You must Allow calls to the evidently:EvaluateFeature API in your policies.

Finally, when using feature flags, plan for code cleanup time during your sprints. Once a feature is launched, you might consider removing calls to EvaluateFeature API and the if-then-else logic used to initially hide the feature.

Pricing and Availability
Amazon CloudWatch Evidently is generally available in nine AWS Regions: US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Ireland), Europe (Frankfurt), and Europe (Stockholm). As usual, we will gradually extend to other Regions in the coming months.

Pricing is pay-as-you-go with no minimum or recurring fees. CloudWatch Evidently charges your account based on Evidently events and Evidently analysis units. Evidently analysis units are generated from Evidently events, based on rules you have created in Evidently. For example, a user checkout event may produce two Evidently analysis units: checkout value and the number of items in cart. For more information about pricing, see Amazon CloudWatch Pricing.

Start experimenting with CloudWatch Evidently today!

— seb

New – AWS Migration Hub Refactor Spaces Helps to Incrementally Refactor Your Applications

Post Syndicated from Sébastien Stormacq original https://aws.amazon.com/blogs/aws/new-aws-migration-hub-refactor-spaces-helps-to-incrementally-refactor-your-applications/

I am excited to announce the preview of AWS Migration Hub Refactor Spaces, a new capability of AWS Migration Hub to let you refactor existing applications into distributed applications, typically based on microservices.

There are multiple reasons why you want to refactor existing applications. You might want to make your code more modular, use more modern frameworks, use different data storage, etc. In general, when refactoring, your objective is to make your application easier to maintain and evolve over time. Other benefits might include handling larger workloads, increasing resiliency, or lowering costs. But let’s face it, refactoring is hard. I usually compare refactoring to changing the engines, cabin seats, and entertainment system of a plane while keeping the plane in the air, fully loaded with passengers, and without having them notice any change.

When talking with customers who have successfully been through these refactoring projects, we noticed a common pattern: the Strangler Fig design pattern.

strangler fig

A strangler fig is a family of plants that grow their roots from the top of the trees that host them, eventually enveloping or replacing their host. Author Martin Fowler first coined the term to describe a migration design pattern. The idea is “to gradually create a new system around the edges of the old, letting it grow slowly over several years until the old system is strangled”.

How Can I Apply This Plant Behavior To My Application Migration?
Inspired by this family of plants, I might want to extract capabilities from a monolithic application and rewrite them as microservices. Then, I incrementally route traffic away from the old to the new. Over time, all of the requests are routed to microservices, and the existing application is retired.

While effective, this approach to application transformation creates hurdles. I must create the required infrastructure to separate the existing applications and the microservices. In the AWS cloud, this often involves creating multiple AWS accounts, so teams or services can more easily operate independently. Having multiple accounts is the most efficient way to separate concerns and billing across teams. When dealing with multiple AWS accounts, I must maintain networking infrastructure to connect my existing application and the new services together. Furthermore, I must create a routing control system to route traffic gradually from the old application to the new services in different accounts. Creating and managing that infrastructure at scale is complex. It introduces additional risks and costs to the refactoring project.

How Refactor Spaces Helps
AWS Migration Hub Refactor Spaces takes care of the heavy lifting for me. First, it lays down the networking infrastructure to enable connectivity between multiple AWS accounts. Second, it creates and manages a mechanism to route API calls away from my legacy application.

Let’s imagine I have a monolithic application that I want to refactor. The application is made of a web-based front-end using ReactJS. The front-end application is hosted on Amazon Simple Storage Service (Amazon S3) and distributed through Amazon CloudFront. The front-end makes API calls to a monolithic application developed in NodeJS or Python and deployed on several EC2 instances. The API uses a relational database, because that is how we have stored data since the company was founded.

The architecture of this application is illustrated by the following diagram.

refactor spaces - monolith

Each API has a distinct URI. For example the /cart API handles the shopping basket, the /order API handles the ordering system, etc. I apply the strangler fig pattern and decide to extract the /cart capabilities to a set of new microservices. I create an AWS account for these microservices. I develop and deploy a set of AWS Lambda functions to implement the cart management functionalities. I chose to use Amazon DynamoDB for the shopping basket data storage because of its low latency at scale.
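To make this concrete, here is a minimal sketch of what one of these functions, say AddItem, could look like using the AWS SDK for JavaScript (v3) and DynamoDB. The table name, item attributes, and API Gateway event shape are assumptions for illustration, not the actual demo code.

// Minimal sketch of a hypothetical AddItem microservice: store one cart item
// in DynamoDB. The table name, item attributes, and event shape (API Gateway
// proxy integration) are assumptions for illustration.
import { DynamoDBClient } from "@aws-sdk/client-dynamodb";
import { DynamoDBDocumentClient, PutCommand } from "@aws-sdk/lib-dynamodb";

const ddb = DynamoDBDocumentClient.from(new DynamoDBClient({}));

export const handler = async (event) => {
    const item = JSON.parse(event.body);

    await ddb.send(
        new PutCommand({
            TableName: "ShoppingCart",
            Item: {
                customerId: item.customerId, // partition key
                itemId: item.itemId,         // sort key
                quantity: item.quantity,
            },
        })
    );

    return { statusCode: 201, body: JSON.stringify({ added: item.itemId }) };
};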

The schema of my new architecture is shown in the following diagram:

Refactor Space - target architecture

But now I have two challenges. First, I have to design, code, and deploy a routing mechanism to route API calls made by the front-end application to the correct back-end: either the monolith, or the new microservices. This service will likely be deployed into a distinct AWS account. Then, I have to configure network connectivity between these multiple AWS accounts.

This is where Refactor Spaces comes into the picture.

Introducing AWS Migration Hub Refactor Spaces
Refactor Spaces makes it easy to manage application refactoring by taking care of the two challenges I just described: the routing of the API calls and the network connectivity between AWS accounts. It is made of Environments, Services, and an Application proxy. Let’s see it in action.

I open the AWS Management Console, navigate to AWS Migration Hub, and select Refactor Spaces.

I first create a Refactor Spaces Environment. An Environment is a multi-account network fabric consisting of peered VPCs. This lets AWS resources in service VPCs added to the environment communicate directly across AWS accounts. It also provides a unified view of networking and services across accounts.

In Create environment, I give my environment a name and a description, and then select Next.

Refactor Spaces - Create environment

Then, I define my application. I give my application a name, and select the VPC where the proxy will be deployed.

An application is a services container. It has a proxy that defines routes. The proxy lets your front-end application use a single endpoint to contact multiple services. All of the traffic hits the single proxy endpoint, and then it’s sent to multiple services based on your rules.

Refactor Space - Create application

You may want to use multiple AWS accounts as explained before. Typically, an application is made of one AWS account that hosts the Refactor Spaces Application proxy, one or multiple AWS accounts to host the legacy application, and one AWS account for the first microservice. Therefore, I invite the other AWS account owners to join this Refactor Spaces environment. I add one principal per AWS account. Refactor Spaces doesn’t reinvent the wheel here; it leverages AWS Resource Access Manager (RAM) to share the environment.

This step is optional. Refactor Spaces may work within one AWS Account. It is possible to share the environments with other AWS accounts at a later stage.

I enter the AWS account IDs as Principals, and then select Next.

Refactor Space - Shared Accounts

Finally, I review my choices and select Create & share environment (not shown here).

Assuming that the microservices are ready to use, the next step is registering them as Refactor Spaces Services. Refactor Spaces Services are entities that provide business capabilities, typically microservices. These services are reachable through unique endpoints, and they can interoperate across accounts in a Refactor Spaces Environment. In this example, there are four services:

  • The monolithic app. This is the default service where Refactor Spaces routes all API calls initiated by the front-end.
  • Three microservices to implement the /cart capability. I decided to refactor this capability with three distinct services: AddItem, RemoveItem, and ListItems.

A Refactor Spaces Service may target any compute resource type: EC2, containers deployed on AWS Fargate, an Application Load Balancer, an AWS Lambda function, etc.

I select Create service from the left menu. The service configuration is in three steps. First, I select the Refactor Spaces Environment and Application where I want to define this service. Second, I give my service a name and a description. And third, I select the service endpoint: either an HTTP/HTTPS URL in a VPC, or a Lambda function.

The monolithic application is the default route where the Refactor Spaces Application proxy routes all of the API calls, unless otherwise specified. I enter / as Source path and select Include child paths. Then, I make sure Match all is selected for HTTP verbs.

When finished, I select Create service. I repeat this process for each of my microservices. For this demo, I create four Refactor Spaces Services in total.

AWS Migration Hub refactor Spaces - create service

The last step defines the routing rules for the Refactor Spaces Application proxy. When configured, the proxy becomes the new API endpoint for my front-end application. The sole change that I have to make in my front-end application is to point it at the Refactor Spaces Application proxy URI. The proxy routes API calls to Services, according to a route definition. An Application proxy supports routing to all compute platforms with public or private visibility. At the moment, private endpoints must be referenced through a public DNS name or their private IP address. Each API call is run against the set of routes configured in the proxy. When a path matches a rule, the request is sent to the target service configured for that path. Proxies have a default route that forwards requests to a default service if they don’t match any of the path rules.

I select the service that I just created. Then, I enter the route Source path and the HTTP Verb to support. When my service expects subpaths (such as /cart/123), I make sure to select Include child paths, as well.

Refactor Spaces - Define route

I repeat this process for the ListItems and RemoveItem microservices. They are invoked with different HTTP verbs: GET and DELETE, respectively.
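Routes can also be created programmatically. The following is a minimal sketch using the Refactor Spaces CreateRoute API with the AWS SDK for JavaScript (v3); the environment, application, and service identifiers are placeholders, and the exact request shape is an assumption based on the console flow above, so check the API reference.

// Minimal sketch: create a URI path route that sends GET /cart (and child
// paths) to the ListItems service. Identifiers are placeholders and the
// request shape is an assumption based on the console flow.
import {
    MigrationHubRefactorSpacesClient,
    CreateRouteCommand,
} from "@aws-sdk/client-migration-hub-refactor-spaces";

const client = new MigrationHubRefactorSpacesClient({ region: "us-east-1" });

await client.send(
    new CreateRouteCommand({
        EnvironmentIdentifier: "env-0123456789abcdef0",
        ApplicationIdentifier: "app-0123456789abcdef0",
        ServiceIdentifier: "svc-0123456789abcdef0",
        RouteType: "URI_PATH",
        UriPathRoute: {
            SourcePath: "/cart",
            Methods: ["GET"],
            IncludeChildPaths: true,
            ActivationState: "ACTIVE",
        },
    })
);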

Based on this configuration, Refactor Spaces creates and manages the following architecture for me. The Refactor Spaces Application proxy and network fabric are deployed in a separate AWS account. I might further configure the Amazon API Gateway based on the needs of my monolithic application or microservices.

Refactor Spaces - final architecture

The ultimate change is for the application front-end. I modify its configuration to point to the Refactor Spaces Application proxy endpoint, instead of the monolith’s endpoint. From now on, Refactor Spaces routes API calls to the monolith by default. It routes the /cart calls for GET, POST, and DELETE verbs to my new microservices implemented as Lambda functions.
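In the front-end code, that change typically boils down to swapping a single base URL constant. The following is a hypothetical sketch; the proxy URL, path, and fetch wrapper are placeholders.

// Hypothetical front-end change: every API call now goes through the Refactor
// Spaces proxy instead of the monolith. Only the base URL changes; the proxy
// URL below is a placeholder.
// const API_BASE_URL = "https://monolith.example.com";              // before
const API_BASE_URL = "https://my-proxy-id.execute-api.us-east-1.amazonaws.com/prod"; // after

export async function addToCart(item) {
    const response = await fetch(`${API_BASE_URL}/cart`, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(item),
    });
    return response.json();
}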

Over time, I will repeat this process to move other capabilities out of the monolithic application, one by one, until the old monolith is strangled and replaced by the new microservices architecture.

Pricing and Availability
AWS Migration Hub Refactor Spaces is available today in the ten following AWS Regions: US East (N. Virginia), US West (Oregon), US East (Ohio), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Ireland), Europe (Frankfurt), Europe (London), and Europe (Stockholm). As per usual, we’re looking forward to expanding to additional Regions in the future.

This new capability is available today as an open preview, and no registration is necessary. You can start to use it today. There is no charge for using Refactor Spaces during the preview period. However, you may be charged for the resources that it provisions on your AWS accounts: Amazon API Gateway, AWS Transit Gateway, and Network Load Balancer. The pricing details are available on AWS Migration Hub’s pricing page. Billing will start when Refactor Spaces becomes generally available.

Go and start refactoring your applications today!

— seb

Measure and Improve Your Application Resilience with AWS Resilience Hub

Post Syndicated from Sébastien Stormacq original https://aws.amazon.com/blogs/aws/monitor-and-improve-your-application-resiliency-with-resilience-hub/

I am excited to announce the immediate availability of AWS Resilience Hub, a new AWS service designed to help you define, track, and manage the resilience of your applications.

You are building and managing resilient applications to serve your customers. Building distributed systems is hard; maintaining them in an operational state is even harder. The question is not if a system will fail, but when it will, and you want to be prepared for that.

Resilience targets are typically measured by two metrics: Recovery Time Objective (RTO), the time it takes to recover from a failure, and Recovery Point Objective (RPO), the maximum window of time in which data might be lost after an incident. Depending on your business and application, these can be measured in seconds, minutes, hours, or days.

AWS Resilience Hub lets you define your RTO and RPO objectives for each of your applications. Then it assesses your application’s configuration to ensure it meets your requirements. It provides actionable recommendations and a resilience score to help you track your application’s resiliency progress over time. Resilience Hub gives a customizable single dashboard experience, accessible through the AWS Management Console, to run assessments, execute prebuilt tests, and configure alarms to identify issues and alert the operators.

AWS Resilience Hub discovers applications deployed by AWS CloudFormation (this includes SAM and CDK applications), including cross-Region and cross-account stacks. Resilience Hub can also discover applications from Resource Groups and tags, or use applications already defined in AWS Service Catalog AppRegistry.

The term “application” here refers not just to your application software or code; it refers to the entire infrastructure stack to host the application: networking, virtual machines, databases, and so on.

Resilience assessment and recommendations
AWS Resilience Hub’s resilience assessment utilizes best practices from the AWS Well-Architected Framework to analyze the components of your application and uncover potential resilience weaknesses caused by incomplete infrastructure setup, misconfigurations, or opportunities for additional configuration improvements. Resilience Hub provides actionable recommendations to improve the application’s resilience.

For example, Resilience Hub validates that the application’s Amazon Relational Database Service (RDS), Amazon Elastic Block Store (EBS), and Amazon Elastic File System (Amazon EFS) backup schedule is sufficient to meet the application’s RPO and RTO you defined in your resilience policy. When insufficient, it recommends improvements to meet your RPO and RTO objectives.

The resilience assessment generates code snippets that help you create recovery procedures as AWS Systems Manager documents for your applications, referred to as standard operating procedures (SOPs). In addition, Resilience Hub generates a list of recommended Amazon CloudWatch monitors and alarms to help you quickly identify any change to the application’s resilience posture once deployed.

Continuous resilience validation
After the application and SOPs have been updated to incorporate recommendations from the resilience assessment, you may use Resilience Hub to test and verify that your application meets its resilience targets before it is released into production. Resilience Hub is integrated with AWS Fault Injection Simulator (FIS), a fully managed service for running fault injection experiments on AWS. FIS provides fault injection simulations of real-world failures, such as network errors or having too many open connections to a database. Resilience Hub also provides APIs for development teams to integrate their resilience assessment and testing into their CI/CD pipelines for ongoing resilience validation. Integrating resilience validation into CI/CD pipelines helps ensure that every change to the application’s underlying infrastructure does not compromise its resilience.
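To illustrate the CI/CD integration, here is a minimal sketch of a pipeline step that starts an assessment and fails the build when the policy is breached, using the AWS SDK for JavaScript (v3). The application ARN comes from an environment variable, and the response field names are assumptions based on the Resilience Hub API, so verify them against the API reference.

// Minimal sketch of a pipeline step: start a resilience assessment and fail
// the build if the policy is breached. The app ARN comes from an environment
// variable, and the response field names are assumptions to verify against
// the Resilience Hub API reference.
import {
    ResiliencehubClient,
    StartAppAssessmentCommand,
    DescribeAppAssessmentCommand,
} from "@aws-sdk/client-resiliencehub";

const client = new ResiliencehubClient({ region: "us-east-1" });

const { assessment } = await client.send(
    new StartAppAssessmentCommand({
        appArn: process.env.APP_ARN,   // placeholder
        appVersion: "release",
        assessmentName: `ci-${Date.now()}`,
    })
);

// Poll until the assessment completes
let result = assessment;
while (result.assessmentStatus === "Pending" || result.assessmentStatus === "InProgress") {
    await new Promise((resolve) => setTimeout(resolve, 30000));
    ({ assessment: result } = await client.send(
        new DescribeAppAssessmentCommand({ assessmentArn: assessment.assessmentArn })
    ));
}

if (result.complianceStatus !== "PolicyMet") {
    throw new Error("Resilience policy breached, failing the pipeline");
}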

Visibility
AWS Resilience Hub provides a comprehensive view of your overall application portfolio resilience status through its dashboard. To help you track the resilience of applications, Resilience Hub aggregates and organizes resilience events (for example, unavailable database or failed resilience validation), alerts, and insights from services like Amazon CloudWatch and AWS Fault Injection Simulator (FIS). Resilience Hub also generates a resilience score, a scale that indicates the level of implementation for recommended resilience tests, alarms, and recovery SOPs. This score can be used to measure resilience improvements over time.

The intuitive dashboard sends alerts for issues, recommends remediation steps, and provides a single place to manage application resilience. For example, when a CloudWatch alarm triggers, Resilience Hub alerts you and recommends recovery procedures to deploy.

AWS Resilience Hub in Action
I developed a non-resilient application made of a single EC2 instance and an RDS database. I’d like Resilience Hub to assess this application. The CDK script to deploy this application on your AWS Account is available on my GitHub repository. Just install CDK v2 (npm install -g aws-cdk@next) and deploy the stack (cdk bootstrap && cdk deploy --all).

There are four steps when using Resilience Hub:

  • I first add the application to assess. I can start with CloudFormation stacks, AppRegistry, Resource Groups, or another existing application.
  • Second, I define my resilience policy. The policy document describes my RTO and RPO objectives for incidents that might impact either my application, my infrastructure, an entire availability zone, or an entire AWS Region.
  • Third, I run an assessment against my application. The assessment lists policy breaches, if any, and provides a set of recommendations, such as creating CloudWatch alarms, standard operating procedures documents, or fault injection experiment templates.
  • Finally, I might set up any of the recommendations made or run experiments on a regular basis to validate the application’s resilience posture.

Preparation
To start, I open my browser and navigate to the AWS Management Console. I select AWS Resilience Hub and select Add application.

Resilience hub add application

My sample app is deployed with three CloudFormation stacks: a network, a database, and an EC2 instance.  I select these three stacks and select Next on the bottom of the screen:

Resilience Hub add CloudFormation stack

Resilience Hub detects the resources created by these stacks that might affect the resilience of my application. I select the ones I want to include in or exclude from the assessment, and click Next. In this example, I select the NAT gateway, the database instance, and the EC2 instance.

Resilience Hub Select resources

I create a resilience policy and associate it with this application. I can choose from policy templates or create a policy from scratch. A policy includes a name and the RTO and RPO values for four types of incidents: the ones affecting my application itself, like a deployment error or a bug at code level; the ones affecting my application infrastructure, like a crash of the EC2 instance; the ones affecting an availability zone; and the ones affecting an entire region. The values are expressed in seconds, minutes,  hours, or days.

Resilience Hub Create Policy

Finally, I review my choices and select Publish.
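If you prefer automation, the same kind of policy can also be created through the Resilience Hub API. Below is a minimal sketch with the AWS SDK for JavaScript (v3); the disruption-type keys, field names, and values are assumptions based on the four incident types described above, so verify them against the API reference.

// Minimal sketch: create a resilience policy with RTO/RPO targets (in seconds)
// for the four incident types. Key and field names are assumptions; verify
// them against the Resilience Hub API reference.
import {
    ResiliencehubClient,
    CreateResiliencyPolicyCommand,
} from "@aws-sdk/client-resiliencehub";

const client = new ResiliencehubClient({ region: "us-east-1" });

await client.send(
    new CreateResiliencyPolicyCommand({
        policyName: "MyAppPolicy",
        tier: "Important",
        policy: {
            Software: { rtoInSecs: 300, rpoInSecs: 60 },   // application-level incident
            Hardware: { rtoInSecs: 600, rpoInSecs: 300 },  // infrastructure-level incident
            AZ: { rtoInSecs: 3600, rpoInSecs: 900 },       // Availability Zone incident
            Region: { rtoInSecs: 86400, rpoInSecs: 3600 }, // Region incident
        },
    })
);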

Assessment
Once this application and its policy are published, I start the assessment by selecting Assess resiliency.

Resilience Hub Assess resiliency

Unsurprisingly, Resilience Hub reports that my resilience policy is breached.

Resilience Hub Policy breach

I select the report to get the details. The dashboard shows how the expected RTO/RPO for Region, Availability Zone, infrastructure, and application-level incidents compare to my policy.

Resilience Hub Assessment dashboard

I have access to Resiliency recommendations and Operational recommendations.

In Resiliency recommendations, I see if components of my application are compliant with the resilience policy. I also discover recommendations to Optimize for availability zone RTO/RPO, Optimize for cost, or Optimize for minimal changes.

Resilience Hub Optimisation

In Operational recommendations, on the first tab, I see a list of proposed Alarms to create in CloudWatch.

Resilience Hub Alarms

The second tab lists recommended Standard operating procedures. These are Systems Manager documents I can run on my infrastructure, such as Restore from Backup.

Resilience Hub SOP

The third tab (Fault injection experiment templates) proposes experiments to run on my infrastructure to test its resilience. Experiments are run with FIS. Proposed experiments are Inject memory load or Inject process kill.

Resilience Hub - FIS

When I select Set up recommendations, Resilience Hub generates CloudFormation templates to create the alarms or to execute the SOP or experiment proposed.

Resilience Hub - Set up recommendations

The follow-up screens are quite self-explanatory. Once generated, the templates are available to execute in the Templates tab. I apply the template and observe how it impacts the resilience score of the application.

Resilience Hub Resilience score

The CDK script you used to deploy the sample application also creates a highly available infrastructure for the same application. It has a load balancer, an auto scaling group, and a database cluster with two nodes. As an exercise, run the same assessment report on this application stack and compare the results.

Pricing and Availability
AWS Resilience Hub is available today in US East (Ohio), US East (N. Virginia), US West (Oregon), Asia Pacific (Singapore), Asia Pacific (Tokyo), Europe (Ireland), and Europe (Frankfurt). We will add more Regions in the future.

As usual, you pay only for what you use. There are no upfront costs or minimum fees. You are charged based on the number of applications you describe in Resilience Hub. You can try Resilience Hub free for 6 months, for up to 3 applications. After that, Resilience Hub’s price is $15.00 per application per month. Metering begins once you run the first resilience assessment in Resilience Hub. Remember that Resilience Hub might provision services for you, such as CloudWatch alarms, so additional charges might apply. Visit the pricing page to get the details.

Let us know your feedback and build your first resilience dashboard today.

— seb

Goodbye Microsoft SQL Server, Hello Babelfish

Post Syndicated from Sébastien Stormacq original https://aws.amazon.com/blogs/aws/goodbye-microsoft-sql-server-hello-babelfish/

Many of our customers are telling us they want to move away from commercial database vendors to avoid expensive costs and burdensome licensing terms. But migrating away from commercial and legacy databases can be time-consuming and resource-intensive. When migrating your databases, you can automate the migration of your database schema and data using the AWS Schema Conversion Tool and AWS Database Migration Service. But there is always more work to do to migrate the application itself, including rewriting application code that interacts with the database. Motivation is there, but costs and risks are often limiting factors.

Today, we are making Babelfish for Aurora PostgreSQL available. Babelfish allows Amazon Aurora PostgreSQL-Compatible Edition to understand the SQL Server wire protocol. It allows you to migrate your SQL Server applications to PostgreSQL cheaper, faster, and with less risk involved in such a change.

You can migrate your application in a fraction of the time that a traditional migration would require. You continue to use the existing queries and drivers your application uses today. Just point the application to an Amazon Aurora PostgreSQL database with Babelfish activated. Babelfish adds the capability to Amazon Aurora PostgreSQL to understand the SQL Server wire protocol Tabular Data Stream (TDS), as well as extending PostgreSQL to understand commonly used T-SQL commands used by SQL Server. Support for T-SQL includes elements such as the SQL dialect, static cursors, data types, triggers, stored procedures, and functions. Babelfish reduces the risk associated with database migration projects by significantly reducing the number of changes required to the application. When adopting Babelfish, you save on licensing costs of using SQL Server. Amazon Aurora provides the security, availability, and reliability of commercial databases at 1/10th the cost.

SQL Server has evolved over more than 30 years, and we do not expect to support all functionalities right away. Instead, we focused on the most common T-SQL commands and returning the correct response or an error message. For example, the MONEY datatype has different characteristics in SQL Server (with four decimals precision) and PostgreSQL (with two decimals precision). Such a subtle difference might lead to rounding errors and have a significant impact on downstream processes, such as financial reporting. In this case, and many others, Babelfish ensures the semantics of SQL Server data types and T-SQL functionality are preserved: we created a MONEY datatype that behaves as SQL Server apps would expect. When you create a table with this datatype through the Babelfish connection, you get this compatible datatype and behaviors that a SQL Server app would expect.

Create a Babelfish Cluster Using the Console
To show you how Babelfish works, let’s first connect to the console and create a new Amazon Aurora PostgreSQL cluster. The procedure is no different than for the regular Amazon Aurora database. In the RDS launch wizard, I first make sure I select an Aurora version compatible with PostgreSQL 13.4, or more recent. The updated console has additional filters to help you select the versions that are compatible with Babelfish.

Babelfish Create database

Then, lower on the page, I select the option Turn on Babelfish.

Aurora turn on babelfish

Under the Monitoring section, I also make sure I turn off Enable Enhanced monitoring. This option requires additional IAM permissions and preparation that are not relevant for this demo.

Enable Enhanced Monitoring

After a couple of minutes, my cluster is created. It has two instances: one writer and one reader.

Babelfish cluster created

Create a Babelfish Cluster Using the CLI
Alternatively, I may use the CLI to create a cluster. I first create a parameter group to activate Babelfish (the console does it automatically):

aws rds create-db-cluster-parameter-group             \
    --db-cluster-parameter-group-name myapp-babelfish \
    --db-parameter-group-family aurora-postgresql13   \
    --description "babelfish APG 13"
aws rds modify-db-cluster-parameter-group             \
    --db-cluster-parameter-group-name myapp-babelfish \
    --parameters "ParameterName=rds.babelfish_status,ParameterValue=on,ApplyMethod=pending-reboot" \

Then I create the database cluster (when using the command below, adjust the security group ID and the subnet group name):

aws rds create-db-cluster \
    --db-cluster-identifier awsnewblog-cli-demo \
    --master-username postgres \
    --master-user-password Passw0rd \
    --engine aurora-postgresql \
    --engine-version 13.4 \
    --vpc-security-group-ids sg-abcd1234 \
    --db-subnet-group-name default-vpc-1234abcd \
    --db-cluster-parameter-group-name myapp-babelfish
{
    "DBCluster": {
        "AllocatedStorage": 1,
        "AvailabilityZones": [
            "us-east-1c",
            "us-east-1d",
            "us-east-1a"
        ],
        "BackupRetentionPeriod": 1,
        "DBClusterIdentifier": "awsnewblog-cli-demo",
        "Status": "creating",
        ... <redacted for brevity> ...
    }
}

Once the cluster is created, I create an instance using:

aws rds create-db-instance \
    --db-instance-identifier myapp-db1 \
    --db-instance-class db.r5.4xlarge \
    --db-subnet-group-name default-vpc-1234abcd \
    --db-cluster-identifier awsnewblog-cli-demo \
    --engine aurora-postgresql
{
    "DBInstance": {
        "DBInstanceIdentifier": "myapp-db1",
        "DBInstanceClass": "db.r5.4xlarge",
        "Engine": "aurora-postgresql",
        "DBInstanceStatus": "creating",
        ... <redacted for brevity> ...
    }
}

Connect to the Babelfish Cluster
Once the cluster and instances are ready, I connect to the writer instance to create the database itself. I may connect to the instance using SQL Server Management Studio (SSMS) or another SQL client such as sqlcmd. The Windows client must be able to connect to the Babelfish cluster, so I made sure the RDS security group authorizes connections from the Windows host.

Using SSMS on Windows, I select New Query in the toolbar, I enter the database DNS name as Server name. I select SQL Server Authentication and I enter the database Login and Password. I click on Connect.

Important: Do not connect via the SSMS Object Explorer. Be sure to connect using the query editor via the New Query button. At this time, Babelfish supports the query editor, but not the Object Explorer.

SSMS Connect to babelfish

Once connected, I check the version with the select @@version statement and click the green Execute button in the toolbar. I can read the statement result in the bottom part of the screen.

Babelfish check version

Finally, I create the database on the instance with the create database demo statement.

babelfish create database

By default, Babelfish runs in single-db mode. In this mode, you can have a maximum of one user database per instance. This allows a close mapping of schema names between SQL Server and PostgreSQL. Alternatively, you may turn on multi-db mode at cluster creation time. This allows you to create multiple user databases per instance. In PostgreSQL, user databases will be mapped to multiple schemas with the database name as a prefix.

Run an Application
For the purpose of this demo, I use a database schema provided by SQLServerTutorial.net as part of their SQL Server Tutorial to create a schema and populate it with data. The SQL script and application C# code I use in this demo are available on my GitHub repository. A big thanks to my colleague Anuja for providing me with a C# demo application.

In SQL Server Management Studio, I open the create_objects.sql script and I choose the green execute icon on the top toolbar. A confirmation message tells me the database schema is created.

babelfish create schema

I repeat the operation with the load_data.sql script to load data in the newly created tables. Data loading takes a few minutes to run.

Now that the database is loaded, let's open Anuja's C# application, developed to access a SQL Server database. I modify two lines of code:

  • line 12: I type the DNS name of the Babelfish cluster I created earlier. Note that I use the DNS name of the cluster's writer node.
  • line 15: I type the password I entered when I created the database cluster.

Visual Studio Code - Prepare app to connect to babelfish

And that's it! No other modification is required in this app. This code, written to query and interact with SQL Server, just works as-is on Aurora PostgreSQL with Babelfish.

babelfish application execution

Open Source Transparency
We decided to open-source the technology behind Babelfish to create the Babelfish for PostgreSQL open source project. It uses the permissive Apache 2.0 and PostgreSQL licenses, meaning you can modify, tweak, or distribute Babelfish in whatever fashion you see fit. Over time, we are shifting Babelfish to fully open development on GitHub, so there is transparency from the start. Now anyone, whether an AWS customer or not, can use Babelfish to leave behind SQL Server and quickly, easily, and cost-effectively migrate applications to open source PostgreSQL. We believe Babelfish is going to make PostgreSQL accessible to a much wider group of customers and developers than ever before, particularly those with large numbers of complex applications originally written for SQL Server.

Availability
Babelfish for Aurora PostgreSQL is available today in all publicly available AWS Regions at no additional cost. Start your application migration today.

— seb

PS : if you wonder where the name Babelfish comes from, just remember the answer is 42. (Or you can read this slightly longer answer.)

AWS Local Zones Are Now Open in Las Vegas, New York City, and Portland

Post Syndicated from Sébastien Stormacq original https://aws.amazon.com/blogs/aws/aws-local-zones-are-now-open-in-las-vegas-new-york-city-and-portland/

Today, we are opening three new AWS Local Zones in Las Vegas, New York City (located in New Jersey), and Portland metro areas. We are now at a total of 14 Local Zones in 13 cities since Jeff Barr announced the first Local Zone in Los Angeles in December 2019. These three new Local Zones join the ones in full operation in Boston, Chicago, Dallas, Denver, Houston, Kansas City, Los Angeles, Miami, Minneapolis, and Philadelphia.

Local Zones are one of the ways we bring select AWS services much closer to large populations and geographic areas where major industries come together. With this proximity, you can deploy latency-sensitive workloads such as real-time gaming platforms, financial transaction processing, media and entertainment content creation, or ad services. Migrations and hybrid strategies are two additional use cases: you can move your applications to a nearby AWS Local Zone while still meeting the low-latency requirements of hybrid deployments.

Local Zones support the deployment of workloads using Amazon Elastic Compute Cloud (Amazon EC2), Amazon Elastic Block Store (EBS), Amazon FSx for Windows File Server and Amazon FSx for Lustre, Elastic Load Balancing, Amazon Relational Database Service (RDS), and Amazon Virtual Private Cloud (VPC). Local Zones provide a high-bandwidth, secure connection between local workloads and those running in the parent AWS Region, while offering the full range of services found in a Region through the same APIs, console and tool sets. This page lists the exact AWS services and features available in each Local Zone.

Local Zones are easy to use and can be enabled in only three clicks! This article will help you learn how to provision infrastructure in a Local Zone, which is very similar to creating infrastructure in an Availability Zone. Once enabled, Local Zones appear as additional Availability Zones in your AWS Management Console or AWS Command Line Interface (CLI).
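For example, here is how I would opt in to a Local Zone group and then list the Local Zones available in a Region from the CLI (the group name below is the Los Angeles group, used only as an illustration):

aws ec2 modify-availability-zone-group \
    --region us-west-2                 \
    --group-name us-west-2-lax-1       \
    --opt-in-status opted-in

aws ec2 describe-availability-zones    \
    --region us-west-2                 \
    --all-availability-zones           \
    --filters Name=zone-type,Values=local-zone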

Local Zones in Action
Examples of workloads that our customers run in Local Zones include:

Dish Wireless is building the US telecom industry's first cloud-native 5G network. They are unleashing 5G connectivity with better speed, better security, and better latency. DISH is leveraging AWS Regions, AWS Local Zones, and AWS Outposts to extend AWS infrastructure and services to wherever they – or their customers – need it.

Integral Ad Science (IAS) is a global leader in digital media quality. Every millisecond counts when it comes to delivering actionable insights for its advertiser and publisher customers. Leveraging AWS Regions and AWS Local Zones, IAS ensures rapid response times in milliseconds when analyzing data and delivering insights.

Esports Engine (a Vindex company) is a turnkey esports solutions company working with gaming publishers, rights holders, brands, and teams to provide production, broadcast, tournament, and program design. Their graphic-intensive streaming content is live-fed from the locations where the games are recorded and then broadcast from the studios to viewers. AWS Local Zones replace their previous on-premises data centers, reducing the need to support physical data center buildings.

Proof Trading is a financial services company looking forward to taking advantage of AWS Local Zones to bring trading workloads closer to the major trading venues located in Chicago and New Jersey. Our industry blog has a detailed article that provides more context on trading-related workloads.

Ubitus is a cloud gaming technology leader. They deploy latency-sensitive game servers all over the world to be closer to gamers. An important part of having a great gaming experience is consistent, low-latency gameplay. AWS Local Zones are a game changer for them. Now, they can easily deploy and test clusters of game servers in many cities across the US, ensuring that more customers get a consistent experience regardless of where they are located.

What’s Next?
When we launched our first Local Zone at AWS re:Invent 2019, we said we were just getting started. In addition to today's announcement, we are working on opening three additional Local Zones in Atlanta, Phoenix, and Seattle by the end of the year, and we keep expanding. If you would like to express your interest in a particular location, please let us know by filling out the AWS Local Zones Interest form.

We are also listening to your feedback on additional services that we should add to Local Zones, such as more EC2 instance types to give you even more flexibility.

Build and deploy your workload on a Local Zone today.

— seb

Computer Vision at the Edge with AWS Panorama

Post Syndicated from Sébastien Stormacq original https://aws.amazon.com/blogs/aws/computer-vision-at-the-edge-with-aws-panorama/

Today, the AWS Panorama Appliance is generally available to all of you. The AWS Panorama Appliance is a computer vision (CV) appliance designed to be deployed on your network to analyze images provided by your on-premises cameras.

AWS Panorama Appliance

Every week, I read about new and innovative use cases for computer vision. Some customers are using CV to verify pallet trucks are parked in designated areas to ensure worker safety in warehouses, some are analyzing customer walking flows in retail stores to optimize space and product placement, and some are using it to recognize cats and mice, just to name a few.

AWS customers agree the cloud is the most convenient place to train computer vision models thanks to its virtually infinite access to storage and compute resources. In the cloud, data scientists have access to powerful tools such as Amazon SageMaker and a wide variety of compute resources and frameworks.

However, when it’s time to analyze images from one or multiple video feeds, many of you are telling us the cloud is not the place where you want to run such workloads. There are a number of reasons for that: sometimes the facilities where the images are captured do not have enough bandwidth to send video feeds to the cloud, some use cases require very low latency, or some just want to keep their images on premises and not send them for analysis outside of their network.

At re:Invent 2020, we announced the AWS Panorama Appliance and SDK to address these requirements.

AWS Panorama is a machine learning appliance and software development kit (SDK) that allows you to bring computer vision to on-premises cameras to make predictions locally with high accuracy and low latency. With the AWS Panorama Appliance, you can automate tasks that have traditionally required human inspection to improve visibility into potential issues. For example, you can use AWS Panorama Appliance to evaluate manufacturing quality, identify bottlenecks in industrial processes, and monitor workplace security even in environments with limited or no internet connectivity. The software development kit allows camera manufacturers to bring equivalent capabilities directly inside their IP camera.

As usual on this blog, I would like to walk you through the development and deployment of a computer vision application for the AWS Panorama Appliance. The demo application from this blog uses a machine learning model to recognise objects in frames of video from a network camera. The application loads a model onto the AWS Panorama Appliance, gets images from a camera, and runs those images through the model. The application then overlays the results on top of the original video and outputs it to a connected display. The application uses libraries provided by AWS Panorama to interact with input and output video streams and the model; no low-level programming is required.

Let’s first define a few concepts. I borrowed the following definitions from the AWS Panorama documentation page.

Concepts
The AWS Panorama Appliance is the hardware that runs your applications. You use the AWS Panorama console or AWS SDKs to register an appliance, update its software, and deploy applications to it. The software that runs on the appliance discovers and connects to camera streams, sends frames of video to your application, and optionally displays video output on an attached display.

The appliance is an edge device. Instead of sending images to the AWS Cloud for processing, it runs applications locally on optimized hardware. This enables you to analyze video in real time and process the results with limited connectivity. The appliance only requires an internet connection to report its status, upload logs, and get software updates and deployments.

An application comprises multiple components called nodes, which represent cameras, models, code, or global variables. A node can be configuration only (inputs and outputs), or include artifacts (models and code). Application nodes are bundled in node packages that you upload to an S3 access point, where the AWS Panorama Appliance can access them. An application manifest is a configuration file that defines connections between the nodes.

A computer vision model is a machine learning network that is trained to process images. Computer vision models can perform various tasks such as classification, detection, segmentation, and tracking. A computer vision model takes an image as input and outputs information about the image or objects in the image.

AWS Panorama supports models built with Apache MXNet, DarkNet, GluonCV, Keras, ONNX, PyTorch, TensorFlow, and TensorFlow Lite. You can build models with Amazon SageMaker and import them from an Amazon Simple Storage Service (Amazon S3) bucket.

AWS Panorama Context

Now that we grasp the concepts, let’s get our hands on.

Unboxing Your AWS Panorama Appliance
In the box the service team sent me, I found the appliance itself (no surprise!), a power cord, and two Ethernet cables. The box also contains a USB key to initially configure the appliance. The device is designed to work in industrial environments. It has two Ethernet ports next to the power connector on the back. On the front, protected behind a sliding door, I found an SD card reader, one HDMI connector, and two USB ports. There is also a power button and a reset button to reinitialise the device to its factory state.

AWS Panorama Appliance Front Side AWS Panorama Appliance Back Side AWS Panorama USB key

Configuring Your Appliance
I first configured it for my network (cable + DHCP, but it also supports static IP configuration) and registered it to securely connect back to my AWS Account. To do so, I navigated to the AWS Management Console and entered my network configuration details. The console generated a set of configuration files and certificates, which I copied to the appliance using the provided USB key. My colleague Martin Beeby shared screenshots of this process. The team slightly modified the screens based on the feedback they received during the preview, but I don't think it is worth going through the step-by-step process again. Tip from the field: be sure to use the USB key provided in the box; it is correctly formatted and automatically recognised by the appliance (my own USB key was not recognized properly).

I then downloaded a sample application from the Panorama GitHub repository and tried it with the Test Utility for Panorama, also available in the same GitHub repository (the test utility is an EC2 instance configured to act as a simulator). The Test Utility for Panorama uses Jupyter notebooks to quickly experiment with sample applications or your code before deploying it to the appliance. It also lists commands allowing you to deploy your applications to the appliance programmatically.

Panorama Command Line
The Panorama command line simplifies the operations to create a project, import assets, package it, and deploy it to the AWS Panorama Appliance. You can follow these instructions to download and install the Panorama command line.

When receiving an application developed by someone else, like the sample application, I have to replace the AWS account IDs in all application files and directory names. I do this with a single command:

panorama-cli import-application

Application Structure
A Panorama application structure looks as follows:

├── assets
├── graphs
│   └── example_project
│       └── graph.json
└── packages
    ├── accountXYZ-model-1.0
    │   ├── descriptor.json
    │   └── package.json
    └── accountXYZ-sample-app-1.0
        ├── Dockerfile
        ├── descriptor.json
        ├── package.json
        └── src
            └── app.py

  • graph.json lists all the packages and nodes in this application. Nodes are the way to define an application in Panorama.
  • in each package, package.json has details about the package and the assets it uses.
  • the model package has a descriptor.json, which contains the metadata required for compiling the model.
  • the sample-app container package contains the application code in the src directory and a Dockerfile to build the container. Its descriptor.json has details about which command and file to use when the container is launched.
  • the assets directory is where all the assets reside, such as packaged code and compiled models. You should not make any changes in this directory.

Note that package names are prefixed with your account number.

When my application is ready, I build the container (I am using a Linux machine with Docker Engine and Docker CLI to avoid using Docker Desktop for macOS or Windows.)

$ panorama-cli build-container                               \
               --container-asset-name {container_asset_name} \
               --package-path packages/{account_id}-{package_name}-1.0

A Note About the Cameras
AWS Panorama Appliance has a concept of “abstract cameras”. Abstract camera sources are placeholders that can be mapped to actual camera devices during application deployment. The Test Utility for Panorama allows you to map abstract cameras to video files for easy, repeatable tests.

Adding a ML Model
The AWS Panorama Appliance supports multiple ML model frameworks. Models may be trained on Amazon SageMaker or with any other solution of your choice. I download my ML model from Amazon S3 and import it into my project:

panorama-cli add-raw-model                                                 \
    --model-asset-name {asset_name}                                        \
    --model-s3-uri s3://{S3_BUCKET}/{project_name}/{ML_MODEL_FNAME}.tar.gz \
    --descriptor-path {descriptor_path}                                    \
    --packages-path {package_path}

Behind the scenes, ML models are compiled to optimise them for the Nvidia Accelerated Linux Arm64 architecture of the AWS Panorama Appliance.

Package the Application
Now that I have an ML model and my application code packaged in a container, I am ready to package my application assets for the AWS Panorama Appliance:

panorama-cli package-application

This command uploads all my application assets to the AWS cloud account along with all the manifests.

Deploy the Application
Finally, I deploy the application to the AWS Panorama Appliance. A deployment copies the application and its configuration, like camera stream selection, from the AWS cloud to my on-premises AWS Panorama Appliance. I may deploy my application programmatically using Python code (and the Boto3 SDK you might know already):


client = boto3.client('panorama')
client.create_application_instance(
    Name="AWS News Blog Sample Application",
    Description="An object detection app",
    ManifestPayload={
       'PayloadData': manifest         # <== this is the graph.json file content 
    },
    RuntimeRoleArn=role,               # <== this is a role that gives my app permissions to use AWS Services such as Cloudwatch
    DefaultRuntimeContextDevice=device # <== this is my device name 
)
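The call returns an ApplicationInstanceId that I can use to track the deployment from the command line. A minimal sketch, assuming the describe-application-instance command of the aws panorama CLI (the application instance ID below is a placeholder):

aws panorama describe-application-instance \
    --application-instance-id applicationInstance-0123456789abcdef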

Alternatively, I may use the AWS Management Console:

On the Deployed applications page, I select Deploy application.

AWS Panorama - Start the deployment

I copy and paste the content of graphs/<project name>/graph.json to the console and select Next.

AWS Panorama - Copy Graph JSON

I give my application a name and an optional description. I select Proceed to deploy.

AWS Panoram Deploy - Application Name

The next steps are:

  • declare an IAM role to give my application permissions to use AWS services. The minimal permission set allows calling the PutMetricData API on CloudWatch.
  • select the AWS Panorama Appliance I want to deploy to
  • map the abstract cameras defined in the application's descriptor.json to physical cameras known by the AWS Panorama Appliance
  • fill in any application-specific inputs, such as an acceptable threshold value, log level, etc.

An example IAM policy is

AWSTemplateFormatVersion: '2010-09-09'
Description: Resources for an AWS Panorama application.
Resources:
  runtimeRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          -
            Effect: Allow
            Principal:
              Service:
                - panorama.amazonaws.com
            Action:
              - sts:AssumeRole
      Policies:
        - PolicyName: cloudwatch-putmetrics
          PolicyDocument:
            Version: "2012-10-17"
            Statement:
              - Effect: Allow
                Action: 'cloudwatch:PutMetricData'
                Resource: '*'
      Path: /service-role/

These six screenshots capture this process:

AWS Panorama - Deploy 2 AWS Panorama - Deploy 1 AWS Panorama - Deploy 3
AWS Panorama - Deploy 4 AWS Panorama - Deploy 5 AWS Panorama - Deploy 5

The deployment takes 15-30 minutes depending on the size of your code and ML models, and on the bandwidth available to the appliance. Eventually, the status turns green to "Running".

AWS Panorama - Deployment Completed

Once the application is deployed to your AWS Panorama Appliance, it begins to run, continuously analyzing video and generating highly accurate predictions locally within milliseconds. I connect an HDMI cable to the AWS Panorama Appliance to monitor the output, and I can see:

AWS Panorama HDMI output

Should anything go wrong during the deployment or during the life of the application, I have access to the logs in Amazon CloudWatch. Two log streams are created: one for the AWS Panorama Appliance itself and one for the application.

AWS Panorama Appliance log AWS Panorama application log

Pricing and Availability
The AWS Panorama Appliance is available to purchase on the AWS Elemental order page in the AWS Management Console. You can place orders from the United States, Canada, the United Kingdom, and the European Union. There is a one-time charge of $4,000 for the appliance itself.

There is a usage charge of $8.33 / month / camera feed.

AWS Panorama stores versioned copies of all assets deployed to the AWS Panorama Appliance (including ML models and business logic) in the cloud. You are charged $0.10 per GB per month for this storage.

You may incur additional charges if the business logic deployed to your AWS Panorama Appliance uses other AWS services. For example, if your business logic uploads ML predictions to S3 for offline analysis, you will be billed separately by S3 for any storage charges incurred.

The AWS Panorama Appliance can be installed anywhere. The appliance connects back to the AWS Panorama service in the AWS cloud in one of the following AWS Regions: US East (N. Virginia), US West (Oregon), Canada (Central), or Europe (Ireland).

Go and build your first computer vision model today.

— seb

AWS Cloud Control API, a Uniform API to Access AWS & Third-Party Services

Post Syndicated from Sébastien Stormacq original https://aws.amazon.com/blogs/aws/announcing-aws-cloud-control-api/

Today, I am happy to announce the availability of AWS Cloud Control API a set of common application programming interfaces (APIs) that are designed to make it easy for developers to manage their AWS and third-party services.

AWS delivers the broadest and deepest portfolio of cloud services. Builders leverage these to build any type of cloud infrastructure. It started with Amazon Simple Storage Service (Amazon S3) 15 years ago and has grown to more than 200 services. Each AWS service has a specific API with its own vocabulary, input parameters, and error reporting. For example, you use the S3 CreateBucket API to create an Amazon S3 bucket and the Amazon Elastic Compute Cloud (Amazon EC2) RunInstances API to create EC2 instances.

Some of you use AWS APIs to build infrastructure-as-code, some to inspect and automatically improve your security posture, others for configuration management, or to provision and configure high performance compute clusters. The use cases are countless.

As applications and infrastructures become increasingly sophisticated and you work across more AWS services, it becomes increasingly difficult to learn and manage distinct APIs. This challenge is exacerbated when you also use third-party services in your infrastructure, since you have to build and maintain custom code to manage both the AWS and third-party services together.

Cloud Control API is a standard set of APIs to Create, Read, Update, Delete, and List (CRUDL) resources across hundreds of AWS Services (more being added) and dozens of third-party services (and growing).

It exposes five common verbs (CreateResource, GetResource, UpdateResource, DeleteResource, ListResources) to manage the lifecycle of services. For example, to create an Amazon Elastic Container Service (Amazon ECS) cluster or an AWS Lambda function, you call the same CreateResource API, passing as parameters the type and attributes of the resource you want to create: an Amazon ECS cluster or a Lambda function. The input parameters are defined by a unified resource model using JSON. Similarly, the return types and error messages are uniform across all verbs and all resources.

Cloud Control API provides support for hundreds of AWS resources today, and we will continue to add support for existing AWS resources across services such as Amazon Elastic Compute Cloud (Amazon EC2) and Amazon Simple Storage Service (Amazon S3) in the coming months. It will support new AWS resources typically on the day of launch.

Until today, when I want to get the details about a Lambda function or an Amazon Kinesis stream, I use the get-function API to call Lambda and the describe-stream API to call Kinesis. Notice in the example below how different these two API calls are: they have different names, different naming conventions, different JSON outputs, etc.

aws lambda get-function --function-name TictactoeDatabaseCdkStack
{
    "Configuration": {
        "FunctionName": "TictactoeDatabaseCdkStack",
        "FunctionArn": "arn:aws:lambda:us-west-2:0123456789:function:TictactoeDatabaseCdkStack",
        "Runtime": "nodejs14.x",
        "Role": "arn:aws:iam::0123456789:role/TictactoeDatabaseCdkStack",
        "Handler": "framework.onEvent",
        "CodeSize": 21539,
        "Timeout": 900,
        "MemorySize": 128,
        "LastModified": "2021-06-07T11:26:39.767+0000",

...

aws kinesis describe-stream --stream-name AWSNewsBlog
{
    "StreamDescription": {
        "Shards": [
            {
                "ShardId": "shardId-000000000000",
                "HashKeyRange": {
                    "StartingHashKey": "0",
                    "EndingHashKey": "340282366920938463463374607431768211455"
                },
                "SequenceNumberRange": {
                    "StartingSequenceNumber": "49622132796672989268327879810972713309953024040638611458"
                }
            }
        ],
        "StreamARN": "arn:aws:kinesis:us-west-2:012345678901:stream/AWSNewsBlog",
        "StreamName": "AWSNewsBlog",
        "StreamStatus": "ACTIVE",
        "RetentionPeriodHours": 24,
        "EncryptionType": "NONE",
        "KeyId": null,
        "StreamCreationTimestamp": "2021-09-17T14:58:20+02:00"
    }
}	

In contrast, when using the Cloud Control API, I use a single API name get-resource, and I receive a consistent output.

aws cloudcontrol get-resource        \
    --type-name AWS::Kinesis::Stream \
    --identifier NewsBlogDemo
{
   "TypeName": "AWS::Kinesis::Stream",
   "ResourceDescription": {
      "Identifier": "NewsBlogDemo",
      "Properties": "{\"Arn\":\"arn:aws:kinesis:us-west-2:486652066693:stream/NewsBlogDemo\",\"RetentionPeriodHours\":168,\"Name\":\"NewsBlogDemo\",\"ShardCount\":3}"
   }
}	

Similarly, to create the resource above, I used the create-resource API.


aws cloudcontrol create-resource    \
   --type-name AWS::Kinesis::Stream \
   --desired-state "{"Name": "NewsBlogDemo","RetentionPeriodHours":168, "ShardCount":3}"

In my opinion, there are three types of builders that are going to adopt Cloud Control API:

Builders
The first community is builders using AWS Services APIs to manage their infrastructure or their customers' infrastructure, particularly those requiring low-level AWS Services APIs rather than higher-level tools. For example, I know companies that manage AWS infrastructure on behalf of their clients. Many developed solutions to list and describe all resources deployed in their clients' AWS Accounts, for management and billing purposes. Often, they built specific tools to address their requirements, but find it hard to keep up with new AWS Services and features. Cloud Control API simplifies this type of tool by offering a consistent, resource-centric approach. It makes it easier to keep up with new AWS Services and features.

Another example: Stedi is a developer-focused platform for building automated Electronic Data Interchange (EDI) solutions that integrate with any business system. “We have a strong focus on infrastructure as code (IaC) within Stedi and have been looking for a programmatic way to discover and delete legacy cloud resources that are no longer managed through CloudFormation – helping us reduce complexity and manage cost,” said Olaf Conjin, Serverless Engineer at Stedi, Inc. “With AWS Cloud Control API, our teams can easily list each of these legacy resources, cross-reference them against CloudFormation managed resources, apply additional logic and delete the legacy resources. By deleting these unused legacy resources using Cloud Control API, we can manage our cloud spend in a simpler and faster manner. Cloud Control API allows us to remove the need to author and maintain custom code to discover and delete each type of resource, helping us improve our developer velocity”.

APN Partners
The second community that benefits from Cloud Control API is APN Partners, such as HashiCorp (maker of Terraform) and Pulumi, and other APN Partners offering solutions that rely on AWS Services APIs. When AWS releases a new service or feature, our partners' engineering teams need to learn, integrate, and test a new set of AWS Service APIs to expose it in their offerings. This is a time-consuming process and often leads to a lag between the AWS release and the availability of the service or feature in their solution. With the new Cloud Control API, partners are now able to build a single REST API code base, using unified API verbs, common input parameters, and common error types. They just have to adopt the standardized, pre-defined uniform resource model to interact with new AWS Services exposed as REST resources.

Launch Partners
HashiCorp and Pulumi are our launch partners, both solutions are integrated with Cloud Control API today.

HashiCorp provides cloud infrastructure automation software that enables organizations to provision, secure, connect, and run any infrastructure for any application. “AWS Cloud Control API makes it easier for our teams to build solutions to integrate with new and existing AWS services,” said James Bayer – EVP Product, HashiCorp. “Integrating HashiCorp Terraform with AWS Cloud Control API means developers are able to use the newly released AWS features and services, typically on the day of launch.”

Pulumi’s new AWS Native Provider, powered by the AWS Cloud Control API, “gives Pulumi’s users faster access to the latest AWS innovations, typically the day they launch, without any need for us to manually implement support,” said Joe Duffy, CEO at Pulumi. “The full surface area of AWS resources provided by AWS Cloud Control API can now be automated from familiar languages like Python, TypeScript, .NET, and Go, with standard IDEs, package managers, and test frameworks, with high fidelity and great quality. Using this new provider, developers and infrastructure teams can develop and ship modern AWS applications and infrastructure faster and with more confidence than ever before.”

To learn more about HashiCorp and Pulumi’s integration with Cloud Control API, refer to their blog post and announcements. I will add the links here as soon as they are available.

AWS Customers
The third type of builder that will benefit from Cloud Control API is AWS customers using solutions such as Terraform or Pulumi. You can benefit from Cloud Control API too: when using the new Terraform AWS Cloud Control provider or Pulumi's AWS Native Provider, you get access to new AWS Services and features typically on the day of launch.

Now that you understand the benefits, let’s see Cloud Control API in action.

How It Works
To start using Cloud Control API, I first make sure I use the latest AWS Command Line Interface (CLI) version. Depending on how the CLI was installed, there are different methods to update the CLI. Cloud Control API is available from our AWS SDKs as well.

To create an AWS Lambda function, I first create an index.py handler, zip it, and upload the zip file to one of my private buckets. I make sure the S3 bucket is in the same AWS Region where I will create the Lambda function:

cat << EOF > index.py
import json
def lambda_handler(event, context):
    return {
        'statusCode': 200,
        'body': json.dumps('Hello from Lambda!')
    }
EOF

zip index.zip index.py
aws s3 cp index.zip s3://private-bucket-seb/index.zip

Then, I call the create-resource API, passing the same set of arguments as required by the corresponding CloudFormation resource. In this example, the Code, Role, Runtime, and Handler arguments are mandatory, as per the CloudFormation AWS::Lambda::Function documentation.

aws cloudcontrol create-resource          \
       --type-name AWS::Lambda::Function   \
       --desired-state '{"Code":{"S3Bucket":"private-bucket-seb","S3Key":"index.zip"},"Role":"arn:aws:iam::0123456789:role/lambda_basic_execution","Runtime":"python3.9","Handler":"index.lambda_handler"}' \
       --client-token 123


{
    "ProgressEvent": {
        "TypeName": "AWS::Lambda::Function",
        "RequestToken": "56a0782b-2b26-491c-b082-18f63d571bbd",
        "Operation": "CREATE",
        "OperationStatus": "IN_PROGRESS",
        "EventTime": "2021-09-26T12:05:42.210000+02:00"
    }
}

I may call the same command again to get the status of the operation or to learn about a possible error:

aws cloudcontrol create-resource          \
       --type-name AWS::Lambda::Function   \
       --desired-state '{"Code":{"S3Bucket":"private-bucket-seb","S3Key":"index.zip"},"Role":"arn:aws:iam::0123456789:role/lambda_basic_execution","Runtime":"python3.9","Handler":"index.lambda_handler"}' \
       --client-token 123

{
    "ProgressEvent": {
        "TypeName": "AWS::Lambda::Function",
        "Identifier": "ukjfq7sqG15LvfC30hwbRAMfR-96K3UNUCxNd9",
        "RequestToken": "f634b21d-22ed-41bb-9612-8740297d20a3",
        "Operation": "CREATE",
        "OperationStatus": "SUCCESS",
        "EventTime": "2021-09-26T19:46:46.643000+02:00"
    }
}

Here, the OperationStatus is SUCCESS and the function name is ukjfq7sqG15LvfC30hwbRAMfR-96K3UNUCxNd9 (I can pass my own name if I want something more descriptive 🙂 )

I then invoke the Lambda function to ensure it works as expected:

aws lambda invoke \
    --function-name ukjfq7sqG15LvfC30hwbRAMfR-96K3UNUCxNd9 \
    out.txt && cat out.txt && rm out.txt 

{
    "StatusCode": 200,
    "ExecutedVersion": "$LATEST"
}
{"statusCode": 200, "body": "\"Hello from Lambda!\""}

When finished, I delete the Lambda function using Cloud Control API:

aws cloudcontrol delete-resource \
     --type-name AWS::Lambda::Function \
     --identifier ukjfq7sqG15LvfC30hwbRAMfR-96K3UNUCxNd9 
{
    "ProgressEvent": {
        "TypeName": "AWS::Lambda::Function",
        "Identifier": "ukjfq7sqG15LvfC30hwbRAMfR-96K3UNUCxNd9",
        "RequestToken": "8923991d-72b3-4981-8160-4d9a585965a3",
        "Operation": "DELETE",
        "OperationStatus": "IN_PROGRESS",
        "EventTime": "2021-09-26T20:06:22.013000+02:00"
    }
}
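Once the delete operation completes, I can confirm the function is gone with the fifth verb, list-resources:

aws cloudcontrol list-resources --type-name AWS::Lambda::Function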

Idempotency
You might have noticed the client-token parameter I passed to the create-resource API call. Create, Update, and Delete requests all accept a ClientToken, which is used to ensure idempotency of the request.

  • We recommend always passing a client token. This will disambiguate requests in case a retry is needed. Otherwise, you may encounter unexpected errors like ConcurrentOperationException or AlreadyExists.
  • We recommend that client tokens always be unique for every single request, for example by passing a UUID, as shown below.
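On Linux or macOS, a unique token can be generated inline with uuidgen. A minimal sketch, where $DESIRED_STATE is shorthand for the same JSON document used earlier:

aws cloudcontrol create-resource      \
    --type-name AWS::Lambda::Function \
    --desired-state "$DESIRED_STATE"  \
    --client-token "$(uuidgen)"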

One More Thing
At the heart of AWS Cloud Control API's source of data is the CloudFormation Public Registry, which my colleague Steve announced earlier this month in this blog post. It allows anyone to expose a set of AWS resources through CloudFormation and AWS CDK. This is the mechanism AWS Service teams are now using to release their services and features as CloudFormation and AWS CDK resources. Multiple third-party vendors are also publishing their solutions in the CloudFormation Public Registry. All resources published are modelled with a standard schema that defines the resource, its properties, and their attributes in a uniform way.

AWS Cloud Control API is a CRUDL API layer on top of resources published in the CloudFormation Public Registry. Any resource published in the registry exposes its attributes with standard JSON schemas. The resource can then be created, updated, deleted, or listed using Cloud Control API with no additional work.

For example, imagine I decide to expose a public CloudFormation resource type to let any AWS customer create VPN servers based on EC2 instances. I model the VPNServer resource type and publish it in the CloudFormation Public Registry. With no additional work on my side, my custom resource "VPNServer" is now available to all AWS customers through the Cloud Control API REST API. Not only that, it is also automatically available through solutions like HashiCorp's Terraform and Pulumi, and potentially others that adopt Cloud Control API in the future.
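Resource types published in the registry, including third-party ones, can be discovered with the CloudFormation list-types API. A quick sketch from the CLI (check the CLI reference for the full set of filters):

aws cloudformation list-types \
    --visibility PUBLIC       \
    --type RESOURCE           \
    --max-results 50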

It is worth mentioning Cloud Control API is not aimed at replacing the traditional AWS service-level APIs. They are still there and will always be there, but we think that Cloud Control API is easier and more consistent to use and you should use it for new apps.

Availability and Pricing
Cloud Control API is available in all AWS Regions, except China.

You will only pay for the usage of underlying AWS resources, such as CloudWatch Logs or Lambda function invocations, or for the number of handler operations and handler operation duration associated with using third-party resources (such as Datadog monitors or MongoDB Atlas clusters). There are no minimum fees and no required upfront commitments.

I can’t wait to discover what you are going to build on top of this new Cloud Control API. Go build!

— seb

New for Amazon Connect: Voice ID, Wisdom, and Outbound Communications

Post Syndicated from Sébastien Stormacq original https://aws.amazon.com/blogs/aws/three-new-capabilities-for-amazon-connect/

During the AWS re:Invent conference last year, I wrote about new capabilities added to Amazon Connect. Today, I am happy to announce the general availability of two of these capabilities, Voice ID and Wisdom, and the launch of a new one: high-volume outbound communications, which allows, as the name implies, the initiation and management of outbound communications over voice, SMS, or email.

Amazon Connect is an easy-to-use omnichannel cloud contact center that helps you provide customer service at a lower cost. In just a few clicks, you can set up and make changes to your contact center, so agents can begin helping customers right away.

Amazon Connect Wisdom
Wisdom reduces the time agents spend searching for answers. Today, when agents require access to information to help a customer, they lose time trying to navigate different siloed data sources: FAQs, files, wiki pages, customer call history, knowledge bases, etc.

When using Wisdom, agents simply enter a question or phrase in their agent desktop application, such as “what is the pet policy in hotel rooms”, and Wisdom searches connected repositories and returns the most relevant information and best answer to handle the customer issue.

Amazon Connect Wisdom launch screenshot

Wisdom also uses real-time call transcripts from Contact Lens for Amazon Connect to automatically detect customer issues during calls and to recommend relevant content stored across connected knowledge repositories, without requiring agents to even enter a question.

Wisdom connects to knowledge repositories with built-in connectors for third-party applications including Salesforce and ServiceNow. You can also ingest content from other knowledge stores using the Wisdom ingestion APIs.

Amazon Connect Wisom configure connectors

Amazon Connect Voice ID
How many times have you been through an authentication procedure when calling a contact center? Voice ID simplifies this to make voice interactions faster and more secure. It uses machine learning to provide real-time caller authentication based on the caller’s voice.

To effectively recognize me as “Sébastien”, Voice ID must learn how I talk. This is the enrollment phase. It only requires 30 seconds of voice recording to enroll a caller.

When I call the same contact center again, Voice ID compares the sound of my voice with the one enrolled earlier. This is the verification phase. It only requires between 5 and 10 seconds of my voice to authenticate me. The verification phase generates a confidence score and a status displayed in the agent desktop app.

Amazon Connect Voice ID agent desktop view

Contact center administrators can use this result to configure different flows depending on the verification outcome. The routing is configured with a simple configuration panel such as this one:

Amazon Connect Voice ID Configuration panel

To comply with personal data protection laws, contact center agents capture my consent to use Voice ID.

High-Volume Outbound Communications
Typical contact centers are designed to receive customer calls. However, there is a growing set of use cases where contact centers send outbound communications as well. For example, to call customers back, to inform them about the progress of a case, to confirm an appointment, to renew a subscription, or for telemarketing, just to name a few.

The majority of these outbound communications are phone calls. When placing these calls, traditional contact center agents dial the number provided by a customer management system and wait for someone to answer. Typically, only 10% of the calls are answered. This process is highly inefficient.

In the Amazon Connect administration console, I select High volume outbound communication, then I select Create Campaign.

Connect High Volume Outbout Communication - Create Campaign

I then configure the details. I give the campaign a name, then I select one of my outbound contact flows and a contact queue associated with an outbound phone number.

Connect High Volume Outbout Communication - Campaign details

The predictive dialer places more calls than there are available agents. It uses metrics such as campaign performance, expected pick-up rates, and the number of available agents to adjust the number of calls. When a call is answered, it detects whether a human is on the line (vs. an answering machine, a fax line, etc.). Only calls answered by humans are routed to an available agent. The Amazon Connect agent application shows the call script that was specified during setup, along with relevant customer information.

The progressive dialer is more conservative: it uses a 1:1 ratio between calls and available agents.

Amazon Connect adds high-volume outbound communication capabilities not only for voice, but also for SMS and email. Amazon Connect comes with pre-built connectors for importing customer contact lists from external systems, such as Salesforce, Zendesk, Marketo, and Amazon Pinpoint. No coding required.

Contact center managers have access to real-time metrics such as contact volume, abandonment rates, average connection times, and minimum ring times to optimize agent efficiency. These metrics help to understand the status of their campaigns and ensure compliance with applicable regulations, such as maximum call abandonment rates. Contact center managers use historical reports of these metrics to understand the effectiveness of all their communications campaigns over time.

To ensure a fair usage of the high-volume outbound communication capability, you must apply for production access to use the predictive dialer as well as SMS and email. You may submit a service request detailing your use cases and business context, which will be used to validate your legitimacy as a sender. Once access is granted, Amazon Connect continuously monitors your usage and the team might revoke access when fraud is suspected.

If you want to try it out yourself, you may apply to the preview by filling out this form.

Pricing and Availability
As usual, there are no upfront costs or minimum usage fees. You pay only what you use: a price per minute of outbound calls and per email or SMS message. The details are up-to-date on the Amazon Connect pricing page.

Regional availability differs slightly for each of these three new capabilities; here is the list of AWS Regions where they are available:

  • Wisdom: US West (Oregon), US East (N. Virginia), Europe (London), Europe (Frankfurt), Asia Pacific (Tokyo), and Asia Pacific (Sydney).
  • Voice ID: US West (Oregon), US East (N. Virginia), Europe (London), Europe (Frankfurt), Asia Pacific (Tokyo), Asia Pacific (Singapore), and Asia Pacific (Sydney).
  • High-volume outbound communication (preview): US East (N. Virginia), Europe (London), and US West (Oregon). More Regions will be added when it becomes generally available.

As usual, let us know what you think about these new capabilities and how you use them. Go build your own contact center in the cloud today.

— seb

Inspect Subnet to Subnet traffic with Amazon VPC More Specific Routing

Post Syndicated from Sébastien Stormacq original https://aws.amazon.com/blogs/aws/inspect-subnet-to-subnet-traffic-with-amazon-vpc-more-specific-routing/

Since December 2019, Amazon Virtual Private Cloud (VPC) has allowed you to route all ingress traffic (also known as north – south traffic) to a specific network interface. You might use this capability for a number of reasons. For example, to inspect incoming traffic using an intrusion detection system (IDS) appliance or to route ingress traffic to a firewall.

Since we launched this feature, many of you asked us to provide a similar capability to analyze traffic flowing from one subnet to another inside your VPC, also known as east – west traffic. Until today, it was not possible because a route in a routing table cannot be more specific than the default local route (check the VPC documentation for more details). In plain English, it means that no route can have a destination using a smaller CIDR range than the default local route (which is the CIDR range of the whole VPC). For example, when the VPC range is 10.0.0.0/16 and a subnet has 10.0.1.0/24, a route to 10.0.1.0/24 is more specific than a route to 10.0.0.0/16.

Routing tables no longer have this restriction. A routing table can now contain routes more specific than the default local route. You can use such a more specific route to send all traffic flowing between two subnets (east-west traffic) to a dedicated appliance or service to inspect, analyze, or filter it. The route target can be the network interface (ENI) attached to an appliance you built or acquired, an AWS Gateway Load Balancer (GWLB) endpoint to distribute traffic to multiple appliances for performance or high availability reasons, an AWS Firewall Manager endpoint, or a NAT gateway. It also allows you to insert an appliance between a subnet and an AWS Transit Gateway.

It is possible to chain appliances to have more than one type of analysis between source and destination subnets. For example, you might first want to filter traffic using a firewall (AWS managed or a third-party firewall appliance), then send the traffic to an intrusion detection and prevention system, and finally perform deep packet inspection. You can access virtual appliances from our AWS Partner Network and AWS Marketplace.

When you chain appliances, each appliance and each endpoint has to be in a separate subnet.

Let’s get our hands dirty and try this new capability.

How It Works
For the purpose of this blog post, let’s assume I have a VPC with three subnets. The first subnet is public and has a bastion host. It requires access to resources, such as an API or a database in the second subnet. The second subnet is private. It hosts the resources required by the bastion. I wrote a simple CDK script to help you to deploy this setup.

VPC More Specific Routing

For compliance reasons, my company requires that traffic to this private application flows through an intrusion detection system. The CDK script also creates a third subnet, a private one, to host a network appliance. It provisions three Amazon Elastic Compute Cloud (Amazon EC2) instances: the bastion host, the application instance, and the network analysis appliance. The script also creates a NAT gateway that allows bootstrapping the application instance and connecting to the three instances using AWS Systems Manager Session Manager (SSM).

Because this is a demo, the network appliance is just a regular Amazon Linux EC2 instance configured as an IP router. In real life, you’re most probably going to use either one of the many appliances provided by our partners on the AWS Marketplace, or a Gateway Load Balancer endpoint, or a Network Firewall.

Let’s modify the routing tables to send the traffic through the appliance.

Using either the AWS Management Console, or the AWS Command Line Interface (CLI), I add a more specific route to the 10.0.0.0/24 and 10.0.1.0/24 subnet routing tables. These routes point to eni0, the network interface of the traffic-inspection appliance.

Using the CLI, I first collect the VPC ID, Subnet IDs, routing table IDs, and the ENI ID of the appliance.

VPC_ID=$(aws                                                    \
    --region $REGION cloudformation describe-stacks             \
    --stack-name SpecificRoutingDemoStack                       \
    --query "Stacks[].Outputs[?OutputKey=='VPCID'].OutputValue" \
    --output text)
echo $VPC_ID

APPLICATION_SUBNET_ID=$(aws                                                                      \
    --region $REGION ec2 describe-instances                                                      \
    --query "Reservations[].Instances[] | [?Tags[?Key=='Name' && Value=='application']].NetworkInterfaces[].SubnetId" \
    --output text)
echo $APPLICATION_SUBNET_ID

APPLICATION_SUBNET_ROUTE_TABLE=$(aws                                                             \
    --region $REGION  ec2 describe-route-tables                                                  \
    --query "RouteTables[?VpcId=='${VPC_ID}'] | [?Associations[?SubnetId=='${APPLICATION_SUBNET_ID}']].RouteTableId" \
    --output text)
echo $APPLICATION_SUBNET_ROUTE_TABLE

APPLIANCE_ENI_ID=$(aws                                                                           \
    --region $REGION ec2 describe-instances                                                      \
    --query "Reservations[].Instances[] | [?Tags[?Key=='Name' && Value=='appliance']].NetworkInterfaces[].NetworkInterfaceId" \
    --output text)
echo $APPLIANCE_ENI_ID

BASTION_SUBNET_ID=$(aws                                                                         \
    --region $REGION ec2 describe-instances                                                     \
    --query "Reservations[].Instances[] | [?Tags[?Key=='Name' && Value=='BastionHost']].NetworkInterfaces[].SubnetId" \
    --output text)
echo $BASTION_SUBNET_ID

BASTION_SUBNET_ROUTE_TABLE=$(aws \
 --region $REGION ec2 describe-route-tables \
 --query "RouteTables[?VpcId=='${VPC_ID}'] | [?Associations[?SubnetId=='${BASTION_SUBNET_ID}']].RouteTableId" \
 --output text)
echo $BASTION_SUBNET_ROUTE_TABLE

Next, I add two more specific routes. One route sends traffic from the bastion public subnet to the application private subnet through the appliance network interface. The second route is in the opposite direction, to route replies: it sends traffic from the application private subnet to the bastion public subnet through the appliance network interface. Confused? Let's look at the following diagram:

VPC More Specific Routing

First, let’s modify the bastion routing table:

aws ec2 create-route                                  \
     --region $REGION                                 \
     --route-table-id $BASTION_SUBNET_ROUTE_TABLE     \
     --destination-cidr-block 10.0.1.0/24             \
     --network-interface-id $APPLIANCE_ENI_ID

Next, let’s modify the application routing table:

aws ec2 create-route                                  \
    --region $REGION                                  \
    --route-table-id $APPLICATION_SUBNET_ROUTE_TABLE  \
    --destination-cidr-block 10.0.0.0/24              \
    --network-interface-id $APPLIANCE_ENI_ID

I can also use the Amazon VPC Console to make these modifications. I simply choose the "Bastion" routing table and, from the Routes tab, I click Edit routes.

MSR : Select a routing table

I add a route to send traffic for 10.0.1.0/24 (the application subnet) to the appliance ENI (eni-055...).

MSR : create a route

The next step is to define the opposite route for replies: from the application subnet, send traffic destined to 10.0.0.0/24 to the appliance ENI (eni-05...). Once finished, the application subnet routing table should look like this:

MSR : Final route table

Configure the Appliance Instance
Finally, I configure the appliance instance to forward all traffic it receives. Your software appliance usually does that for you. No extra step is required when you use AWS Marketplace appliances or the instance created by the CDK script I provided for this demo. If you’re using a plain Linux instance, complete these two extra steps:

1. Connect to the EC2 appliance instance and configure IP traffic forwarding in the kernel:

sysctl -w net.ipv4.ip_forward=1
sysctl -w net.ipv6.conf.all.forwarding=1
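# Note (my addition, not part of the original setup): sysctl -w changes do not survive a reboot.
# Assuming root privileges, the same settings can also be persisted under /etc/sysctl.d/:
echo 'net.ipv4.ip_forward=1' > /etc/sysctl.d/99-ip-forwarding.conf
echo 'net.ipv6.conf.all.forwarding=1' >> /etc/sysctl.d/99-ip-forwarding.conf
sysctl --system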

2. Configure the EC2 instance to accept traffic for destinations other than itself (this is known as disabling the source/destination check):

APPLIANCE_ID=$(aws --region $REGION ec2 describe-instances                     \
     --filter "Name=tag:Name,Values=appliance"                                 \
     --query "Reservations[].Instances[?State.Name == 'running'].InstanceId[]" \
     --output text)

aws ec2 modify-instance-attribute --region $REGION     \
                         --no-source-dest-check        \
                         --instance-id $APPLIANCE_ID

Test the Setup
The appliance is now ready to forward traffic to the other EC2 instances.

If you are using the demo setup, there is no SSH key installed on the bastion host. Access is made through AWS Systems Manager Session Manager.

BASTION_ID=$(aws --region $REGION ec2 describe-instances                      \
    --filter "Name=tag:Name,Values=BastionHost"                               \
    --query "Reservations[].Instances[?State.Name == 'running'].InstanceId[]" \
    --output text)

aws --region $REGION ssm start-session --target $BASTION_ID

After you’re connected to the bastion host, issue the following cURL command to connect to the application host:

sh-4.2$ curl -I 10.0.1.239 # use the private IP address of your application host
HTTP/1.1 200 OK
Server: nginx/1.18.0
Date: Mon, 24 May 2021 10:00:22 GMT
Content-Type: text/html
Content-Length: 12338
Last-Modified: Mon, 24 May 2021 09:36:49 GMT
Connection: keep-alive
ETag: "60ab73b1-3032"
Accept-Ranges: bytes

To verify the traffic is really flowing through the appliance, you can enable source/destination check on the instance again. Use the --source-dest-check parameter with the modify-instance-attribute CLI command above. The traffic is blocked when the source/destination check is enabled.
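Concretely, that is the same modify-instance-attribute call as before, with the check switched back on:

aws ec2 modify-instance-attribute --region $REGION  \
                         --source-dest-check        \
                         --instance-id $APPLIANCE_ID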

I can also connect to the appliance host and inspect traffic with the tcpdump command.

(on your laptop)
APPLIANCE_ID=$(aws --region $REGION ec2 describe-instances                    \
                   --filter "Name=tag:Name,Values=appliance"                  \
                   --query "Reservations[].Instances[?State.Name == 'running'].InstanceId[]" \
                   --output text)

aws --region $REGION ssm start-session --target $APPLIANCE_ID

(on the appliance host)
tcpdump -i eth0 host 10.0.0.16 # the private IP address of the bastion host

08:53:22.760055 IP ip-10-0-0-16.us-west-2.compute.internal.46934 > ip-10-0-1-104.us-west-2.compute.internal.http: Flags [S], seq 1077227105, win 26883, options [mss 8961,sackOK,TS val 1954932042 ecr 0,nop,wscale 6], length 0
08:53:22.760073 IP ip-10-0-0-16.us-west-2.compute.internal.46934 > ip-10-0-1-104.us-west-2.compute.internal.http: Flags [S], seq 1077227105, win 26883, options [mss 8961,sackOK,TS val 1954932042 ecr 0,nop,wscale 6], length 0
08:53:22.760322 IP ip-10-0-1-104.us-west-2.compute.internal.http > ip-10-0-0-16.us-west-2.compute.internal.46934: Flags [S.], seq 4152624111, ack 1077227106, win 26847, options [mss 8961,sackOK,TS val 4094021737 ecr 1954932042,nop,wscale 6], length 0
08:53:22.760329 IP ip-10-0-1-104.us-west-2.compute.internal.http > ip-10-0-0-16.us-west-2.compute.internal.46934: Flags [S.], seq 4152624111, ack 1077227106, win 26847, options [mss 

Cleanup
If you used the CDK script I provided for this post, be sure to run cdk destroy when you’re finished so that you’re not billed for the three EC2 instances and the NAT gateway I use for this demo. Running the demo script in us-west-2 costs $0.062 per hour.

Things to Keep in Mind
There are a couple of things to keep in mind when using VPC more specific routes:

  • The network interface or service endpoint you are sending the traffic to must be in a dedicated subnet. It cannot be in the source or destination subnet of your traffic.
  • You can chain appliances. Each appliance must live in its dedicated subnet.
  • Each subnet you add consumes a block of IP addresses. If you’re using IPv4, be conscious of the number of IP addresses consumed (a /24 subnet consumes 256 addresses from your VPC). The smallest CIDR range allowed for a subnet is /28; it consumes just 16 IP addresses.
  • The appliance’s security group must have a rule accepting incoming traffic on the desired port. Similarly, the application’s security group must authorize traffic coming from the appliance security group or IP address.

This new capability is available in all AWS Regions, at no additional cost.

You can start using it today.

Introducing Amazon Route 53 Application Recovery Controller

Post Syndicated from Sébastien Stormacq original https://aws.amazon.com/blogs/aws/amazon-route-53-application-recovery-controller/

I am pleased to announce the availability today of Amazon Route 53 Application Recovery Controller, a set of Amazon Route 53 capabilities that continuously monitors an application’s ability to recover from failures and controls application recovery across multiple AWS Availability Zones, AWS Regions, and on-premises environments, to help you build applications that must deliver very high availability.

At AWS, the security and availability of your data and workloads are our top priorities. From the very beginning, AWS global infrastructure has allowed you to build application architectures that are resilient to different types of failures. When your business or application requires high availability, you typically use AWS global infrastructure to deploy redundant application replicas across AWS Availability Zones inside an AWS Region. Then, you use a Network or Application Load Balancer to route traffic to the appropriate replica. This architecture handles the requirements of the vast majority of workloads.

However, some industries and workloads have higher requirements for availability: an availability rate at or above 99.99%, with recovery time objectives (RTO) measured in seconds or minutes. Think about how real-time payment processing or trading engines can affect entire economies if disrupted. To address these requirements, you typically deploy multiple replicas across a variety of AWS Availability Zones, AWS Regions, and on-premises environments. Then, you use Amazon Route 53 to reliably route end users to the appropriate replica.

Amazon Route 53 Application Recovery Controller helps you build these applications that require very high availability and low RTO, typically those using active-active architectures, but other types of redundant architectures might also benefit from it. It is made of two parts: readiness checks and routing controls.

Readiness checks continuously monitor AWS resource configurations, capacity, and network routing policies, and allow you to monitor for any changes that would affect the ability to execute a recovery operation. These checks ensure that the recovery environment is scaled and configured to take over when needed. They check the configuration of Auto Scaling groups, Amazon Elastic Compute Cloud (Amazon EC2) instances, Amazon Elastic Block Store (EBS) volumes, load balancers, Amazon Relational Database Service (RDS) instances, Amazon DynamoDB tables, and several others. For example, a readiness check verifies AWS service limits to ensure enough capacity can be deployed in an AWS Region in case of failover. It also verifies that the capacity and scaling characteristics of application replicas are the same across AWS Regions.

Routing controls help to rebalance traffic across application replicas during failures, to ensure that the application stays available. Routing controls work with Amazon Route 53 health checks to redirect traffic to an application replica, using DNS resolution. Routing controls improve traditional automated Amazon Route 53 health check-based failovers in three ways:

  • First, routing controls give you a way to fail over the entire application stack based on application metrics or partial failures, such as a 5% increase in error rate or a millisecond increase in latency.
  • Second, routing controls give you safe and simple manual overrides. You can use them to shift traffic for maintenance purposes or to recover from failures when your monitors fail to detect an issue.
  • Third, routing controls can use a capability called safety rules to prevent common side effects associated with fully automated health checks, such as failing over to an unprepared replica or flapping between replicas.

To help you understand how Route 53 Application Recovery Controller works, I’ll walk you through the process I used to configure my own high availability application.

How It Works
For demo purposes, I built an application made up of a load balancer, an Auto Scaling group with two EC2 instances, and a global DynamoDB table. I wrote a CDK script to deploy the application in two AWS Regions: US East (N. Virginia) and US West (Oregon). The global DynamoDB table ensures data is replicated across the two AWS Regions. This is an active-standby architecture, as I described earlier.

The application is a multi-player TicTacToe game, an application that typically needs 99.99% availability or more :-). One DNS record (tictactoe.seb.go-aws.com) points to the load balancer in the US East (N. Virginia) region. The following diagram shows the architecture for this application:

Example application architecture

Preparing My Application
To configure Route 53 Application Recovery Controller for my application, I first deployed independent replicas of my application stack so that I can fail over traffic across the stacks. These copies are deployed across AWS high-availability boundaries, such as Availability Zones or AWS Regions. I chose to deploy my application replicas across multiple AWS Regions.

Then, I configured data replication across these independent replicas. I’m using DynamoDB global tables to help replicate my data.

Lastly, I configured each independent stack to expose a DNS name. This DNS name is the entry point into my application, such as a regional load balancer DNS name.

Terminology
Before I configure readiness check, let me share some basic terminology.

A cell defines the silo that contains my application’s independent units of failover. It groups all AWS resources that are required for my application to operate independently. For my demo, I have two cells: one per AWS Region where my application is deployed. A cell is typically aligned with AWS high-availability boundaries, such as AWS Regions or Availability Zones, but it can be smaller too. It is possible to have multiple cells in one Availability Zone. This is an effective way to reduce blast radius, especially when you follow one-cell-at-a-time change management practices.

definition of a cell

A recovery group is a collection of cells that represent an application or group of applications that I want to check for failover readiness. A recovery group typically consists of two or more cells that mirror each other in terms of functionality.

definition of a recovery group

A resource set is a set of AWS resources that can span multiple cells. For this demo, I have three resource sets: one for the two load balancers in us-east-1 and us-west-2, one for the two Auto Scaling groups in the two Regions, and one for the global DynamoDB table.

A readiness check validates the readiness of a set of AWS resources to be failed over to. In this example, I want to audit readiness for my load balancers, Auto Scaling groups, and DynamoDB table. I create a readiness check for the Auto Scaling groups. The service constantly monitors the instance types and counts in the groups to make sure that each group is scaled equally. I repeat the process for the load balancers and the global DynamoDB table.

definition of a resource set

To help determine recovery readiness for my application, Route 53 Application Recovery Controller continuously audits mismatches in capacity, AWS resource limits, and AWS throttle limits across application cells (Availability Zones or Regions). When Route 53 Application Recovery Controller detects a mismatch in limits, it raises an AWS Service Quota request for the resource across the cells. If Route 53 Application Recovery Controller detects a capacity mismatch in resources, I can take actions to align capacity across the cells. For example, I could trigger a scaling increase for my Auto Scaling groups.

Create a Readiness Check
To create a readiness check, I open the AWS Management Console and navigate to the Application Recovery Controller section under Route 53.

Create Recovery Group

To create a recovery group for my application, I navigate to the Getting Started section, then I choose Create recovery group.

Create Recovery Group - enter a name

I enter a name (for example AWSNewsBlogDemo) and then choose Next.

Create Recovery readiness - create cells

In Configure Architecture, I choose Add Cell, then I enter a cell name (AWSNewsBlogDemo-RegionWEST) and then choose Add Cell again to add a second cell. I enter AWSNewsBlogDemo-RegionEAST for the second cell. I choose Next to review my inputs, then I choose Create recovery group.

I now need to associate resources such as my load balancers, Auto Scaling groups, and DynamoDB table with my recovery group.

Create Resource Set

In the left navigation pane, I choose Resource Set and then I choose Create.

Create Resource Set - load balancers

I enter a name for my first resource set (for example, load_balancers). For Resource type, I choose Network Load Balancer or Application Load Balancer and I then choose Add to add the load balancer ARN.

I choose Add again to enter the second load balancer ARN, and then I choose Create resource set.

I repeat the process to create one resource set for the two Auto Scaling groups and a third resource set for the global DynamoDB table (one ARN). I now have three resource sets:

Create Resource Set - 3 Resource Sets

My last step is to create the readiness check. This will associate the resources with cells in the resource groups.

Create Readiness Check

In Readiness check, I choose Create at the top right of the screen, then Readiness check.

Create Readiness Check Step 1

Step 1 (Create readiness check), I enter a name (for example, load_balancers). For Resource Type, I choose Network Load Balancer or Application Load Balancer and then choose Next.

Create Readiness Check Step 2

Step 2 (Add resource set), I keep the default selection Use an existing resource set and for Resource set name, I choose load_balancers and then I choose Next.

Step 3 (Apply readiness rules), I review the rules and then choose Next.

Recovery Group Options

Step 4 (Recovery Group Options), I keep the default selection Associate with an existing recovery group. For Recovery group name, I choose AWSNewsBlogDemo. Then, I associate the two cells (EAST and WEST) with the two load balancer ARNs. Be sure to associate the correct load balancer with each cell. The Region name is included in the ARN.

Step 5 (Review and create), I review my choices and then choose Create readiness check.

Three readiness checks

I repeat this process for the Auto Scaling group and the DynamoDB global table.
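
If you prefer scripting over the console, the route53-recovery-readiness CLI exposes the same building blocks. Here is a minimal sketch for the load balancer resource set and its readiness check; the ARNs are placeholders, the association of resources with cells (readiness scopes) is omitted, and you should check the exact resource type string against the documentation:

aws route53-recovery-readiness create-resource-set                 \
    --resource-set-name load_balancers                             \
    --resource-set-type AWS::ElasticLoadBalancingV2::LoadBalancer  \
    --resources ResourceArn=arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/east/abc \
                ResourceArn=arn:aws:elasticloadbalancing:us-west-2:111122223333:loadbalancer/app/west/def

aws route53-recovery-readiness create-readiness-check  \
    --readiness-check-name load_balancers              \
    --resource-set-name load_balancers

# check the readiness status at any time
aws route53-recovery-readiness get-readiness-check-status \
    --readiness-check-name load_balancers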

Recovery Groups in Ready mode

When all readiness checks in the group are green, the group has a status of Ready.

Now, I can configure and test the routing controls.

Terminology
Before I configure routing controls, let me share some basic terminology.

A cluster is a set of five redundant Regional endpoints against which you can execute API calls to update or get the state of routing controls. You can host multiple control panels and routing controls on one cluster.

A routing control is a simple on/off switch, hosted on a cluster, that you use to control routing of client traffic in and out of cells. When you create a routing control, you add a health check in Route 53 so that you can reroute traffic when you update the routing control in Route 53 Application Recovery Controller. The health checks must be associated with DNS failover records that front each application replica if you want to use them to route traffic with routing controls.

A control panel groups together a set of related routing controls.

Configure Routing Controls
I can use the Route 53 console or API actions to create a routing control for each AWS Region for my application. After I create routing controls, I create an Amazon Route 53 Application Recovery Controller health check for each one, and then associate each health check with a DNS failover record for my load balancers in each Region. Then, to fail over traffic between Regions, I change the routing control state for one routing control to off and another routing control state to on.

The first step is to create a cluster. A cluster is charged $2.50 per hour. If you create a cluster to experiment with Route 53 Application Recovery Controller, be sure to delete it when you’re done.

Create Cluster

In the left navigation pane, I navigate to the cluster panel and then I choose Create.

Create Cluster - enter name

I enter a name for my cluster and then choose Create cluster.

The cluster is in Pending state for a few minutes. After a while, its status changes to Deployed.

After it’s deployed, I select the cluster name to discover the five redundant API endpoints. You must specify one of those endpoints when you build recovery tools to retrieve or set routing control states. You can use any of the cluster endpoints, but in complex or automated scenarios, we recommend that your systems be prepared to retry with each of the available endpoints, using a different endpoint with each retry request.

Routing Control Cluster Endpoints
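
For example, here is a small shell sketch that queries the state of a routing control and falls back to the next endpoint when one does not respond; the ARN and endpoint URLs are placeholders:

ROUTING_CONTROL_ARN=arn:aws:route53-recovery-control::111122223333:controlpanel/xxx/routingcontrol/abcd

for ENDPOINT_AND_REGION in \
    "https://host-aaa.us-west-2.example.cluster.routing-control.amazonaws.com/v1 us-west-2" \
    "https://host-bbb.us-east-1.example.cluster.routing-control.amazonaws.com/v1 us-east-1"
do
    set -- $ENDPOINT_AND_REGION   # $1 = endpoint URL, $2 = its AWS Region
    if aws route53-recovery-cluster get-routing-control-state \
           --routing-control-arn $ROUTING_CONTROL_ARN         \
           --endpoint-url $1                                  \
           --region $2; then
        break   # stop at the first endpoint that answers
    fi
done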

Traffic routing is managed through routing controls that are grouped in a control panel. You can create one or use the default one that is created for you.

Default Control Panel

I choose DefaultControlPanel.

Default Control Panel - Add routing control

I choose Add routing control.

Create Routing Control

I enter a name for my routing control (FailToWEST) and then choose Create routing control. I repeat the operation for the second routing control (FailToEAST).

Control Panel - Create Health Check

After the routing control is created, I choose it from the list. On the detail page, I choose Create health check to create a health check in Route 53.

Control Panel - Create Health Check

I enter a name for the health check and then choose Create. I navigate to the Route 53 console to verify the health check was correctly created.

I create one health check for each routing control.

You might have noticed that the Control Panel provides a place where you can add Safety Rules. When you work with several routing controls at the same time, you might want some safeguards in place when you enable and disable them. These help you to avoid initiating a failover when a replica is not ready, or unintended consequences like turning both routing controls off and stopping all traffic flow. To create these safeguards, you create safety rules. For more information about safety rules, including usage examples, see the Route 53 Application Recovery Controller developer guide.

Now that the routing controls and the DNS health checks are in place, the last step is to route traffic to my application.

Adjust My DNS Settings
To route traffic to my application, I assign a DNS alias to the top-level entry point of the application in each cell. For this example, using the Route 53 console, I create two alias A records of type FAILOVER and associate each health check with its DNS record. The two records have the same record name. One is the primary record and the other is the secondary record. For more information about Amazon Route 53 health checks, see the Amazon Route 53 developer guide.

DNS Alias Record Primary DNS Alias Record Secondary
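
For reference, here is roughly what the primary failover record could look like when created with the CLI instead of the console; the hosted zone IDs, health check ID, and load balancer DNS name are placeholders, and the secondary record is identical except for Failover=SECONDARY and the us-west-2 load balancer:

aws route53 change-resource-record-sets --hosted-zone-id ZONEID --change-batch '{
  "Changes": [{
    "Action": "CREATE",
    "ResourceRecordSet": {
      "Name": "tictactoe.seb.go-aws.com",
      "Type": "A",
      "SetIdentifier": "east",
      "Failover": "PRIMARY",
      "HealthCheckId": "11111111-2222-3333-4444-555555555555",
      "AliasTarget": {
        "HostedZoneId": "LOAD-BALANCER-HOSTED-ZONE-ID",
        "DNSName": "my-alb-east.us-east-1.elb.amazonaws.com",
        "EvaluateTargetHealth": true
      }
    }
  }]
}'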

On the application recovery routing controls page, I enable one of the two routing controls.

Application recovery Control - enable one control state

As soon as I do, all the traffic pointed to tictactoe.seb.go-aws.com goes to the infrastructure deployed on us-east-1.

Testing My Setup
To test my setup, I first use the dig command in a terminal. It shows that the DNS name resolves to the load balancer deployed in us-east-1.

testing alias for us-east-1

I also test the application with a web browser. I observe the name tictactoe.seb.go-aws.com goes to us-east-1.

Tic Tac Toe application

Now, using the update-routing-control-state API action, the CLI, or the console, I turn off the routing control to the us-east-1 Region and turn on the one to the us-west-2 Region. When I use the CLI, I use the endpoints provided by my cluster.

aws route53-recovery-cluster update-routing-control-state \
     --routing-control-arn arn:aws:route53-recovery-control::012345678:controlpanel/xxx/routingcontrol/abcd \
     --routing-control-state On \
     --region us-west-2 \
     --endpoint-url https://host-xxx.us-west-2.cluster.routing-control.amazonaws.com/v1

In the console, I navigate to the control panel, I select the routing control I want to change and click Change routing control states.

Changing routing control states

After less than a minute, the DNS address is updated. My application traffic is now routed to the us-west-2 Region.

DNS checked after a routing control state change

Readiness checks and routing controls provide a controlled failover for my application traffic, redirecting traffic from my active replica to my standby one, in another AWS Region. I can change the traffic routing manually, as I showed in the demo, or I can automate it using Amazon CloudWatch alarms based on technical and business metrics for my application.

Pricing
This new capability is charged on demand. There are no upfront costs. You are charged per readiness check and per cluster, per hour. Readiness checks are charged $0.045 per hour. Clusters are charged $2.50 per hour. In the demo example used for this blog post, there are three readiness checks and one cluster. The price per hour for this setup, excluding the application itself, is 3 x $0.045 + 1 x $2.50 = $2.635 per hour. For more details about the pricing, including an example, see the Route 53 pricing page.

This new capability is a global service that can be used to monitor and control application recovery for applications running in any of the public commercial AWS Regions. Give it a try and let us know what you think. As always, you can send feedback through your usual AWS Support contacts or post it on the AWS forum for Route 53 Application Recovery Controller.

— seb

AWS Contact Center Day – July 2021

Post Syndicated from Sébastien Stormacq original https://aws.amazon.com/blogs/aws/aws-contact-center-day-july-2021/

AWS Contact Center Days

Earlier this week, I ordered a box of four toothpaste tubes from Amazon.fr, but only one was in the box. I called Amazon’s customer center. The agent immediately found my order without me having to share the long order number. She issued a refund and told me I could even keep the one tube I received; no return was needed. As a customer, I can’t ask for better customer service.

Amazon strives to be the earth’s most customer-centric company, not only because it is the right thing to do for customers, but because over the long term, it’s good for the business. According to a Salesforce study, 80% of customers believe the experience a company provides is as important as its product or services. Over 90% of customers believe a positive customer service experience makes them more likely to make another purchase.

Just like me, you might have been delighted by Amazon customer service already. We know that you want fast, convenient support and it’s what makes you loyal.

This is why we created the AWS Contact Center Day conference: a free, on-demand video conference where you can learn from industry experts how to create the contact center of the future.

Amazon’s Contact Centers
Amazon’s contact centers are critical to our customer-centric mission. Our contact centers have more than 100,000 customer-service associates in 32 countries who support millions of customers in dozens of languages. Given this scale and our strict requirements for security, resiliency, flexibility, agility, and automation, we couldn’t purchase an off-the-shelf solution. We decided to build our own.

Everything that Amazon learned from our customer service organization, while looking to the future, has helped us bring to market Amazon Connect, an easy-to-use omnichannel cloud contact center that helps businesses provide superior customer service at a lower cost. As the notion of the contact center has evolved, so have the expectations of customers. The contact center of the future isn’t a collection of disparate point solutions for taking a call or a chat, and it isn’t just an application that consolidates those technologies. It’s a platform that makes it easy to integrate with your enterprise applications or system of record. The contact center of the future makes it easy to use customer data in real time to personalize and contextualize all customer experiences.

Contact Centers Best Practices
To further support businesses looking to improve their contact centers, Amazon designed our first contact center-focused event, AWS Contact Center Day, as a way to share best practices, customer experience insights, and contact center technology, and a place to learn how to use Amazon Connect to accelerate the modernization of your contact centers.

The one-day conference took place on July 13, 2021, and brought together some of the most influential people invested in the future of contact centers, including Becky Ploeger, Global Head of Hilton Reservations and Customer Care and member of the Customer Contact Week advisory board; Matt Dixon, Chief Research and Innovation Officer at Tethr and author of multiple bestsellers, including The Challenger Sale; customer service expert Shep Hyken, author of I’ll Be Back: How to Get Customers to Come Back Again and Again; Brian Solis, Global Innovation Evangelist at Salesforce and best-selling author; and Mark Honeycutt, Director of Consumer Operations at Amazon.

At Amazon, we have gone through years of trial and error to get to where our customer experience stands today. This is why we wanted to share our experiences with you so that you can learn from our progress:

  • Customers want super-human service. You can now automate routine customer experience and agent tasks. When I call my airline for a rebooking after a delayed flight, I expect to be greeted by name. I expect the system to know my flight was delayed and to offer rebooking suggestions. This can happen automatically today, without involving a customer agent. These automatic chatbot systems are personalized per customer. They are dynamic because they answer customer questions before they ask, and they are natural because they are based on voice interactions, like conversations between humans.
  • Customers expect personalized engagement. Amazon Connect allows for fast and secure interactions with real-time voice biometric authentication. There is no need to go through a lengthy authentication questionnaire anymore. After the customer is authenticated, the customer service agent has a 360-degree view of the customer’s profile, integrating and displaying data from across the enterprise and using machine learning to provide the right information at the right moment.
  • Contact centers must evolve quickly to answer changing needs. Contact center interactions must take action based on real-time data or customer sentiment. Leaders want to experiment, learn, and improve using customer analytics and data.

Learn more
If you’re interested in learning more about contact center excellence, the entire Contact Center Day conference is now available on demand.

Check out the full agenda and watch a session now or learn more about Amazon Connect.

I am looking forward to my next delightful customer experience using your contact centers.

— seb

Easily Manage Security Group Rules with the New Security Group Rule ID

Post Syndicated from Sébastien Stormacq original https://aws.amazon.com/blogs/aws/easily-manage-security-group-rules-with-the-new-security-group-rule-id/

At AWS, we tirelessly innovate to allow you to focus on your business, not its underlying IT infrastructure. Sometimes we launch a new service or a major capability. Sometimes we focus on details that make your professional life easier.

Today, I’m happy to announce one of these small details that makes a difference: VPC security group rule IDs.

A security group acts as a virtual firewall for your cloud resources, such as an Amazon Elastic Compute Cloud (Amazon EC2) instance or an Amazon Relational Database Service (RDS) database. It controls ingress and egress network traffic. Security groups are made up of security group rules: a combination of protocol, source or destination IP address, port number, and an optional description.

When you use the AWS Command Line Interface (CLI) or API to modify a security group rule, you must specify all these elements to identify the rule. This produces long CLI commands that are cumbersome to type or read and error-prone. For example:

aws ec2 revoke-security-group-egress \
         --group-id sg-0xxx6          \
         --ip-permissions 'IpProtocol=tcp,FromPort=22,ToPort=22,IpRanges=[{CidrIp=192.168.0.0/16},{CidrIp=84.156.0.0/16}]'

What’s New?
A security group rule ID is a unique identifier for a security group rule. When you add a rule to a security group, these identifiers are created and added to the security group rules automatically. Security group rule IDs are unique within an AWS Region. Here is the Edit inbound rules page of the Amazon VPC console:

Security Group Rules Ids

As mentioned already, when you create a rule, the identifier is added automatically. For example, when I’m using the CLI:

aws ec2 authorize-security-group-egress                                                        \
        --group-id sg-0xxx6                                                                    \
        --ip-permissions IpProtocol=tcp,FromPort=22,ToPort=22,IpRanges='[{CidrIp=1.2.3.4/32}]' \
        --tag-specifications 'ResourceType=security-group-rule,Tags=[{Key=usage,Value=bastion}]'

The updated AuthorizeSecurityGroupEgress API action now returns details about the security group rule, including the security group rule ID:

"SecurityGroupRules": [
    {
        "SecurityGroupRuleId": "sgr-abcdefghi01234561",
        "GroupId": "sg-0xxx6",
        "GroupOwnerId": "6800000000003",
        "IsEgress": false,
        "IpProtocol": "tcp",
        "FromPort": 22,
        "ToPort": 22,
        "CidrIpv4": "1.2.3.4/32",
        "Tags": [
            {
                "Key": "usage",
                "Value": "bastion"
            }
        ]
    }
]

We’re also adding two API actions: DescribeSecurityGroupRules and ModifySecurityGroupRules to the VPC APIs. You can use these to list or modify security group rules respectively.

What Are the Benefits?
The first benefit of a security group rule ID is simplifying your CLI commands. For example, the RevokeSecurityGroupEgress command used earlier can now be expressed as:

aws ec2 revoke-security-group-egress \
         --group-id sg-0xxx6         \
         --security-group-rule-ids "sgr-abcdefghi01234561"

Shorter and easier, isn’t it?

The second benefit is that security group rules can now be tagged, just like many other AWS resources. You can use tags to quickly list or identify a set of security group rules, across multiple security groups.

In the previous example, I used the tag-on-create technique to add tags with --tag-specifications at the time I created the security group rule. I can also add tags at a later stage, on an existing security group rule, using its ID:

aws ec2 create-tags                         \
        --resources sgr-abcdefghi01234561   \
        --tags "Key=usage,Value=bastion"

Let’s say my company authorizes access to a set of EC2 instances, but only when the network connection is initiated from an on-premises bastion host. The security group rule would be IpProtocol=tcp, FromPort=22, ToPort=22, IpRanges='[{CidrIp=1.2.3.4/32}]' where 1.2.3.4 is the IP address of the on-premises bastion host. This rule can be replicated in many security groups.

What if the on-premises bastion host IP address changes? I need to change the IpRanges parameter in all the affected rules. Because I tagged the security group rules with usage : bastion, I can use the DescribeSecurityGroupRules API action to list the security group rules used in my AWS account’s security groups and filter the results on the usage : bastion tag. This lets me quickly identify the security group rules I want to update.

aws ec2 describe-security-group-rules                \
        --max-results 100                            \
        --filters "Name=tag:usage,Values=bastion"

This gives me the following output:

{
    "SecurityGroupRules": [
        {
            "SecurityGroupRuleId": "sgr-abcdefghi01234561",
            "GroupId": "sg-0xxx6",
            "GroupOwnerId": "40000000003",
            "IsEgress": false,
            "IpProtocol": "tcp",
            "FromPort": 22,
            "ToPort": 22,
            "CidrIpv4": "1.2.3.4/32",
            "Tags": [
                {
                    "Key": "usage",
                    "Value": "bastion"
                }
            ]
        }
    ],
    "NextToken": "ey...J9"
}

As usual, you can manage results pagination by issuing the same API call again passing the value of NextToken with --next-token.
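
Once I have the rule IDs, I can update the rules in place with the new modify-security-group-rules command. Here is a sketch of what that could look like, assuming the bastion host’s new IP address is 5.6.7.8 (a placeholder):

aws ec2 modify-security-group-rules            \
        --group-id sg-0xxx6                    \
        --security-group-rules 'SecurityGroupRuleId=sgr-abcdefghi01234561,SecurityGroupRule={IpProtocol=tcp,FromPort=22,ToPort=22,CidrIpv4=5.6.7.8/32,Description=bastion}'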

Availability
Security group rule IDs are available for VPC security group rules, in all commercial AWS Regions, at no cost.

It might look like a small, incremental change, but this actually creates the foundation for future additional capabilities to manage security groups and security group rules. Stay tuned!

Introducing AWS Systems Manager Change Manager

Post Syndicated from Sébastien Stormacq original https://aws.amazon.com/blogs/aws/introducing-systems-manager-change-manager/

Because you are constantly listening to feedback from your customers, you are iterating, innovating, and improving your applications and infrastructures. You continually modify your IT systems in the cloud. And let’s face it, changing something in a working system risks breaking things or introducing side effects that are sometimes unpredictable; it doesn’t matter how many tests you do. On the other hand, not making changes is stasis, followed by irrelevance, followed by death.

This is why organizations of all sizes and types have embraced a culture of controlling changes. Some organizations adopt change management processes such as the ones defined in ITIL v4. Some have adopted DevOps’ Continuous Deployment, or other methods. In any case, to support your change management processes, it is important to have tools.

Today, we are launching AWS Systems Manager Change Manager, a new change management capability for AWS Systems Manager. It simplifies the way ops engineers track, approve, and implement operational changes to their application configurations and infrastructures.

Using Change Manager has two primary advantages. First, it can improve the safety of changes made to application configurations and infrastructures, reducing the risk of service disruptions. It makes operational changes safer by tracking that only approved changes are being implemented. Second, it is tightly integrated with other AWS services, such as AWS Organizations and AWS Single Sign-On, as well as with the Systems Manager change calendar and Amazon CloudWatch alarms.

Change Manager provides accountability with a consistent way to report and audit changes made across your organization, their intent, and who approved and implemented them.

Change Manager works across AWS Regions and multiple AWS accounts. It works closely with Organizations and AWS SSO to manage changes from a central point and to deploy them in a controlled way across your global infrastructure.

Terminology
You can use AWS Systems Manager Change Manager on a single AWS account, but most of the time, you will use it in a multi-account configuration.

The way you manage changes across multiple AWS accounts depends on how these accounts are linked together. Change Manager uses the relationships between your accounts defined in AWS Organizations. When using Change Manager, there are three types of accounts:

  • The management account – Also known as the “main account” or “root account,” the management account is the account at the root of the AWS Organizations hierarchy.
  • The delegated administrator account – A delegated administrator account is an account that has been granted permission to manage other accounts in Organizations. In the Change Manager context, this is the account from which change requests are initiated. You will typically log in to this account to manage templates and change requests. Using a delegated administrator account allows you to limit connections made to the root account. It also allows you to enforce a least-privilege policy by using a specific subset of the permissions required by the changes.
  • The member accounts – Member accounts are accounts that are not the management account or a delegated administrator account, but are still included in Organizations. In my mental model for Change Manager, these would be the accounts that hold the resources where changes are deployed. A delegated administrator account would initiate a change request that would impact resources in a member account. System administrators are discouraged from logging directly into these accounts.

Let’s see how you can use AWS Systems Manager Change Manager by taking a short walk-through demo.

One-Time Configuration
In this scenario, I show you how to use Change Manager with multiple AWS accounts linked together with Organizations. If you are not interested in the one-time configuration, jump to the Create a Change Request section below.

There are four one-time configuration actions to take before using Change Manager: one action in the root account and three in the delegated administrator account. In the root account, I use Quick Setup to define my delegated administrator account and initially configure permissions on the accounts. In the delegated administrator account, you define your source of user identities, you define what users have permissions to approve change templates, and you define a change request template.

First, I ensure I have an Organization in place and my AWS accounts are organized in Organizational Units (OU). For the purpose of this simple example, I have three accounts: the root account, the delegated administrator account in the management OU and a member account in the managed OU. When ready, I use Quick Setup on the root account to configure my accounts. There are multiple paths leading to Quick Setup; for this demo, I use the blue banner on top of the Quick Setup console, and I click Setup Change Manager.

Change Manager Quick Setup

 

On the Quick Setup page, I enter the ID of the delegated administrator account if I haven’t defined it already. Then I choose the permissions boundary I grant to the delegated administrator account to perform changes on my behalf. This is the maximum set of permissions Change Manager receives to make changes. I will further restrict this permission set when I create change requests in a few minutes. In this example, I grant Change Manager permissions to call any ec2 API. This effectively authorizes Change Manager to only run changes related to EC2 instances.

Change Manager Quick Setup

Lower on the screen, I choose the set of accounts that are targets for my changes. I choose between Entire organization or Custom to select one or multiple OUs.

Change Manager Quick Setup 2

After a while, Quick Setup finishes configuring my AWS accounts permission and I can move to the second part of the one-time setup.

Change Manager Quick Setup 3

Second, I switch to my delegated administrator account. Change Manager asks me how I manage users in my organization: with AWS Identity and Access Management (IAM) or AWS Single Sign-On? This defines where Change Manager pulls user identities when I choose approvers. This is a one-time configuration option. This can be changed at any time in the Change Manager Settings page.

Change Manager Settings

Third, on the same page, I define an Amazon Simple Notification Service (SNS) topic to receive notifications about template reviews. This channel is notified any time a template is created or modified, to let template approvers review and approve templates. I also define the IAM (or SSO) user with permission to approve change templates (more about these in one minute).

Change Manager Template Reviewers

Optionally, you can use the existing AWS Systems Manager Change Calendar to define the periods where changes are not authorized, such as marketing events or holiday sales.

Finally, I define a change template. Every change request is created from a template. Templates define common parameters for all change requests based on them, such as the change request approvers, the actions to perform, or the SNS topic to send progress notifications to. You can enforce the review and approval of templates before they can be used. It makes sense to create multiple templates to handle different types of changes. For example, you can create one template for standard changes and one for emergency changes that override the change calendar. Or you can create different templates for different types of automation runbooks (documents).

To help you to get started, we created a template for you: the “Hello World” template. You can use it as a starting point to create a change request and test out your approval flow.

At any time, I can create my own template. Let’s imagine my system administrator team is frequently restarting EC2 instances. I create a template allowing them to create change requests to restart one or multiple instances. Using the delegated administrator account, I navigate to the Change Manager management console and click Create template.

Change Manager Create Template

In a nutshell, a template defines the list of authorized actions, where to send notifications, and who can approve the change request. Actions are AWS Systems Manager runbooks. Emergency change templates allow change requests to bypass the change calendar I wrote about earlier. Under Runbook Options, I choose one or multiple runbooks that are allowed to run. For this example, I choose the AWS-RestartEC2Instance runbook.

I use the console to create the template, but templates are defined internally as YAML. I can edit the YAML using the Editor tab, or when I am using the AWS Command Line Interface (CLI) or API. This means I can version control them just like the rest of my infrastructure (as code).

Change Manager Create Template part 1

Just below, I document my template using text formatted as Markdown. I use this section to document the defining characteristics of the template and to provide any necessary instructions, such as back-out procedures, to the requestor.

Change Manager Template Documentation

I scroll down that page and click Add Approver to define approvers. Approvers can be individual users or groups. The list of approvers is defined either at the template level or in the change request itself. I also choose to create an SNS topic to inform approvers when any requests are created that require their approval.

In the Monitoring section I select the alarm that, when active, stops any change based on this template, and initiate a rollback.

In the Notifications section, I select or create another SNS topic so I’m notified when status changes for this template occur.

Change Manager Create Template part 2

Once I am done, I save the template and submit it for review.

Change Manager Submit Template for Review

Templates have to be reviewed and approved before they can be used. To approve the template, I sign in to the console as the template_approver user I defined earlier. As the template_approver user, I see pending approvals on the Overview tab. Alternatively, I navigate to the Templates tab and select the template I want to review. When I am done reviewing it, I click Approve.

Change Manager Approve Template

Voila, now we’re ready to create change requests based on this template. Remember that all the preceding steps are one-time configurations and can be amended at any time. When existing templates are modified, the changes go through a review and approval process again.

Create a Change Request
To create a change request on any account linked to the Organization, I open the AWS Systems Manager Change Manager console from the delegated administrator account and click Create request.

Change Manager Create Request

I choose the template I want to use and click Next.

Change Manager Select Template

I enter a name for this change request. The change is initiated immediately after all approvals are granted, or at an optional scheduled time that I specify. When the template allows it, I choose the approver for this change. In this example, the approver is defined by the template and cannot be changed. I click Next.

Change Manager Create CR part 1

On the next screen, there are multiple important configuration options, relating to the actual execution of the change:

  • Target location – lets me define on which target AWS accounts and AWS Region I want to run this change.
  • Deployment target – lets me define which resources are the target of this change. One EC2 instance? Or multiple ones identified by their tags, their resource groups, a list of instance IDs, or all EC2 instances.
  • Runbook parameters – lets me define the parameters I want to pass to my runbook, if any.
  • Execution role – lets me define the set of permissions I grant Systems Manager to deploy this change. The role’s trust policy must have changemanagement.ssm.amazonaws.com as the service principal. Selecting a role allows me to grant the Change Manager runtime a different permission set than the one I have.

Here is an example allowing Change Manager to stop an EC2 instance (you can scope it down to a specific AWS account, specific Region, or specific instances):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ec2:StartInstances",
                "ec2:StopInstances"
            ],
            "Resource": "*",
        },
        {
            "Effect": "Allow",
            "Action": "ec2:DescribeInstances",
            "Resource": "*"
        }
    ]
}

And the associated trust policy:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "changemanagement.ssm.aws.internal"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}

When I am ready, I click Next. On the last page, I review my data entry and click Submit for approval.

At this stage, the approver receives a notification, based on the SNS topic configured in the template. To continue this demo, I sign out of the console and sign in again as the cr_approver user, which I created, with permission to view and approve change requests.

As the cr_approver user, I navigate to the console, review the change request, and click Approve.

Change Manager Review Change Request

The change request status switches to Scheduled, and eventually turns green to show Success. At any time, I can click the change request to get the status and to collect errors, if any.

Change Manager Dashboard with Succeeded Request

I click on the change request to see the details. In particular, the Timeline tab shows the history of this CR.

Change Management CR Timeline
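
If you prefer to create change requests programmatically, the same flow can be started from the CLI or API with start-change-request-execution. Here is a minimal sketch; the template name and instance ID are placeholders, and I assume the change template wraps the AWS-RestartEC2Instance runbook:

aws ssm start-change-request-execution                \
    --change-request-name "Restart-web-server"        \
    --document-name "MyRestartTemplate"               \
    --runbooks '[{"DocumentName":"AWS-RestartEC2Instance","Parameters":{"InstanceId":["i-0123456789abcdef0"]}}]'

The request then goes through the same approval workflow and change calendar checks defined by the template.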

Availability and Pricing
AWS Systems Manager Change Manager is available today in all commercial AWS Regions, except mainland China. The pricing is based on two dimensions: the number of change requests you submit and the total number of API calls made. The number of change requests you submit will be the main cost factor. We will charge $0.29 per change request. Check the pricing page for more details.

You can evaluate Change Manager for free for 30 days, starting on your first change request.

As usual, let us know what you think, and let’s get started today.

— seb

Amazon Connect – Now Smarter and More Integrated With Third-Party Tools

Post Syndicated from Sébastien Stormacq original https://aws.amazon.com/blogs/aws/amazon-connect-smarter-and-more-integrated/

We launched Amazon Connect in 2017 and, since then, thousands of customers have created their own contact centers in the cloud. Amazon Connect makes it easy for non-technical customers to design interaction flows, manage agents, and track performance metrics.

For example, when I book a Best Western hotel room in Europe by phone, the call is managed by Amazon Connect. In the UK, the Post Office went from ideation to production rollout in just three weeks. In France, WebHelp, a global leader in Business Process Outsourcing, activated thousands of workstations and remote agents in just 72 hours.

Since I last blogged about Amazon Connect, the team has been continuously listening for your feedback and, today, I am happy to announce a new set of capabilities to make Amazon Connect smarter and more integrated with third-party tools.

We are using Machine Learning (ML) to make Amazon Connect smarter at analyzing conversations in real time, finding relevant information needed by contact center agents, and authenticating customers by the sound of their voice. The second set of capabilities makes Amazon Connect easier to integrate with third-party tools or services, to present unified customer profile information to contact center agents, and to help agents manage their tasks.

Let’s go into the details, one by one.

Contact Lens Real Time
Contact Lens for Amazon Connect is a set of machine learning (ML) capabilities allowing contact center supervisors to better understand the sentiment, trends, and compliance of customer conversations. It was first announced during re:Invent 2019 and has been available since July 2020. It allows you to effectively train agents, replicate successful interactions, and identify crucial company and product feedback.

Starting today, you can get real-time insights into the customer experience during live calls, such as a customer expressing dissatisfaction. Customer experience analytics and alerts for live calls are delivered in Amazon Connect’s real-time metrics dashboard. This makes it easy for supervisors to identify when to listen in on a critical call, provide guidance to the agent via chat, or have the agent transfer the call to them for assistance.

You, as the contact center manager, can define rules using specific terms such as “not happy,” “poor quality product,” and “cancel my subscription.” Contact Lens uses natural language processing (NLP) to perform intelligent matching to automatically detect variations of the spoken words even when the example phrases are limited.

Create Rules for real time analytics

Contact Lens analyzes in-progress calls in real time to detect when the rule criteria for a customer experience issue are met, and immediately creates an alert next to the live call in the Amazon Connect dashboard to notify supervisors of the situation.

Real Time alert based on rules

With this launch, we are adding 13 language variants to post-call analytics, in addition to the 5 already supported: English (United States), English (Great Britain), English (Australia), English (India), and Spanish (United States).

The new language variants for post-call analytics are: English (Ireland), English (Scotland), English (Wales), Spanish (Spain), French (Canada), French (France), Portuguese (Portugal), Portuguese (Brazil), German (Germany), German (Switzerland), Italian (Italy), Arabic (Gulf), and Hindi (India).

Contact Lens for Amazon Connect real-time is available in 4 language variants: English (United States), English (Great Britain), English (Australia), and Spanish (United States). More language variants will be added at a later stage.

For more details visit this launch page.

Amazon Connect Wisdom (Preview)
Wisdom provides built-in agent assistance capabilities in Amazon Connect, including machine learning (ML) powered search and real-time recommendations, to quickly enable agents with relevant information for resolving customer issues.

As an agent, I can type questions or phrases in the Wisdom search box without guessing which keyword to use. Wisdom understands what information I am searching for. It surfaces results in the agent’s preferred Amazon Connect application, whether that is the web-based one we provide or the ones you built.

Amazon Connect Wisdom search results

Wisdom comes with pre-built connectors to third-party knowledge repositories to provide the most relevant results to agents. Wisdom includes connectors for Salesforce and ServiceNow during the preview, with more to come at launch.

Wisdom may use Contact Lens real-time analytics to analyze the conversation in real time. It detects the customer issue, finds related content in the connected repositories, and provides proactive recommendations to help the agent resolve it. For example, Wisdom can detect that a customer is talking about a problem with the handbag they bought last week, recommend an article that describes similar product defects, and provide instructions with a link to the order management application needed to initiate an exchange.

Wisdom is available in preview. You can sign up today or visit the launch page.

Amazon Connect Voice ID (Preview)
Amazon Connect Voice ID provides real-time caller authentication which makes voice interactions in contact centers more secure and efficient.

To effectively recognize me as “Sébastien”, Voice ID must learn how I talk. This is the enrollment phase. Then it compares the sound of my voice with the one enrolled earlier. This is the verification phase.

To comply with personal data protection laws, contact center agents capture my consent to use Voice ID.

During the enrollment phase, Voice ID listens to the call until it has captured 30 seconds of my voice. Then it creates my voiceprint, which uniquely authenticates me. A voiceprint is a mathematical representation that captures unique aspects of an individual’s voice, such as speech rhythm, pitch, intonation, and loudness. I do not need to say or repeat any specific phrases to let Voice ID create my voiceprint. Voice ID provides an API that can be used to opt out a customer.

When I call back in, Voice ID just needs 10 seconds of my voice to authenticate me. My voice can be captured as part of a typical interaction with the Interactive Voice Response (IVR) happening at the start of the call, or when I first start to talk with the agent. For example when I am answering questions, such as “what’s your first and last name?” and “what are you calling about?”, Voice ID uses this audio to generate my voiceprint again. It compares it with the one enrolled earlier. Voice ID then generates an authentication score depending on the confidence of the match. Contact center managers can use this score to create policies in Amazon Connect to let agents see a real-time result (“authenticated” or “not authenticated”) in their web-based application. Agents can then decide to proceed with the call or ask for additional authentication credentials.

Amazon Connect Voice ID is available in preview. You can sign up today or visit this launch page.

Amazon Connect Customer Profiles
Customer Profiles is a unified profile for Amazon Connect that brings together customer information from disparate sources without having to build integrations or wrangle data.

Providing agents (or automated IVR systems) with accurate and unified customer profile information at the right moment helps them deliver better service to customers and resolve calls faster. Using Customer Profiles, agents no longer need to navigate out of Amazon Connect or switch between different applications to get the customer insights they need.

With just a few clicks, system administrators can integrate customer profile data from applications like Salesforce, ServiceNow, Zendesk, and Marketo, without having to build a homegrown integration. Setting up connectors for Customer Profiles requires no programming or data integration expertise.

Once enabled, Customer Profiles automatically detects customer records from the applications. It matches and deduplicates them. This results in accurate and up-to-date profiles displayed to agents within their Connect web-based experience.

Amazon Connect Customer Profile

Learn more about Amazon Connect Customer Profile by visiting the launch page.

Amazon Connect Tasks
Amazon Connect Tasks makes it easy to automate, track, and manage contact center agent tasks. It provides a single place for contact center managers to prioritize, assign, and track customer service tasks across the disparate applications used by agents, so that they are focused on the highest priority work of any type.

Tasks can be sourced from third-party applications, such as a CRM solution, or from a business-specific system. For example, you can programmatically create tasks for agents to follow up on a customer case in a third-party application like Salesforce, or to complete an action item in a business-specific application, such as processing a claim in an insurance system. You can also automate tasks that don’t require agent interaction, to ensure your agents spend more time focused on customers.

Using Amazon Connect tasks, agents no longer need to switch between applications to know what work should be completed, and with what priority. Agents can see all their assigned tasks right from the Amazon Connect contact control panel, the same web-based application they use to interact with customers over calls and chat. When a task is assigned, the agent receives a notification with the description of the task, and when required, links to any external applications needed to complete the action. Agents can also create tasks so that follow-up work is not forgotten, for example calling a customer back to provide a status update.

Amazon Connect Accept Tasks | Amazon Connect View Task | Create a task using Amazon Connect Tasks
Incoming Tasks | Task Details | Create a new Task

Amazon Connect Tasks provides pre-built connectors for Salesforce and Zendesk. With just a few clicks, you can set up rules to automatically create tasks based on pre-defined conditions, as shown in the following screenshot. It also provides an API to create tasks from any other application, as illustrated after the screenshot.

Amazon Connect Task Rules
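
For example, here is a sketch of creating a task from the CLI with the StartTaskContact API; the instance ID, contact flow ID, and reference URL are placeholders:

aws connect start-task-contact                                        \
    --instance-id 12345678-1234-1234-1234-123456789012                \
    --contact-flow-id 87654321-4321-4321-4321-210987654321            \
    --name "Follow up on case #1234"                                  \
    --description "Call the customer back with a status update"       \
    --references '{"Case":{"Value":"https://example.com/cases/1234","Type":"URL"}}'

The new task then shows up in the agent’s contact control panel like any other assigned task.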

Learn more about how to configure and to get started with Tasks by visiting the launch page.

Available Today
Three of these new capabilities are available today: Contact Lens Real Time, Customer Profiles, and Tasks. You must register for the preview program to test Wisdom and Voice ID.

Customer Profiles and Tasks are available in all AWS Regions where Amazon Connect is available: US East (N. Virginia), US East (Ohio), US West (N. California), US West (Oregon), Asia Pacific (Sydney), Asia Pacific (Singapore), Asia Pacific (Tokyo), Europe (Frankfurt), and Europe (London). Contact Lens Real Time is available in US West (Oregon), US East (N. Virginia), and Asia Pacific (Sydney) at the moment. Wisdom is available in US East (N. Virginia) and US West (Oregon) during the preview, while Voice ID is available only in US West (Oregon) during the preview.

With Amazon Connect, you only pay for what you use. There are no required up-front payments, long-term commitments, or minimum monthly fees. The price metrics for these new capabilities are detailed on the Amazon Connect pricing page.

Should you need help adding any of these Amazon Connect capabilities to your contact flows, please reach out to one of the dozens of Amazon Connect partners available worldwide.

— seb

New – Attributes Based Access Control with AWS Single Sign On

Post Syndicated from Sébastien Stormacq original https://aws.amazon.com/blogs/aws/new-attributes-based-access-control-with-aws-single-sign-on/

Starting today, you can pass user attributes in the AWS session when your workforce signs in to the cloud using AWS Single Sign-On. This gives you the centralized account access management of AWS Single Sign-On together with attribute-based access control (ABAC), and the flexibility to use AWS SSO, Active Directory, or an external identity provider as your identity source. To learn more about the advantages of ABAC policies on AWS, you may read my previous blog post on the subject.

Overview
On one side, system administrators configure user attributes in the AWS Single Sign-On identity store or in the managed Active Directory. System administrators may also configure an external identity provider, such as Okta, OneLogin, or PingFederate, to pass existing user attributes in the AWS sessions when their workforce federates into AWS. These attributes are known as session tags in AWS. On the other side, cloud administrators create fine-grained permissions policies so that your workforce gets access only to cloud resources with matching resource tags.

Creating policies based on matching attributes instead of functional roles helps reduce the number of distinct permissions and roles you must create and manage in your AWS environment. For example, when developers Bob from team red and Alice from team blue sign in to AWS and assume the same AWS Identity and Access Management (IAM) role, they get distinct permissions to the project resources tagged for their team. The identity system sends the team name attribute in the AWS session when Bob and Alice sign in to AWS. The role’s permissions grant access to project resources with matching team name tags. Now, if Bob moves to team blue and system administrators update his team name in their identity provider directory, Bob automatically gets access to team blue’s project resources without requiring any permissions updates in IAM.

How to Configure AWS SSO to Map User Attributes
Before configuring AWS SSO, there are two important points to highlight. First, ABAC works with attributes from any identity source configured in AWS SSO: AWS SSO itself, a managed Active Directory, or an external identity provider. Second, there are two ways to pass attributes for access control to AWS SSO. You can either pass attributes directly in the SAML assertion using the prefix https://aws.amazon.com/SAML/Attributes/AccessControl, or use attributes that are in the AWS SSO identity store. Those attributes are configured by your AWS SSO administrator for users created in AWS SSO, synchronized in from an Active Directory, or synchronized in from an external identity provider using automatic provisioning (SCIM).

For this demo, I choose to use an external identity provider and SCIM.

I can enable ABAC in AWS using AWS SSO with three steps:

Step 1: I configure my identity source with the associated user identities and attributes in the external identity provider. As of today, AWS SSO supports identity synchronization via SCIM with Azure AD, Okta, OneLogin, and PingFederate. Check this page to get an up-to-date list. The specifics depend on each identity provider.

Step 2: I configure the SCIM attributes I want to use for access control using the new Access Control Attributes global setting in the AWS SSO console or API. This screen allows me to select attributes for access control from the identity source I configured in step 1.

Attributes for Access Control

Step 3: I author ABAC rules through permission sets and resource-based policies using the attributes I configured in Step 2. More about this in a minute.

Now, when my workforce federates into an AWS account using SSO, they get access to their AWS resources based on matching attributes.

Attributes are passed as session tags. They are passed as comma-separated key:value pairs. The total character length of all the attributes together must be less than or equal to 460 characters.
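The Access Control Attributes setting from Step 2 can also be configured through the API instead of the console. Here is a minimal sketch using boto3; the instance ARN is a placeholder, and the mapping assumes a Department attribute synchronized from the identity source via SCIM.

# Minimal sketch: enable ABAC attributes for access control via the AWS SSO API.
# The instance ARN is a placeholder; replace it with your own SSO instance ARN.
import boto3

sso_admin = boto3.client("sso-admin")

sso_admin.create_instance_access_control_attribute_configuration(
    InstanceArn="arn:aws:sso:::instance/ssoins-1111111111111111",
    InstanceAccessControlAttributeConfiguration={
        "AccessControlAttributes": [
            {
                "Key": "Department",  # becomes the aws:PrincipalTag/Department session tag
                "Value": {"Source": ["${path:enterprise.department}"]},
            }
        ]
    },
)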

What Does a Policy Look Like?
I now can use user attributes in my permission sets using the aws:PrincipalTag condition key when creating access control rules. For example, I can tag all the resources in my organization with their respective department name, and use a single permission set that grants developers access only to their department resources. Now, whenever developers federate into the AWS account, AWS SSO creates a department session tag with the value received from the identity provider. The security policies allow them to only get access to the resources in their respective department. As the team adds more developers and resources to their project, I only have to tag resources with the correct department name. As a result, as the organization adds new resources and developers to departments, developers can only manage resources aligned to their department without needing any permission updates.

An ABAC SSO permission set policy might look like this:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [ "ec2:DescribeInstances"],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": ["ec2:StartInstances","ec2:StopInstances"],
            "Resource": "*",
            "Condition": {
                "StringEquals": {
                    "ec2:ResourceTag/Department": "${aws:PrincipalTag/Department}"
                }
            }
        }
    ]
}

This policy allows anybody to call DescribeInstances, but only users whose aws:PrincipalTag/Department tag value matches the EC2 instance’s ec2:ResourceTag/Department tag value are authorized to stop or start instances.

I attach this policy to an AWS account’s permission set. On the left side of the AWS Single Sign-On console, I click AWS Accounts and select the Permission sets tab. Then I click Create permission set. On the next screen, I select Create a custom permission set.

Create a custom permission set

I enter a name and description, and I make sure Create a custom permissions policy is selected. Then I can copy/paste the previous policy, which allows users to start and stop EC2 instances only when the instance’s Department tag value matches their own Department tag value.

Create Custom Policy for Permission Set

On the next screen, I enter some tags, then I review my configuration before clicking Create. Et voila, I am ready to go.

If you have existing federation configured with AWS Security Token Service, remember that external identity providers consider AWS SSO as a new application configuration. This means when you move from direct IAM federation to AWS SSO, you have to update your external identity provider configuration to connect with AWS SSO and to introduce attributes as session tags for this configuration.

Available Today
There is no additional charge to configure user attributes with AWS Single Sign-On. You can start to use it today in all AWS Regions where AWS SSO is available.

— seb

New – Multi-Factor Authentication with WebAuthn for AWS SSO

Post Syndicated from Sébastien Stormacq original https://aws.amazon.com/blogs/aws/multi-factor-authentication-with-webauthn-for-aws-sso/

Starting today, you can add WebAuthn as a new multi-factor authentication (MFA) option for AWS Single Sign-On, in addition to the currently supported one-time password (OTP) and RADIUS authenticators. By adding support for WebAuthn, a W3C specification developed in coordination with the FIDO Alliance, you can now authenticate with a wide variety of interoperable authenticators provisioned by your system administrator or built into your laptops or smartphones. For example, you can now tap a hardware security key, touch a fingerprint sensor on your Mac, or use facial recognition on your mobile device or PC to authenticate into the AWS Management Console or AWS Command Line Interface (CLI).

With this addition, you can now self-register multiple MFA authenticators. Doing so allows you to authenticate on AWS with another device in case you lose or misplace your primary authenticator device. We make it easy for you to name your devices for long-term manageability.

WebAuthn two-factor authentication is available for identities stored in the AWS Single Sign-On internal identity store and those stored in Microsoft Active Directory, whether it is managed by AWS or not.

What are WebAuthn and FIDO2?

Before exploring how to configure two-factor authentication using your FIDO2-enabled devices, and to discover the user experience for web-based and CLI authentications, let’s recap how FIDO2, WebAuthn and other specifications fit together.

FIDO2 is made of two core specifications: Web Authentication (WebAuthn) and Client To Authenticator Protocol (CTAP).

Web Authentication (WebAuthn) is a W3C standard that provides strong authentication based upon public key cryptography. Unlike traditional code generator tokens or apps using the TOTP protocol, it does not require sharing a secret between the server and the client. Instead, it relies on a public key pair and digital signatures of unique challenges. The private key never leaves a secured device, the FIDO-enabled authenticator. When you try to authenticate to a website, this secured device interacts with your browser using the CTAP protocol.

WebAuthn is strong: Authentication is ideally backed by a secure element, which can safely store private keys and perform the cryptographic operations. It is scoped: A key pair is only useful for a specific origin, like browser cookies. A key pair registered at console.amazonaws.com cannot be used at console.not-the-real-amazon.com, mitigating the threat of phishing. Finally, it is attested: Authenticators can provide a certificate that helps servers verify that the public key did in fact come from an authenticator they trust, and not a fraudulent source.

To start using FIDO2 authentication, you therefore need three elements: a website that supports WebAuthn, a browser that supports the WebAuthn and CTAP protocols, and a FIDO authenticator. Starting today, the AWS SSO Management Console and CLI support WebAuthn. All modern web browsers are compatible (Chrome, Edge, Firefox, and Safari). FIDO authenticators are either devices you can move from one machine to another (roaming authenticators), such as a YubiKey, or hardware built into your devices and supported by Android, iOS, iPadOS, Windows, Chrome OS, and macOS (platform authenticators).

How Does FIDO2 Work?
When I first register my FIDO-enabled authenticator on AWS SSO, the authenticator creates a new set of public key credentials that can be used to sign a challenge generated by the AWS SSO console (the relying party). The public part of these new credentials, along with the signed challenge, is stored by AWS SSO.

When I want to use WebAuthn as a second authentication factor, the AWS SSO console sends a challenge to my authenticator. The authenticator signs this challenge with the private key created during registration and sends it back to the console. This way, the AWS SSO console can verify that I hold the required credentials.
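To make the challenge-and-signature exchange concrete, here is a conceptual sketch using the Python cryptography library. It is not the actual WebAuthn wire protocol (which also involves CBOR encoding, origin checks, and attestation), just the core public key idea: the private key never leaves the authenticator, and the server only needs the public key to verify.

# Conceptual sketch of the register/sign/verify flow, not the real WebAuthn protocol.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.exceptions import InvalidSignature

# Registration: the authenticator creates a key pair and shares only the public key.
private_key = ec.generate_private_key(ec.SECP256R1())  # stays on the device
public_key = private_key.public_key()                  # stored by the server (AWS SSO here)

# Authentication: the server sends a random challenge...
challenge = os.urandom(32)

# ...the authenticator signs it with the private key...
signature = private_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

# ...and the server verifies the signature with the stored public key.
try:
    public_key.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
    print("Signature valid: second factor accepted")
except InvalidSignature:
    print("Signature invalid: access denied")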

How Do I Enable MFA With a Secure Device in the AWS SSO Console?
You, the system administrator, can enable MFA for your AWS SSO workforce when the user profiles are stored in AWS SSO itself, or in Active Directory, either self-managed or an AWS Directory Service for Microsoft Active Directory.

To let my workforce register their FIDO or U2F authenticator in self-service mode, I first navigate to Settings, click Configure under Multi-Factor Authentication. On the following screen, I make four changes. First, under Users should be prompted for MFA, I select Every time they sign in. Second, under Users can authenticate with these MFA types, I check Security Keys and built-in authenticators. Third, under If a user does not yet have a registered MFA device, I check Require them to register an MFA device at sign in. Finally, under Who can manage MFA devices, I check Users can add and manage their own MFA devices. I click on Save Changes to save and return.

Configure SSO 2

That’s it. Now your workforce is prompted to register their MFA device the next time they authenticate.

What Is the User Experience?
As an AWS console user, I authenticate on the AWS SSO portal page URL that I received from my System Administrator. I sign in using my user name and password, as usual. On the next screen, I am prompted to register my authenticator. I check Security Key as device type. To use a biometric factor such as fingerprints or face recognition, I would click Built-in authenticator.

Register MFA Device

The browser asks me to generate a key pair and to send my public key. I can do that just by touching a button on my device, or by providing the registered biometric, e.g., TouchID or FaceID.

Register a security key

The browser confirms and shows me a last screen where I have the possibility to give a friendly name to my device, so I can remember which one is which. Then I click Save and Done.

Confirm device registration

From now on, every time I sign in, I am prompted to touch my security device or use biometric authentication on my smartphone or laptop. What happens behind the scenes is that the server sends a challenge to my browser. The browser sends the challenge to the security device. The security device uses my private key to sign the challenge and returns it to the server for verification. When the server validates the signature with my public key, I am granted access to the AWS Management Console.

Additional verification required

At any time, I can register additional devices and manage my registered devices. On the AWS SSO portal page, I click MFA devices on the top-right part of the screen.

MFA device management

I can see and manage the devices registered for my account, if any. I click Register device to register a new device.

How to Configure SSO for the AWS CLI?
Once my devices are configured, I can configure SSO on the AWS Command Line Interface (CLI).

I first configure CLI SSO with aws configure sso, and I enter the SSO domain URL that I received from my system administrator. The CLI opens a browser where I can authenticate with my user name, password, and the second-factor authentication configured previously. The web console gives me a code that I enter back into the CLI prompt.

aws configure sso

When I have access to multiple AWS Accounts, the CLI lists them and I choose the one I want to use. This is a one-time configuration.

Once this is done, I can use the aws CLI as usual; the SSO authentication happens automatically behind the scenes. You are asked to re-authenticate from time to time, depending on the configuration set by your system administrator.
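The same SSO-based credentials also work from code. Here is a minimal sketch with boto3, assuming the (hypothetical) profile name my-sso-profile was created by aws configure sso:

# Minimal sketch: use the profile created by `aws configure sso` from Python.
# "my-sso-profile" is a placeholder for whatever profile name you chose.
import boto3

session = boto3.Session(profile_name="my-sso-profile")
s3 = session.client("s3")

# Credentials come from the SSO token cache; if the token has expired,
# run `aws sso login --profile my-sso-profile` first.
for bucket in s3.list_buckets()["Buckets"]:
    print(bucket["Name"])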

Available today
Just like AWS Single Sign-On, FIDO2 second-factor authentication is provided to you at no additional cost, and is available in all AWS Regions where AWS SSO is available.

As usual, we welcome your feedback. The team told me they are working on other features to offer you additional authentication options in the near future.

You can start to use FIDO2 as second factor authentication for AWS Single Sign-On today. Configure it now.

— seb

Lightsail Containers: An Easy Way to Run your Containers in the Cloud

Post Syndicated from Sébastien Stormacq original https://aws.amazon.com/blogs/aws/lightsail-containers-an-easy-way-to-run-your-containers-in-the-cloud/

When I deliver an introduction to the AWS Cloud for developers, I usually spend a bit of time mentioning and demonstrating Amazon Lightsail. It is by far the easiest way to get started on AWS. It allows you to get your application running on your own virtual server in a matter of minutes. Today, we are adding the possibility to deploy your container-based workloads on Amazon Lightsail. You can now deploy your container images to the cloud with the same simplicity and the same bundled pricing Amazon Lightsail provides for your virtual servers.

Amazon Lightsail is an easy-to-use cloud service that offers everything you need to deploy an application or website, for a cost-effective and easy-to-understand monthly plan. It is ideal for deploying simple workloads or websites, or for getting started with AWS. Typical Lightsail customers range from developers to small businesses and startups who are looking to get started quickly in the cloud. You can later adopt the broader set of AWS services as you become more familiar with the AWS Cloud.

Under the hood, Lightsail is powered by Amazon Elastic Compute Cloud (EC2), Amazon Relational Database Service (RDS), Application Load Balancer, and other AWS services. It offers the level of security, reliability, and scalability you are expecting from AWS.

When deploying to Lightsail, you can choose between six operating systems (four Linux distributions, FreeBSD, or Windows), seven applications (such as WordPress, Drupal, Joomla, Plesk…), and seven stacks (such as Node.js, LAMP, GitLab, Django…). But what about Docker containers?

Starting today, Amazon Lightsail offers a simple way for developers to deploy their containers to the cloud. All you need to provide is a Docker image for your containers, and Lightsail takes care of running it in the cloud for you. Amazon Lightsail gives you an HTTPS endpoint that is ready to serve the application running in your container. It automatically sets up a load-balanced TLS endpoint and takes care of the TLS certificate. It replaces unresponsive containers for you automatically, assigns a DNS name to your endpoint, maintains the old version until the new version is healthy and ready to go live, and more.

Let’s see how it works by deploying a simple Python web app as a container. I assume you have the AWS Command Line Interface (CLI) and Docker installed on your laptop. Python is not required on your laptop; it is only installed inside the container.

I first create a Python REST API using Flask, a simple application framework. Any programming language and any framework that can run inside a container would work too. I just chose Python and Flask because they are simple and elegant.

You can safely copy/paste the following commands (the Flask application is written to helloworld.py with a quoted heredoc so the quotes inside the Python code are preserved):

mkdir helloworld-python
cd helloworld-python
# create a simple Flask application in helloworld.py
cat > helloworld.py <<'EOF'
from flask import Flask
from flask_restful import Resource, Api

app = Flask(__name__)
api = Api(app)

class Greeting(Resource):
    def get(self):
        return {"message": "Hello Flask API World!"}

api.add_resource(Greeting, '/')  # Route_1

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=8080)
EOF

Then I create a Dockerfile that contains the steps and information required to build the container image:

# create a Dockerfile
echo '
FROM python:3
ADD helloworld.py /
RUN pip install flask
RUN pip install flask_restful
EXPOSE 8080
CMD [ "python", "./helloworld.py"]
 '  > Dockerfile

Now I can build my container:

docker build -t lightsail-hello-world .

The build command outputs many lines while it builds the container, and eventually terminates with the following message (the actual ID differs):

Successfully built 7848e055edff
Successfully tagged lightsail-hello-world:latest

I test the container by launching it on my laptop:

docker run -it --rm -p 8080:8080 lightsail-hello-world

and connect a browser to localhost:8080

Testing Flask API in the container

When I am satisfied with my app, I push the container to Docker Hub.

docker tag lightsail-hello-world sebsto/lightsail-hello-world
docker login
docker push sebsto/lightsail-hello-world

Now that I have a container ready on Docker Hub, let’s create a Lightsail Container Service.

I point my browser to the Amazon Lightsail console. I can see container services already deployed, and I can manage them. To create a new service, I click Create container service:

Lightsail Container Console

On the next screen, I select the size of the container I want to use, in terms of vCPU and memory available to my application. I also select the number of container instances I want to run in parallel for high availability or scalability reasons. I can change the number of container instances or their power (vCPU and RAM) at any time, without interrupting the service. Both these parameters impact the price AWS charges you per month. The price is indicated and dynamically adjusted on the screen, as shown on the following video.

Lightsail choose capacity

Slightly lower on the screen, I choose to skip the deployment for now. I give a name for the service (“hello-world”). I click Create container service.

Lightsail container name

Once the service is created, I click Create your first deployment to create a deployment. A deployment is a combination of a specific container image and version to be deployed on the service I just created.

I choose a name for my image and give the address of the image on Docker Hub, using the format user/<my container name>:tag. This is also where I have the possibility to enter environment variables, port mappings, or a launch command.

My container offers a network service on TCP port 8080, so I add that port to the deployment configuration. The Open Ports configuration specifies which ports and protocols are open to other systems in my container’s network. Other containers or virtual machines can only connect to my container when the port is explicitly configured in the console or exposed with EXPOSE in my Dockerfile. None of these ports are exposed to the public internet.

But in this example, I also want Lightsail to route the traffic from the public internet to this container. So, I add this container as an endpoint of the hello-world service I just created. The endpoint is automatically configured for TLS, there is no certificate to install or manage.

I can add up to 10 containers for one single deployment. When ready, I click Save and deploy.

Lightsail Deployment

After a while, my deployment is active and I can test the endpoint.

Lightsail Deployment Active

The endpoint DNS address is available on the top-right side of the console. If I must, I can configure my own DNS domain name.

Lightsail endpoint DNS

I open another tab in my browser and point it at the HTTPS endpoint URL:

Testing Container Deployment

When I want to deploy a new version, I use the console again to modify the deployment. I will spare you the details of modifying the application code, building, and pushing a new version of the container. Let’s say I have my second container image version available under the name sebsto/lightsail-hello-world:v2. Back in the Amazon Lightsail console, I click Deployments, then Modify your Deployments. I enter the full name, including the tag, of the new version of the container image and click Save and Deploy.

Lightsail Deploy updated Version

After a while, the new version is deployed and automatically activated.

Lightsail deployment successful

I open a new tab in my browser and point it to the endpoint URI available on the top-right corner of the Amazon Lightsail console. I observe that the JSON response is different: it now has a version attribute with a value of 2.

lightsail v2 is deployed
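For reference, the only change in my application is the response returned by the handler. A sketch of the updated helloworld.py might look like this (the version number is simply whatever you put in the build you tag as v2):

# Sketch of the updated handler in helloworld.py for the v2 image.
from flask import Flask
from flask_restful import Resource, Api

app = Flask(__name__)
api = Api(app)

class Greeting(Resource):
    def get(self):
        # The new "version" attribute is the only difference from v1.
        return {"message": "Hello Flask API World!", "version": 2}

api.add_resource(Greeting, '/')

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=8080)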

When something goes wrong during my deployment, Amazon Lightsail automatically keeps the last deployment active, to avoid any service interruption. I can also manually activate a previous deployment version to reverse any undesired changes.
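To see which deployment versions exist before activating a previous one, I can list them programmatically. Here is a small sketch with boto3; the service name matches the hello-world service created above.

# Sketch: list the deployment history of the "hello-world" container service.
import boto3

lightsail = boto3.client("lightsail")

deployments = lightsail.get_container_service_deployments(serviceName="hello-world")
for d in deployments["deployments"]:
    # Each entry has a version number and a state such as ACTIVE or INACTIVE.
    print(d["version"], d["state"], d["createdAt"])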

I just deployed my first container image from Docker Hub. I can also manage my services and deploy local container images from my laptop using the AWS Command Line Interface (CLI). To push container images to my Amazon Lightsail container service directly from my laptop, I must install the Lightsail Control (lightsailctl) plugin. (TL;DR: curl, cp, and chmod are your friends here; I also maintain a Dockerfile to use the CLI inside a container.)

To create, list, or delete a container service, I type:

aws lightsail create-container-service --service-name myservice --power nano --scale 1

aws lightsail get-container-services
{
   "containerServices": [{
      "containerServiceName": "myservice",
      "arn": "arn:aws:lightsail:us-west-2:012345678901:ContainerService/1b50c121-eac7-4ee2-9078-425b0665b3d7",
      "createdAt": "2020-07-31T09:36:48.226999998Z",
      "location": {
         "availabilityZone": "all",
         "regionName": "us-west-2"
      },
      "resourceType": "ContainerService",
      "power": "nano",
      "powerId": "",
      "state": "READY",
      "scale": 1,
      "privateDomainName": "",
      "isDisabled": false,
      "roleArn": ""
   }]
}

aws lightsail delete-container-service --service-name myservice

I can also use the CLI to deploy container images directly from my laptop. Be sure lightsailctl is installed.

# Build the new version of my image (v3)
docker build -t sebsto/lightsail-hello-world:v3 .

# Push the new image.
aws lightsail push-container-image --service-name hello-world --label hello-world --image sebsto/lightsail-hello-world:v3

After a while, I see the output:

Image "sebsto/lightsail-hello-world:v3" registered.
Refer to this image as ":hello-world.hello-world.1" in deployments.

I create an lc.json file to hold the details of the deployment configuration. It is aligned with the options I see in the console. I use the name given by the previous command as the image property:

{
  "serviceName": "hello-world",
  "containers": {
     "hello-world": {
        "image": ":hello-world.hello-world.1",
        "ports": {
           "8080": "HTTP"
        }
     }
  },
  "publicEndpoint": {
     "containerName": "hello-world",
     "containerPort": 8080
  }
}

Finally, I create a new service version with:
aws lightsail create-container-service-deployment --cli-input-json file://lc.json

I can query the deployment status with
aws lightsail get-container-services

...
"nextDeployment": {
   "version": 4,
   "state": "ACTIVATING",
   "containers": {
      "hello-world": {
      "image": ":hello-world.hello-world.1",
      "command": [],
      "environment": {},
      "ports": {
         "8080": "HTTP"
      }
     }
},
...

After a while, the status becomes ACTIVE, and I can test my endpoint.

curl https://hello-world.nxxxxxxxxxxx.lightsail.ec2.aws.dev/
{"message": "Hello Flask API World!", "version": 3}

If you plan to later deploy your container to Amazon ECS or Amazon Elastic Kubernetes Service, no changes are required. You can pull the container image from your repository, just like you do with Amazon Lightsail.

You can deploy your containers on Lightsail in all AWS Regions where Amazon Lightsail is available. As of today, this is US East (Ohio), US East (N. Virginia), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), Europe (Ireland), Europe (London), and Europe (Paris).

As usual with Amazon Lightsail, pricing is easy to understand and predictable. Amazon Lightsail Containers have a fixed price per month per container, depending on the size of the container (the vCPU/memory combination you use). You are charged for the prorated hours you keep the service running. The price per month is the maximum price you will be charged for running your service 24/7. The prices are identical in all AWS Regions. They range from $7 per month for a Nano container (512MB of memory and 0.25 vCPU) to $160 per month for an X-Large container (8GB of memory and 4 vCPU cores). This price includes not only the container itself, but also the load balancer, the DNS, and a generous data transfer tier. The details and prices for other AWS Regions are on the Lightsail pricing page.

I can’t wait to discover what solutions you will build and deploy on Amazon Lightsail Containers!

— seb

Majority of Alexa Now Running on Faster, More Cost-Effective Amazon EC2 Inf1 Instances

Post Syndicated from Sébastien Stormacq original https://aws.amazon.com/blogs/aws/majority-of-alexa-now-running-on-faster-more-cost-effective-amazon-ec2-inf1-instances/

Today, we are announcing that the Amazon Alexa team has migrated the vast majority of their GPU-based machine learning inference workloads to Amazon Elastic Compute Cloud (EC2) Inf1 instances, powered by AWS Inferentia. This resulted in 25% lower end-to-end latency, and 30% lower cost compared to GPU-based instances for Alexa’s text-to-speech workloads. The lower latency allows Alexa engineers to innovate with more complex algorithms and to improve the overall Alexa experience for our customers.

AWS built AWS Inferentia chips from the ground up to provide the lowest-cost machine learning (ML) inference in the cloud. They power the Inf1 instances that we launched at AWS re:Invent 2019. Inf1 instances provide up to 30% higher throughput and up to 45% lower cost per inference compared to GPU-based G4 instances, which were, before Inf1, the lowest-cost instances in the cloud for ML inference.

Alexa is Amazon’s cloud-based voice service that powers Amazon Echo devices and more than 140,000 models of smart speakers, lights, plugs, smart TVs, and cameras. Today, customers have connected more than 100 million devices to Alexa. And every month, tens of millions of customers interact with Alexa to control their home devices (“Alexa, increase temperature in living room,” “Alexa, turn off bedroom”), to listen to radio and music (“Alexa, start Maxi 80 on bathroom,” “Alexa, play Van Halen from Spotify”), to be informed (“Alexa, what is the news?” “Alexa, is it going to rain today?”), or to be educated or entertained with 100,000+ Alexa Skills.

If you ask Alexa where she lives, she’ll tell you she is right here, but her head is in the cloud. Indeed, Alexa’s brain is deployed on AWS, where she benefits from the same agility, large-scale infrastructure, and global network we built for our customers.

How Alexa Works
When I’m in my living room and ask Alexa about the weather, I trigger a complex system. First, the on-device chip detects the wake word (Alexa). Once detected, the microphones record what I’m saying and stream the sound for analysis in the cloud. At a high level, there are two phases to analyze the sound of my voice. First, Alexa converts the sound to text. This is known as Automatic Speech Recognition (ASR). Once the text is known, the second phase is to understand what I mean. This is Natural Language Understanding (NLU). The output of NLU is an intent (what the customer wants) and associated parameters. In this example (“Alexa, what’s the weather today?”), the intent might be “GetWeatherForecast” and the parameter can be my postcode, inferred from my profile.

This whole process uses Artificial Intelligence heavily to transform the sound of my voice to phonemes, phonemes to words, words to phrases, phrases to intents. Based on the NLU output, Alexa routes the intent to a service to fulfill it. The service might be internal to Alexa or external, like one of the skills activated on my Alexa account. The fulfillment service processes the intent and returns a response as a JSON document. The document contains the text of the response Alexa must say.
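As an illustration, a fulfillment response is shaped roughly like an Alexa Skills Kit response document. The sketch below is simplified and not Alexa’s internal API; the handler name and the postcode value are illustrative.

# Simplified sketch of a fulfillment handler response, shaped like an
# Alexa Skills Kit response document (not Alexa's internal APIs).
def handle_get_weather_forecast(postcode: str) -> dict:
    # In a real skill, the forecast would be looked up for the given postcode.
    speech = ("The weather today will be partly cloudy with highs of 16 degrees "
              "and lows of 8 degrees.")
    return {
        "version": "1.0",
        "response": {
            # This text is what the Text-To-Speech step turns into Alexa's voice.
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": True,
        },
    }

print(handle_get_weather_forecast("1000")["response"]["outputSpeech"]["text"])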

The last step of the process is to generate the voice of Alexa from the text. This is known as Text-To-Speech (TTS). As soon as the TTS starts to produce sound data, it is streamed back to my Amazon Echo device: “The weather today will be partly cloudy with highs of 16 degrees and lows of 8 degrees.” (I live in Europe, these are Celsius degrees 🙂 ). This Text-To-Speech process also heavily involves machine learning models to build a phrase that sounds natural in terms of pronunciations, rhythm, connection between words, intonation etc.

Alexa is one of the most popular hyperscale machine learning services in the world, with billions of inference requests every week. Of Alexa’s three main inference workloads (ASR, NLU, and TTS), TTS workloads initially ran on GPU-based instances. But the Alexa team decided to move to the Inf1 instances as fast as possible to improve the customer experience and reduce the service compute cost.

What is AWS Inferentia?
AWS Inferentia is a custom chip, built by AWS, to accelerate machine learning inference workloads and optimize their cost. Each AWS Inferentia chip contains four NeuronCores. Each NeuronCore implements a high-performance systolic array matrix multiply engine, which massively speeds up typical deep learning operations such as convolution and transformers. NeuronCores are also equipped with a large on-chip cache, which helps cut down on external memory accesses, dramatically reducing latency and increasing throughput.

AWS Inferentia can be used natively from popular machine learning frameworks like TensorFlow, PyTorch, and MXNet, with AWS Neuron. AWS Neuron is a software development kit (SDK) for running machine learning inference using AWS Inferentia chips. It consists of a compiler, runtime, and profiling tools that enable you to run high-performance and low-latency inference.
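To give an idea of the workflow, here is a sketch of compiling a PyTorch model for Inferentia with the Neuron SDK. The model and input shape are placeholders, and the exact API may differ between Neuron SDK versions.

# Sketch: compile a PyTorch model for AWS Inferentia with torch-neuron.
# The model and example input are placeholders; exact APIs depend on the Neuron SDK version.
import torch
import torch_neuron  # registers the torch.neuron namespace
import torchvision.models as models

model = models.resnet50(pretrained=True)
model.eval()

example_input = torch.rand(1, 3, 224, 224)

# Ahead-of-time compilation targeting the NeuronCores; unsupported operators fall back to CPU.
model_neuron = torch.neuron.trace(model, example_inputs=[example_input])
model_neuron.save("resnet50_neuron.pt")

# On an Inf1 instance, the compiled model is loaded and used like any TorchScript model.
loaded = torch.jit.load("resnet50_neuron.pt")
prediction = loaded(example_input)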

Who Else is Using Amazon EC2 Inf1?
In addition to Alexa, Amazon Rekognition is also adopting AWS Inferentia. Running models such as object classification on Inf1 instances resulted in 8x lower latency and doubled throughput compared to running these models on GPU instances.

Customers, from Fortune 500 companies to startups, are using Inf1 instances for machine learning inference. For example, Snap Inc. incorporates machine learning (ML) into many aspects of Snapchat, and exploring innovation in this field is a key priority for them. Once they heard about AWS Inferentia, they collaborated with AWS to adopt Inf1 instances to help with ML deployment, including around performance and cost. They started with their recommendation models inference, and are now looking forward to deploying more models on Inf1 instances in the future.

Conde Nast, one of the world’s leading media companies, saw a 72% reduction in cost of inference compared to GPU-based instances for its recommendation engine. And Anthem, one of the leading healthcare companies in the US, observed 2x higher throughput compared to GPU-based instances for its customer sentiment machine learning workload.

How to Get Started with Amazon EC2 Inf1
You can start using Inf1 instances today.

If you prefer to manage your own machine learning application development platforms, you can get started by either launching Inf1 instances with AWS Deep Learning AMIs, which include the Neuron SDK, or you can use Inf1 instances via Amazon Elastic Kubernetes Service or Amazon ECS for containerized machine learning applications. To learn more about running containers on Inf1 instances, read this blog to get started on ECS and this blog to get started on EKS.

The easiest and quickest way to get started with Inf1 instances is via Amazon SageMaker, a fully managed service that enables developers to build, train, and deploy machine learning models quickly.
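As a rough sketch of what that looks like with the SageMaker Python SDK: the S3 model location, IAM role, entry point, and framework version below are placeholders, and hosting on Inf1 generally assumes the model has been compiled for Inferentia (for example with SageMaker Neo or the Neuron SDK) beforehand.

# Rough sketch: host a model on an Inf1-based endpoint with the SageMaker Python SDK.
# The S3 model location, role ARN, entry point, and framework version are placeholders.
import sagemaker
from sagemaker.pytorch import PyTorchModel

model = PyTorchModel(
    model_data="s3://my-bucket/model/model.tar.gz",        # placeholder
    role="arn:aws:iam::123456789012:role/SageMakerRole",   # placeholder
    entry_point="inference.py",                            # your inference handler
    framework_version="1.5.1",
    py_version="py3",
)

# ml.inf1.* instance types are the AWS Inferentia-powered hosting instances.
predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.inf1.xlarge",
)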

Get started with Inf1 on Amazon SageMaker today.

— seb

PS: The team just released this video, check it out!