Tag Archives: launch

Simplified Time-Series Analysis with Amazon CloudWatch Contributor Insights

Post Syndicated from Steve Roberts original https://aws.amazon.com/blogs/aws/simplified-time-series-analysis-with-amazon-cloudwatch-contributor-insights/

Inspecting multiple log groups and log streams can make it difficult and time consuming to analyze and diagnose the impact of an issue in real time. Which customers are affected? How badly? Are some affected more than others, or are they outliers? Perhaps you deployed an update using a staged rollout strategy and now want to know whether any customers have hit issues, or whether everything is behaving as expected for the target customers, before continuing further. All of the data points that help answer these questions are potentially buried in a mass of logs, which engineers must query for ad-hoc measurements or surface through custom dashboards that they build and maintain.

Amazon CloudWatch Contributor Insights, generally available today, simplifies analysis of the top-N contributors to time-series data in CloudWatch Logs, helping you more quickly understand who or what is impacting system and application performance, in real time and at scale. This saves you time during an operational problem by showing what is contributing to the operational issue and who or what is most affected. Amazon CloudWatch Contributor Insights can also help with ongoing analysis for system and business optimization by surfacing outliers, performance bottlenecks, top customers, or most heavily utilized resources, all at a glance. In addition to logs, Amazon CloudWatch Contributor Insights can also be used with other products in the CloudWatch portfolio, including Metrics and Alarms.

Amazon CloudWatch Contributor Insights can analyze structured logs in either JSON or Common Log Format (CLF). Log data can be sourced from Amazon Elastic Compute Cloud (EC2) instances, AWS CloudTrail, Amazon Route 53, Apache Access and Error Logs, Amazon Virtual Private Cloud (VPC) Flow Logs, AWS Lambda Logs, and Amazon API Gateway Logs. You also have the choice of using structured logs published directly to CloudWatch, or using the CloudWatch Agent. Amazon CloudWatch Contributor Insights evaluates these log events in real time and displays reports that show the top contributors and the number of unique contributors in a dataset. A contributor is an aggregate metric based on dimensions contained as log fields in CloudWatch Logs, for example account-id or interface-id in Amazon Virtual Private Cloud Flow Logs, or any other custom set of dimensions. You can sort and filter contributor data based on your own custom criteria. Report data from Amazon CloudWatch Contributor Insights can be displayed on CloudWatch dashboards, graphed alongside CloudWatch metrics, and added to CloudWatch alarms. For example, you can graph values from two Amazon CloudWatch Contributor Insights reports into a single metric describing the percentage of customers impacted by faults, and configure alarms to alert when this percentage breaches pre-defined thresholds.

Getting Started with Amazon CloudWatch Contributor Insights
To use Amazon CloudWatch Contributor Insights I simply need to define one or more rules. A rule is a snippet of data that defines what contextual data to extract for metrics reported from CloudWatch Logs. To configure a rule to identify the top contributors for a specific metric, I supply three items of data: the log group (or groups), the dimensions for which the top contributors are evaluated, and filters to narrow down those top contributors. To do this, I head to the Amazon CloudWatch console dashboard and select Contributor Insights from the left-hand navigation links. This takes me to the Amazon CloudWatch Contributor Insights home page, where I can click Create a rule to get started.

To get started quickly, I can select from a library of sample rules for various services that send logs to CloudWatch Logs. You can see above that there are currently a variety of sample rules for Amazon API Gateway, Amazon Route 53 Query Logs, Amazon Virtual Private Cloud Flow Logs, and logs for container services. Alternatively, I can define my own rules, as I’ll do in the rest of this post.

Let’s say I have a deployed application that is publishing structured log data in JSON format directly to CloudWatch Logs. This application has two API versions: one that has been used for some time and is considered stable, and a second that I have just started to roll out to my customers. I want to know as early as possible if anyone who has received the new version, targeting the new API, is hitting any faults, and how many faults are being triggered. My stable API version is sending log data to one log group and my new version is using a different group, so I need to monitor multiple log groups (since I also want to know if anyone is experiencing any error, regardless of version).

The JSON to define my rule, to report on 500 errors coming from any of my APIs, and to use account ID, HTTP method, and resource path as dimensions, is shown below.

{
  "Schema": {
    "Name": "CloudWatchLogRule",
    "Version": 1
  },
  "AggregateOn": "Count",
  "Contribution": {
    "Filters": [
      {
        "Match": "$.status",
        "EqualTo": 500
      }
    ],
    "Keys": [
      "$.accountId",
      "$.httpMethod",
      "$.resourcePath"
    ]
  },
  "LogFormat": "JSON",
  "LogGroupNames": [
    "MyApplicationLogsV*"
  ]
}

I can set up my rule using either the Wizard tab or the Syntax tab, where I can paste the JSON above into the Rule body field. Even though I have the JSON ready, I’ll use the Wizard tab in this post, and you can see the completed fields below. When selecting log groups I can either select them from the drop-down, if they already exist, or use wildcard syntax in the Select by prefix match option (MyApplicationLogsV*, for example).

Clicking Create saves the new rule and makes it immediately start processing and analyzing data (unless I elect to create it in a disabled state, of course). Note that Amazon CloudWatch Contributor Insights processes only log data created after the rule is active; it does not perform historical inspection, so I need to build rules for operational scenarios that I anticipate happening in the future.
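
If you prefer to manage rules as code instead of (or in addition to) the console, the same rule can be created programmatically with the PutInsightRule API. Here is a minimal sketch using boto3; the rule name is one I made up for this example, and the rule body is the JSON shown earlier.

import json
import boto3

cloudwatch = boto3.client("cloudwatch")

rule_definition = {
    "Schema": {"Name": "CloudWatchLogRule", "Version": 1},
    "AggregateOn": "Count",
    "Contribution": {
        "Filters": [{"Match": "$.status", "EqualTo": 500}],
        "Keys": ["$.accountId", "$.httpMethod", "$.resourcePath"],
    },
    "LogFormat": "JSON",
    "LogGroupNames": ["MyApplicationLogsV*"],
}

# Create (or update) the rule and start processing new log events immediately
cloudwatch.put_insight_rule(
    RuleName="ApiServerErrorsByAccount",  # placeholder name for this example
    RuleState="ENABLED",
    RuleDefinition=json.dumps(rule_definition),
)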

With the rule in place I need to start generating some log data! To do that I’m going to use a script, written using the AWS Tools for PowerShell, to simulate my deployed application being invoked by a set of customers. Of those customers, a select few (let’s call them the unfortunate ones) will be directed to the new API version which will randomly fail on HTTP POST requests. Customers using the old API version will always succeed. The script, which runs for 5000 iterations, is shown below. The cmdlets being used to work with CloudWatch Logs are the ones with CWL in the name, for example Write-CWLLogEvent.

# Set up some random customer ids, and select a third of them to be our unfortunates
# who will experience random errors due to a bad api update being shipped!
$allCustomerIds = @( 1..15 | % { Get-Random })
$faultingIds = $allCustomerIds | Get-Random -Count 5

# Setup some log groups
$group1 = 'MyApplicationLogsV1'
$group2 = 'MyApplicationLogsV2'
$stream = "MyApplicationLogStream"

# When writing to a log stream we need to specify a sequencing token
$group1Sequence = $null
$group2Sequence = $null

$group1, $group2 | % {
    if (!(Get-CWLLogGroup -LogGroupName $_)) {
        New-CWLLogGroup -LogGroupName $_
        New-CWLLogStream -LogGroupName $_ -LogStreamName $stream
    } else {
        # When the log group and stream exist, we need to seed the sequence token to
        # the next expected value
        $logstream = Get-CWLLogStream -LogGroupName $_ -LogStreamName $stream
        $token = $logstream.UploadSequenceToken
        if ($_ -eq $group1) {
            $group1Sequence = $token
        } else {
            $group2Sequence = $token
        }
    }
}

# generate some log data with random failures for the subset of customers
1..5000 | % {

    Write-Host "Log event iteration $_" # just so we know where we are progress-wise

    $customerId = Get-Random -InputObject $allCustomerIds # pick a random customer for this request

    # first select whether the user called the v1 or the v2 api
    $useV2Api = ((Get-Random) % 2 -eq 1)
    if ($useV2Api) {
        $resourcePath = '/api/v2/some/resource/path/'
        $targetLogGroup = $group2
        $nextToken = $group2Sequence
    } else {
        $resourcePath = '/api/v1/some/resource/path/'
        $targetLogGroup = $group1
        $nextToken = $group1Sequence
    }

    # now select whether they failed or not. GET requests for all customers on
    # all api paths succeed. POST requests to the v2 api fail for a subset of
    # customers.
    $status = 200
    $errorMessage = ''
    if ((Get-Random) % 2 -eq 0) {
        $httpMethod = "GET"
    } else {
        $httpMethod = "POST"
        if ($useV2Api -And $faultingIds.Contains($customerId)) {
            $status = 500
            $errorMessage = 'Uh-oh, something went wrong...'
        }
    }

    # Write an event and gather the sequence token for the next event
    $nextToken = Write-CWLLogEvent -LogGroupName $targetLogGroup -LogStreamName $stream -SequenceToken $nextToken -LogEvent @{
        TimeStamp = [DateTime]::UtcNow
        Message = (ConvertTo-Json -Compress -InputObject @{
            requestId = [Guid]::NewGuid().ToString("D")
            httpMethod = $httpMethod
            resourcePath = $resourcePath
            status = $status
            protocol = "HTTP/1.1"
            accountId = $customerId
            errorMessage = $errorMessage
        })
    }

    if ($targetLogGroup -eq $group1) {
        $group1Sequence = $nextToken
    } else {
        $group2Sequence = $nextToken
    }

    Start-Sleep -Milliseconds 250 # -Seconds expects an integer in Windows PowerShell
}

With the script running and my rule enabled, failures start to show up in my graph. Below is a snapshot after several minutes of running the script. I can clearly see that a subset of my simulated customers are having issues with HTTP POST requests to the new v2 API.

From the Actions pull down in the Rules panel, I could now go on to create a single metric from this report, describing the percentage of customers impacted by faults, and then configure an alarm on this metric to alert when this percentage breaches pre-defined thresholds.

For the sample scenario outlined here, I would use the alarm to halt the rollout of the new API if it fired, preventing the impact from spreading to additional customers while the cause of the increased faults is investigated. Details on how to set up metrics and alarms can be found in the user guide.
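
To sketch what that metric-and-alarm combination could look like programmatically, here is a hedged example using boto3 and the INSIGHT_RULE_METRIC metric math function. The fault rule name from earlier is reused; the second rule that counts all requests per account, the alarm name, and the threshold are assumptions I made up for illustration, so check the PutMetricAlarm documentation for the exact requirements of expression-based alarms.

import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_alarm(
    AlarmName="ImpactedCustomerPercentage",  # placeholder alarm name
    ComparisonOperator="GreaterThanThreshold",
    Threshold=10.0,                          # alert if more than 10% of customers see faults
    EvaluationPeriods=1,
    Metrics=[
        {
            "Id": "faulting",
            "Expression": 'INSIGHT_RULE_METRIC("ApiServerErrorsByAccount", "UniqueContributors")',
            "Period": 60,
            "ReturnData": False,
        },
        {
            "Id": "total",
            # Hypothetical second rule that counts every request per account
            "Expression": 'INSIGHT_RULE_METRIC("ApiAllRequestsByAccount", "UniqueContributors")',
            "Period": 60,
            "ReturnData": False,
        },
        {
            "Id": "pct",
            "Expression": "100 * faulting / total",
            "Label": "PercentCustomersImpacted",
            "Period": 60,
            "ReturnData": True,              # the alarm evaluates this expression
        },
    ],
)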

Amazon CloudWatch Contributor Insights is available now in all commercial AWS Regions, as well as the AWS China and AWS GovCloud (US) Regions.

— Steve

AWS Step Functions support in Visual Studio Code

Post Syndicated from Rob Sutter original https://aws.amazon.com/blogs/compute/aws-step-functions-support-in-visual-studio-code/

The AWS Toolkit for Visual Studio Code has been installed over 115,000 times since launching in July 2019. We are excited to announce toolkit support for AWS Step Functions, enabling you to define, visualize, and create your Step Functions workflows without leaving VS Code.

Version 1.8 of the toolkit provides two new commands in the Command Palette to help you define and visualize your workflows. The toolkit also provides code snippets for seven different Amazon States Language (ASL) state types and additional service integrations to speed up workflow development. Automatic linting detects errors in your state machine as you type, and provides tooltips to help you correct the errors. Finally, the toolkit allows you to create or update Step Functions workflows in your AWS account without leaving VS Code.

Defining a new state machine

To define a new Step Functions state machine, first open the VS Code Command Palette by choosing Command Palette from the View menu. Enter Step Functions to filter the available options and choose AWS: Create a new Step Functions state machine.

Screen capture of the Command Palette in Visual Studio Code with the text ">AWS Step Functions" entered

Creating a new Step Functions state machine in VS Code

A dialog box appears with several options to help you get started quickly. Select Hello world to create a basic example using a series of Pass states.

A screen capture of the Visual Studio Code Command Palette "Select a starter template" dialog with "Hello world" selected

Selecting the “Hello world” starter template

VS Code creates a new Amazon States Language file containing a workflow with examples of the Pass, Choice, Fail, Wait, and Parallel states.

A screen capture of a Visual Studio Code window with a "Hello World" example state machine

The “Hello World” example state machine

Pass states allow you to define your workflow before building the implementation of your logic with Task states. This lets you work with business process owners to ensure you have the workflow right before you start writing code. For more information on the other state types, see State Types in the ASL documentation.

Save your new workflow by choosing Save from the File menu. VS Code automatically applies the .asl.json extension.

Visualizing state machines

In addition to helping define workflows, the toolkit also enables you to visualize your workflows without leaving VS Code.

To visualize your new workflow, open the Command Palette and enter Preview state machine to filter the available options. Choose AWS: Preview state machine graph.

A screen capture of the Visual Studio Code Command Palette with the text ">Preview state machine" entered and the option "AWS: Preview state machine graph" highlighted

Previewing the state machine graph in VS Code

The toolkit renders a visualization of your workflow in a new tab to the right of your workflow definition. The visualization updates automatically as the workflow definition changes.

A screen capture of a Visual Studio Code window with two side-by-side tabs, one with a state machine definition and one with a preview graph for the same state machine

A state machine preview graph

Modifying your state machine definition

The toolkit provides code snippets for 12 different ASL states and service integrations. To insert a code snippet, place your cursor within the States object in your workflow and press Ctrl+Space to show the list of available states.

A screen capture of a Visual Studio Code window with a code snippet insertion dialog showing twelve Amazon States Language states

Code snippets are available for twelve ASL states

In this example, insert a newline after the definition of the Pass state, press Ctrl+Space, and choose Map State to insert a code snippet with the required structure for an ASL Map State.

Debugging state machines

The toolkit also includes features to help you debug your Step Functions state machines. Visualization is one feature, as it allows the builder and the product owner to confirm that they have a shared understanding of the relevant process.

Automatic linting is another feature that helps you debug your workflows. For example, when you insert the Map state into your workflow, a number of errors are detected, underlined in red in the editor window, and highlighted in red in the Minimap. The visualization tab also displays an error to inform you that the workflow definition has errors.

A screen capture of a Visual Studio Code window with a tooltip dialog indicating an "Unreachable state" error

A tooltip indicating an “Unreachable state” error

Hovering over an error opens a tooltip with information about the error. In this case, the toolkit is informing you that MapState is unreachable. Correct this error by changing the value of Next in the Pass state above from Hello World Example to MapState. The red underline automatically disappears, indicating the error has been resolved.

To finish reconciling the errors in your workflow, cut all of the following states from Hello World Example? through Hello World and paste into MapState, replacing the existing values of MapState.Iterator.States. The workflow preview updates automatically, indicating that the errors have been resolved. The MapState is indicated by the three dashed lines surrounding most of the workflow.

A Visual Studio Code window displaying two tabs, an updated state machine definition and the automatically-updated preview of the same state machine

Automatically updating the state machine preview after changes

Creating and updating state machines in your AWS account

The toolkit enables you to publish your state machine directly to your AWS account without leaving VS Code. Before publishing a state machine to your account, ensure that you establish credentials for your AWS account for the toolkit.

Creating a state machine in your AWS account

To publish a new state machine to your AWS account, bring up the VS Code Command Palette as before. Enter Publish to filter the available options and choose AWS: Publish state machine to Step Functions.

Screen capture of the Visual Studio Command Palette with the command "AWS: Publish state machine to Step Functions" highlighted

Publishing a state machine to AWS Step Functions

Choose Quick Create from the dialog box to create a new state machine in your AWS account.

Screen Capture from a Visual Studio Code flow to publish a state machine to AWS Step Functions with "Quick Create" highlighted

Publishing a state machine to AWS Step Functions

Select an existing execution role for your state machine to assume. This role must already exist in your AWS account.

For more information on creating execution roles for state machines, please visit Creating IAM Roles for AWS Step Functions.

Screen capture from Visual Studio Code showing a selection execution role dialog with "HelloWorld_IAM_Role" selected

Selecting an IAM execution role for a state machine

Provide a name for the new state machine in your AWS account, for example, Hello-World. The name must be from one to 80 characters, and can use alphanumeric characters, dashes, or underscores.

Screen capture from a Visual Studio Code flow entering "Hello-World" as a state machine name

Naming your state machine

Press the Enter or Return key to confirm the name of your state machine. The Output console opens, and the toolkit displays the result of creating your state machine. The toolkit provides the full Amazon Resource Name (ARN) of your new state machine on completion.

Screen capture from Visual Studio Code showing the successful creation of a new state machine in the Output window

Output of creating a new state machine

You can check creation for yourself by visiting the Step Functions page in the AWS Management Console. Choose the newly-created state machine and the Definition tab. The console displays the definition of your state machine along with a preview graph.

Screen capture of the AWS Management Console showing the newly-created state machine

Viewing the new state machine in the AWS Management Console

Updating a state machine in your AWS account

It is common to change workflow definitions as you refine your application. To update your state machine in your AWS account, choose Quick Update instead of Quick Create. Select your existing workflow.

A screen capture of a Visual Studio Code dialog box with a single state machine displayed and highlighted

Selecting an existing state machine to update

The toolkit displays “Successfully updated state machine” and the ARN of your state machine in the Output window on completion.
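
If you later want to script the same Quick Create and Quick Update steps outside of VS Code, here is a minimal sketch using boto3; the file path, state machine name, and role ARN are placeholders.

import boto3

sfn = boto3.client("stepfunctions")

# The .asl.json file saved from VS Code
with open("helloworld.asl.json") as f:
    definition = f.read()

# Equivalent of the toolkit's "Quick Create"
response = sfn.create_state_machine(
    name="Hello-World",
    definition=definition,
    roleArn="arn:aws:iam::123456789012:role/HelloWorld_IAM_Role",  # must already exist
)
print(response["stateMachineArn"])

# Equivalent of the toolkit's "Quick Update", after editing the definition
sfn.update_state_machine(
    stateMachineArn=response["stateMachineArn"],
    definition=definition,
)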

Summary

In this post, you learn how to use the AWS Toolkit for VS Code to create and update Step Functions state machines in your local development environment. You discover how sample templates, code snippets, and automatic linting can accelerate your development workflows. Finally, you see how to create and update Step Functions workflows in your AWS account without leaving VS Code.

Install the latest release of the toolkit and start building your workflows in VS Code today.

 

Amazon Detective – Rapid Security Investigation and Analysis

Post Syndicated from Sébastien Stormacq original https://aws.amazon.com/blogs/aws/amazon-detective-rapid-security-investigation-and-analysis/

Almost five years ago, I blogged about a solution that automatically analyzes AWS CloudTrail data to generate alerts upon sensitive API usage. It was a simple and basic solution for security analysis and automation. But demanding AWS customers have multiple AWS accounts and collect data from multiple sources, and simple searches based on regular expressions are not enough to conduct in-depth analysis of suspected security-related events. Today, when a security issue is detected, such as compromised credentials or unauthorized access to a resource, security analysts cross-analyze several data logs to understand the root cause of the issue and its impact on the environment. In-depth analysis often requires scripting and ETL to connect the dots between data generated by multiple siloed systems. It requires skilled data engineers to answer basic questions such as “is this normal?”. Analysts use Security Information and Event Management (SIEM) tools, third-party libraries, and data visualization tools to validate, compare, and correlate data to reach their conclusions. To further complicate matters, new AWS accounts and new applications are constantly introduced, forcing analysts to constantly re-establish baselines of normal behavior and to understand new patterns of activity every time they evaluate a new security issue.

Amazon Detective is a fully managed service that empowers users to automate the heavy lifting involved in processing large quantities of AWS log data to determine the cause and impact of a security issue. Once enabled, Detective automatically begins distilling and organizing data from Amazon GuardDuty, AWS CloudTrail, and Amazon Virtual Private Cloud Flow Logs into a graph model that summarizes the resource behaviors and interactions observed across your entire AWS environment.

At re:Invent 2019, we announced a preview of Amazon Detective. Today, it is our pleasure to announce its availability for all AWS customers.

Amazon Detective uses machine learning models to produce graphical representations of your account behavior and helps you to answer questions such as “is this an unusual API call for this role?” or “is this spike in traffic from this instance expected?”. You do not need to write code, to configure, or to tune your own queries.

To get started with Amazon Detective, I open the AWS Management Console, I type “detective” in the search bar and I select Amazon Detective from the provided results to launch the service. I enable the service and I let the console guide me to configure “member” accounts to monitor and the “master” account in which to aggregate the data. After this one-time setup, Amazon Detective immediately starts analyzing AWS telemetry data and, within a few minutes, I have access to a set of visual interfaces that summarize my AWS resources and their associated behaviors such as logins, API calls, and network traffic. I search for a finding or resource from the Amazon Detective Search bar and, after a short while, I am able to visualize the baseline and current value for a set of metrics.

I select the resource type and ID and start to browse the various graphs.

I can also investigate an Amazon GuardDuty finding by using the native integrations within the GuardDuty and AWS Security Hub consoles. I click the “Investigate” link from any finding from Amazon GuardDuty and jump directly into the Amazon Detective console, which provides related details, context, and guidance to investigate and to respond to the issue. In the example below, GuardDuty reports an unauthorized access that I decide to investigate:

The Amazon Detective console opens:

I scroll down the page to check the graph of failed API calls. I click a bar in the graph to get the details, such as the IP addresses where the calls originated:

Once I know the source IP addresses, I click New behavior: AWS role and observe where these calls originated from to compare with the automatically discovered baseline.

Amazon Detective works across your AWS accounts: it is a multi-account solution that aggregates data and findings from up to 1,000 AWS accounts into a single, security-owned “master” account, making it easy to view behavioral patterns and connections across your entire AWS environment.

There are no agents, sensors, or additional software to deploy in order to use the service. Amazon Detective retrieves, aggregates, and analyzes data from Amazon GuardDuty, AWS CloudTrail, and Amazon Virtual Private Cloud Flow Logs. Amazon Detective collects existing logs directly from AWS without touching your infrastructure, so there is no impact on cost or performance.

Amazon Detective can be administered via the AWS Management Console or via the Amazon Detective management APIs. The management APIs enable you to build Amazon Detective into your standard account registration, enablement, and deployment processes.
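
As a sketch of what that automation could look like, the example below uses boto3 to create the behavior graph and invite a member account; the account ID, email address, and invitation message are placeholders.

import boto3

detective = boto3.client("detective")

# Create the behavior graph in the master (administrator) account
graph_arn = detective.create_graph()["GraphArn"]

# Invite member accounts whose telemetry should flow into the graph
detective.create_members(
    GraphArn=graph_arn,
    Message="Please join our Amazon Detective behavior graph.",
    Accounts=[
        {"AccountId": "111122223333", "EmailAddress": "security@example.com"},
    ],
)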

Amazon Detective is a regional service. I activate the service in every AWS Region in which I want to analyze findings. All data is processed in the AWS Region where it is generated. Amazon Detective maintains data analytics and log summaries in the behavior graph for a 1-year rolling period from the date of log ingestion. This allows for visual analysis and deep dives over a large data set for a long period of time. When I disable the service, all data is expunged to ensure no data remains.

There are no additional charges or upfront commitments required to use Amazon Detective. We charge per GB of data ingested from AWS CloudTrail, Amazon Virtual Private Cloud Flow Logs, and Amazon GuardDuty findings. Amazon Detective offers a 30-day free trial. As usual, check the pricing page for the details.

Amazon Detective is available in all commercial AWS Regions, except China. You can start to use it today.

— seb

New – Use AWS IAM Access Analyzer in AWS Organizations

Post Syndicated from Channy Yun original https://aws.amazon.com/blogs/aws/new-use-aws-iam-access-analyzer-in-aws-organizations/

Last year at AWS re:Invent 2019, we released AWS Identity and Access Management (IAM) Access Analyzer, which helps you understand who can access resources by analyzing permissions granted using policies for Amazon Simple Storage Service (S3) buckets, IAM roles, AWS Key Management Service (KMS) keys, AWS Lambda functions, and Amazon Simple Queue Service (SQS) queues.

AWS IAM Access Analyzer uses automated reasoning, a form of mathematical logic and inference, to determine all possible access paths allowed by a resource policy. We call these analytical results provable security, a higher level of assurance for security in the cloud.

Today I am pleased to announce that you can create an analyzer in the AWS Organizations master account or in a delegated member account, with the entire organization as the zone of trust. For each analyzer, you can now set the zone of trust to be either a particular account or an entire organization, which sets the logical bounds the analyzer uses to base its findings upon. This helps you quickly identify when resources in your organization can be accessed from outside of your AWS organization.

AWS IAM Access Analyzer for AWS Organizations – Getting started
You can enable IAM Access Analyzer in your organization with one click in the IAM Console. Once enabled, IAM Access Analyzer analyzes policies and reports a list of findings for resources that grant public or cross-account access from outside your AWS organization, in the IAM console and through APIs.

When you create an analyzer on your organization, it recognizes your organization as a zone of trust, meaning all accounts within the organization are trusted to have access to AWS resources. Access Analyzer then generates a report that identifies access to your resources from outside of the organization.

For example, if you create an analyzer for your organization, it provides active findings for resources, such as S3 buckets in your organization, that are accessible publicly or from outside the organization.

When policies change, IAM Access Analyzer automatically triggers a new analysis and reports new findings based on the policy changes. You can also trigger a re-evaluation manually. You can download the details of findings into a report to support compliance audits.

Analyzers are specific to the region in which they are created. You need to create a unique analyzer for each region where you want to enable IAM Access Analyzer.

You can create multiple analyzers for your entire organization in your organization’s master account. Additionally, you can choose a member account in your organization as a delegated administrator for IAM Access Analyzer. When you choose a member account as the delegated administrator, the member account has permission to create analyzers within the organization. Individual accounts can also create analyzers to identify resources accessible from outside those accounts.
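
For teams that script this setup, here is a minimal sketch using boto3 to create an organization-wide analyzer and list its findings; the analyzer name is a placeholder, and the call must run in the master account or the delegated administrator account.

import boto3

access_analyzer = boto3.client("accessanalyzer")

# Create an analyzer with the entire organization as the zone of trust
analyzer = access_analyzer.create_analyzer(
    analyzerName="org-analyzer",   # placeholder name
    type="ORGANIZATION",           # use "ACCOUNT" for a single-account zone of trust
)

# List the findings reported for resources reachable from outside the organization
findings = access_analyzer.list_findings(analyzerArn=analyzer["arn"])
for finding in findings["findings"]:
    print(finding["resource"], finding["status"])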

IAM Access Analyzer sends an event to Amazon EventBridge for each generated finding, for a change to the status of an existing finding, and when a finding is deleted. You can monitor IAM Access Analyzer findings with EventBridge. Also, all IAM Access Analyzer actions are logged by AWS CloudTrail and AWS Security Hub. Using the information collected by CloudTrail, you can determine the request that was made to Access Analyzer, the IP address from which the request was made, who made the request, when it was made, and additional details.
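
As an illustration of the EventBridge integration, the sketch below creates a rule that forwards Access Analyzer events to an SNS topic; the event source string, rule name, and topic ARN are assumptions to verify against your own environment.

import json
import boto3

events = boto3.client("events")

# Match events emitted by IAM Access Analyzer (source string assumed here)
events.put_rule(
    Name="access-analyzer-findings",
    EventPattern=json.dumps({"source": ["aws.access-analyzer"]}),
    State="ENABLED",
)

# Send matching events to an existing SNS topic (placeholder ARN)
events.put_targets(
    Rule="access-analyzer-findings",
    Targets=[{"Id": "notify-security", "Arn": "arn:aws:sns:us-east-1:123456789012:security-alerts"}],
)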

Now available!
This integration is available in all AWS Regions where IAM Access Analyzer is available. There is no extra cost for creating an analyzer with an organization as the zone of trust. You can learn more through these AWS re:Invent 2019 talks: Dive Deep into IAM Access Analyzer and Automated Reasoning on AWS. Take a look at the feature page and the documentation to learn more.

Please send us feedback either in the AWS forum for IAM or through your usual AWS support contacts.

Channy;

Now Open – Third Availability Zone in the AWS Canada (Central) Region

Post Syndicated from Danilo Poccia original https://aws.amazon.com/blogs/aws/now-open-third-availability-zone-in-the-aws-canada-central-region/

When you start an EC2 instance, or store data in an S3 bucket, it’s easy to underestimate what an AWS Region is. Right now, we have 22 across the world, and while they look like dots on a global map, they are architected to let you run applications and store data with high availability and fault tolerance. In fact, each of our Regions is made up of multiple data centers, which are geographically separated into what we call Availability Zones (AZs).

Today, I am very happy to announce that we added a third AZ to the AWS Canada (Central) Region to support our customer base in Canada.

This third AZ provides customers with additional flexibility to architect scalable, fault-tolerant, and highly available applications, and will support additional AWS services in Canada. We opened the Canada (Central) Region in December 2016, just over 3 years ago, and we’ve more than tripled the number of available services as we bring on this third AZ.

Each AZ is in a separate and distinct geographic location, with enough distance to significantly reduce the risk of a single event impacting availability in the Region, yet near enough for business continuity applications that require rapid failover and synchronous replication. For example, our Canada (Central) Region is located in the Montreal area of Quebec, and the new AZ is on the mainland, more than 45 km (28 miles) away from the next-closest AZ as the crow flies.

Where we place our Regions and AZs is a deliberate and thoughtful process that takes into account not only latency and distance, but also risk profiles. To keep the risk profile low, we look at decades of data related to floods and other environmental factors before we settle on a location. Montreal was heavily impacted in 1998 by a massive ice storm that crippled the power grid and brought down more than 1,000 transmission towers, leaving four million people in neighboring provinces and some areas of New York and Maine without power. To ensure that AWS infrastructure can withstand inclement weather such as this, half of the AZ interconnections use underground cables and are out of the reach of potential ice storms. In this way, every AZ is connected to the other two AZs by at least one 100% underground fiber path.

We’re excited to bring a new AZ to Canada to serve our incredible customers in the region. Here are some examples from different industries, courtesy of my colleagues in Canada:

Healthcare – AlayaCare delivers cloud-based software to home care organizations across Canada and all over the world. As a home healthcare technology company, they need in-country data centers to meet regulatory requirements.

Insurance – Aviva is delivering a world-class digital experience to its insurance clients in Canada and the expansion of the AWS Region is welcome as they continue to move more of their applications to the cloud.

E-Learning – D2L leverages various AWS Regions around the world, including Canada, to deliver a seamless experience for their clients. They have been on AWS for more than four years, and recently completed an all-in migration.

With this launch, AWS now has 70 AZs within 22 geographic Regions around the world, plus 5 new Regions coming. We are continuously looking at expanding our infrastructure footprint globally, driven largely by customer demand.

To see how we use AZs at Amazon, have a look at this article on Static stability using Availability Zones by Becky Weiss and Mike Furr. It’s part of the Amazon Builders’ Library, a place where we share what we’ve learned over the years.

For more information on our global infrastructure, and the custom hardware we use, check out this interactive map.

Danilo


A Third Availability Zone Launches for the AWS Canada (Central) Region

When you launch an EC2 instance, or store your data in Amazon S3, it’s easy to underestimate the scope of an AWS cloud Region. Right now, we have 22 Regions around the world. While they look like small dots on a large map, they are designed to let you run applications and store data with high availability and fault tolerance. In fact, each of our Regions is made up of several distinct data centers, grouped into what we call Availability Zones.

Today, I am very happy to announce that we have added a third Availability Zone to the AWS Canada (Central) Region to meet the growing demand from our Canadian customers.

This third Availability Zone gives customers additional flexibility, allowing them to design scalable, fault-tolerant, and highly available applications. It will also enable the support of more AWS services in Canada. We opened the Region in December 2016, just over three years ago, and we have more than tripled the number of available services as we launch this third zone.

Each AWS Availability Zone is in a separate and distinct geographic location, far enough away to reduce the risk of a single event impacting availability in the Region, yet close enough for business continuity applications that require rapid failover and synchronous replication to operate properly. For example, our Canada (Central) Region is located in the greater Montreal area of Quebec. The new Availability Zone will be located more than 45 km, as the crow flies, from the nearest Availability Zone.

Deciding where to place our Regions and Availability Zones is a deliberate and thoughtful process that takes into account not only latency and distance, but also risk profiles. For example, we examine decades of data related to floods and other environmental factors before settling on a location. This allows us to maintain a low risk profile. In 1998, Montreal was heavily impacted by the ice storm, which not only paralyzed the power grid and brought down more than 1,000 transmission towers, but also left four million people without electricity in neighboring provinces and parts of the states of New York and Maine. To ensure that AWS infrastructure withstands such weather events, half of the Availability Zone interconnections are underground, sheltered from potential ice storms, for example. In this way, each Availability Zone is connected to the other two zones by at least one fully underground fiber network.

We are delighted to offer our Canadian customers a new Availability Zone for the Region. Here are a few customer examples from different industries, courtesy of my Canadian colleagues:

Healthcare – AlayaCare provides cloud-based home healthcare software to home care organizations across Canada and around the world. For a home healthcare technology company, having data centers in the country is essential and allows it to meet regulatory requirements.

Insurance – Aviva delivers a world-class digital experience to its insurance clients in Canada. The expansion of the AWS Region is welcome as they continue to migrate a growing number of their applications to the cloud.

E-Learning – D2L relies on various Regions around the world, including the one in Canada, to deliver a seamless experience to its clients. They have been on AWS for more than four years and recently completed an all-in migration.

With this launch, AWS now has 70 Availability Zones within 22 geographic Regions around the world, with five more Regions to come. We are continuously looking for ways to expand our infrastructure globally, driven in large part by growing customer demand.

To understand how we use Availability Zones at Amazon, see this article on static stability using Availability Zones by Becky Weiss and Mike Furr. This post is part of the Amazon Builders’ Library, a place where we share what we have learned over the years.

For more information on our global infrastructure and the custom hardware we use, check out this interactive map.

Danilo

New – Low-Cost HDD Storage Option for Amazon FSx for Windows File Server

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-low-cost-hdd-storage-option-for-amazon-fsx-for-windows-file-server/

You can use Amazon FSx for Windows File Server to create file systems that can be accessed from a wide variety of sources and that use your existing Active Directory environment to authenticate users. Last year we added a ton of features including Self-Managed Directories, Native Multi-AZ File Systems, Support for SQL Server, Fine-Grained File Restoration, On-Premises Access, a Remote Management CLI, Data Deduplication, Programmatic File Share Configuration, Enforcement of In-Transit Encryption, and Storage Quotas.

New HDD Option
Today we are adding a new HDD (Hard Disk Drive) storage option to Amazon FSx for Windows File Server. While the existing SSD (Solid State Drive) storage option is designed for the highest-performance, latency-sensitive workloads like databases, media processing, and analytics, HDD storage is designed for a broad spectrum of workloads including home directories, departmental shares, and content management systems.

Single-AZ HDD storage is priced at $0.013 per GB-month and Multi-AZ HDD storage is priced at $0.025 per GB-month (this makes Amazon FSx for Windows File Server the lowest cost file storage for Windows applications and workloads in the cloud). Even better, if you use this option in conjunction with Data Deduplication and use 50% space savings as a reasonable reference point, you can achieve an effective cost of $0.0065 per GB-month for a single-AZ file system and $0.0125 per GB-month for a multi-AZ file system.

You can choose the HDD option when you create a new file system:

If you have existing SSD-based file systems, you can create new HDD-based file systems and then use AWS DataSync or robocopy to move the files. Backups taken from newly created SSD or HDD file systems can be restored to either type of storage, and with any desired level of throughput capacity.
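
If you create file systems programmatically, the HDD option is a single parameter. Here is a hedged sketch using boto3; the subnet, Active Directory ID, capacity, and throughput values are placeholders, and the valid combinations for HDD (deployment type and minimum capacity) should be confirmed in the Amazon FSx documentation.

import boto3

fsx = boto3.client("fsx")

fsx.create_file_system(
    FileSystemType="WINDOWS",
    StorageType="HDD",                        # the new option; SSD remains available
    StorageCapacity=2000,                     # GiB (placeholder value)
    SubnetIds=["subnet-0123456789abcdef0"],   # placeholder subnet
    WindowsConfiguration={
        "ActiveDirectoryId": "d-1234567890",  # placeholder AWS Managed AD
        "DeploymentType": "SINGLE_AZ_2",
        "ThroughputCapacity": 32,             # MB/s; also sizes the in-memory cache
    },
)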

Performance and Caching
The HDD storage option is designed to deliver 12 MB/second of throughput per TiB of storage, with the ability to handle bursts of up to 80 MB/second per TiB of storage. When you create your file system, you also specify the throughput capacity:

The amount of throughput that you provision also controls the size of a fast, in-memory cache for your file share; higher levels of throughput come with larger amounts of cache. As a result, Amazon FSx for Windows File Server file systems can be provisioned so as to be able to provide over 3 GB/s of network throughput and hundreds of thousands of network IOPS, even with HDD storage. This will allow you to create cost-effective file systems that are able to handle many different use cases, including those where a modest subset of a large amount of data is accessed frequently. To learn more, read Amazon FSx for Windows File Server Performance.

Now Available
HDD file systems are available in all regions where Amazon FSx for Windows File Server is available and you can start creating them today.

Jeff;

BuildforCOVID19 Global Online Hackathon

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/buildforcovid19-global-online-hackathon/

The COVID-19 Global Hackathon is an opportunity for builders to create software solutions that drive social impact with the aim of tackling some of the challenges related to the current coronavirus (COVID-19) pandemic.

We’re encouraging YOU – builders around the world – to #BuildforCOVID19 using technologies of your choice across a range of suggested themes and challenge areas, some of which have been sourced through health partners like the World Health Organization. The hackathon welcomes locally and globally focused solutions and is open to all developers.

AWS is partnering with technology companies like Facebook, Giphy, Microsoft, Pinterest, Slack, TikTok, Twitter, and WeChat to support this hackathon. We will be providing technical mentorship and credits for all participants.

Join BuildforCOVID19 and chat with fellow participants and AWS mentors in the COVID19 Global Hackathon Slack channel.

Jeff;

Working From Home? Here’s How AWS Can Help

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/working-from-home-heres-how-aws-can-help/

Just a few weeks and so much has changed. Old ways of living, working, meeting, greeting, and communicating are gone for a while. Friendly handshakes and warm hugs are not healthy or socially acceptable at the moment.

My colleagues and I are aware that many people are dealing with changes in their work, school, and community environments. We’re taking measures to support our customers, communities, and employees to help them to adjust and deal with the situation, and will continue to do more.

Working from Home
With people in many cities and countries now being asked to work or learn from home, we believe that some of our services can help to make the transition from the office or the classroom to the home just a bit easier. Here’s an overview of our solutions:

Amazon WorkSpaces lets you launch virtual Windows and Linux desktops that can be accessed anywhere and from any device. These desktops can be used for remote work, remote training, and more.

Amazon WorkDocs makes it easy for you to collaborate with others, also from anywhere and on any device. You can create, edit, share, and review content, all stored centrally on AWS.

Amazon Chime supports online meetings with up to 100 participants (growing to 250 later this month), including chats and video calls, all from a single application.

Amazon Connect lets you set up a call or contact center in the cloud, with the ability to route incoming calls and messages to tens of thousands of agents. You can use this to provide emergency information or personalized customer service, while the agents are working from home.

Amazon AppStream 2.0 lets you deliver desktop applications to any computer. You can deliver enterprise, educational, or telemedicine apps at scale, including those that make use of GPUs for computation or 3D rendering.

AWS Client VPN lets you set up secure connections to your AWS and on-premises networks from anywhere. You can give your employees, students, or researchers the ability to “dial in” (as we used to say) to your existing network.

Some of these services have special offers designed to make it easier for you to get started at no charge; others are already available to you under the AWS Free Tier. You can learn more on the home page for each service, and on our new Remote Working & Learning page.

You can sign up for and start using these services without talking to us, but we are here to help if you need more information or need some help in choosing the right service(s) for your needs. Here are some points of contact:

If you are already an AWS customer, your Technical Account Manager (TAM) and Solutions Architect (SA) will be happy to help.

Some Useful Content
I am starting a collection of other AWS-related content that will help you use these services and work from home as efficiently as possible. Here’s what I have so far:

If you create something similar, share it with me and I’ll add it to my list.

Please Stay Tuned
This is, needless to say, a dynamic and unprecedented situation and we are all learning as we go.

I do want you to know that we’re doing our best to help. If there’s something else that you need, please do not hesitate to reach out. Go through your normal AWS channels first, but contact me if you are in a special situation and I’ll do my best!

Jeff;

 

Now Available: Amazon ElastiCache Global Datastore for Redis

Post Syndicated from Julien Simon original https://aws.amazon.com/blogs/aws/now-available-amazon-elasticache-global-datastore-for-redis/

In-memory data stores are widely used for application scalability, and developers have long appreciated their benefits for storing frequently accessed data, whether volatile or persistent. Systems like Redis help decouple databases and backends from incoming traffic, shedding most of the traffic that would have otherwise reached them, and reducing application latency for users.

Obviously, managing these servers is a critical task, and great care must be taken to keep them up and running no matter what. In a previous job, my team had to move a cluster of physical cache servers across hosting suites: one by one, they connected them to external batteries, unplugged external power, unracked them, and used an office trolley (!) to roll them to the other suite where they racked them again! It happened without any service interruption, but we all breathed a sigh of relief once this was done… Lose cache data on a high-traffic platform, and things get ugly. Fast. Fortunately, cloud infrastructure is more flexible! To help minimize service disruption should an incident occur, we have added many high-availability features to Amazon ElastiCache, our managed in-memory data store for Memcached and Redis: cluster mode, multi-AZ with automatic failover, etc.

As Redis is often used to serve low latency traffic to global users, customers have told us that they’d love to be able to replicate Amazon ElastiCache clusters across AWS regions. We listened to them, got to work, and today, we’re very happy to announce that this replication capability is now available for Redis clusters.

Introducing Amazon ElastiCache Global Datastore For Redis
In a nutshell, Amazon ElastiCache Global Datastore for Redis lets you replicate a cluster in one region to clusters in up to two other regions. Customers typically do this in order to:

  • Bring cached data closer to their users, in order to reduce network latency and improve application responsiveness.
  • Build disaster recovery capabilities, should a Region become partially or totally unavailable.

Setting up a global datastore is extremely easy. First, you pick a cluster to be the primary cluster receiving writes from applications: this can either be a new cluster, or an existing cluster provided that it runs Redis 5.0.6 or above. Then, you add up to two secondary clusters in other regions which will receive updates from the primary.

This setup is available for all Redis configurations except single node clusters: of course, you can convert a single node cluster to a replication group cluster, and then use it as a primary cluster.

Last but not least, clusters that are part of a global datastore can be modified and resized as usual (adding or removing nodes, changing node type, adding or removing shards, adding or removing replica nodes).
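
If you prefer to script the setup rather than use the console, here is a hedged sketch using boto3 and the CreateGlobalReplicationGroup / CreateReplicationGroup APIs; it assumes an existing primary replication group, and the IDs are placeholders loosely following the names used in the demo below.

import boto3

use1 = boto3.client("elasticache", region_name="us-east-1")
usw1 = boto3.client("elasticache", region_name="us-west-1")

# Turn an existing primary cluster into a global datastore
global_ds = use1.create_global_replication_group(
    GlobalReplicationGroupIdSuffix="global-ds-1",
    GlobalReplicationGroupDescription="Cross-Region Redis datastore",
    PrimaryReplicationGroupId="global-ds-1-useast1",
)
# ElastiCache prefixes the suffix to build the full global datastore ID
global_ds_id = global_ds["GlobalReplicationGroup"]["GlobalReplicationGroupId"]

# Add a secondary cluster in another Region; it inherits its settings from the primary
usw1.create_replication_group(
    ReplicationGroupId="global-ds-1-uswest1",
    ReplicationGroupDescription="Secondary cluster in us-west-1",
    GlobalReplicationGroupId=global_ds_id,
)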

Let’s do a quick demo.

Replicating a Redis Cluster Across Regions
Let me show you how to build from scratch a three-cluster global datastore: the primary cluster will be located in the us-east-1 region, and the two secondary clusters will be located in the us-west-1 and us-west-2 regions. For the sake of simplicity, I’ll use the same default configuration for all clusters: three cache.r5.large nodes, multi-AZ, one shard.

Heading out to the AWS Console, I click on ‘Global Datastore’, and then on ‘Create’ to create my global datastore. I’m asked if I’d like to create a new cluster supporting the datastore, or if I’d rather use an existing cluster. I go for the former, and create a cluster named global-ds-1-useast1.

I click on ‘Next’, and fill in details for a secondary cluster hosted in the us-west-1 region. I unimaginatively name it global-ds-1-us-west1.

Then, I add another secondary cluster in the us-west-2 region, named global-ds-1-uswest2: I go to ‘Global Datastore’, click on ‘Add Region’, and fill in cluster details.

A little while later, all three clusters are up, and have been associated to the global datastore.

Using the redis-cli client running on an Amazon Elastic Compute Cloud (EC2) instance hosted in the us-east-1 region, I can quickly connect to the cluster endpoint and check that it’s indeed operational.

[us-east-1-instance] $ redis-cli -h $US_EAST_1_CLUSTER_READWRITE
> ping
PONG
> set paris france
OK
> set berlin germany
OK
> set london uk
OK
> keys *
1) "london"
2) "berlin"
3) "paris"
> get paris
"france"

This looks fine. Using an EC2 instance hosted in the us-west-1 region, let’s now check that the data we stored in the primary cluster has been replicated to the us-west-1 secondary cluster.

[us-west-1-instance] $ redis-cli -h $US_WEST_1_CLUSTER_READONLY
> keys *
1) "london"
2) "berlin"
3) "paris"
> get paris
"france"

Nice. Now let’s add some more data on the primary cluster…

> hset Parsifal composer "Richard Wagner" date 1882 acts 3 language "German"
> hset DonGiovanni composer "W.A. Mozart" date 1787 acts 2 language "Italian"
> hset Tosca composer "Giacomo Puccini" date 1900 acts 3 language "Italian"

…and check as quickly as possible on the secondary cluster.

> keys *
1) "DonGiovanni"
2) "london"
3) "berlin"
4) "Parsifal"
5) "Tosca"
6) "paris"
> hget Parsifal composer
"Richard Wagner"

That was fast: by the time I switched to the other terminal and ran the command, the new data was already there. That’s not really surprising, since the typical network latency for cross-region traffic ranges from 60 milliseconds to 200 milliseconds, depending on the Regions involved.

Now, what would happen if something went wrong with our primary cluster hosted in us-east-1? Well, we could easily promote one of the secondary clusters to full read/write capabilities.

For good measure, I also remove the us-east-1 cluster from the global datastore. Once this is complete, the global datastore looks like this.
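
For reference, promoting a secondary and removing the old primary can also be done programmatically. This is a hedged sketch using boto3 and the FailoverGlobalReplicationGroup / DisassociateGlobalReplicationGroup APIs; the global datastore ID and replication group IDs are placeholders for this demo.

import boto3

elasticache = boto3.client("elasticache", region_name="us-west-1")

global_ds_id = "xxxxx-global-ds-1"   # placeholder; returned when the datastore was created

# Promote the us-west-1 cluster to be the new primary of the global datastore
elasticache.failover_global_replication_group(
    GlobalReplicationGroupId=global_ds_id,
    PrimaryRegion="us-west-1",
    PrimaryReplicationGroupId="global-ds-1-uswest1",
)

# Remove the old us-east-1 cluster from the global datastore
elasticache.disassociate_global_replication_group(
    GlobalReplicationGroupId=global_ds_id,
    ReplicationGroupId="global-ds-1-useast1",
    ReplicationGroupRegion="us-east-1",
)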

Now, using my EC2 instance in the us-west-1 region, and connecting to the read/write endpoint of my cluster, I add more data…

[us-west-1-instance] $ redis-cli -h $US_WEST_1_CLUSTER_READWRITE
> hset Lohengrin composer "Richard Wagner" date 1850 acts 3 language "German"

… and check that it’s been replicated to the us-west-2 cluster.

[us-west-2-instance] $ redis-cli -h $US_WEST_2_CLUSTER_READONLY
> hgetall Lohengrin
1) "composer"
2) "Richard Wagner"
3) "date"
4) "1850"
5) "acts"
6) "3"
7) "language"
8) "German"

It’s all there. Global datastores make it really easy to replicate Amazon ElastiCache data across regions!

Now Available!
This new global datastore feature is available today in US East (N. Virginia), US East (Ohio), US West (N. California), US West (Oregon), Asia Pacific (Seoul), Asia Pacific (Sydney), Asia Pacific (Singapore), Asia Pacific (Tokyo), Europe (Frankfurt), Europe (Ireland), Europe (London). Please give it a try and send us feedback, either on the AWS forum for Amazon ElastiCache, or through your usual AWS support contacts.

Julien;

Bottlerocket – Open Source OS for Container Hosting

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/bottlerocket-open-source-os-for-container-hosting/

It is safe to say that our industry has decided that containers are now the chosen way to package and scale applications. Our customers are making great use of Amazon ECS and Amazon Elastic Kubernetes Service, with over 80% of all cloud-based containers running on AWS.

Container-based environments lend themselves to easy scale-out, and customers can run host environments that encompass hundreds or thousands of instances. At this scale, several challenges arise with the host operating system. For example:

Security – Installing extra packages simply to satisfy dependencies can increase the attack surface.

Updates – Traditional package-based update systems and mechanisms are complex and error prone, and can have issues with dependencies.

Overhead – Extra, unnecessary packages consume disk space and compute cycles, and also increase startup time.

Drift – Inconsistent packages and configurations can damage the integrity of a cluster over time.

Introducing Bottlerocket
Today I would like to tell you about Bottlerocket, a new Linux-based open source operating system that we designed and optimized specifically for use as a container host.

Bottlerocket reflects much of what we have learned over the years. It includes only the packages that are needed to make it a great container host, and integrates with existing container orchestrators. It supports Docker images and images that conform to the Open Container Initiative (OCI) image format.

Instead of a package update system, Bottlerocket uses a simple, image-based model that allows for a rapid & complete rollback if necessary. This removes opportunities for conflicts and breakage, and makes it easier for you to apply fleet-wide updates with confidence using orchestrators such as EKS.

In addition to the minimal package set, Bottlerocket uses a file system that is primarily read-only, and that is integrity-checked at boot time via dm-verity. SSH access is discouraged, and is available only as part of a separate admin container that you can enable on an as-needed basis and then use for troubleshooting purposes.

Try it Out
We’re launching a public preview of Bottlerocket today. You can follow the steps in QUICKSTART to set up an EKS cluster, and you can take a look at the GitHub repo. Try it out, report bugs, send pull requests, and let us know what you think!

Jeff;

 

Host Your Apps with AWS Amplify Console from the AWS Amplify CLI

Post Syndicated from Brandon West original https://aws.amazon.com/blogs/aws/host-your-apps-with-aws-amplify-console-from-the-aws-amplify-cli/

Have you tried out AWS Amplify and AWS Amplify Console yet? In my opinion, they provide one of the fastest ways to get a new web application from idea to prototype on AWS. So what are they? AWS Amplify is an opinionated framework for building modern applications, with a toolchain for easily adding services like authentication (via Amazon Cognito) or storage (via Amazon Simple Storage Service (S3)) or GraphQL APIs, all via a command-line interface. AWS Amplify Console makes continuous deployment and hosting for your modern web apps easy. It supports hosting the frontend and backend assets for single page app (SPA) frameworks including React, Angular, Vue.js, Ionic, and Ember. It also supports static site generators like Gatsby, Eleventy, Hugo, VuePress, and Jekyll.

With today’s launch, hosting options available from the AWS Amplify CLI now include Amplify Console in addition to S3 and Amazon CloudFront. By using Amplify Console, you can take advantage of features like continuous deployment, instant cache invalidation, custom redirects, and simple configuration of custom domains.

Initializing an Amplify App

Let’s take a look at a quick example. We’ll be deploying a static site demo of Amazon Transcribe. I’ve already got the AWS Command Line Interface (CLI) installed, as well as the AWS Amplify CLI. I’ve forked and then cloned the sample code to my local machine. In the following gif, you can see the initialization process for an AWS Amplify app. (I sped things up a little for the gif. It might take a few seconds for your app to be created.)

Terminal session showing the "amplify init" workflow

Now that I’ve got my app initialized, I can add additional services. Let’s add some hosting via AWS Amplify Console. After choosing Amplify Console for hosting, I can pick manual deployment or continuous deployment using a git-based workflow.

Continuous Deployment

First, I’m going to set up continuous deployment so that changes to our git repo will trigger a build and deploy.

A screenshot of a terminal session adding Amplify Console to an Amplify project

The workflow for configuring continuous deployment requires a quick browser session. First, I select our git provider. The forked repo is on GitHub, so I need to authorize Amplify Console to use my GitHub account.

Screenshot of git provider selection

Once a provider is authorized, I choose the repo and branch to watch for changes.

Screenshot of repo and branch selection

AWS Amplify Console auto-detected the correct build settings, based on the contents of package.json.

Screenshot of build settings

Once I’ve confirmed the settings, the initial build and deploy will start. Then any changes to the selected git branch will result in additional builds and deploys. Now I need to finish the workflow in the CLI, and I need the ARN of the new Amplify Console app for that. In the browser, under App Settings and then General, I copy the ARN, paste it into my terminal, and check the status.

A screenshot of a terminal window where the app ARN is being set

A quick check of the URL in my browser confirms that the app has been successfully deployed.

A screenshot of the sample app we deployed in this post

Manual Deploys

Manual deploys with Amplify Console also provide a bunch of useful features. The CLI can now manage front-end environments, making it easy to add a test or dev environment. It’s also easy to add URL redirects and rewrites, or add a username/password via HTTP Basic Auth.

Configuring manual deploys is straightforward. Just set your environment name. When it’s time to deploy, run amplify publish, and the build scripts defined during the initialization of the project will be run. The generated artifact will then be uploaded automatically.

A screenshot of a terminal window where manual deploys are configured

With manual deployments, you can set up multiple frontend environments (e.g. dev and prod) directly from the CLI. To create a new dev environment, run amplify env add (name it dev) and amplify publish. This will create a second frontend environment in Amplify Console. To view all your frontend and backend environments, run amplify console from the CLI to open your Amplify Console app.

Ever since using AWS Amplify Console for the first time a few weeks ago, it has become my go-to way to deploy applications, especially static sites. I’m excited to see the simplicity of hosting with AWS Amplify Console extended to the Amplify CLI, and I hope you are too. Happy building!

— Brandon

AWS Named as a Leader in Gartner’s Magic Quadrant for Cloud AI Developer Services

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-named-as-a-leader-in-gartners-magic-quadrant-for-cloud-ai-developer-services/

Last week I spoke to executives from a large AWS customer and had an opportunity to share aspects of the Amazon culture with them. I was able to talk to them about our Leadership Principles and our Working Backwards model. They asked, as customers often do, about where we see the industry in the next 5 or 10 years. This is a hard question to answer, because about 90% of our product roadmap is driven by requests from our customers. I honestly don’t know where the future will take us, but I do know that it will help our customers to meet their goals and to deliver on their vision.

Magic Quadrant for Cloud AI Developer Services
It is always good to see that our hard work continues to delight our customers, and it is also good to be recognized by Gartner and other leading analysts. Today I am happy to share that AWS has secured the top-right corner of Gartner’s Magic Quadrant for Cloud AI Developer Services, earning highest placement for Ability to Execute and furthest to the right for Completeness of Vision:

You can read the full report to learn more (registration is required).

Keep the Cat Out
As a simple yet powerful example of the power of the AWS AI & ML services, check out Ben Hamm’s DeepLens-powered cat door:

AWS AI & ML Services
Building on top of the AWS compute, storage, networking, security, database, and analytics services, our lineup of AI and ML offerings is designed to serve newcomers, experts, and everyone in-between. Let’s take a look at a few of them:

Amazon SageMaker – Gives developers and data scientists the power to build, train, test, tune, deploy, and manage machine learning models. SageMaker provides a complete set of machine learning components designed to reduce effort, lower costs, and get models into production as quickly as possible:

Amazon Kendra – An accurate and easy-to-use enterprise search service that is powered by machine learning. Kendra makes content from multiple, disparate sources searchable with powerful natural language queries:

Amazon CodeGuru – This service provides automated code reviews and makes recommendations that can improve application performance by identifying the most expensive lines of code. It has been trained on hundreds of thousands of internal Amazon projects and on over 10,000 open source projects on GitHub.

Amazon Textract – This service extracts text and data from scanned documents, going beyond traditional OCR by identifying the contents of fields in forms and information stored in tables. Powered by machine learning, Textract can handle virtually any type of document without the need for manual effort or custom code:

Amazon Personalize – Based on the same technology that is used at Amazon.com, this service provides real-time personalization and recommendations. To learn more, read Amazon Personalize – Real-Time Personalization and Recommendation for Everyone.

Time to Learn
If you are ready to learn more about AI and ML, check out the AWS Ramp-Up Guide for Machine Learning:

You should also take a look at our Classroom Training in Machine Learning and our library of Digital Training in Machine Learning.

Jeff;

Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

Get to know the latest AWS Heroes, including the first IoT Heroes!

Post Syndicated from Ross Barich original https://aws.amazon.com/blogs/aws/get-to-know-the-latest-aws-heroes-including-the-first-iot-heroes/

The AWS Heroes program recognizes and honors individuals who are prominent leaders in local communities, known for sharing AWS knowledge and facilitating peer-to-peer learning in a variety of ways. The AWS Heroes program grows just as the enthusiasm for all things AWS grows in communities around the world, and there are now AWS Heroes in 35 countries.

Today we are thrilled to introduce the newest AWS Heroes, including the first Heroes in Bosnia, Indonesia, Nigeria, and Sweden, as well as the first IoT Heroes:

Joshua Arvin Lat – National Capital Region, Philippines

Machine Learning Hero Joshua Arvin Lat is the CTO of Complete Business Online, Insites, and Jepto. He has achieved 9 AWS Certifications, and contributed as a certification Subject Matter Expert to help update the AWS Certified Machine Learning – Specialty exam during the Item Development Workshops. He has been serving as one of the core leaders of the AWS User Group Philippines for the past 4-5 years and also shares knowledge at several international AWS conferences, including AWS Summit Singapore – TechFest and AWS Community Day – Melbourne.

Nofar Asselman – Tel Aviv, Israel

Community Hero Nofar Asselman is the Head of Business Development at Epsagon – an automated tracing platform for cloud microservices, where she initiated Epsagon’s partnership with AWS. Nofar is a key figure at the AWS Partner Community and founded the first-ever AWS Partners Meetup Group. Nofar is passionate about her work with AWS cloud communities, organizes meetups regularly, and participates in conferences, events and user groups. She loves sharing insights and best practices about her AWS experiences in blog posts on Medium.

Filipe Barretto – Rio de Janeiro, Brazil

Community Hero Filipe Barretto is one of the founders of Solvimm, an AWS Consulting Partner since 2013. He organizes the AWS User Group in Rio de Janeiro, Brazil, promoting talks, hands-on labs and study groups for AWS Certifications. He also frequently speaks at universities, introducing students to Cloud Computing and AWS services. He actively participates in other AWS User Groups in Brazil, working to build a strong and bigger community in the country, and, when possible, with AWS User Groups in other Latin American countries.

Stephen Borsay – Portland, USA

IoT Hero Stephen Borsay is a Degreed Computer Engineer and electronic hobbyist with a passion to make IoT and embedded systems understandable and enjoyable to enthusiasts of all experience levels. Stephen authors community IoT projects, as well as develops online teaching materials focused on AWS IoT to solve problems for both professional developers and casual IoT enthusiasts. He founded the Digital Design meetup group in Portland, Oregon which holds regular meetings focusing on hands-on IoT training. He regularly posts IoT tutorials for Hackster.io and you can find his online AWS IoT training courses on YouTube and Udemy.

Ernest Chiang – Taipei City, Taiwan

Community Hero Ernest Chiang, also known as Deng-Wei Chiang, started his AWS journey in 2008. He has been passionate about bridging AWS technology with business through AWS-related presentations at local meetups, conferences, and online blog posts. As Director of Product & Technology Integration at PAFERS Tech, he has led the adoption of many AWS services across AWS Global and China Regions since 2011.

Don Coleman – Philadelphia, USA

IoT Hero Don Coleman is the Chief Innovation Officer at Chariot Solutions, where he builds software that leverages a wide range of AWS services. His experience building IoT projects enables him to share knowledge and lead workshops on solving IoT challenges using AWS. He also enjoys speaking at conferences about devices and technology, discussing things like NFC, Bluetooth Low Energy, LoRaWAN, and AWS IoT.

Ken Collins – Norfolk, USA

Serverless Hero Ken Collins is a Staff Engineer at Custom Ink, focusing on DevOps and their Ecommerce Platform with an emphasis on emerging opportunities. With a love for the Ruby programming language and serverless, Ken continues his open source Rails work by focusing on using Rails with AWS Lambda using a Ruby gem called Lamby. Recently he wrote an ActiveRecord adapter to take advantage of Aurora Serverless with Rails on Lambda.

Ewere Diagboya – Lagos, Nigeria

Community Hero Ewere Diagboya started building desktop and web apps with PHP and VB back in junior high school. He started his Cloud journey with AWS at Terragon Group, where he grew into the DevOps and Infrastructure Lead. He later spoke at the first ever AWS Nigeria Meetup and was the only Nigerian representative at the AWS Johannesburg Loft in 2019. He is the co-founder of DevOps Nigeria, shares videos on YouTube showcasing AWS technologies, and has a blog on Medium called MyCloudSeries.

Dzenan Dzevlan – Mostar, Bosnia and Herzegovina

Community Hero Dzenan Dzevlan is a Cloud and DevOps expert at TN-TECH and has been an AWS user since 2011. In 2016, Dzenan founded AWS User Group Bosnia and helped it grow to three user groups with more than 600 members. This AWS community is now the largest IT community in Bosnia. As a part of his activities, he runs online meetups, a YouTube channel, and the sqlheisenberg.com blog (in Bosnian language) to help people in the Balkans region achieve their AWS certification and start working with AWS.

Ben Ellerby – London, United Kingdom

Serverless Hero Ben Ellerby is VP of Engineering for Theodo and a dedicated member of the Serverless community. He is the editor of Serverless Transformation: a blog, newsletter & podcast sharing tools, techniques and use cases for all things Serverless. Ben speaks about serverless at conferences and events around the world. In addition to speaking, he co-organizes and supports serverless events including the Serverless User Group in London and ServerlessDays London.

Gunnar Grosch – Karlstad, Sweden

Serverless Hero Gunnar Grosch is an evangelist at Opsio based in Sweden. With a focus on building reliable and robust serverless applications, Gunnar has been one of the driving forces in creating techniques and tools for using chaos engineering in serverless. He regularly and passionately speaks at events on these and other serverless topics around the world. Gunnar is also deeply involved in the community by organizing AWS User Groups and Serverless Meetups in the Nordics, as well as being an organizer of ServerlessDays Stockholm and AWS Community Day Nordics. A variety of his contributions can be found on his personal website.

Scott Liao – New Taipei City, Taiwan

Community Hero Scott Liao is a DevOps Engineer and Manager at 104 Corp. His work is predominantly focused on Data Center and AWS Cloud solution architecture. He is interested in building hyper-scale DevOps environments for containers using AWS CloudFormation, CDK, Terraform, and various open-source tools. Scott speaks regularly at AWS-focused events, including AWS User Groups, Cloud Edge Summit Taipei, DevOpsDays Taipei, and other conferences. He also shares his expertise in writing, producing content for blogs and IT magazines in Taiwan.

Austin Loveless – Denver, USA

Community Hero Austin Loveless is a Cloud Architect at Photobucket and Founder of the AWSMeetupGroup. He travels around the country, teaching people of all skill levels about AWS Cloud Technologies. He live-streams all his events on YouTube. He partners with large software companies (AWS, MongoDB, Confluent, Galvanize, Flatiron School) to help grow the meetup group and teach more people. Austin also routinely blogs on Medium under the handle AWSMeetupGroup.

Efi Merdler-Kravitz – Tel Aviv, Israel

Serverless Hero Efi Merdler-Kravitz is Director of Engineering at Lumigo.io, a monitoring and debugging platform for AWS serverless applications built on a 100% serverless backend. As an early and enthusiastic adopter of serverless technology, Efi has been racking up the air miles as a frequent speaker at serverless events around the globe, and writes regularly on the topic for the Lumigo blog. Efi began his journey into serverless as head of engineering at Coneuron, building its entire stack on Lambda, S3, API Gateway, and Firebase, while perfecting the art of helping developers transition to a serverless mindset.

Dhaval Nagar – Surat, India

Serverless Hero Dhaval Nagar is the founder and director of cloud consulting firm AppGambit based in India. He thinks that serverless is not just another method but a big paradigm shift in modern computing that will have a major impact on future technologies. Dhaval has been building on AWS since early 2015. Coincidentally, the first service that he picked on AWS was Lambda. He has 11 AWS Certifications, is a regular speaker at AWS user groups and conferences, and frequently writes on his Medium blog. He runs the Surat AWS User Group and Serverless Group and has organized over 20 meetups since it started in 2018.

Tomasz Ptak – London, United Kingdom

Machine Learning Hero Tomasz Ptak is a software engineer with a focus on tackling technical debt, transforming legacy products to maintainable projects and delivering a Developer experience that enables teams to achieve their objectives. He was a participant in the AWS DeepRacer League, a winner in Virtual League’s September race and a 2019 season finalist. He joined the AWS DeepRacer Community on day one to become one of its leaders. He runs the community blog, the knowledge base and maintains a DeepRacer log analysis tool.

Mike Rahmati – Sydney, Australia

Community Hero Mike Rahmati is Co-Founder and CTO of Cloud Conformity (acquired by Trend Micro), a leader in public cloud infrastructure security and compliance monitoring, where he helps organizations design and build cloud solutions that are Well-Architected at all times. As an active community member, Mike has designed thousands of best practices for AWS, and contributed to a number of open source AWS projects including Cloud Conformity Auto Remediation using AWS Serverless.

Namrata Shah (Nam) – New York, USA

Community Hero Nam Shah is a dynamic passionate technical leader based in the New York/New Jersey Area focused on custom application development and cloud architecture. She has over twenty years of professional information technology consulting experience delivering complex systems. Nam loves to share her technical knowledge and frequently posts AWS videos on her YouTube Channel and occasionally posts AWS courses on Udemy.

Yan So – Seoul, South Korea

Machine Learning Hero Yan So is a senior data scientist with broad experience solving business problems using big data and machine learning. He co-founded the Data Science Group of the AWS Korea Usergroup (AWSKRUG) and has hosted over 30 meetups and AI/ML hands-on labs since 2017. He regularly speaks on topics such as Amazon SageMaker GroundTruth at AWS Community Day, Zigzag’s data analytics platform at AWS Summit Seoul, and a recommendation engine built on Amazon Personalize at AWS Retail & CPG Day 2019.

Steve Teo – Singapore

Community Hero Steve Teo has been serving the AWS User Group Singapore Community since 2017, which has over 5000 members. Having benefited from Meetups at the start of his career, he makes it his personal mission to pay it forward and build the community so that others might reap the benefits and contribute back. The community in Singapore has grown to have monthly meetups and now includes sub-chapters such as the Enterprise User Group, as well as Cloud Seeders, a member-centric Cloud Learning Community for Women, Built by Women. Steve also speaks at AWS APAC Community Conferences, and shares his slides on Speakerdeck.

Hein Tibosch – Bali, Indonesia

IoT Hero Hein Tibosch is a skilled software developer, specializing in embedded applications and working as an independent at his craft for over 17 years. Hein is exemplary in his community contributions for FreeRTOS, as an active committer to the FreeRTOS project and the most active customer on the FreeRTOS Community Forums. Over the last 8 years, Hein’s contributions to FreeRTOS have made a significant impact on the successful adoption of FreeRTOS by embedded developers of all technical levels and backgrounds.
 
You can learn all about the AWS Heroes and connect with a Hero near you by visiting the AWS Hero website.

Ross;

Now available in Amazon Transcribe: Automatic Redaction of Personally Identifiable Information

Post Syndicated from Julien Simon original https://aws.amazon.com/blogs/aws/now-available-in-amazon-transcribe-automatic-redaction-of-personally-identifiable-information/

Launched at AWS re:Invent 2017, Amazon Transcribe is an automatic speech recognition (ASR) service that makes it easy for AWS customers to add speech-to-text capabilities to their applications. At the time of writing, Transcribe supports 31 languages, 6 of which can be transcribed in real-time.

A popular use case for Transcribe is the automatic transcription of customer calls (call centers, telemarketing, etc.), in order to build data sets for downstream analytics and natural language processing tasks, such as sentiment analysis. In these scenarios, any Personally Identifiable Information (PII) should be removed to protect privacy and to comply with local laws and regulations.

As you can imagine, doing this manually is quite tedious, time-consuming, and error-prone, which is why we’re extremely happy to announce that Amazon Transcribe now supports automatic redaction of PII.

Introducing Content Redaction in Amazon Transcribe
If instructed to do so, Transcribe will automatically identify the following pieces of PII:

  • Social Security Number,
  • Credit card/Debit card number,
  • Credit card/Debit card expiration date,
  • Credit card/Debit card CVV code,
  • Bank account number,
  • Bank routing number,
  • Debit/Credit card PIN,
  • Name,
  • Email address,
  • Phone number (10 digits),
  • Mailing address.

They will be replaced with a ‘[PII]’ tag in the transcribed text. You also get a redaction confidence score (instead of the usual ASR score), as well as start and end timestamps. These timestamps will help you locate PII in your audio files for secure storage and sharing, or for additional audio processing to redact it at the source.

This feature is extremely easy to use, so let’s do a quick demo.

Redacting Personal Information with Amazon Transcribe
First, I’ve recorded a short sound file full of personal information (of course, it’s all fake). I’m using the mp3 format here, but we recommend that you use lossless formats like FLAC or WAV for maximum accuracy.

Then, I upload this file to an S3 bucket using the AWS CLI.

$ aws s3 cp julien.mp3 s3://jsimon-transcribe-us-east-1

The next step is to transcribe this sound file using the StartTranscriptionJob API: why not use the AWS SDK for PHP this time?

<?php
require 'aws.phar';

use Aws\TranscribeService\TranscribeServiceClient;

$client = new TranscribeServiceClient([
    'profile' => 'default',
    'region' => 'us-east-1',
    'version' => '2017-10-26'
]);

$result = $client->startTranscriptionJob([
    'LanguageCode' => 'en-US',
    'Media' => [
        'MediaFileUri' => 's3://jsimon-transcribe-us-east-1/julien.mp3',
    ],
    'MediaFormat' => 'mp3',
    'OutputBucketName' => 'jsimon-transcribe-us-east-1',
    'ContentRedaction' => [
        'RedactionType' => 'PII',
        'RedactionOutput' => 'redacted'
    ],
    'TranscriptionJobName' => 'redactiontest'
]);
?>

A single API call is really all it takes. The RedactionOutput parameter lets me control whether I want both the full and the redacted output, or just the redacted output. I go for the latter. Now, let’s run this script.

$ php transcribe.php

Immediately, I can see the job running in the Transcribe console.

I could also use the GetTranscriptionJob and ListTranscriptionJobs APIs to check that content redaction has been applied. Once the job is complete, I simply fetch the transcription from my S3 bucket.

$ aws s3 cp s3://jsimon-transcribe-us-east-1/redacted-redactiontest.json .

The transcription is a JSON document containing detailed information about each word. Here, I’m only interested in the full transcript, so I use a nice open source tool called jq to filter the document.

$ cat redacted-redactiontest.json | jq '.results.transcripts'
[{
"transcript": "Good morning, everybody. My name is [PII], and today I feel like sharing a whole lot of personal information with you. Let's start with my Social Security number [PII]. My credit card number is [PII] And my C V V code is [PII] My bank account number is [PII] My email address is [PII], and my phone number is [PII]. Well, I think that's it. You know a whole lot about me. And I hope that Amazon transcribe is doing a good job at redacting that personal information away. Let's check."
}]

Well done, Amazon Transcribe. My privacy is safe.

Now available!
The content redaction feature is available for US English in the following regions:

  • US East (N. Virginia), US East (Ohio), US West (N. California), US West (Oregon), AWS GovCloud (US-West),
  • Canada (Central), South America (São Paulo),
  • Europe (Ireland), Europe (London), Europe (Paris), Europe (Frankfurt),
  • Middle East (Bahrain),
  • Asia Pacific (Mumbai), Asia Pacific (Hong Kong), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo).

Take a look at the pricing page, give the feature a try, and please send us feedback either in the AWS forum for Amazon Transcribe or through your usual AWS support contacts.

– Julien

Amazon FSx for Lustre Update: Persistent Storage for Long-Term, High-Performance Workloads

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/amazon-fsx-for-lustre-persistent-storage/

Last year I wrote about Amazon FSx for Lustre and told you how our customers can use it to create pebibyte-scale, highly parallel POSIX-compliant file systems that serve thousands of simultaneous clients driving millions of IOPS (Input/Output Operations per Second) with sub-millisecond latency.

As a managed service, Amazon FSx for Lustre makes it easy for you to launch and run the world’s most popular high-performance file system. Our customers use this service for workloads where speed matters, including machine learning, high performance computing (HPC), and financial modeling.

Today we are enhancing Amazon FSx for Lustre by giving you the ability to create high-performance file systems that are durable and highly available, with three performance tiers, and a new, second-generation scratch file system that is designed to provide better support for spiky workloads.

Recent Updates
Before I dive in to today’s news, let’s take a look at some of the most recent updates that we have made to the service:

Data Repository APIs – This update introduced a set of APIs that allow you to easily export files from FSx to S3, including the ability to initiate, monitor, and cancel the transfer of changed files to S3. To learn more, read New Enhancements for Moving Data Between Amazon FSx for Lustre and Amazon S3.

SageMaker Integration – This update gave you the ability to use data stored on an Amazon FSx for Lustre file system as training data for an Amazon SageMaker model. You can train your models using vast amounts of data without first moving it to S3.

ParallelCluster Integration – This update let you create an Amazon FSx for Lustre file system when you use AWS ParallelCluster to create an HPC cluster, with the option to use an existing file system as well.

EKS Integration – This update let you use the new AWS FSx Container Storage Interface (CSI) driver to access Amazon FSx for Lustre file systems from your Amazon EKS clusters.

Smaller File System Sizes – This update let you create 1.2 TiB and 2.4 TiB Lustre file systems, in addition to the original 3.6 TiB.

CloudFormation Support – This update let you use AWS CloudFormation templates to deploy stacks that use Amazon FSx for Lustre file systems. To learn more, check out AWS::FSx::FileSystem LustreConfiguration.

SOC Compliance – This update announced that Amazon FSx for Lustre can now be used with applications that are subject to Service Organization Control (SOC) compliance. To learn more about this and other compliance programs, take a look at AWS Services in Scope by Compliance Program.

Amazon Linux Support – This update allowed EC2 instances running Amazon Linux or Amazon Linux 2 to access Amazon FSx for Lustre file systems.

Client Repository – You can now make use of Lustre clients that are compatible with recent versions of Ubuntu, Red Hat Enterprise Linux, and CentOS. To learn more, read Installing the Lustre Client.

New Persistent & Scratch Deployment Options
We originally launched the service to target high-speed, short-term processing of data, so until today FSx for Lustre provided only scratch file systems, which are ideal for temporary storage and shorter-term processing: data is not replicated and does not persist if a file server fails. We’re now expanding beyond short-term processing by launching persistent file systems, designed for longer-term storage and workloads, where data is replicated and file servers are replaced if they fail.

In addition to this new deployment option, we are also launching a second-generation scratch file system that is designed to provide better support for spiky workloads, with the ability to provide burst throughput up to 6x higher than the baseline. Like the first-generation scratch file system, this one is great for temporary storage and short-term data processing.

Here is a comparison that will help you choose between the deployment options:

Persistent (API name: PERSISTENT_1)
  • Storage Replication: Same AZ
  • Aggregated Throughput (per TiB of provisioned capacity): 50 MB/s, 100 MB/s, or 200 MB/s
  • IOPS: Millions
  • Latency: Sub-millisecond, higher variance
  • Expected Workload Lifetime: Days, Weeks, Months
  • Encryption at Rest: Customer-managed or FSx-managed keys
  • Encryption in Transit: Yes, when accessed from supported EC2 instances in these regions
  • Initial Storage Allocation: 1.2 TiB, 2.4 TiB, and increments of 2.4 TiB
  • Additional Storage Allocation: 2.4 TiB

Scratch 2 (API name: SCRATCH_2)
  • Storage Replication: None
  • Aggregated Throughput (per TiB of provisioned capacity): 200 MB/s, with burst to 1,200 MB/s
  • IOPS: Millions
  • Latency: Sub-millisecond, very low variance
  • Expected Workload Lifetime: Hours, Days, Weeks
  • Encryption at Rest: FSx-managed keys
  • Encryption in Transit: Yes, when accessed from supported EC2 instances in these regions
  • Initial Storage Allocation: 1.2 TiB, 2.4 TiB, and increments of 2.4 TiB
  • Additional Storage Allocation: 2.4 TiB

Scratch 1 (API name: SCRATCH_1)
  • Storage Replication: None
  • Aggregated Throughput (per TiB of provisioned capacity): 200 MB/s
  • IOPS: Millions
  • Latency: Sub-millisecond, very low variance
  • Expected Workload Lifetime: Hours, Days, Weeks
  • Encryption at Rest: FSx-managed keys
  • Encryption in Transit: No
  • Initial Storage Allocation: 1.2 TiB, 2.4 TiB, 3.6 TiB
  • Additional Storage Allocation: 3.6 TiB

Creating a Persistent File System
I can create a file system that uses the persistent deployment option using the AWS Management Console, AWS Command Line Interface (CLI) (create-file-system), a CloudFormation template, or the FSx for Lustre APIs (CreateFileSystem). I’ll use the console:

Then I mount it like any other file system, and access it as usual.

Things to Know
Here are a couple of things to keep in mind:

Lustre Client – You will need to use an AMI (Amazon Machine Image) that includes the Lustre client. You can use the latest Amazon Linux AMI, or you can create your own.

S3 Export – Both options allow you to export changes to S3 using the CreateDataRepositoryTask function. This allows you to meet stringent Recovery Point Objectives (RPOs) while taking advantage of the fact that S3 is designed to deliver eleven 9’s of durability.

Available Now
Persistent file systems are available in all AWS regions. Scratch 2 file systems are available in all commercial AWS regions with the exception of Europe (Stockholm).

Pricing is based on the performance tier that you choose and the amount of storage that you provision; see the Amazon FSx for Lustre Pricing page for more info.

Jeff;

Savings Plan Update: Save Up to 17% On Your Lambda Workloads

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/savings-plan-update-save-up-to-17-on-your-lambda-workloads/

Late last year I wrote about Savings Plans, and showed you how you could use them to save money when you make a one or three year commitment to use a specified amount (measured in dollars per hour) of Amazon Elastic Compute Cloud (EC2) or AWS Fargate. Savings Plans give you the flexibility to change compute services, instance types, operating systems, and regions while accessing compute power at a lower price.

Now for Lambda
Today I am happy to be able to tell you that Compute Savings Plans now apply to the compute time consumed by your AWS Lambda functions, with savings of up to 17%. If you are already using one or more Savings Plans to save money on your server-based processing, you can enjoy the cost savings while modernizing your applications and taking advantage of a multitude of powerful Lambda features including a simple programming model, automatic function scaling, Step Functions, and more! If your use case includes a constant level of function invocation for microservices, you should be able to make great use of Compute Savings Plans.

AWS Cost Explorer will now take Lambda usage into account when it recommends a Savings Plan. I open AWS Cost Explorer, then click Recommendations within Savings Plans, then review the recommendations. As I am doing this, I can alter the term, payment option, and the time window that is used to make the recommendations:

When I am ready to proceed, I click Add selected Savings Plan(s) to cart, and then View cart to review my selections and submit my order:

The Savings Plan becomes active right away. I can use Cost Explorer’s Utilization and Coverage reports to verify that I am making good use of my plans. The Savings Plan Utilization report shows the percentage of savings plan commitment that is being used to realize savings on compute usage:

The Coverage report shows the percentage of Savings Plan commitment that is covered by Savings Plans for the selected time period:

When the coverage is less than 100% for an extended period of time, I should think about buying another plan.

Things to Know
Here are a couple of things to know:

Discount Order – If you are using two or more compute services, the plans are applied in order of highest to lowest discount percentage.

Applicability – The discount applies to duration charges (both on-demand and provisioned concurrency) and to provisioned concurrency charges. It does not apply to Lambda requests.

Available Now
If you already own a Savings Plan or two and are using Lambda, you will receive a discount automatically (unless you are at 100% utilization with EC2 and Fargate).

If you don’t own a plan and are using Lambda, buy a plan today!

Jeff;

New – Multi-Attach for Provisioned IOPS (io1) Amazon EBS Volumes

Post Syndicated from Steve Roberts original https://aws.amazon.com/blogs/aws/new-multi-attach-for-provisioned-iops-io1-amazon-ebs-volumes/

Starting today, customers running Linux on Amazon Elastic Compute Cloud (EC2) can take advantage of new support for attaching Provisioned IOPS (io1) Amazon Elastic Block Store (EBS) volumes to multiple EC2 instances. Each EBS volume, when configured with the new Multi-Attach option, can be attached to a maximum of 16 EC2 instances in a single Availability Zone. Additionally, each Nitro-based EC2 instance can support the attachment of multiple Multi-Attach enabled EBS volumes. Multi-Attach capability makes it easier to achieve higher availability for applications that provide write ordering to maintain storage consistency.

Applications can attach Multi-Attach volumes as non-boot data volumes, with full read and write permission. Snapshots can be taken of volumes configured for Multi-Attach, just as with regular volumes, but additionally the snapshot can be initiated from any instance that the volume is attached to, and Multi-Attach volumes also support encryption. Multi-Attach enabled volumes can be monitored using Amazon CloudWatch metrics, and to monitor performance per instance, you can use the Linux iostat tool.

Getting Started with Multi-Attach EBS Volumes
Configuring and using Multi-Attach volumes is a simple process for new volumes using either the AWS Command Line Interface (CLI) or the AWS Management Console. In a simple example for this post I am going to create a volume, configured for Multi-Attach, and attach it to two Linux EC2 instances. From one instance I will write a simple text file, and from the other instance I will read the contents. Let’s get started!

In the AWS Management Console I first navigate to the EC2 homepage, select Volumes from the navigation panel and then click Create Volume. Choosing Provisioned IOPS SSD (io1) for Volume Type, I enter my desired size and IOPS and then check the Multi-Attach option.

To instead do this using the AWS Command Line Interface (CLI) I simply use the ec2 create-volume command, with the --multi-attach-enabled option, as shown below.

aws ec2 create-volume --volume-type io1 --multi-attach-enabled --size 4 --iops 100 --availability-zone us-east-1a

I can verify that Multi-Attach is enabled on my volume from the Description tab when the volume is selected. The volume table also contains a Multi-Attach Enabled column that displays a simple yes/no value, enabling me to check at a glance whether Multi-Attach is enabled across multiple volumes.

With the volume created and ready for use, I next launch two T3 EC2 instances running Linux. Remember, Multi-Attach needs an AWS Nitro System based instance type and the instances have to be created in the same Availability Zone as my volume. My instances are running Amazon Linux 2, and have been placed into the us-east-1a Availability Zone, matching the placement of my new Multi-Attach enabled volume.

Once the instances are running, it’s time to attach my volume to both of them. I click Volumes from the EC2 dashboard, then select the Multi-Attach volume I created. From the Actions menu, I click Attach Volume. In the screenshot below you can see that I have already attached the volume to one instance, and am attaching to the second.

If I’m using the AWS Command Line Interface (CLI) to attach the volume, I make use of the ec2 attach-volume command, as I would for any other volume type:

aws ec2 attach-volume --device /dev/sdf --instance-id i-0c44a... --volume-id vol-012721da...

For a given volume, the AWS Management Console shows me which instances it is attached to, or those currently being attached, when I select the volume:

With the volume attached to both instances, let’s make use of it with a simple test. Selecting my first instance in the Instances view of the EC2 dashboard, I click Connect and then open a shell session onto the instance using AWS Systems Manager‘s Session Manager. Following the instructions here, I created a file system on the new volume attached as /dev/sdf, mounted it as /data, and using vi I write some text to a file.

sudo mkfs -t xfs /dev/sdf
sudo mkdir /data
sudo mount /dev/sdf /data
cd /data
sudo vi file.txt

Selecting my second instance in the AWS Management Console, I repeat the connection steps. I don’t need to create a file system this time but I do again mount the /dev/sdf volume as /data (although I could use a different mount point if I chose). On changing directory to /data, I see that the file I wrote from my first instance exists, and contains the text I expect.

Creating and working with Multi-Attach volumes is simple! Just remember, they need to be attached to and be in the same Availability Zone as the instances they are to be attached to. This post obviously made use of a simple use case, but for any real-world application usage you might also want to consider implementing some form of write ordering, so as to ensure consistency is maintained.

Using Delete-on-Termination with Multi-Attach Volumes
If you prefer to make use of the option to delete attached volumes on EC2 instance termination then we recommend you have a consistent setting of the option across all of the instances that a Multi-Attach volume is attached to – use either all delete, or all retain, to allow for predictable termination behavior. If you attach the volume to a set of instances that have differing values for Delete-on-Termination then deletion of the volume depends on whether the last instance to detach is set to delete or not. A consistent setting obviously avoids any doubt!

Availability
For more information see the Amazon Elastic Block Store (EBS) technical documentation. Multi-Attach for Provisioned IOPS (io1) volumes on Amazon Elastic Block Store (EBS) is available today at no extra charge to customers in the US East (N. Virginia & Ohio), US West (Oregon), EU (Ireland), and Asia Pacific (Seoul) regions.

— Steve

New Desktop Client for AWS Client VPN

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-aws-vpn-client/

We launched AWS Client VPN last year so that you could use your OpenVPN-based clients to securely access your AWS and on-premises networks from anywhere (read Introducing AWS Client VPN to Securely Access AWS and On-Premises Resources to learn more). As a refresher, this is a fully-managed elastic VPN service that scales the number of connections up and down according to demand. It allows you to provide easy connectivity to your workforce and your business partners, along with the ability to monitor and manage all of the connections from one console. You can create Client VPN endpoints, associate them with the desired VPC subnets, and set up authorization rules to enable your users to access the desired cloud resources.

New Desktop Client for AWS Client VPN
Today we are making it even easier for you to connect your Windows and macOS clients to AWS, with the launch of the desktop client by AWS. These applications can be installed on your desktop or laptop, and support mutual authentication, username/password via Active Directory, and the use of Multi-Factor Authentication (MFA). After you use the client to establish a VPN connection, the desktop or laptop is effectively part of the configured VPC, and can access resources as allowed by the authorization rules.

The client applications are available at no charge, and can be used to establish connections to any AWS region where you have an AWS Client VPN endpoint. You can currently create these endpoints in the US East (N. Virginia), US East (Ohio), US West (N. California), US West (Oregon), Canada (Central), Europe (Ireland), Europe (London), Europe (Frankfurt), Europe (Stockholm), Asia Pacific (Mumbai), Asia Pacific (Singapore), Asia Pacific (Seoul), Asia Pacific (Sydney), and Asia Pacific (Tokyo) Regions.

Jeff;

AWS DataSync Update – Support for Amazon FSx for Windows File Server

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-datasync-update-support-for-amazon-fsx-for-windows-file-server/

AWS DataSync helps you to move large amounts of data into and out of the AWS Cloud. As I noted in New – AWS DataSync – Automated and Accelerated Data Transfer, our customers use DataSync for their large-scale migration, upload & process, archiving, and backup/DR use cases.

Amazon FSx for Windows File Server gives you network file storage that is fully compatible with your existing Windows applications and environments (read New – Amazon FSx for Windows File Server – Fast, Fully Managed, and Secure to learn more). It includes a very wide variety of enterprise-ready features including native multi-AZ file systems, support for SQL Server, data deduplication, quotas, and the ability to force the use of in-transit encryption. Our customers use Amazon FSx for Windows File Server to lift-and-shift their Windows workloads to the cloud, where they can benefit from consistent sub-millisecond performance and high throughput.

Inside AWS DataSync
The DataSync agent is deployed as a VM within your existing on-premises or cloud-based environment so that it can access your NAS or file system via NFS or SMB. The agent uses a robust, highly-optimized data transfer protocol to move data back and forth at up to 10 times the speed of open source data transfer solutions.

DataSync can be used for a one-time migration-style transfer, or it can be invoked on a periodic, incremental basis for upload & process, archiving, and backup/DR purposes. Our customers use DataSync for transfer operations that encompass hundreds of terabytes of data and millions of files.

Since the launch of DataSync in November 2018, we have made several important updates and changes to DataSync including:

68% Price Reduction – We reduced the data transfer charge to $0.0125 per gigabyte.

Task Scheduling – We gave you the ability to schedule data transfer tasks using the AWS Management Console or the AWS Command Line Interface (CLI), with hourly, daily, and weekly options:

Additional Region Support – We recently made DataSync available in the Europe (Stockholm), South America (São Paulo), Asia Pacific (Hong Kong), Asia Pacific (Mumbai), and AWS GovCloud (US-East) Regions, bringing the total list of supported regions to 20.

EFS-to-EFS Transfer – We added support for file transfer between a pair of Amazon Elastic File System (EFS) file systems.

Filtering for Data Transfers – We gave you the ability to use file path and object key filters to control the data transfer operation:

SMB File Share Support – We added support for file transfer between a pair of SMB file shares.

S3 Storage Class Support – We gave you the ability to choose the S3 Storage Class when transferring data to an S3 bucket.

FSx for Windows Support
Today I am happy to announce that we are giving you the ability to use DataSync to transfer data to and from Amazon FSx for Windows File Server file systems. You can configure these file systems as DataSync Locations and then reference them in your DataSync Tasks.

After I choose the desired FSx for Windows file system, I supply a user name and password, and enter the name of the Windows domain for authentication:

Then I create a task that uses one of my existing SMB shares as a source, and the FSx for Windows file system as a destination. I give my task a name (MyTask), and configure any desired options:

I can set up filtering and use a schedule:

I have many scheduling options; here are just a few:

If I don’t use a schedule, I can simply click Start to run my task on an as-needed basis:

When I do this, I have the opportunity to review and refine the settings for the task:

The task starts within seconds, and I can watch the data transfer and throughput metrics in the console:

In addition to the console-based access that I just showed you, you can also use the DataSync API and the DataSync CLI to create tasks (CreateTask), start them (StartTaskExecution), check on task status (DescribeTaskExecution) and much more.

Available Now
This important new feature is available now and you can start using it today!

Jeff;

New – T3 Instances on Dedicated Single-Tenant Hardware

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-t3-instances-on-dedicated-single-tenant-hardware/

T3 instances use a burst pricing model that allows you to host general purpose workloads at low cost, with access to sustainable, full-core performance when needed. You can choose from seven different sizes and receive an assured baseline amount of processing power, courtesy of custom high frequency Intel® Xeon® Scalable Processors.

Our customers use them to host many different types of production and development workloads including microservices, small and medium databases, and virtual desktops. Some of our customers launch large fleets of T3 instances and use them to test applications in a wide range of conditions, environments, and configurations.

We launched the first EC2 Dedicated Instances way back in 2011. Dedicated Instances run on single-tenant hardware, providing physical isolation from instances that belong to other AWS accounts. Our customers use Dedicated Instances to further their compliance goals (PCI, SOX, FISMA, and so forth), and also use them to run software that is subject to license or tenancy restrictions.

Dedicated T3
Today I am pleased to announce that we are now making all seven sizes (t3.nano through t3.2xlarge) of T3 instances available in dedicated form, in 14 regions. You can now save money by using T3 instances to run workloads that require the use of dedicated hardware, while benefiting from access to the AVX-512 instructions and other advanced features of the latest generation of Intel® Xeon® Scalable Processors.

Just like the existing T3 instances, the dedicated T3 instances are powered by the Nitro system, and launch with Unlimited bursting enabled. They use ENA networking and offer up to 5 Gbps of network bandwidth.

You can launch dedicated T3 instances using the EC2 API, the AWS Management Console:

The AWS Command Line Interface (CLI):

$ aws ec2 run-instances --placement Tenancy=dedicated ...

or via a CloudFormation template (set tenancy to dedicated in your Launch Template).

Now Available
Dedicated T3 instances are available in the US East (N. Virginia), US East (Ohio), US West (N. California), South America (São Paulo), Canada (Central), US West (Oregon), Europe (Ireland), Europe (Frankfurt), Europe (London), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Asia Pacific (Mumbai), and Asia Pacific (Seoul) Regions.

You can purchase the instances in On-Demand or Reserved Instance form. There is an additional fee of $2 per hour when at least one Dedicated Instance of any type is running in a region, and $0.05 per hour when you burst above the baseline performance for an extended period of time.

Jeff;