All posts by Jeff Barr

New – Low-Cost HDD Storage Option for Amazon FSx for Windows File Server

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-low-cost-hdd-storage-option-for-amazon-fsx-for-windows-file-server/

You can use Amazon FSx for Windows File Server to create file systems that can be accessed from a wide variety of sources and that use your existing Active Directory environment to authenticate users. Last year we added a ton of features including Self-Managed Directories, Native Multi-AZ File Systems, Support for SQL Server, Fine-Grained File Restoration, On-Premises Access, a Remote Management CLI, Data Deduplication, Programmatic File Share Configuration, Enforcement of In-Transit Encryption, and Storage Quotas.

New HDD Option
Today we are adding a new HDD (Hard Disk Drive) storage option to Amazon FSx for Windows File Server. While the existing SSD (Solid State Drive) storage option is designed for the highest-performance, latency-sensitive workloads like databases, media processing, and analytics, HDD storage is designed for a broad spectrum of workloads including home directories, departmental shares, and content management systems.

Single-AZ HDD storage is priced at $0.013 per GB-month and Multi-AZ HDD storage is priced at $0.025 per GB-month (this makes Amazon FSx for Windows File Server the lowest cost file storage for Windows applications and workloads in the cloud). Even better, if you use this option in conjunction with Data Deduplication and use 50% space savings as a reasonable reference point, you can achieve an effective cost of $0.0065 per GB-month for a single-AZ file system and $0.0125 per GB-month for a multi-AZ file system.

You can choose the HDD option when you create a new file system:
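If you prefer the command line, here is a hedged sketch of the equivalent AWS CLI call; the subnet ID, directory ID, capacity, and throughput values below are placeholders, and HDD storage is selected with the --storage-type option:

$ aws fsx create-file-system \
    --file-system-type WINDOWS \
    --storage-type HDD \
    --storage-capacity 2000 \
    --subnet-ids subnet-0123456789abcdef0 \
    --windows-configuration "ActiveDirectoryId=d-1234567890,DeploymentType=SINGLE_AZ_2,ThroughputCapacity=32"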

If you have existing SSD-based file systems, you can create new HDD-based file systems and then use AWS DataSync or robocopy to move the files. Backups taken from newly created SSD or HDD file systems can be restored to either type of storage, and with any desired level of throughput capacity.

Performance and Caching
The HDD storage option is designed to deliver 12 MB/second of throughput per TiB of storage, with the ability to handle bursts of up to 80 MB/second per TiB of storage. When you create your file system, you also specify the throughput capacity:

The amount of throughput that you provision also controls the size of a fast, in-memory cache for your file share; higher levels of throughput come with larger amounts of cache. As a result, Amazon FSx for Windows File Server file systems can be provisioned to deliver over 3 GB/s of network throughput and hundreds of thousands of network IOPS, even with HDD storage. This lets you create cost-effective file systems that can handle many different use cases, including those where a modest subset of a large amount of data is accessed frequently. To learn more, read Amazon FSx for Windows File Server Performance.

Now Available
HDD file systems are available in all regions where Amazon FSx for Windows File Server is available and you can start creating them today.

Jeff;

BuildforCOVID19 Global Online Hackathon

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/buildforcovid19-global-online-hackathon/

The COVID-19 Global Hackathon is an opportunity for builders to create software solutions that drive social impact with the aim of tackling some of the challenges related to the current coronavirus (COVID-19) pandemic.

We’re encouraging YOU – builders around the world – to #BuildforCOVID19 using technologies of your choice across a range of suggested themes and challenge areas, some of which have been sourced through health partners like the World Health Organization. The hackathon welcomes locally and globally focused solutions and is open to all developers.

AWS is partnering with technology companies like Facebook, Giphy, Microsoft, Pinterest, Slack, TikTok, Twitter, and WeChat to support this hackathon. We will be providing technical mentorship and credits for all participants.

Join BuildforCOVID19 and chat with fellow participants and AWS mentors in the COVID19 Global Hackathon Slack channel.

Jeff;

Working From Home? Here’s How AWS Can Help

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/working-from-home-heres-how-aws-can-help/

Just a few weeks and so much has changed. Old ways of living, working, meeting, greeting, and communicating are gone for a while. Friendly handshakes and warm hugs are not healthy or socially acceptable at the moment.

My colleagues and I are aware that many people are dealing with changes in their work, school, and community environments. We’re taking measures to support our customers, communities, and employees to help them to adjust and deal with the situation, and will continue to do more.

Working from Home
With people in many cities and countries now being asked to work or learn from home, we believe that some of our services can help to make the transition from the office or the classroom to the home just a bit easier. Here’s an overview of our solutions:

Amazon WorkSpaces lets you launch virtual Windows and Linux desktops that can be accessed anywhere and from any device. These desktops can be used for remote work, remote training, and more.

Amazon WorkDocs makes it easy for you to collaborate with others, also from anywhere and on any device. You can create, edit, share, and review content, all stored centrally on AWS.

Amazon Chime supports online meetings with up to 100 participants (growing to 250 later this month), including chats and video calls, all from a single application.

Amazon Connect lets you set up a call or contact center in the cloud, with the ability to route incoming calls and messages to tens of thousands of agents. You can use this to provide emergency information or personalized customer service, while the agents are working from home.

Amazon AppStream lets you deliver desktop applications to any computer. You can deliver enterprise, educational, or telemedicine apps at scale, including those that make use of GPUs for computation or 3D rendering.

AWS Client VPN lets you set up secure connections to your AWS and on-premises networks from anywhere. You can give your employees, students, or researchers the ability to “dial in” (as we used to say) to your existing network.

Some of these services have special offers designed to make it easier for you to get started at no charge; others are already available to you under the AWS Free Tier. You can learn more on the home page for each service, and on our new Remote Working & Learning page.

You can sign up for and start using these services without talking to us, but we are here to help if you need more information or need some help in choosing the right service(s) for your needs. Here are some points of contact:

If you are already an AWS customer, your Technical Account Manager (TAM) and Solutions Architect (SA) will be happy to help.

Some Useful Content
I am starting a collection of other AWS-related content that will help you use these services and work from home as efficiently as possible. Here’s what I have so far:

If you create something similar, share it with me and I’ll add it to my list.

Please Stay Tuned
This is, needless to say, a dynamic and unprecedented situation and we are all learning as we go.

I do want you to know that we’re doing our best to help. If there’s something else that you need, please do not hesitate to reach out. Go through your normal AWS channels first, but contact me if you are in a special situation and I’ll do my best!

Jeff;

 

Bottlerocket – Open Source OS for Container Hosting

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/bottlerocket-open-source-os-for-container-hosting/

It is safe to say that our industry has decided that containers are now the chosen way to package and scale applications. Our customers are making great use of Amazon ECS and Amazon Elastic Kubernetes Service, with over 80% of all cloud-based containers running on AWS.

Container-based environments lend themselves to easy scale-out, and customers can run host environments that encompass hundreds or thousands of instances. At this scale, several challenges arise with the host operating system. For example:

Security – Installing extra packages simply to satisfy dependencies can increase the attack surface.

Updates – Traditional package-based update systems and mechanisms are complex and error prone, and can have issues with dependencies.

Overhead – Extra, unnecessary packages consume disk space and compute cycles, and also increase startup time.

Drift – Inconsistent packages and configurations can damage the integrity of a cluster over time.

Introducing Bottlerocket
Today I would like to tell you about Bottlerocket, a new Linux-based open source operating system that we designed and optimized specifically for use as a container host.

Bottlerocket reflects much of what we have learned over the years. It includes only the packages that are needed to make it a great container host, and integrates with existing container orchestrators. It supports Docker images and other images that conform to the Open Container Initiative (OCI) image format.

Instead of a package update system, Bottlerocket uses a simple, image-based model that allows for a rapid & complete rollback if necessary. This removes opportunities for conflicts and breakage, and makes it easier for you to apply fleet-wide updates with confidence using orchestrators such as EKS.

In addition to the minimal package set, Bottlerocket uses a file system that is primarily read-only, and that is integrity-checked at boot time via dm-verity. SSH access is discouraged, and is available only as part of a separate admin container that you can enable on an as-needed basis and then use for troubleshooting purposes.

Try it Out
We’re launching a public preview of Bottlerocket today. You can follow the steps in QUICKSTART to set up an EKS cluster, and you can take a look at the GitHub repo. Try it out, report bugs, send pull requests, and let us know what you think!
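If you want to locate the Bottlerocket AMI for your region from the command line, here is a hedged sketch using the Systems Manager Parameter Store; the parameter path shown (the aws-k8s-1.15 variant for x86_64) is an assumption based on the preview naming, so check the QUICKSTART for the exact path for your Kubernetes version and region:

$ aws ssm get-parameter \
    --name /aws/service/bottlerocket/aws-k8s-1.15/x86_64/latest/image_id \
    --query Parameter.Value --output text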

Jeff;

 

AWS Named as a Leader in Gartner’s Magic Quadrant for Cloud AI Developer Services

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-named-as-a-leader-in-gartners-magic-quadrant-for-cloud-ai-developer-services/

Last week I spoke to executives from a large AWS customer and had an opportunity to share aspects of the Amazon culture with them. I was able to talk to them about our Leadership Principles and our Working Backwards model. They asked, as customers often do, about where we see the industry in the next 5 or 10 years. This is a hard question to answer, because about 90% of our product roadmap is driven by requests from our customers. I honestly don’t know where the future will take us, but I do know that it will help our customers to meet their goals and to deliver on their vision.

Magic Quadrant for Cloud AI Developer Services
It is always good to see that our hard work continues to delight our customers, and it is also good to be recognized by Gartner and other leading analysts. Today I am happy to share that AWS has secured the top-right corner of Gartner’s Magic Quadrant for Cloud AI Developer Services, earning highest placement for Ability to Execute and furthest to the right for Completeness of Vision:

You can read the full report to learn more (registration is required).

Keep the Cat Out
As a simple yet powerful example of the power of the AWS AI & ML services, check out Ben Hamm’s DeepLens-powered cat door:

AWS AI & ML Services
Building on top of the AWS compute, storage, networking, security, database, and analytics services, our lineup of AI and ML offerings is designed to serve newcomers, experts, and everyone in-between. Let’s take a look at a few of them:

Amazon SageMaker – Gives developers and data scientists the power to build, train, test, tune, deploy, and manage machine learning models. SageMaker provides a complete set of machine learning components designed to reduce effort, lower costs, and get models into production as quickly as possible:

Amazon Kendra – An accurate and easy-to-use enterprise search service that is powered by machine learning. Kendra makes content from multiple, disparate sources searchable with powerful natural language queries:

Amazon CodeGuru – This service provides automated code reviews and makes recommendations that can improve application performance by identifying the most expensive lines of code. It has been trained on hundreds of thousands of internal Amazon projects and on over 10,000 open source projects on GitHub.

Amazon Textract – This service extracts text and data from scanned documents, going beyond traditional OCR by identifying the contents of fields in forms and information stored in tables. Powered by machine learning, Textract can handle virtually any type of document without the need for manual effort or custom code:

Amazon Personalize – Based on the same technology that is used at Amazon.com, this service provides real-time personalization and recommendations. To learn more, read Amazon Personalize – Real-Time Personalization and Recommendation for Everyone.

Time to Learn
If you are ready to learn more about AI and ML, check out the AWS Ramp-Up Guide for Machine Learning:

You should also take a look at our Classroom Training in Machine Learning and our library of Digital Training in Machine Learning.

Jeff;

Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

Amazon FSx for Lustre Update: Persistent Storage for Long-Term, High-Performance Workloads

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/amazon-fsx-for-lustre-persistent-storage/

Last year I wrote about Amazon FSx for Lustre and told you how our customers can use it to create pebibyte-scale, highly parallel POSIX-compliant file systems that serve thousands of simultaneous clients driving millions of IOPS (Input/Output Operations per Second) with sub-millisecond latency.

As a managed service, Amazon FSx for Lustre makes it easy for you to launch and run the world’s most popular high-performance file system. Our customers use this service for workloads where speed matters, including machine learning, high performance computing (HPC), and financial modeling.

Today we are enhancing Amazon FSx for Lustre by giving you the ability to create high-performance file systems that are durable and highly available, with three performance tiers, and a new, second-generation scratch file system that is designed to provide better support for spiky workloads.

Recent Updates
Before I dive in to today’s news, let’s take a look at some of the most recent updates that we have made to the service:

Data Repository APIs – This update introduced a set of APIs that allow you to easily export files from FSx to S3, including the ability to initiate, monitor, and cancel the transfer of changed files to S3. To learn more, read New Enhancements for Moving Data Between Amazon FSx for Lustre and Amazon S3.

SageMaker Integration – This update gave you the ability to use data stored on an Amazon FSx for Lustre file system as training data for an Amazon SageMaker model. You can train your models using vast amounts of data without first moving it to S3.

ParallelCluster Integration – This update let you create an Amazon FSx for Lustre file system when you use AWS ParallelCluster to create an HPC cluster, with the option to use an existing file system as well.

EKS Integration – This update let you use the new AWS FSx Container Storage Interface (CSI) driver to access Amazon FSx for Lustre file systems from your Amazon EKS clusters.

Smaller File System Sizes – This update let you create 1.2 TiB and 2.4 TiB Lustre file systems, in addition to the original 3.6 TiB.

CloudFormation Support – This update let you use AWS CloudFormation templates to deploy stacks that use Amazon FSx for Lustre file systems. To learn more, check out AWS::FSx::FileSystem LustreConfiguration.

SOC Compliance – This update announced that Amazon FSx for Lustre can now be used with applications that are subject to Service Organization Control (SOC) compliance. To learn more about this and other compliance programs, take a look at AWS Services in Scope by Compliance Program.

Amazon Linux Support – This update allowed EC2 instances running Amazon Linux or Amazon Linux 2 to access Amazon FSx for Lustre file systems.

Client Repository – You can now make use of Lustre clients that are compatible with recent versions of Ubuntu, Red Hat Enterprise Linux, and CentOS. To learn more, read Installing the Lustre Client.

New Persistent & Scratch Deployment Options
We originally launched the service to target high-speed, short-term processing of data. As a result, until today FSx for Lustre provided only scratch file systems, which are ideal for temporary storage and shorter-term processing of data: data is not replicated and does not persist if a file server fails. We’re now expanding beyond short-term processing by launching persistent file systems, designed for longer-term storage and workloads, where data is replicated and file servers are replaced if they fail.

In addition to this new deployment option, we are also launching a second-generation scratch file system that is designed to provide better support for spiky workloads, with the ability to provide burst throughput up to 6x higher than the baseline. Like the first-generation scratch file system, this one is great for temporary storage and short-term data processing.

Here is a table that will help you to choose between the deployment options:

| | Persistent | Scratch 2 | Scratch 1 |
| API Name | PERSISTENT_1 | SCRATCH_2 | SCRATCH_1 |
| Storage Replication | Same AZ | None | None |
| Aggregated Throughput (per TiB of provisioned capacity) | 50 MB/s, 100 MB/s, or 200 MB/s | 200 MB/s, burst to 1,200 MB/s | 200 MB/s |
| IOPS | Millions | Millions | Millions |
| Latency | Sub-millisecond, higher variance | Sub-millisecond, very low variance | Sub-millisecond, very low variance |
| Expected Workload Lifetime | Days, weeks, months | Hours, days, weeks | Hours, days, weeks |
| Encryption at Rest | Customer-managed or FSx-managed keys | FSx-managed keys | FSx-managed keys |
| Encryption in Transit | Yes, when accessed from supported EC2 instances in supported regions | Yes, when accessed from supported EC2 instances in supported regions | No |
| Initial Storage Allocation | 1.2 TiB, 2.4 TiB, and increments of 2.4 TiB | 1.2 TiB, 2.4 TiB, and increments of 2.4 TiB | 1.2 TiB, 2.4 TiB, 3.6 TiB |
| Additional Storage Allocation | 2.4 TiB | 2.4 TiB | 3.6 TiB |

Creating a Persistent File System
I can create a file system that uses the persistent deployment option using the AWS Management Console, AWS Command Line Interface (CLI) (create-file-system), a CloudFormation template, or the FSx for Lustre APIs (CreateFileSystem). I’ll use the console:
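If you prefer the CLI instead, here is a hedged sketch of the same call; the subnet ID is a placeholder, and the per-unit storage throughput (50, 100, or 200 MB/s per TiB) selects the performance tier:

$ aws fsx create-file-system \
    --file-system-type LUSTRE \
    --storage-capacity 1200 \
    --subnet-ids subnet-0123456789abcdef0 \
    --lustre-configuration "DeploymentType=PERSISTENT_1,PerUnitStorageThroughput=50"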

Then I mount it like any other file system, and access it as usual.
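On a client instance with the Lustre client installed, the mount looks something like this sketch; the file system DNS name and mount name are placeholders that you would replace with the values shown in the console or returned by describe-file-systems:

$ sudo mount -t lustre -o noatime,flock \
    fs-0123456789abcdef0.fsx.us-east-1.amazonaws.com@tcp:/mountname /mnt/fsx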

Things to Know
Here are a couple of things to keep in mind:

Lustre Client – You will need to use an AMI (Amazon Machine Image) that includes the Lustre client. You can use the latest Amazon Linux AMI, or you can create your own.

S3 Export – Both options allow you to export changes to S3 using the CreateDataRepositoryTask function. This allows you to meet stringent Recovery Point Objectives (RPOs) while taking advantage of the fact that S3 is designed to deliver eleven 9’s of durability.
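As a sketch, an export task can be kicked off from the CLI with the equivalent create-data-repository-task command; the file system ID and path below are placeholders, and the report setting shown is just one possible choice:

$ aws fsx create-data-repository-task \
    --file-system-id fs-0123456789abcdef0 \
    --type EXPORT_TO_REPOSITORY \
    --paths path/to/export \
    --report Enabled=false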

Available Now
Persistent file systems are available in all AWS regions. Scratch 2 file systems are available in all commercial AWS regions with the exception of Europe (Stockholm).

Pricing is based on the performance tier that you choose and the amount of storage that you provision; see the Amazon FSx for Lustre Pricing page for more info.

Jeff;

Savings Plan Update: Save Up to 17% On Your Lambda Workloads

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/savings-plan-update-save-up-to-17-on-your-lambda-workloads/

Late last year I wrote about Savings Plans, and showed you how you could use them to save money when you make a one-year or three-year commitment to use a specified amount (measured in dollars per hour) of Amazon Elastic Compute Cloud (EC2) or AWS Fargate. Savings Plans give you the flexibility to change compute services, instance types, operating systems, and regions while accessing compute power at a lower price.

Now for Lambda
Today I am happy to be able to tell you that Compute Savings Plans now apply to the compute time consumed by your AWS Lambda functions, with savings of up to 17%. If you are already using one or more Savings Plans to save money on your server-based processing, you can enjoy the cost savings while modernizing your applications and taking advantage of a multitude of powerful Lambda features including a simple programming model, automatic function scaling, Step Functions, and more! If your use case includes a constant level of function invocation for microservices, you should be able to make great use of Compute Savings Plans.

AWS Cost Explorer will now take Lambda usage into account when it recommends a Savings Plan. I open AWS Cost Explorer, then click Recommendations within Savings Plans, then review the recommendations. As I am doing this, I can alter the term, payment option, and the time window that is used to make the recommendations:

When I am ready to proceed, I click Add selected Savings Plan(s) to cart, and then View cart to review my selections and submit my order:

The Savings Plan becomes active right away. I can use Cost Explorer’s Utilization and Coverage reports to verify that I am making good use of my plans. The Savings Plan Utilization report shows the percentage of savings plan commitment that is being used to realize savings on compute usage:

The Coverage report shows the percentage of Savings Plan commitment that is covered by Savings Plans for the selected time period:

When the coverage is less than 100% for an extended period of time, I should think about buying another plan.

Things to Know
Here are a couple of things to know:

Discount Order – If you are using two or more compute services, the plans are applied in order of highest to lowest discount percentage.

Applicability – The discount applies to duration charges (both on-demand duration and duration with provisioned concurrency) and to provisioned concurrency charges. It does not apply to Lambda request charges.

Available Now
If you already own a Savings Plan or two and are using Lambda, you will receive a discount automatically (unless you are at 100% utilization with EC2 and Fargate).

If you don’t own a plan and are using Lambda, buy a plan today!

Jeff;

 

New Desktop Client for AWS Client VPN

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-aws-vpn-client/

We launched AWS Client VPN last year so that you could use your OpenVPN-based clients to securely access your AWS and on-premises networks from anywhere (read Introducing AWS Client VPN to Securely Access AWS and On-Premises Resources to learn more). As a refresher, this is a fully-managed elastic VPN service that scales the number of connections up and down according to demand. It allows you to provide easy connectivity to your workforce and your business partners, along with the ability to monitor and manage all of the connections from one console. You can create Client VPN endpoints, associate them with the desired VPC subnets, and set up authorization rules to enable your users to access the desired cloud resources.
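As a quick refresher, here is a hedged sketch of setting up an endpoint from the CLI; the CIDR blocks, certificate ARNs, and resource IDs below are placeholders, and mutual (certificate-based) authentication is just one of the supported options:

$ aws ec2 create-client-vpn-endpoint \
    --client-cidr-block 10.10.0.0/16 \
    --server-certificate-arn arn:aws:acm:us-east-1:123456789012:certificate/11111111-2222-3333-4444-555555555555 \
    --authentication-options "Type=certificate-authentication,MutualAuthentication={ClientRootCertificateChainArn=arn:aws:acm:us-east-1:123456789012:certificate/66666666-7777-8888-9999-000000000000}" \
    --connection-log-options Enabled=false

$ aws ec2 associate-client-vpn-target-network \
    --client-vpn-endpoint-id cvpn-endpoint-0123456789abcdef0 \
    --subnet-id subnet-0123456789abcdef0

$ aws ec2 authorize-client-vpn-ingress \
    --client-vpn-endpoint-id cvpn-endpoint-0123456789abcdef0 \
    --target-network-cidr 10.0.0.0/16 \
    --authorize-all-groups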

 

New Desktop Client for AWS Client VPN
Today we are making it even easier for you to connect your Windows and macOS clients to AWS, with the launch of a desktop client built by AWS. These applications can be installed on your desktop or laptop, and support mutual authentication, username/password via Active Directory, and the use of Multi-Factor Authentication (MFA). After you use the client to establish a VPN connection, the desktop or laptop is effectively part of the configured VPC, and can access resources as allowed by the authorization rules.

The client applications are available at no charge, and can be used to establish connections to any AWS region where you have an AWS Client VPN endpoint. You can currently create these endpoints in the US East (N. Virginia), US East (Ohio), US West (N. California), US West (Oregon), Canada (Central), Europe (Ireland), Europe (London), Europe (Frankfurt), Europe (Stockholm), Asia Pacific (Mumbai), Asia Pacific (Singapore), Asia Pacific (Seoul), Asia Pacific (Sydney), and Asia Pacific (Tokyo) Regions.

Jeff;

AWS DataSync Update – Support for Amazon FSx for Windows File Server

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-datasync-update-support-for-amazon-fsx-for-windows-file-server/

AWS DataSync helps you to move large amounts of data into and out of the AWS Cloud. As I noted in New – AWS DataSync – Automated and Accelerated Data Transfer, our customers use DataSync for their large-scale migration, upload & process, archiving, and backup/DR use cases.

Amazon FSx for Windows File Server gives you network file storage that is fully compatible with your existing Windows applications and environments (read New – Amazon FSx for Windows File Server – Fast, Fully Managed, and Secure to learn more). It includes a very wide variety of enterprise-ready features including native multi-AZ file systems, support for SQL Server, data deduplication, quotas, and the ability to force the use of in-transit encryption. Our customers use Amazon FSx for Windows File Server to lift-and-shift their Windows workloads to the cloud, where they can benefit from consistent sub-millisecond performance and high throughput.

Inside AWS DataSync
The DataSync agent is deployed as a VM within your existing on-premises or cloud-based environment so that it can access your NAS or file system via NFS or SMB. The agent uses a robust, highly-optimized data transfer protocol to move data back and forth at up to 10 times the speed of open source data transfer solutions.

DataSync can be used for a one-time migration-style transfer, or it can be invoked on a periodic, incremental basis for upload & process, archiving, and backup/DR purposes. Our customers use DataSync for transfer operations that encompass hundreds of terabytes of data and millions of files.

Since the launch of DataSync in November 2018, we have made several important updates and changes to DataSync including:

68% Price Reduction – We reduced the data transfer charge to $0.0125 per gigabyte.

Task Scheduling – We gave you the ability to schedule data transfer tasks using the AWS Management Console or the AWS Command Line Interface (CLI), with hourly, daily, and weekly options:

Additional Region Support – We recently made DataSync available in the Europe (Stockholm), South America (São Paulo), Asia Pacific (Hong Kong), Asia Pacific (Mumbai), and AWS GovCloud (US-East) Regions, bringing the total list of supported regions to 20.

EFS-to-EFS Transfer – We added support for file transfer between a pair of Amazon Elastic File System (EFS) file systems.

Filtering for Data Transfers – We gave you the ability to use file path and object key filters to control the data transfer operation:

SMB File Share Support – We added support for file transfer between a pair of SMB file shares.

S3 Storage Class Support – We gave you the ability to choose the S3 Storage Class when transferring data to an S3 bucket.

FSx for Windows Support
Today I am happy to announce that we are giving you the ability to use DataSync to transfer data to and from Amazon FSx for Windows File Server file systems. You can configure these file systems as DataSync Locations and then reference them in your DataSync Tasks.

After I choose the desired FSx for Windows file system, I supply a user name and password, and enter the name of the Windows domain for authentication:

Then I create a task that uses one of my existing SMB shares as a source, and the FSx for Windows file system as a destination. I give my task a name (MyTask), and configure any desired options:

I can set up filtering and use a schedule:

I have many scheduling options; here are just a few:

If I don’t use a schedule, I can simply click Start to run my task on an as-needed basis:

When I do this, I have the opportunity to review and refine the settings for the task:

The task starts within seconds, and I can watch the data transfer and throughput metrics in the console:

In addition to the console-based access that I just showed you, you can also use the DataSync API and the DataSync CLI to create tasks (CreateTask), start them (StartTaskExecution), check on task status (DescribeTaskExecution) and much more.
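Here is a hedged sketch of what that might look like with the CLI for an FSx for Windows destination; the ARNs, user, domain, and password shown are placeholders:

$ aws datasync create-location-fsx-windows \
    --fsx-filesystem-arn arn:aws:fsx:us-east-1:123456789012:file-system/fs-0123456789abcdef0 \
    --security-group-arns arn:aws:ec2:us-east-1:123456789012:security-group/sg-0123456789abcdef0 \
    --user Admin --domain example.com --password 'MyPassword'

$ aws datasync create-task \
    --name MyTask \
    --source-location-arn arn:aws:datasync:us-east-1:123456789012:location/loc-sourceexample0001 \
    --destination-location-arn arn:aws:datasync:us-east-1:123456789012:location/loc-destexample0002

$ aws datasync start-task-execution \
    --task-arn arn:aws:datasync:us-east-1:123456789012:task/task-0123456789abcdef0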

Available Now
This important new feature is available now and you can start using it today!

Jeff;

New – T3 Instances on Dedicated Single-Tenant Hardware

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-t3-instances-on-dedicated-single-tenant-hardware/

T3 instances use a burst pricing model that allows you to host general purpose workloads at low cost, with access to sustainable, full-core performance when needed. You can choose from seven different sizes and receive an assured baseline amount of processing power, courtesy of custom high frequency Intel® Xeon® Scalable Processors.

Our customers use them to host many different types of production and development workloads including microservices, small and medium databases, and virtual desktops. Some of our customers launch large fleets of T3 instances and use them to test applications in a wide range of conditions, environments, and configurations.

We launched the first EC2 Dedicated Instances way back in 2011. Dedicated Instances run on single-tenant hardware, providing physical isolation from instances that belong to other AWS accounts. Our customers use Dedicated Instances to further their compliance goals (PCI, SOX, FISMA, and so forth), and also use them to run software that is subject to license or tenancy restrictions.

Dedicated T3
Today I am pleased to announce that we are now making all seven sizes (t3.nano through t3.2xlarge) of T3 instances available in dedicated form, in 14 regions. You can now save money by using T3 instances to run workloads that require the use of dedicated hardware, while benefiting from access to the AVX-512 instructions and other advanced features of the latest generation of Intel® Xeon® Scalable Processors.

Just like the existing T3 instances, the dedicated T3 instances are powered by the Nitro system, and launch with Unlimited bursting enabled. They use ENA networking and offer up to 5 Gbps of network bandwidth.

You can launch dedicated T3 instances using the EC2 API or the AWS Management Console:

The AWS Command Line Interface (CLI):

$ aws ec2 run-instances --placement Tenancy=dedicated ...

or via a CloudFormation template (set tenancy to dedicated in your Launch Template).

Now Available
Dedicated T3 instances are available in the US East (N. Virginia), US East (Ohio), US West (N. California), South America (São Paulo), Canada (Central), US West (Oregon), Europe (Ireland), Europe (Frankfurt), Europe (London), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Asia Pacific (Mumbai), and Asia Pacific (Seoul) Regions.

You can purchase the instances in On-Demand or Reserved Instance form. There is an additional fee of $2 per hour when at least one Dedicated Instance of any type is running in a region, and $0.05 per hour when you burst above the baseline performance for an extended period of time.

Jeff;

CloudEndure Highly Automated Disaster Recovery – 80% Price Reduction

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/cloudendure-highly-automated-disaster-recovery-80-price-reduction/

AWS acquired CloudEndure last year. After the acquisition we began working with our new colleagues to integrate their products into the AWS product portfolio.

CloudEndure Disaster Recovery is designed to help you minimize downtime and data loss. It continuously replicates the contents of your on-premises, virtual, or cloud-based systems to a low-cost staging area in the AWS region of your choice, within the confines of your AWS account:

The block-level replication encompasses essentially every aspect of the protected system including the operating system, configuration files, databases, applications, and data files. CloudEndure Disaster Recovery can replicate any database or application that runs on supported versions of Linux or Windows, and is commonly used with Oracle and SQL Server, as well as enterprise applications such as SAP. If you do an AWS-to-AWS replication, the AWS environment within a specified VPC is replicated; this includes the VPC itself, subnets, security groups, routes, ACLs, Internet Gateways, and other items.

Here are some of the most popular and interesting use cases for CloudEndure Disaster Recovery:

On-Premises to Cloud Disaster Recovery – This model moves your secondary data center to the AWS Cloud without downtime or performance impact. You can improve your reliability, availability, and security without having to invest in duplicate hardware, networking, or software.

Cross-Region Disaster Recovery – If your application is already on AWS, you can add an additional layer of cost-effective protection and improve your business continuity by setting up cross-region disaster recovery. You can set up continuous replication between regions or Availability Zones and meet stringent RPO (Recovery Point Objective) or RTO (Recovery Time Objective) requirements.

Cross-Cloud Disaster Recovery – If you run workloads on other clouds, you can increase your overall resilience and meet compliance requirements by using AWS as your DR site. CloudEndure Disaster Recovery will replicate and recover your workloads, including automatic conversion of your source machines so that they boot and run natively on AWS.

80% Price Reduction
Recovery is quick and robust, yet cost-effective. In fact, we are reducing the price for CloudEndure Disaster Recovery by about 80% today, making it more cost-effective than ever: $0.028 per hour, or about $20 per month per server.

If you have tried to implement a DR solution in the traditional way, you know that it requires a costly set of duplicate IT resources (storage, compute, and networking) and software licenses. By replicating your workloads into a low-cost staging area in your preferred AWS Region, CloudEndure Disaster Recovery reduces compute costs by 95% and eliminates the need to pay for duplicate OS and third-party application licenses.

To learn more, watch the Disaster Recovery to AWS Demo Video:

After that, be sure to visit the new CloudEndure Disaster Recovery page!

Jeff;

Urgent & Important – Rotate Your Amazon RDS, Aurora, and DocumentDB Certificates

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/urgent-important-rotate-your-amazon-rds-aurora-and-documentdb-certificates/

You may have already received an email or seen a console notification, but I don’t want you to be taken by surprise!

Rotate Now
If you are using Amazon Aurora, Amazon Relational Database Service (RDS), or Amazon DocumentDB and are taking advantage of SSL/TLS certificate validation when you connect to your database instances, you need to download & install a fresh certificate, rotate the certificate authority (CA) for the instances, and then reboot the instances.

If you are not using SSL/TLS connections or certificate validation, you do not need to make any updates, but I recommend that you do so in order to be ready in case you decide to use SSL/TLS connections in the future. In this case, you can use a new CLI option that rotates and stages the new certificates but avoids a restart.

The new certificate (CA-2019) is available as part of a certificate bundle that also includes the old certificate (CA-2015) so that you can make a smooth transition without getting into a chicken and egg situation.

What’s Happening?
The SSL/TLS certificates for RDS, Aurora, and DocumentDB expire and are replaced every five years as part of our standard maintenance and security discipline. Here are some important dates to know:

September 19, 2019 – The CA-2019 certificates were made available.

January 14, 2020 – Instances created on or after this date will have the new (CA-2019) certificates. You can temporarily revert to the old certificates if necessary.

February 5 to March 5, 2020 – RDS will stage (install but not activate) new certificates on existing instances. Restarting the instance will activate the certificate.

March 5, 2020 – The CA-2015 certificates will expire. Applications that use certificate validation but have not been updated will lose connectivity.

How to Rotate
Earlier this month I created an Amazon RDS for MySQL database instance and set it aside in preparation for this blog post. As you can see from the screen shot above, the RDS console lets me know that I need to perform a Certificate update.

I visit Using SSL/TLS to Encrypt a Connection to a DB Instance and download a new certificate. If my database client knows how to handle certificate chains, I can download the root certificate and use it for all regions. If not, I download a certificate that is specific to the region where my database instance resides. I decide to download a bundle that contains the old and new root certificates:
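For example, the combined bundle can be fetched from the command line like this (the URL shown is the standard rds-downloads location for the combined CA bundle; region-specific bundles have their own URLs):

$ curl -O https://s3.amazonaws.com/rds-downloads/rds-combined-ca-bundle.pem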

Next, I update my client applications to use the new certificates. This process is specific to each app and each database client library, so I don’t have any details to share.

Once the client application has been updated, I change the certificate authority (CA) to rds-ca-2019. I can Modify the instance in the console, and select the new CA:

I can also do this via the CLI:

$ aws rds modify-db-instance --db-instance-identifier database-1 \
  --ca-certificate-identifier rds-ca-2019

The change will take effect during the next maintenance window. I can also apply it immediately:

$ aws rds modify-db-instance --db-instance-identifier database-1 \
  --ca-certificate-identifier rds-ca-2019 --apply-immediately

After my instance has been rebooted (either immediately or during the maintenance window), I test my application to ensure that it continues to work as expected.

If I am not using SSL and want to avoid a restart, I use --no-certificate-rotation-restart:

$ aws rds modify-db-instance --db-instance-identifier database-1 \
  --ca-certificate-identifier rds-ca-2019 --no-certificate-rotation-restart

The database engine will pick up the new certificate during the next planned or unplanned restart.

I can also use the RDS ModifyDBInstance API function or a CloudFormation template to change the certificate authority.
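To see which instances are still using the old certificate authority, I can run a quick check (a CLI sketch; the query expression simply lists each instance alongside its current CA identifier):

$ aws rds describe-db-instances \
    --query 'DBInstances[*].[DBInstanceIdentifier,CACertificateIdentifier]' \
    --output table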

Once again, all of this must be completed by March 5, 2020 or your applications may be unable to connect to your database instance using SSL or TLS.

Things to Know
Here are a couple of important things to know:

Amazon Aurora Serverless – AWS Certificate Manager (ACM) is used to manage certificate rotations for this database engine, and no action is necessary.

Regions – Rotation is needed for database instances in all commercial AWS regions except Asia Pacific (Hong Kong), Middle East (Bahrain), and China (Ningxia).

Cluster Scaling – If you add more nodes to an existing cluster, the new nodes will receive the CA-2019 certificate if one or more of the existing nodes already have it. Otherwise, the CA-2015 certificate will be used.

Learning More
Here are some links to additional information:

Jeff;

 

Amazon at CES 2020 – Connectivity & Mobility

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/amazon-at-ces-2020-connectivity-mobility/

The Consumer Electronics Show (CES) starts tomorrow. Attendees will have the opportunity to learn about the latest and greatest developments in many areas including 5G, IoT, Advertising, Automotive, Blockchain, Health & Wellness, Home & Family, Immersive Entertainment, Product Design & Manufacturing, Robotics & Machine Intelligence, and Sports.

Amazon at CES
If you will be traveling to Las Vegas to attend CES, I would like to invite you to visit the Amazon Automotive exhibit in the Las Vegas Convention Center. Come to booth 5616 to learn about our work to help auto manufacturers and developers create the next generation of software-defined vehicles:

As you might know, this industry is working to reinvent itself, with manufacturers expanding from designing & building vehicles to a more expansive vision that encompasses multiple forms of mobility.

At the booth, you will find multiple demos that are designed to show you what is possible when you mashup vehicles, connectivity, software, apps, sensors, and machine learning in new ways.

Cadillac Customer Journey – This is an interactive, immersive demo of a data-driven shopping experience to engage customers at every touchpoint. Powered by ZeroLight and running on AWS, the demo uses 3D imagery that is generated in real time on GPU-equipped EC2 instances.

Future Mobility – This demo uses the Alexa Auto SDK and several AWS Machine Learning services to create an interactive in-vehicle assistant. It stores driver profiles in the cloud, and uses Amazon Rekognition to load the proper profile for the driver. Machine learning is used to detect repeated behaviors, such as finding the nearest coffee shop each morning.

Rivian Alexa – This full-vehicle demo showcases the deep Alexa Auto SDK integration that Rivian is using to control core vehicle functions on their upcoming R1T Electric Truck.

Smart Home / Garage – This demo ensemble showcases several of the Alexa home-to-car and car-to-home integrations, and features multiple Amazon & Alexa offerings including Amazon Pay, Fire TV, and Ring.

Karma Automotive / Blackberry QNX – Built on AWS IoT and machine learning inference models developed using Amazon SageMaker, this demo includes two use cases. The first one shows how data from Karma‘s fleet of electric vehicles is used to predict the battery state of health. The second one shows how cloud-trained models run at the edge (in the vehicle) to detect gestures that control vehicle functions.

Accenture Personalized Connected Vehicle Adventure – This demo shows how identity and personalization can be used to create unique transportation experiences. The journeys are customized using learned preferences and contextual data gathered in real time, powered by Amazon Personalize.

Accenture Data Monetization – This demo tackles data monetization while preserving customer privacy. Built around a data management reference architecture that uses Amazon QLDB and AWS Data Exchange, the demo enables consent and value exchange, with a focus on insights, predictions, and recommendations.

Denso Connected Vehicle Reference System – CVRS is an intelligent, end-to-end mobility service built on the AWS Connected Vehicle Solution. It uses a layered architecture that combines edge and cloud components, to allow mobility service providers to build innovative products without starting from scratch.

WeRide – This company runs a fleet of autonomous vehicles in China. The ML training to support the autonomy runs on AWS, as does the overall fleet management system. The demo shows how the AWS cloud supports their connected & autonomous fleet.

Dell EMC / National Instruments – This jointly developed demo focuses on the Hardware-in-Loop phase of autonomous vehicle development, where actual vehicle hardware running in real-world conditions is used.

Unity – This demo showcases a Software-in-Loop autonomous vehicle simulation built with Unity. An accurate, photorealistic representation of Berlin, Germany is used, with the ability to dynamically vary parameters such as time, weather, and scenery. Using the Unity Simulation framework and AWS, 100 permutations of each scene are generated and used as training data in parallel.

Get in Touch
If you are interested in learning more about any of these demos or if you are ready to build a connected or autonomous vehicle solution of your own, please feel free to contact us.

Jeff;

Celebrating AWS Community Leaders at re:Invent 2019

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/celebrating-aws-community-leaders-at-reinvent-2019/

Even though cloud computing is a global phenomenon, location still matters when it comes to community. For example, customers regularly tell me that they are impressed by the scale, enthusiasm, and geographic reach of the AWS User Group Community. We are deeply appreciative of the work that our user group and community leaders do.

Each year, leaders of local communities travel to re:Invent in order to attend a series of events designed to meet their unique needs. They attend an orientation session, learn about We Power Tech (“Building a future of tech that is diverse, inclusive and accessible”), watch the keynotes, and participate in training sessions as part of a half-day AWS Community Leader workshop. After re:Invent wraps up, they return to their homes and use their new knowledge and skills to do an even better job of creating and sharing technical content and of nurturing their communities.

Community Leadership Grants
In order to make it possible for more community leaders to attend and benefit from re:Invent, we launched a grant program in 2018. The grants covered registration, housing, and flights and were awarded to technologists from emerging markets and underrepresented communities.

Several of the recipients went on to become AWS Heroes, and we decided to expand the program for 2019. We chose 17 recipients from 14 countries across 5 continents, with an eye toward recognizing those who are working to build inclusive AWS communities. Additionally, We Power Tech launched a separate Grant Program with Project Alloy to help support underrepresented technologists in the first five years of their careers to attend re:Invent by covering conference registration, hotel, and airfare. In total, there were 102 grantees from 16 countries.

The following attendees received Community Leadership Grants and were able to attend re:Invent:

Ahmed Samir – Riyadh, KSA (LinkedIn, Twitter) – Ahmed is a co-organizer of the AWS Riyadh User Group. He is well known for his social media accounts in which he translates all AWS announcements to Arabic.

Veronique Robitaille – Valencia, Spain (LinkedIn, Twitter) – Veronique is an SA-certified cloud consultant in Valencia, Spain. She is the co-organizer of the AWS User Group in Valencia, and also translates AWS content into Spanish.

Dzenana Dzevlan – Mostar, Bosnia (LinkedIn) – Dzenana is an electrical engineering masters student at the University of Sarajevo, and a co-organizer of the AWS User Group in Bosnia-Herzegovina.

Magdalena Zawada – Warsaw, Poland (LinkedIn) – Magdalena is a cloud consultant and co-organizer of the AWS User Group Poland.

Hiromi Ito – Osaka, Japan (Twitter) – Hiromi runs IT communities for women in Japan and elsewhere in Asia, and also contributes to JAWS-UG in Kansai. She is the founder of the Asian Woman’s Association Meetup in Singapore.

Lena Taupier – Columbus, Ohio, USA (LinkedIn) – Lena co-organizes the Columbus AWS Meetup, was on the organizing team for the 2018 and 2019 Midwest / Chicago AWS Community Days, and delivered a lightning talk on “Building Diverse User Communities” at re:Invent.

Victor Perez – Panama City, Panama (LinkedIn) – Victor founded the AWS Panama User Group after deciding that he wanted to make AWS Cloud the new normal for the country. He also created the AWS User Group Caracas.

Hiro Nishimura – New York, USA (LinkedIn, Twitter) – Hiro is an educator at heart. She founded AWS Newbies to teach beginners about AWS, and worked with LinkedIn to create video courses to introduce cloud computing to non-engineers.

Sridevi Murugayen –  Chennai, India (LinkedIn) – Sridevi is a core leader of AWS Community Day Chennai. She managed a diversity session at the Community Day, and is a regular presenter and participant in the AWS Chennai User Group.

Sukanya Mandal – Mumbai, India (LinkedIn) – Sukanya leads the PyData community in Mumbai, and also contributes to the AWS User Group there. She talked about “ML for IoT at the Edge” at the AWS Developer Lounge in the re:Invent 2019 Expo Hall.

Seohyun Yoon – Seoul, Korea (LinkedIn) – Seohyun is a founding member of the student division of the AWS Korea Usergroup (AUSG), one of the youngest active AWS advocates in Korea, and served as a judge for the re:Invent 2019 Non-Profit Hackathon for Good. Check out her hands-on AWS lab guides!

Farah Clara Shinta Rachmady – Jakarta, Indonesia (LinkedIn, Twitter) – Farah nurtures AWS Indonesia and other technical communities in Indonesia, and also organizes large-scale events & community days.

Sandy Rodríguez – Mexico City, Mexico (LinkedIn) – Sandy co-organized the AWS Mexico City User Group and focuses on making events great for attendees. She delivered a 20-minute session in the AWS Village Theater at re:Invent 2019. Her work is critical to the growth of the AWS community in Mexico.

Vanessa Alves dos Santos – São Paulo, Brazil (LinkedIn) – Vanessa is a powerful AWS advocate within her community. She helped to plan AWS Community Days Brazil and the AWS User Group in São Paulo.

The following attendees were chosen for grants, but were not able to attend due to issues with travel visas:

Ayeni Oluwakemi – Lagos, Nigeria (LinkedIn, Twitter) – Ayeni is the founder of the AWS User Group in Lagos, Nigeria. She is the organizer of AWSome Day in Nigeria, and writes for the Cloud Guru Blog.

Ewere Diagboya – Lagos, Nigeria (LinkedIn, Twitter) – Ewere is one of our most active advocates in Nigeria. He is very active in the DevOps and Cloud Computing community as educator, and also organizes the DevOps Nigeria Meetup.

Minh Ha – Hanoi, Vietnam – Minh grows the AWS User Group Vietnam by organizing in-person meetups and online events. She co-organized AWS Community Day 2018, runs hackathons, and co-organized SheCodes Vietnam.

Jeff;

 

AWS Links & Updates – Monday, December 9, 2019

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-links-updates-monday-december-9-2019/

With re:Invent 2019 behind me, I have a fairly light blogging load for the rest of the month. I do, however, have a collection of late-breaking news and links that I want to share while they are still hot out of the oven!

AWS Online Tech Talks for December – We have 18 tech talks scheduled for the remainder of the month. You can learn about Running Kubernetes on AWS Fargate, What’s New with AWS IoT, Transforming Healthcare with AI, and much more!

AWS Outposts: Ordering and Installation Overview – This video walks you through the process of ordering and installing an Outposts rack. You will learn about the physical, electrical, and network requirements, and you will get to see an actual install first-hand.

NFL Digital Athlete – We have partnered with the NFL to use data and analytics to co-develop the Digital Athlete, a platform that aims to improve player safety & treatment, and to predict & prevent injury. Watch the video in this tweet to learn more:

AWS JPL Open Source Rover Challenge – Build and train a reinforcement learning (RL) model on AWS to autonomously drive JPL’s Open-Source Rover between given locations in a simulated Mars environment with the least amount of energy consumption and risk of damage. To learn more, visit the web site or watch the Launchpad Video.

Map for Machine Learning on AWS – My colleague Julien Simon created an awesome map that categorizes all of the ML and AI services. The map covers applied ML, SageMaker’s built-in environments, ML for text, ML for any data, ML for speech, ML for images & video, fraud detection, personalization & recommendation, and time series. The linked article contains a scaled-down version of the image; the original version is best!

Verified Author Badges for Serverless App Repository – The authors of applications in the Serverless Application Repository can now apply for a Verified Author badge that will appear next to the author’s name on the application card and the detail page.

Cloud Innovation Centers – We announced that we will open three more Cloud Innovation Centers in 2020 (one in Australia and two in Bahrain), bringing the global total to eleven.

Machine Learning Embark – This new program is designed to help companies transform their development teams into machine learning practitioners. It is based on our own internal experience, and will help to address and overcome common challenges in the machine learning journey. Read the blog post to learn more.

Enjoy!

Jeff;

Check out The Amazon Builders’ Library – This is How We Do It!

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/check-out-the-amazon-builders-library-this-is-how-we-do-it/

Amazon customers often tell us that they want to know more about how we build and run our business. On the retail side, they tour Amazon Fulfillment Centers and see how we organize our warehouses. Corporate customers often ask about our Leadership Principles, and sometimes adopt (and then adapt) them for their own use. I regularly speak with customers in our Executive Briefing Center (EBC), and talk to them about working backwards, PRFAQs, narratives, bar-raising, accepting failure as part of long-term success, and our culture of innovation.

The same curiosity that surrounds our business surrounds our development culture. We are often asked how we design, build, measure, run, and scale the hardware and software systems that underlie Amazon.com, AWS, and our other businesses.

New Builders’ Library
Today I am happy to announce The Amazon Builders’ Library. We are launching with a collection of detailed articles that will tell you exactly how we build and run our systems, each one written by the senior technical leaders who have deep expertise in that part of our business.

This library is designed to give you direct access to the theory and the practices that underlie our work. Students, developers, dev managers, architects, and CTOs will all find this content to be helpful. This is the content that is “not sold in stores” and not taught in school!

The library is organized by category:

Architecture – The design decisions that we make when designing a cloud service that help us to optimize for security, durability, high availability, and performance.

Software Delivery & Operations – The process of releasing new software to the cloud and maintaining health & high availability thereafter.

Inside the Library
I took a quick look at two of the articles while writing this post, and learned a lot!

Avoiding insurmountable queue backlogs – Principal Engineer David Yanacek explores the ins and outs of message queues, examining the benefits and the risks, including many of the failure modes that can arise. He talks about how queues are used to power AWS Lambda and AWS IoT Core, and describes the sophisticated strategies that are used to maintain responsiveness and to implement (in his words) “magical resource isolation.” David shares multiple patterns that are used to create asynchronous multitenant systems that are resilient, including use of multiple queues, shuffle sharding, delay queues, back-pressure, and more.

Challenges with distributed systems – Senior Principal Engineer Jacob Gabrielson discusses the many ways that distributed systems can fail. After defining three distinct types of systems (offline, soft real-time, and hard real-time), he uses an analogy with Bizarro to explain why hard real-time systems are (again, in his words) “frankly, a bit on the evil side.” Building on an example based on Pac-Man, he adds some request/reply communication and enumerates all of the ways that it can succeed or fail. He discusses fate sharing and how it can be used to reduce the number of test cases, and also talks about many of the other difficulties that come with testing distributed systems.

These are just two of the articles; be sure to check out the entire collection.

More to Come
We’ve got a lot more content in the pipeline, and we are also interested in your stories. Please feel free to leave feedback on this post, and we’ll be in touch.

Jeff;

 

AWS Launches & Previews at re:Invent 2019 – Wednesday, December 4th

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-launches-previews-at-reinvent-2019-wednesday-december-4th/

Here’s what we announced today:

Amplify DataStore – This is a persistent, on-device storage repository that will help you to synchronize data across devices and to handle offline operations. It can be used as a standalone local datastore for web and mobile applications that have no connection to the cloud or an AWS account. When used with a cloud backend, it transparently synchronizes data with AWS AppSync.

Amplify iOS and Amplify Android – These open source libraries enable you to build scalable and secure mobile applications. You can easily add analytics, AI/ML, API (GraphQL and REST), datastore, and storage functionality to your mobile and web applications. The use case-centric libraries provide a declarative interface that enables you to programmatically apply best practices with abstractions. The libraries, along with the Amplify CLI, a toolchain to create, integrate, and manage the cloud services used by your applications, are part of the Amplify Framework.

Amazon Neptune Workbench – You can now query your graphs from within the Neptune Console using either Gremlin or SPARQL queries. You get a fully managed, interactive development environment that supports live code and narrative text within Jupyter notebooks. In addition to queries, the notebooks support bulk loading, query planning, and query profiling. To get started, visit the Neptune Console.
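
The Workbench notebooks come with cell magics for running queries; as a rough equivalent in plain Python, here is a hedged sketch that uses the open source gremlinpython driver directly (the cluster endpoint is a placeholder, and the connection details are assumptions rather than Workbench specifics):

from gremlin_python.process.anonymous_traversal import traversal
from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection

# Placeholder endpoint; replace with your own Neptune cluster endpoint.
endpoint = "wss://my-neptune-cluster.cluster-abc123.us-east-1.neptune.amazonaws.com:8182/gremlin"

connection = DriverRemoteConnection(endpoint, "g")
g = traversal().withRemote(connection)

# Return up to five vertices labeled 'person', including their properties.
people = g.V().hasLabel("person").limit(5).valueMap(True).toList()
print(people)

connection.close()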

Amazon Chime Meetings App for Slack – This new app allows Slack users to start and join Amazon Chime online meetings from their Slack workspace channels and conversations. Slack users that are new to Amazon Chime will be auto-registered with Chime when they use the app for the first time, and can get access to all of the benefits of Amazon Chime meetings from their Slack workspace. Administrators of Slack workspaces can install the Amazon Chime Meetings App for Slack from the Slack App Directory. To learn more, visit this blog post.

HTTP APIs for Amazon API Gateway in Preview – This is a new API Gateway feature that will let you build cost-effective, high-performance RESTful APIs for serverless workloads using Lambda functions and other services with an HTTP endpoint. HTTP APIs are optimized for performance—they offer the core functionality of API Gateway at a cost savings of up to 70% compared to REST APIs in API Gateway. You will be able to create routes that map to multiple disparate backends, define & apply authentication and authorization to routes, set up rate limiting, and use custom domains to route requests to the APIs. Visit this blog post to get started.
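
As a quick, hedged sketch of the preview (the Lambda function ARN, names, and region are placeholders), the quick-create path builds an HTTP API that fronts a single Lambda function with a default route:

import boto3

apigw = boto3.client("apigatewayv2", region_name="us-east-1")

# Placeholder ARN; replace with your own Lambda function.
lambda_arn = "arn:aws:lambda:us-east-1:123456789012:function:my-function"

api = apigw.create_api(
    Name="my-http-api",
    ProtocolType="HTTP",
    Target=lambda_arn,  # quick create: a default route and integration are set up for you
)

print(api["ApiEndpoint"])

You would also need to grant API Gateway permission to invoke the function (for example, with the Lambda AddPermission API) before the endpoint will return results.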

Windows gMSA Support in ECS – Amazon Elastic Container Service (ECS) now supports Windows group Managed Service Accounts (gMSA), a new capability that allows your ECS-powered Windows containers to authenticate and be authorized against network resources using Active Directory (AD). You can now easily use Integrated Windows Authentication with your Windows containers on ECS to secure services.

Jeff;

 

AWS Launches & Previews at re:Invent 2019 – Tuesday, December 3rd

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-launches-previews-at-reinvent-2019-tuesday-december-3rd/

Whew, what a day. This post contains a summary of the announcements that we made today.

Launch Blog Posts
Here are detailed blog posts for the launches:

Other Launches
Here’s an overview of some launches that did not get a blog post. I’ve linked to the What’s New or product information pages instead:

EBS-Optimized Bandwidth Increase – Thanks to improvements to the Nitro system, all newly launched C5/C5d/C5n/C5dn, M5/M5d/M5n/M5dn, R5/R5d/R5n/R5dn, and P3dn instances will support 36% higher EBS-optimized instance bandwidth, up to 19 Gbps. In addition, newly launched High Memory instances (6, 9, and 12 TB) will also support 19 Gbps of EBS-optimized instance bandwidth, a 36% increase from 14 Gbps. For details on each size, read more about Amazon EBS-Optimized Instances.

EC2 Capacity Providers – You will have additional control over how your applications use compute capacity within EC2 Auto Scaling Groups and when using AWS Fargate. You get an abstraction layer that lets you make late-binding decisions on capacity, including the ability to choose how much Spot capacity you would like to use. Read the What’s New to learn more.
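
As a rough, hedged sketch of what that can look like programmatically (the Auto Scaling group ARN, cluster, and names are placeholders, and the parameters shown are assumptions for the example):

import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

# Placeholder ARN; replace with your own Auto Scaling group.
asg_arn = (
    "arn:aws:autoscaling:us-east-1:123456789012:autoScalingGroup:1234abcd:"
    "autoScalingGroupName/my-asg"
)

ecs.create_capacity_provider(
    name="my-capacity-provider",
    autoScalingGroupProvider={
        "autoScalingGroupArn": asg_arn,
        "managedScaling": {"status": "ENABLED", "targetCapacity": 75},
        "managedTerminationProtection": "DISABLED",
    },
)

# Associate the provider with a cluster and make it the default strategy.
ecs.put_cluster_capacity_providers(
    cluster="my-cluster",
    capacityProviders=["my-capacity-provider"],
    defaultCapacityProviderStrategy=[
        {"capacityProvider": "my-capacity-provider", "weight": 1}
    ],
)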

Previews
Here’s an overview of the previews that we revealed today, along with links that will let you sign up and/or learn more (most of these were in Andy’s keynote):

AWS Wavelength – AWS infrastructure deployments that embed AWS compute and storage services within telecommunications providers’ datacenters at the edge of the 5G network, giving developers the ability to build applications that serve end users with single-digit millisecond latencies. You will be able to extend your existing VPC to a Wavelength Zone and then make use of EC2, EBS, ECS, EKS, IAM, CloudFormation, Auto Scaling, and other services. This low-latency access to AWS will enable the next generation of mobile gaming, AR/VR, security, and video processing applications. To learn more, visit the AWS Wavelength page.

Amazon Managed Apache Cassandra Service (MCS) – This is a scalable, highly available, and managed Apache Cassandra-compatible database service. Amazon Managed Cassandra Service is serverless, so you pay for only the resources you use and the service automatically scales tables up and down in response to application traffic. You can build applications that serve thousands of requests per second with virtually unlimited throughput and storage. To learn more, read New – Amazon Managed Apache Cassandra Service (MCS).
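
Because the service speaks the Cassandra protocol, existing drivers should work with little or no change. Here is a minimal, heavily hedged sketch using the open source Python cassandra-driver; the endpoint format, TLS port, certificate file, and service-specific credentials are all assumptions for the example.

import ssl

from cassandra.auth import PlainTextAuthProvider
from cassandra.cluster import Cluster

# Assumed regional endpoint, TLS port, CA bundle, and service-specific credentials.
ssl_context = ssl.SSLContext(ssl.PROTOCOL_TLSv1_2)
ssl_context.load_verify_locations("AmazonRootCA1.pem")
ssl_context.verify_mode = ssl.CERT_REQUIRED

auth = PlainTextAuthProvider(username="my-service-user", password="my-service-password")

cluster = Cluster(
    ["cassandra.us-east-1.amazonaws.com"],
    port=9142,
    ssl_context=ssl_context,
    auth_provider=auth,
)
session = cluster.connect()

# Plain CQL, just as with any other Cassandra cluster.
for row in session.execute("SELECT keyspace_name FROM system_schema.keyspaces"):
    print(row.keyspace_name)

cluster.shutdown()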

Graviton2-Powered EC2 Instances – New Arm-based general purpose, compute-optimized, and memory-optimized EC2 instances powered by the new Graviton2 processor. The instances offer a significant performance benefit over the 5th generation (M5, C5, and R5) instances, and also raise the bar on security. To learn more, read Coming Soon – Graviton2-Powered General Purpose, Compute-Optimized, & Memory-Optimized EC2 Instances.

AWS Nitro Enclaves – Nitro Enclaves will let you create isolated compute environments to further protect and securely process highly sensitive data such as personally identifiable information (PII), healthcare, financial, and intellectual property data within your Amazon EC2 instances. Nitro Enclaves uses the same Nitro Hypervisor technology that provides CPU and memory isolation for EC2 instances. To learn more, visit the Nitro Enclaves page. The Nitro Enclaves preview is coming soon and you can sign up now.

Amazon Detective – This service will help you to analyze and visualize security data at scale. You will be able to quickly identify the root causes of potential security issues or suspicious activities. It automatically collects log data from your AWS resources and uses machine learning, statistical analysis, and graph theory to build a linked set of data that will accelerate your security investigation. Amazon Detective can scale to process terabytes of log data and trillions of events. Sign up for the Amazon Detective Preview.

Amazon Fraud Detector – This service makes it easy for you to identify potential fraud that is associated with online activities. It uses machine learning and incorporates 20 years of fraud detection expertise from AWS and Amazon.com, allowing you to catch fraud faster than ever before. You can create a fraud detection model with a few clicks, and detect fraud related to new accounts, guest checkout, abuse of try-before-you-buy, and (coming soon) online payments. To learn more, visit the Amazon Fraud Detector page.

Amazon Kendra – This is a highly accurate and easy to use enterprise search service that is powered by machine learning. It supports natural language queries and will allow users to discover information buried deep within your organization’s vast content stores. Amazon Kendra will include connectors for popular data sources, along with an API to allow data ingestion from other sources. You can access the Kendra Preview from the AWS Management Console.

Contact Lens for Amazon Connect – This is a set of analytics capabilities for Amazon Connect that use machine learning to understand sentiment and trends within customer conversations in your contact center. Once enabled, specified calls are automatically transcribed using state-of-the-art machine learning techniques, fed through a natural language processing engine to extract sentiment, and indexed for searching. Contact center supervisors and analysts can look for trends, compliance risks, or contacts based on specific words and phrases mentioned in the call to effectively train agents, replicate successful interactions, and identify crucial company and product feedback. Sign up for the Contact Lens for Amazon Connect Preview.

Amazon Augmented AI (A2I) – This service will make it easy for you to build workflows that use a human to review low-confidence machine learning predictions. The service includes built-in workflows for common machine learning use cases including content moderation (via Amazon Rekognition) and text extraction (via Amazon Textract), and also allows you to create your own. You can use a pool of reviewers within your own organization, or you can access the workforce of over 500,000 independent contractors who are already performing machine learning tasks through Amazon Mechanical Turk. You can also make use of workforce vendors that are pre-screened by AWS for quality and adherence to security procedures. To learn more, read about Amazon Augmented AI (Amazon A2I), or visit the A2I Console to get started.

Amazon CodeGuru – This ML-powered service provides code reviews and application performance recommendations. It helps to find the most expensive (computationally speaking) lines of code, and gives you specific recommendations on how to fix or improve them. It has been trained on best practices learned from millions of code reviews, along with code from thousands of Amazon projects and the top 10,000 open source projects. It can identify resource leaks, data race conditions between concurrent threads, and wasted CPU cycles. To learn more, visit the Amazon CodeGuru page.

Amazon RDS Proxy – This is a fully managed database proxy that will help you better scale applications, including those built on modern serverless architectures, without worrying about managing connections and connection pools, while also benefiting from faster failover in the event of a database outage. It is highly available and deployed across multiple AZs, and integrates with IAM and AWS Secrets Manager so that you don’t have to embed your database credentials in your code. Amazon RDS Proxy is fully compatible with MySQL protocol and requires no application change. You will be able to create proxy endpoints and start using them in minutes. To learn more, visit the RDS Proxy page.
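
Because the proxy is wire-compatible with MySQL, pointing an existing client at the proxy endpoint should be all that is required. Here is a hedged sketch using the open source PyMySQL driver; the endpoint, credentials, and database name are placeholders, and in practice the credentials would come from AWS Secrets Manager or IAM authentication.

import pymysql

# Placeholder proxy endpoint; use the endpoint shown in the RDS console for your proxy.
connection = pymysql.connect(
    host="my-proxy.proxy-abc123.us-east-1.rds.amazonaws.com",
    user="admin",
    password="my-password",
    database="mydb",
    connect_timeout=5,
)

with connection.cursor() as cursor:
    cursor.execute("SELECT NOW()")
    print(cursor.fetchone())

connection.close()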

Jeff;

New – AWS Step Functions Express Workflows: High Performance & Low Cost

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-aws-step-functions-express-workflows-high-performance-low-cost/

We launched AWS Step Functions at re:Invent 2016, and our customers took to the service right away, using it as a core element of their multi-step workflows. Today, we see customers building serverless workflows that orchestrate machine learning training, report generation, order processing, IT automation, and many other multi-step processes. These workflows can run for up to a year, and are built around a workflow model that includes checkpointing, retries for transient failures, and detailed state tracking for auditing purposes.

Based on usage and feedback, our customers really like the core Step Functions model. They love the declarative specifications and the ease with which they can build, test, and scale their workflows. In fact, customers like Step Functions so much that they want to use it for high-volume, short-duration use cases such as IoT data ingestion, streaming data processing, and mobile application backends.

New Express Workflows
Today we are launching Express Workflows as an alternative to the existing Standard Workflows. Express Workflows use the same declarative specification model (the Amazon States Language) but are designed for those high-volume, short-duration use cases. Here’s what you need to know:

Triggering – You can use events and read/write API calls associated with a long list of AWS services to trigger execution of your Express Workflows.

Execution Model – Express Workflows use an at-least-once execution model, and will not attempt to automatically retry any failed steps, but you can use Retry and Catch, as described in Error Handling. The steps are not checkpointed, so per-step status information is not available. Successes and failures are logged to CloudWatch Logs, and you have full control over the logging level.

Workflow Steps – Express Workflows support many of the same service integrations as Standard Workflows, with the exception of Activity Tasks. You can initiate long-running services such as AWS Batch, AWS Glue, and Amazon SageMaker, but you cannot wait for them to complete.

Duration – Express Workflows can run for up to five minutes of wall-clock time. They can invoke other Express or Standard Workflows, but cannot wait for them to complete. You can also invoke Express Workflows from Standard Workflows, composing both types in order to meet the needs of your application.

Event Rate – Express Workflows are designed to support a per-account invocation rate greater than 100,000 events per second. Accounts are configured for 6,000 events per second by default and we will, as usual, raise this limit on request.

Pricing – Standard Workflows are priced based on the number of state transitions. Express Workflows are priced based on the number of invocations and a GB/second charge based on the amount of memory used to track the state of the workflow during execution. While the pricing models are not directly comparable, Express Workflows will be far more cost-effective at scale. To learn more, read about AWS Step Functions Pricing.

As you can see, most of what you already know about Standard Workflows also applies to Express Workflows! You can replace some of your Standard Workflows with Express Workflows, and you can use Express Workflows to build new types of applications.
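
If you would rather work programmatically than in the console (shown below), here is a minimal, hedged sketch of creating and invoking an Express Workflow with boto3; the IAM role, log group ARN, and the trivial state machine definition are placeholders for the example.

import json

import boto3

sfn = boto3.client("stepfunctions", region_name="us-east-1")

# A trivial single-state machine; real workflows would chain Task states.
definition = {
    "StartAt": "HelloWorld",
    "States": {"HelloWorld": {"Type": "Pass", "Result": "Hello!", "End": True}},
}

response = sfn.create_state_machine(
    name="my-express-workflow",
    type="EXPRESS",
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::123456789012:role/my-step-functions-role",  # placeholder
    loggingConfiguration={
        "level": "ALL",
        "includeExecutionData": True,
        "destinations": [
            {
                "cloudWatchLogsLogGroup": {
                    "logGroupArn": "arn:aws:logs:us-east-1:123456789012:log-group:/aws/states/my-express-workflow:*"
                }
            }
        ],
    },
)

# Express executions are asynchronous; results show up in the CloudWatch Logs group.
sfn.start_execution(
    stateMachineArn=response["stateMachineArn"],
    input=json.dumps({"hello": "world"}),
)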

Using Express Workflows
I can create an Express Workflow and attach it to any desired events with just a few minutes of work. I simply choose the Express type in the console:

Then I define my state machine:

I configure the CloudWatch logging, and add a tag:

Now I can attach my Express Workflow to my event source. I open the EventBridge Console and create a new rule:

I define a pattern that matches PutObject events on a single S3 bucket:

I select my Express Workflow as the event target, add a tag, and click Create:

The particular event will occur only if I have a CloudTrail trail that is set up to record object-level activity:

Then I upload an image to my bucket, and check the CloudWatch Logs group to confirm that my workflow ran as expected:

As a more realistic test, I can upload several hundred images at once and confirm that my Lambda functions are invoked with high concurrency:

I can also use the new Monitoring tab in the Step Functions console to view the metrics that are specific to the state machine:
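
The same wiring can be scripted; here is a hedged sketch that uses boto3 to create an EventBridge rule for CloudTrail-recorded PutObject calls on a single bucket and targets the Express Workflow. The bucket name, ARNs, and IAM role are placeholders, and the rule still depends on the CloudTrail trail mentioned above.

import json

import boto3

events = boto3.client("events", region_name="us-east-1")

# Matches object-level PutObject calls recorded by CloudTrail for one bucket.
pattern = {
    "source": ["aws.s3"],
    "detail-type": ["AWS API Call via CloudTrail"],
    "detail": {
        "eventSource": ["s3.amazonaws.com"],
        "eventName": ["PutObject"],
        "requestParameters": {"bucketName": ["my-image-bucket"]},  # placeholder
    },
}

events.put_rule(Name="images-uploaded", EventPattern=json.dumps(pattern))

# EventBridge needs a role that is allowed to start executions of the state machine.
events.put_targets(
    Rule="images-uploaded",
    Targets=[
        {
            "Id": "express-workflow",
            "Arn": "arn:aws:states:us-east-1:123456789012:stateMachine:my-express-workflow",
            "RoleArn": "arn:aws:iam::123456789012:role/my-eventbridge-invoke-role",
        }
    ],
)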

Available Now
You can create and use AWS Step Functions Express Workflows today in all AWS Regions!

Jeff;

New – Programmatic Access to EBS Snapshot Content

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-programmatic-access-to-ebs-snapshot-content/

EBS Snapshots are really cool! You can create them interactively from the AWS Management Console:

You can create them from the Command Line (create-snapshot) or by making a call to the CreateSnapshot function, and you can use the Data Lifecycle Manager (DLM) to set up automated snapshot management.
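
Here is a small, hedged example of the programmatic route using boto3; the volume ID and tag values are placeholders.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

snapshot = ec2.create_snapshot(
    VolumeId="vol-0123456789abcdef0",  # placeholder volume ID
    Description="Nightly snapshot of my data volume",
    TagSpecifications=[
        {
            "ResourceType": "snapshot",
            "Tags": [{"Key": "Project", "Value": "blog-demo"}],
        }
    ],
)

print(snapshot["SnapshotId"], snapshot["State"])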

All About Snapshots
The snapshots are stored in Amazon Simple Storage Service (S3), and can be used to quickly create fresh EBS volumes as needed. The first snapshot of a volume contains a copy of every 512K block on the volume. Subsequent snapshots contain only the blocks that have changed since the previous snapshot. The incremental nature of the snapshots makes them very cost-effective, since (statistically speaking) many of the blocks on an EBS volume do not change all that often.

Let’s look at a quick example. Suppose that I create and format an EBS volume with 8 blocks (this is smaller than the allowable minimum size, but bear with me), copy some files to it, and then create my first snapshot (Snap1). The snapshot contains all of the blocks, and looks like this:

Then I add a few more files, delete one, and create my second snapshot (Snap2). The snapshot contains only the blocks that were modified after I created the first one, and looks like this:

I make a few more changes, and create a third snapshot (Snap3):

Keep in mind that the relationship between directories, files, and the underlying blocks is controlled by the file system, and is generally quite complex in real-world situations.

Ok, so now I have three snapshots, and want to use them to create a new volume. Each time I create a snapshot of an EBS volume, an internal reference to the previous snapshot is created. This allows CreateVolume to find the most recent copy of each block, like this:

EBS manages all of the details for me behind the scenes. For example, if I delete Snap2, the copy of Block 0 in that snapshot is also deleted since the copy in Snap3 is newer, but the copy of Block 4 in Snap2 becomes part of Snap3:

By the way, the chain of backward references (Snap3 to Snap1, or Snap3 to Snap2 to Snap1) is referred to as the lineage of the set of snapshots.
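
To make the block-resolution idea concrete, here is a purely illustrative sketch (my own toy model, not the actual EBS implementation) that walks a lineage from oldest to newest and keeps the most recent copy of each block:

# Each snapshot maps block index -> content; only changed blocks are stored.
snap1 = {0: "a0", 1: "b0", 2: "c0", 3: "d0", 4: "e0", 5: "f0", 6: "g0", 7: "h0"}
snap2 = {0: "a1", 4: "e1"}
snap3 = {0: "a2", 5: "f1"}

def restore_volume(lineage):
    """Resolve a full volume image from a lineage ordered oldest -> newest."""
    volume = {}
    for snapshot in lineage:  # later snapshots overwrite earlier copies of a block
        volume.update(snapshot)
    return volume

# The most recent copy of each block wins: block 0 comes from Snap3, block 4 from Snap2.
print(restore_volume([snap1, snap2, snap3]))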

Now that I have explained all this, I should also tell you that you generally don’t need to know this, and can focus on creating, using, and deleting snapshots!

However…

Access to Snapshot Content
Today we are introducing a new set of functions that provide you with access to the snapshot content, as described above. These functions are designed for developers of backup/recovery, disaster recovery, and data management products & services, and will allow them to make their offerings faster and more cost-effective.

The new functions use a block index (0, 1, 2, and so forth) to identify a particular 512K block within a snapshot. The index is returned in the form of an encrypted token, which is meaningful only to the GetSnapshotBlock function. I have represented these tokens as T0, T1, and so forth below. The functions currently work on blocks of 512K bytes, with plans to support more block sizes in the future.

Here are the functions:

ListSnapshotBlocks – Identifies all of the blocks in a given snapshot as encrypted tokens. For Snap1, it would return [T0, T1, T2, T3, T4, T5, T6, T7] and for Snap2 it would return [T0, T4].

GetSnapshotBlock – Returns the content of a block. If the block is part of an encrypted snapshot, it will be returned in decrypted form.

ListChangedBlocks – Returns the list of blocks that have changed between two snapshots in a lineage, again as encrypted tokens. For Snap2 it would return [T0, T4] and for Snap3 it would return [T0, T5].
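
Here is a hedged sketch of how a backup tool might use the three functions from Python; boto3 exposes them on the EBS client as list_snapshot_blocks, list_changed_blocks, and get_snapshot_block, and the snapshot IDs below are placeholders.

import boto3

ebs = boto3.client("ebs", region_name="us-east-1")

snap2 = "snap-0aaaaaaaaaaaaaaa1"  # placeholder snapshot IDs from the same lineage
snap3 = "snap-0bbbbbbbbbbbbbbb2"

# Enumerate every block in a snapshot (the tokens play the role of T0, T1, ... above).
blocks = ebs.list_snapshot_blocks(SnapshotId=snap3)["Blocks"]

# Find only the blocks that changed between two snapshots in the lineage.
changed = ebs.list_changed_blocks(FirstSnapshotId=snap2, SecondSnapshotId=snap3)["ChangedBlocks"]

# Fetch the content of the first changed block; blocks from encrypted snapshots come back decrypted.
block = changed[0]
data = ebs.get_snapshot_block(
    SnapshotId=snap3,
    BlockIndex=block["BlockIndex"],
    BlockToken=block["SecondBlockToken"],
)["BlockData"].read()

print(len(blocks), len(changed), len(data))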

Like I said, these functions were built to address one specialized yet very important use case. Having said that, I am now highly confident that new and unexpected ones will pop up within 48 hours (feel free to share them with me)!

Available Now
The new functions are available now and you can start using them today in the US East (N. Virginia), US West (Oregon), Europe (Ireland), Europe (Frankfurt), Asia Pacific (Singapore), and Asia Pacific (Tokyo) Regions; they will become available in the remaining regions in the next few weeks. There is a charge for calls to the List and Get functions, and the usual KMS charges will apply when you call GetSnapshotBlock to access a block that is part of an encrypted snapshot.

Jeff;