All posts by Jeff Barr

New – Offline Tape Migration Using AWS Snowball Edge

Post Syndicated from Jeff Barr original

Over the years, we have given you a succession of increasingly powerful tools to help you migrate your data to the AWS Cloud. Starting with AWS Import/Export back in 2009, followed by Snowball in 2015, Snowmobile and Snowball Edge in 2016, and Snowcone in 2020, each new device has given you additional features to simplify and expedite the migration process. All of the devices are designed to operate in environments that suffer from network constraints such as limited bandwidth, high connection costs, or high latency.

Offline Tape Migration
Today, we are taking another step forward by making it easier for you to migrate data stored offline on physical tapes. You can get rid of your large and expensive storage facility, send your tape robots out to pasture, and eliminate all of the time & effort involved in moving archived data to new formats and mediums every few years, all while retaining your existing tape-centric backup & recovery utilities and workflows.

This launch brings a tape migration capability to AWS Snowball Edge devices, and allows you to migrate up to 80 TB of data per device, making it suitable for your petabyte-scale migration efforts. Tapes can be stored in the Amazon S3 Glacier Flexible Retrieval or Amazon S3 Glacier Deep Archive storage classes, and then accessed from on-premises and cloud-based backup and recovery utilities.
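Each device holds up to 80 TB, so sizing a migration is simple division. Here's a quick back-of-envelope sketch (my own illustration, not an AWS tool):

```python
import math

TB = 10**12  # decimal terabytes, as used in storage sizing

def devices_needed(total_tape_bytes: int, per_device_bytes: int = 80 * TB) -> int:
    """Back-of-envelope count of Snowball Edge devices for a tape migration."""
    return math.ceil(total_tape_bytes / per_device_bytes)

# A petabyte-scale library: 1 PB of tape data at 80 TB per device.
print(devices_needed(1000 * TB))  # 13
```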

Back in 2013 I showed you how to Create a Virtual Tape Library Using the AWS Storage Gateway. Today’s launch builds on that capability in two different ways. First, you create a Virtual Tape Library (VTL) on a Snowball Edge and copy your physical tapes to it. Second, after your tapes are in the cloud, you create a VTL on a Storage Gateway and use it to access your virtual tapes.

Getting Started
To get started, I open the Snow Family Console and create a new job. Then I select Import virtual tapes into AWS Storage Gateway and click Next:

Then I go through the remainder of the ordering sequence (enter my shipping address, name my job, choose a KMS key, and set up notification preferences), and place my order. I can track the status of the job in the console:

When my device arrives I tell the somewhat perplexed delivery person about data transfer, carry it down to my basement office, and ask Luna to check it out:

Back in the Snow Family console, I download the manifest file and copy the unlock code:

I connect the Snowball Edge to my “corporate” network:

Then I install AWS OpsHub for Snow Family on my laptop, power on the Snowball Edge, and wait for it to obtain & display an IP address:

I launch OpsHub, sign in, and accept the default name for my device:

I confirm that OpsHub has access to my device, and that the device is unlocked:

I view the list of services running on the device, and note that Tape Gateway is not running:

Before I start Tape Gateway, I create a Virtual Network Interface (VNI):

And then I start the Tape Gateway service on the Snow device:

Now that the service is running on the device, I am ready to create the Storage Gateway. I click Open Storage Gateway console from within OpsHub:

I select Snowball Edge as my host platform:

Then I give my gateway a name (MyTapeGateway), select my backup application (Veeam Backup & Replication in this case), and click Activate Gateway:

Then I configure CloudWatch logging:

And finally, I review the settings and click Finish to activate my new gateway:

The activation process takes a few minutes, just enough time to take Luna for a quick walk. When I return, the console shows that the gateway is activated and running, and I am all set:

Creating Tapes
The next step is to create some virtual tapes. I click Create tapes and enter the requested information, including the pool (Deep Archive or Glacier), and click Create tapes:

The next step is to copy data from my physical tapes to the Snowball Edge. I don’t have a data center and I don’t have any tapes, so I can’t show you how to do this part. The data is stored on the device, and my Internet connection is used only for management traffic between the Snowball Edge device and AWS. To learn more about this part of the process, check out our new animated explainer.

After I have copied the desired tapes to the device, I prepare it for shipment to AWS. I make sure that all of the virtual tapes in the Storage Gateway Console have the status In Transit to VTS (Virtual Tape Shelf), and then I power down the device.

The display on the device updates to show the shipping address, and I wait for the shipping company to pick up the device.

When the device arrives at AWS, the virtual tapes are imported, stored in the S3 storage class associated with the pool that I chose earlier, and can be accessed by retrieving them using an online tape gateway. The gateway can be deployed as a virtual machine or a hardware appliance.

Now Available
You can use AWS Snowball Edge for offline tape migration in the US East (N. Virginia), US East (Ohio), US West (Oregon), US West (N. California), Europe (Ireland), Europe (Frankfurt), Europe (London), Asia Pacific (Sydney) Regions. Start migrating petabytes of your physical tape data to AWS, today!


New – Amazon FSx for OpenZFS


Last month, my colleague Bill Vass said that we are “slowly adding additional file systems” to Amazon FSx. I’d question Bill’s definition of slow, given that his team has launched Amazon FSx for Lustre, Amazon FSx for Windows File Server, and Amazon FSx for NetApp ONTAP in less than three years.

Amazon FSx for OpenZFS
Today I am happy to announce Amazon FSx for OpenZFS, the newest addition to the Amazon FSx family. Just like the other members of the family, this new addition lets you use a popular file system without having to deal with hardware provisioning, software configuration, patching, backups, and the like. You can create a file system in minutes and begin to enjoy the benefits of OpenZFS right away: transparent compression, continuous integrity verification, snapshots, and copy-on-write. Even better, you get all of these benefits without having to develop the specialized expertise that has traditionally been needed to set up and administer OpenZFS.

FSx for OpenZFS is powered by the AWS Graviton family processors and AWS SRD (Scalable Reliable Datagram) Networking, and can deliver up to 1 million IOPS with latencies of 100-200 microseconds, along with up to 4 GB/second of uncompressed throughput, up to 12 GB/second of compressed throughput, and up to 12.5 GB/second throughput to cached data. FSx for OpenZFS supports the OpenZFS Adaptive Replacement Cache (ARC) and uses memory in the file server to provide faster performance. It also supports advanced NFS performance features such as session trunking and NFS delegation, allowing you to get very high throughput and IOPS from a single client, while still safely caching frequently accessed data on the client side.
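The gap between the uncompressed and compressed throughput numbers comes from transparent compression: each physical byte written can carry several logical bytes. Here's a toy model of that relationship (an illustration with assumed names, not part of the FSx API):

```python
def effective_throughput_gbps(physical_gbps: float, compression_ratio: float,
                              compressed_cap_gbps: float = 12.0) -> float:
    """Logical (client-visible) throughput when data compresses on disk.

    With transparent compression, each physical GB written carries
    `compression_ratio` GB of logical data, up to the service cap.
    Illustrative model only; real limits depend on workload and config.
    """
    return min(physical_gbps * compression_ratio, compressed_cap_gbps)

print(effective_throughput_gbps(4.0, 3.0))  # 12.0 - 3x-compressible data
print(effective_throughput_gbps(4.0, 2.0))  # 8.0
```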

FSx for OpenZFS volumes can be accessed from cloud or on-premises Linux, macOS, and Windows clients via industry-standard NFS protocols (v3, v4, v4.1, and v4.2). Cloud clients can be Amazon Elastic Compute Cloud (Amazon EC2) instances, Amazon Elastic Container Service (Amazon ECS) and Amazon Elastic Kubernetes Service (EKS) clusters, Amazon WorkSpaces virtual desktops, and VMware Cloud on AWS. Your data is stored in encrypted form and replicated within an AWS Availability Zone, with components replaced automatically and transparently as necessary.

You can use FSx for OpenZFS to address your highly demanding machine learning, EDA (Electronic Design Automation), media processing, financial analytics, code repository, DevOps, and web content management workloads. With performance that is close to local storage, FSx for OpenZFS is great for these and other latency-sensitive workloads that manipulate and sequentially access many small files. Finally, because you can create, mount, use, and delete file systems as needed, you can now use OpenZFS in a dynamic, agile fashion.

Using Amazon FSx for OpenZFS
I can create an OpenZFS file system using the AWS Management Console, CLI, APIs, or AWS CloudFormation. From the FSx Console I click Create file system and choose Amazon FSx for OpenZFS:

I can choose Quick create (and use recommended best-practice configurations), or Standard create (and set all of the configuration options myself). I’ll take the easy route and use the recommended best practices to get started. I enter a name (Jeff-OpenZFS), select the amount of SSD storage that I need, choose a VPC & subnet, and click Next:

The console shows me that I can edit many of the attributes of my file system later if necessary. I review the settings and click Create file system:

My file system is ready within a minute or two, and I click Attach to get the proper commands to mount it to my client:

To be more precise, I am mounting the root volume (/fsx) of my file system. Once it is mounted, I can use it as I would any other file system. After I add some files to it, I can use the Action menu in the console to create a backup:

I can restore the backup to a new file system:

As I noted earlier, each file system can deliver up to 4 gigabytes per second of throughput for uncompressed data. I can look at total throughput and other metrics in the console:

I can set throughput capacity of each volume when I create it, and then change it later if necessary:

Changes take effect within minutes. The file system remains active and mounted while the change is put into effect, but some operations may pause momentarily:

A single OpenZFS file system can contain multiple volumes, each with separate quotas (overall volume storage, per-user storage, and per-group storage) and compression settings. When I use the quick create option a root volume named fsx is created for me; I can click Create volume to create more volumes at any time:

The new volume exists within the namespace hierarchy of the parent, and can be mounted separately or accessed from the parent.
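To make the three quota types concrete, here's a toy model of how per-volume and per-user quotas interact (a simplified illustration, not FSx code; per-group quotas would work the same way):

```python
from dataclasses import dataclass, field

@dataclass
class Volume:
    """Toy model of the quota types described above. Illustration only."""
    storage_quota: int                           # overall volume cap, bytes
    user_quotas: dict = field(default_factory=dict)
    used_total: int = 0
    used_by_user: dict = field(default_factory=dict)

    def can_write(self, user: str, nbytes: int) -> bool:
        # Overall volume quota applies to everyone.
        if self.used_total + nbytes > self.storage_quota:
            return False
        # Per-user quota applies only if one is set for this user.
        cap = self.user_quotas.get(user)
        if cap is not None and self.used_by_user.get(user, 0) + nbytes > cap:
            return False
        return True

    def write(self, user: str, nbytes: int) -> None:
        if not self.can_write(user, nbytes):
            raise OSError("quota exceeded")
        self.used_total += nbytes
        self.used_by_user[user] = self.used_by_user.get(user, 0) + nbytes

vol = Volume(storage_quota=100, user_quotas={"alice": 40})
vol.write("alice", 30)
print(vol.can_write("alice", 20))  # False: would exceed alice's 40-byte cap
print(vol.can_write("bob", 20))    # True: bob has no per-user cap
```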

Things to Know
Here are a few quick facts to wrap up this post:

Pricing – Pricing is based on the provisioned storage capacity, throughput, and IOPS.

Regions – Amazon FSx for OpenZFS is available in the US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Ireland), Canada (Central), Asia Pacific (Tokyo), and Europe (Frankfurt) Regions.

In the Works – We are working on additional features including storage scaling, IOPS scaling, a high availability option, and another storage class.

Now Available
Amazon FSx for OpenZFS is available now and you can start using it today!


AWS Nitro SSD – High Performance Storage for your I/O-Intensive Applications


We love to solve difficult problems for our customers! As you have seen through the years, innovation at AWS takes many forms, and encompasses both hardware and software.

One of my favorite examples of customer-driven innovation is the AWS Nitro System, which I first wrote about back in mid-2018. In that post I told you how the Nitro System would allow us to innovate more quickly than ever, with the goal of creating instances that would run even more types of workloads. I also shared the basic building blocks, as they existed at that time, including Nitro Cards to accelerate and offload network and storage I/O, the Nitro Security Chip to monitor and protect hardware resources, and the Nitro Hypervisor to manage memory and CPU allocation with very low overhead.

Today I would like to tell you about one more building block!

For decades, traditional hard drives (sometimes jokingly referred to as spinning rust) were the primary block storage devices. Today, while spinning rust still has its place, most high-performance storage is based on more modern Solid State Drives (SSD). Open up an SSD and you will find lots of flash memory and a firmware-driven processor that manages access to the memory and supports higher-level functions such as block mapping, encryption, caching, wear leveling, and so forth.

The scale of the AWS Cloud and the range of customer use cases that it supports gives us some valuable insights into the ways that today’s applications, database engines, and operating systems make use of block storage. As a result, after delivering several generations of EC2 instances we saw an opportunity to do better. Our goal was to allow I/O-intensive workloads (relational databases, NoSQL databases, data warehouses, search engines, and analytics engines to name a few) to run faster and with more predictable performance.

Today I would like to tell you about the AWS Nitro SSD. The first generation of these devices were used to power io2 Block Express EBS volumes, and allow us to give you EBS volumes with lots of IOPS, plenty of throughput, and a maximum volume size of 64 TiB. The Im4gn and Is4gen instances that I wrote about earlier today make use of the second generation of AWS Nitro SSDs, as will many future EC2 instances, including the I4i instances that we preannounced today.

The AWS Nitro SSDs are designed to be installed and to operate at cloud scale. While this sounds like a simple exercise in manufacturing and installing more devices, the reality is a lot more complex and a lot more interesting. As I noted earlier, the firmware inside of each device is responsible for implementing many lower-level functions. As our customers push the devices to their limits, they expect us to be able to diagnose and resolve any performance inconsistencies they observe. Building our own devices allows us to design in operational telemetry and diagnostics, along with mechanisms that enable us to install firmware updates at cloud scale & at cloud speed. Taking this even further, we developed our own code to manage the instance-level storage in order to further improve reliability and debuggability, and to deliver consistent performance.

On the performance side, our deep understanding of cloud workloads led us to engineer the devices so that they can deliver maximum performance under a sustained, continuous load. SSDs are built from fast, dense flash memory. Due to the characteristics of this semiconductor memory, each cell can only be written, erased, and then rewritten a limited number of times. In order to make the devices last as long as possible, the firmware is responsible for a process known as wear leveling. Without going into all of the details, this involves mapping logical block numbers to physical cells in a way that evens out the number of write cycles over time. There’s some housekeeping (a form of garbage collection) involved in this process, and garden-variety SSDs can slow down (creating latency spikes) at unpredictable times when dealing with a barrage of writes. We also took advantage of our database expertise and built a very sophisticated, power-fail-safe, journal-based database into the SSD firmware.
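To make wear leveling a bit more concrete, here's a deliberately tiny sketch of the idea: steer each write to the least-worn physical cell so that no single cell burns out first. Real SSD firmware is vastly more sophisticated than this toy:

```python
class WearLeveler:
    """Toy wear-leveling allocator. Illustration only, not real firmware."""
    def __init__(self, physical_cells: int):
        self.wear = [0] * physical_cells     # erase/write counts per cell
        self.mapping = {}                    # logical block -> physical cell

    def write(self, logical_block: int) -> int:
        # Pick the least-worn cell; real firmware must also garbage-collect
        # the stale copies this leaves behind, which is the housekeeping
        # that can cause latency spikes under a barrage of writes.
        cell = min(range(len(self.wear)), key=lambda c: self.wear[c])
        self.wear[cell] += 1
        self.mapping[logical_block] = cell
        return cell

wl = WearLeveler(4)
for _ in range(8):
    wl.write(0)          # hammer a single logical block
print(wl.wear)           # [2, 2, 2, 2]: wear is spread across all cells
```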

The second generation of AWS Nitro SSDs were designed to avoid latency spikes and deliver great I/O performance on real-world workloads. Our benchmarks show that instances that use AWS Nitro SSDs, such as the new Im4gn and Is4gen, deliver 75% lower latency variability than I3 instances, giving you more consistent performance.

Putting all of this together, there’s a very tight, rapidly rotating flywheel in action here because the team that builds the Nitro SSDs is part of the AWS storage team, and also has operational responsibilities. Like all teams at AWS, they watch the metrics day-in and day-out, and can efficiently deploy new firmware using a CI/CD model.

Join the Team
As always, there’s more innovation ahead, and we have some awesome positions on the teams that design the AWS Nitro SSDs. For example:


New Storage-Optimized Amazon EC2 Instances (Im4gn and Is4gen) Powered by AWS Graviton2 Processors


EC2 storage-optimized instances are designed to deliver high disk I/O performance, and plenty of storage. Our customers use them to host high-performance real-time databases, distributed file systems, data warehouses, key-value stores, and more. Over the years we have released multiple generations of storage-optimized instances including the HS1 (2012), I2 (2013), D2 (2015), I3 (2017), I3en (2019), and D3/D3en (2020).

As I look back on all of these launches, it is interesting to see how we continue to provide an ever-increasing set of options that make each successive generation an even better fit for the diverse (and also ever-increasing) needs of our customers. HS1 instances were available in just one size, D2 and I2 in four, I3 in six, and I3en in eight. These instances give our customers the freedom to choose the size that best meets their current needs while also giving them room to scale up or down if those needs happen to change.

Im4gn and Is4gen
Today I am happy to introduce the two newest families of storage-optimized instances, Im4gn and Is4gen, powered by Graviton2 processors. Both instances offer up to 30 TB of NVMe storage using AWS Nitro SSD devices that are custom-built by AWS. As part of our drive to innovate on behalf of our customers, we turned our attention to storage and designed devices that were optimized to support high-speed access to large amounts of data. The AWS Nitro SSDs reduce I/O latency by up to 60% and also reduce latency variability by up to 75% when compared to the third generation of storage-optimized instances. As a result you get faster and more predictable performance for your I/O-intensive EC2 workloads.

Im4gn instances are a great fit for applications that require large amounts of dense SSD storage and high compute performance, but are not especially memory intensive, such as social games, session storage, chatbots, and search engines. Here are the specs:

Instance Name    vCPUs   Memory    Local NVMe Storage    Read Throughput    EBS-Optimized     Network
                                   (AWS Nitro SSD)       (128 KB Blocks)    Bandwidth         Bandwidth
im4gn.large      2       8 GiB     937 GB                250 MB/s           Up to 9.5 Gbps    Up to 25 Gbps
im4gn.xlarge     4       16 GiB    1.875 TB              500 MB/s           Up to 9.5 Gbps    Up to 25 Gbps
im4gn.2xlarge    8       32 GiB    3.75 TB               1 GB/s             Up to 9.5 Gbps    Up to 25 Gbps
im4gn.4xlarge    16      64 GiB    7.5 TB                2 GB/s             9.5 Gbps          25 Gbps
im4gn.8xlarge    32      128 GiB   15 TB (2 x 7.5 TB)    4 GB/s             19 Gbps           50 Gbps
im4gn.16xlarge   64      256 GiB   30 TB (4 x 7.5 TB)    8 GB/s             38 Gbps           100 Gbps

Im4gn instances provide up to 40% better price performance and up to 44% lower cost per TB of storage compared to I3 instances. The new instances are available in the AWS US West (Oregon), US East (Ohio), US East (N. Virginia), and Europe (Ireland) Regions as On-Demand, Spot, Savings Plan, and Reserved instances.

Is4gen instances are a great fit for applications that do large amounts of random I/O to large amounts of SSD storage. This includes shared file systems, stream processing, social media monitoring, and streaming platforms, all of which can use the increased storage density to retain more data locally. Here are the specs:

Instance Name    vCPUs   Memory    Local NVMe Storage    Read Throughput    EBS-Optimized     Network
                                   (AWS Nitro SSD)       (128 KB Blocks)    Bandwidth         Bandwidth
is4gen.medium    1       6 GiB     937 GB                250 MB/s           Up to 9.5 Gbps    Up to 25 Gbps
is4gen.large     2       12 GiB    1.875 TB              500 MB/s           Up to 9.5 Gbps    Up to 25 Gbps
is4gen.xlarge    4       24 GiB    3.75 TB               1 GB/s             Up to 9.5 Gbps    Up to 25 Gbps
is4gen.2xlarge   8       48 GiB    7.5 TB                2 GB/s             Up to 9.5 Gbps    Up to 25 Gbps
is4gen.4xlarge   16      96 GiB    15 TB (2 x 7.5 TB)    4 GB/s             9.5 Gbps          25 Gbps
is4gen.8xlarge   32      192 GiB   30 TB (4 x 7.5 TB)    8 GB/s             19 Gbps           50 Gbps

Is4gen instances provide 15% lower cost per TB of storage and up to 48% better compute performance compared to I3en instances. The new instances are available in the AWS US West (Oregon), US East (Ohio), US East (N. Virginia), and Europe (Ireland) Regions as On-Demand, Spot, Savings Plan, and Reserved instances.

Available Now
As I never get tired of saying, these new instances are available now and you can start using them today. You can use Amazon Linux 2, Ubuntu 18.04.5 (and newer), Red Hat Enterprise Linux 8.0, and SUSE Enterprise Server 15 (and newer) AMIs, along with the container-optimized ECS and EKS AMIs. Learn more about the Im4gn and Is4gen instances.


PS – As of this launch twelve EC2 instance types are now powered by Graviton2 processors! To learn more, visit the Graviton2 page.

New – AWS Outposts Servers in Two Form Factors


AWS Outposts gives you on-premises compute and storage that is monitored and managed by AWS, and controlled by the same, familiar AWS APIs. You may already know about the AWS Outposts rack, which occupies a full 42U rack.

Last year I told you that we were working on new sizes of Outposts suitable for locations such as branch offices, factories, retail stores, health clinics, hospitals, and cell sites that are space-constrained and need access to low-latency compute capacity. Today we are launching AWS Outposts servers in two form factors (1U and 2U) with three configurations, all powered by the AWS Nitro System and with your choice of x86 or Arm/Graviton2 processors. Here’s an overview:

Name / Rack Size   EC2 Instance Capacity   Processor / Architecture   vCPUs   Memory    Local NVMe SSD Storage
Outposts 1U        c6gd.16xlarge           Graviton2 / Arm            64      128 GiB   3.8 TB (2 x 1.9 TB)
Outposts 2U        c6id.16xlarge           Intel Ice Lake / x86       64      128 GiB   3.8 TB (2 x 1.9 TB)
Outposts 2U        c6id.32xlarge           Intel Ice Lake / x86       128     256 GiB   7.6 TB (4 x 1.9 TB)

You can create VPC subnets on each Outpost, and you can launch Amazon Elastic Compute Cloud (Amazon EC2) instances from EBS-backed AMIs in the parent region. The c6gd.16xlarge model supports six instance sizes, as follows:

Instance Name    vCPUs   Memory    Local Storage
c6gd.large       2       4 GiB     118 GB
c6gd.xlarge      4       8 GiB     237 GB
c6gd.2xlarge     8       16 GiB    474 GB
c6gd.4xlarge     16      32 GiB    950 GB
c6gd.8xlarge     32      64 GiB    1.9 TB
c6gd.16xlarge    64      128 GiB   3.8 TB

The c6id.16xlarge model supports all but the largest of the following instance sizes, and the c6id.32xlarge supports all of them:

Instance Name    vCPUs   Memory    Local Storage
c6id.large       2       4 GiB     118 GB
c6id.xlarge      4       8 GiB     237 GB
c6id.2xlarge     8       16 GiB    474 GB
c6id.4xlarge     16      32 GiB    950 GB
c6id.8xlarge     32      64 GiB    1.9 TB
c6id.16xlarge    64      128 GiB   3.8 TB
c6id.32xlarge    128     256 GiB   7.6 TB

Within each of your Outposts servers, you can launch any desired mix of instance sizes as long as you remain within the overall processing and storage available. You can create Amazon Elastic Container Service (Amazon ECS) clusters (Amazon Elastic Kubernetes Service (EKS) is coming soon), and the code you run on-premises can make use of the entire lineup of services in the AWS Cloud.
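The “any mix that fits” rule can be sketched as a simple capacity check. The sizes come from the c6gd table above; the validation logic (and its omission of the storage dimension, for brevity) is my own illustration, not how AWS actually enforces launches:

```python
# c6gd sizes from the table above: name -> (vCPUs, memory GiB).
C6GD = {
    "large": (2, 4), "xlarge": (4, 8), "2xlarge": (8, 16),
    "4xlarge": (16, 32), "8xlarge": (32, 64), "16xlarge": (64, 128),
}
SERVER_VCPUS, SERVER_MEM = 64, 128  # the c6gd.16xlarge Outposts server

def mix_fits(mix: dict) -> bool:
    """True if a {size: count} mix fits the server's vCPU and memory totals."""
    vcpus = sum(C6GD[size][0] * n for size, n in mix.items())
    mem = sum(C6GD[size][1] * n for size, n in mix.items())
    return vcpus <= SERVER_VCPUS and mem <= SERVER_MEM

print(mix_fits({"8xlarge": 1, "4xlarge": 1, "2xlarge": 2}))  # True: exactly 64 vCPUs
print(mix_fits({"16xlarge": 1, "large": 1}))                 # False: 66 vCPUs
```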

Each Outposts server connects to the cloud via the public Internet or across a private AWS Direct Connect line. Additionally, each Outposts server supports a Local Network Interface (LNI) that provides a Layer 2 presence on your local network for AWS service endpoints.

Outposts servers incorporate many powerful Nitro features, including high speed networking and enhanced security. The security model is locked down and does not allow administrative access, which protects against tampering and human error. Additionally, data at rest is protected by a NIST-compliant physical security key.

While I was writing this post, I stopped in to say hello to the design and development team, and met with my colleague Bianca Nagy to learn more about the Outposts server:

Ordering Outposts Servers
Let’s walk through the process of ordering an Outposts server from the AWS Management Console. I visit the AWS Outposts Console, make sure that I am in the desired AWS Region, and click Place order to get started:

I click Servers, and then choose the desired configuration. I pick the c6gd.16xlarge, and click Next to proceed:

Then I create a new Outpost:

And a new Site:

Then I review my payment options and select my shipping address:

On the next page I review all of my options, click Place order, and await delivery:

In general, we expect to be able to deliver Outposts servers in two to six weeks, starting in the first quarter of 2022. After you receive yours, you or a member of your IT team can mount it in a 19″ rack or position it on a flat surface, cable it to power and networking, and power the device on. You then use a set of temporary AWS credentials to confirm the identity of the device, and to verify that the device is able to use DHCP to obtain an IP address. Once the device has established connectivity to the designated AWS parent region, we will finalize the provisioning of EC2 instance capacity and make it available to you.

After that, you are ready to launch instances and to deploy your on-premises applications.

We will monitor hardware performance and will contact you if your device is in need of maintenance. We will ship a replacement device for arrival within 2 business days. You can migrate your workloads to a redundant device, and use tracking information & notifications to track delivery status. When the replacement arrives, you install it and then destroy the physical security key in the old one before shipping it back to AWS.

Outposts API Update
We are also enhancing the Outposts API as part of this launch. Here are some of the new functions:

ListCatalogItems – Get a list of items in the Outposts catalog, with optional filtering by EC2 family or supported storage options.

GetCatalogItem – Get full information about a single item in the Outposts catalog.

GetSiteAddress – Get the physical address of a site where an Outposts rack or server is installed.

You can use the information returned by GetCatalogItem to place an order that contains the desired quantity of one or more catalog items.
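To illustrate the shape of that workflow, here's a local sketch of catalog filtering. The entries below are the three servers from this post, but the field names and IDs are made up for the example; consult the Outposts API reference for the real request and response shapes:

```python
# Hypothetical local stand-in for the ListCatalogItems filtering behavior.
CATALOG = [
    {"id": "outposts-1u-c6gd",      "ec2_family": "c6gd", "vcpus": 64,  "storage_tb": 3.8},
    {"id": "outposts-2u-c6id-16xl", "ec2_family": "c6id", "vcpus": 64,  "storage_tb": 3.8},
    {"id": "outposts-2u-c6id-32xl", "ec2_family": "c6id", "vcpus": 128, "storage_tb": 7.6},
]

def list_catalog_items(ec2_family=None):
    """Return catalog items, optionally filtered by EC2 family."""
    return [i for i in CATALOG if ec2_family is None or i["ec2_family"] == ec2_family]

print([i["id"] for i in list_catalog_items("c6id")])
# ['outposts-2u-c6id-16xl', 'outposts-2u-c6id-32xl']
```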

Things to Know
Here are a couple of important things to know about Outposts servers:

Availability – Outposts servers are available for order in most locations where Outposts racks are available (currently 23 regions and 49 countries), with more to follow in 2022.

Ordering at Scale – I showed you the console-based ordering process above, and also gave you a glimpse at the Outposts API. If you need hundreds or thousands of devices, get in touch and we will give you a template that you can fill in and then upload.

re:Invent 2021 Outposts Server Selfie Challenge
If you attend AWS re:Invent, be sure to visit the AWS Hybrid kiosk in the AWS Booth (#1719) to see the new Outposts Servers up close and personal. While you are there, take a fun & creative selfie, tag it with #AWSOutposts & #AWSPromotion, and share it on Twitter. I will post my three favorites at the end of the show!


Join the Preview – Amazon EC2 C7g Instances Powered by New AWS Graviton3 Processors


We announced the first generation AWS-designed Graviton processor in late 2018, and followed it up with the second generation Graviton2 a year later. Today, AWS customers make use of twelve different Graviton2-powered instances including the new X2gd instances that are designed for memory-intensive workloads. All Graviton processors include dedicated cores & caches for each vCPU, along with additional security features courtesy of the AWS Nitro System; the Graviton2 processors add support for always-on memory encryption.

C7g in the Works
I am thrilled to tell you about our upcoming C7g instances. Powered by new Graviton3 processors, these instances are going to be a great match for your compute-intensive workloads: HPC, batch processing, electronic design automation (EDA), media encoding, scientific modeling, ad serving, distributed analytics, and CPU-based machine learning inferencing.

While we are still optimizing these instances, it is clear that the Graviton3 is going to deliver amazing performance. In comparison to the Graviton2, the Graviton3 will deliver up to 25% more compute performance and up to twice as much floating point & cryptographic performance. On the machine learning side, Graviton3 includes support for bfloat16 data and will be able to deliver up to 3x better performance.

Graviton3 processors also include a new pointer authentication feature that is designed to improve security. Before return addresses are pushed on to the stack, they are first signed with a secret key and additional context information, including the current value of the stack pointer. When the signed addresses are popped off the stack, they are validated before being used. An exception is raised if the address is not valid, thereby blocking attacks that work by overwriting the stack contents with the address of harmful code. We are working with operating system and compiler developers to add additional support for this feature, so please get in touch if this is of interest to you.
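The sign-then-validate idea can be sketched in a few lines. This toy uses an HMAC over the return address plus the stack pointer as context; the real feature is implemented in Arm hardware with dedicated instructions, so this is purely conceptual:

```python
import hashlib
import hmac

KEY = b"per-process-secret"  # stands in for the hardware-held secret key

def sign(return_addr: int, stack_ptr: int) -> bytes:
    """Produce a short signature over the address and its context."""
    msg = return_addr.to_bytes(8, "little") + stack_ptr.to_bytes(8, "little")
    return hmac.new(KEY, msg, hashlib.sha256).digest()[:8]

def validate(return_addr: int, stack_ptr: int, sig: bytes) -> bool:
    """Check the signature before the address is used; reject tampering."""
    return hmac.compare_digest(sign(return_addr, stack_ptr), sig)

sig = sign(0x401000, 0x7FFF0000)
print(validate(0x401000, 0x7FFF0000, sig))   # True: untouched return address
print(validate(0xBADC0DE, 0x7FFF0000, sig))  # False: overwritten address rejected
```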

C7g instances will be available in multiple sizes (including bare metal), and are the first in the cloud industry to be equipped with DDR5 memory. In addition to drawing less power, this memory delivers 50% higher bandwidth than the DDR4 memory used in the current generation of EC2 instances.

On the network side, C7g instances will offer up to 30 Gbps of network bandwidth and Elastic Fabric Adapter (EFA) support.

Join the Preview
We are now running a preview of the C7g instances so that you can be among the first to experience all of this power. Sign up now, take an instance for a spin, and let me know what you think!


New – Use Amazon S3 Event Notifications with Amazon EventBridge


We launched Amazon EventBridge in mid-2019 to make it easy for you to build powerful, event-driven applications at any scale. Since that launch, we have added several important features including a Schema Registry, the power to Archive and Replay Events, support for Cross-Region Event Bus Targets, and API Destinations to allow you to send events to any HTTP API. With support for a very long list of destinations and the ability to do pattern matching, filtering, and routing of events, EventBridge is an incredibly powerful and flexible architectural component.

S3 Event Notifications
Today we are making it even easier for you to use EventBridge to build applications that react quickly and efficiently to changes in your S3 objects. This is a new, “directly wired” model that is faster, more reliable, and more developer-friendly than ever. You no longer need to make additional copies of your objects or write specialized, single-purpose code to process events.

At this point you might be thinking that you already had the ability to react to changes in your S3 objects, and wondering what’s going on here. Back in 2014 we launched S3 Event Notifications to SNS Topics, SQS Queues, and Lambda functions. This was (and still is) a very powerful feature, but using it at enterprise-scale can require coordination between otherwise-independent teams and applications that share an interest in the same objects and events. Also, EventBridge can already extract S3 API calls from CloudTrail logs and use them to do pattern matching & filtering. Again, very powerful and great for many kinds of apps (with a focus on auditing & logging), but we always want to do even better.

Net-net, you can now configure S3 Event Notifications to directly deliver to EventBridge! This new model gives you several benefits including:

Advanced Filtering – You can filter on many additional metadata fields, including object size, key name, and time range. This is more efficient than using Lambda functions that need to make calls back to S3 to get additional metadata in order to make decisions on the proper course of action. S3 only publishes events that match a rule, so you save money by only paying for events that are of interest to you.

Multiple Destinations – You can route the same event notification to your choice of 18 AWS services including Step Functions, Kinesis Firehose, Kinesis Data Streams, and HTTP targets via API Destinations. This is a lot easier than creating your own fan-out mechanism, and will also help you to deal with those enterprise-scale situations where independent teams want to do their own event processing.

Fast, Reliable Invocation – Patterns are matched (and targets are invoked) quickly and directly. Because S3 provides at-least-once delivery of events to EventBridge, your applications will be more reliable.

You can also take advantage of other EventBridge features, including the ability to archive and then replay events. This allows you to reprocess events in case of an error or if you add a new target to an event bus.
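This configuration can also be scripted. Here is a minimal boto3 sketch that builds the rule parameters; the rule name, topic ARN, and account number are hypothetical, and the API calls themselves are shown in comments:

```python
import json

# Event pattern that matches "Object Created" events from one bucket,
# delivered directly from S3 to the default event bus.
pattern = {
    "source": ["aws.s3"],
    "detail-type": ["Object Created"],
    "detail": {"bucket": {"name": ["jbarr-public"]}},
}

rule_params = {
    "Name": "S3ObjectCreated",   # hypothetical rule name
    "EventPattern": json.dumps(pattern),
    "State": "ENABLED",
}

# With credentials configured, the rule and its target would be created with:
#   events = boto3.client("events")
#   events.put_rule(**rule_params)
#   events.put_targets(Rule="S3ObjectCreated",
#       Targets=[{"Id": "1",
#                 "Arn": "arn:aws:sns:us-east-1:123456789012:BucketAction"}])
print(rule_params["EventPattern"])
```

The same pattern JSON works unchanged in the EventBridge Console's pattern editor.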

Getting Started
I can get started in minutes. I start by enabling EventBridge notifications on one of my S3 buckets (jbarr-public in this case). I open the S3 Console, find my bucket, open the Properties tab, scroll down to Event notifications, and click Edit:

I select On, click Save changes, and I’m ready to roll:

Now I use the EventBridge Console to create a rule. I start, as usual, by entering a name and a description:

Then I define a pattern that matches the bucket and the events of interest:

One pattern can match one or more buckets and one or more events; the following events are supported:

  • Object Created
  • Object Deleted
  • Object Restore Initiated
  • Object Restore Completed
  • Object Restore Expired
  • Object Tags Added
  • Object Tags Deleted
  • Object ACL Updated
  • Object Storage Class Changed
  • Object Access Tier Changed

Then I choose the default event bus, and set the target to an SNS topic (BucketAction) which publishes the messages to my Amazon email address:

I click Create, and I am all set. To test it out, I simply upload some files to my bucket and await the messages:

The message contains all of the interesting and relevant information about the event, and (after some unquoting and formatting), looks like this:

    "version": "0",
    "id": "2d4eba74-fd51-3966-4bfa-b013c9da8ff1",
    "detail-type": "Object Created",
    "source": "aws.s3",
    "account": "348414629041",
    "time": "2021-11-13T00:00:59Z",
    "region": "us-east-1",
    "resources": [
    "detail": {
        "version": "0",
        "bucket": {
            "name": "jbarr-public"
        "object": {
            "key": "eb_create_rule_mid_1.png",
            "size": 99797,
            "etag": "7a72374e1238761aca7778318b363232",
            "version-id": "a7diKodKIlW3mHIvhGvVphz5N_ZcL3RG",
            "sequencer": "00618F003B7286F496"
        "request-id": "4Z2S00BKW2P1AQK8",
        "requester": "348414629041",
        "source-ip-address": "",
        "reason": "PutObject"

My initial event pattern was very simple, and matched only the bucket name. I can use content-based filtering to write more complex and more interesting patterns. For example, I could use numeric matching to set up a pattern that matches events for objects that are smaller than 1 megabyte:

    "source": [
    "detail-type": [
        "Object Created",
        "Object Deleted",
        "Object Tags Added",
        "Object Tags Deleted"

    "detail": {
        "bucket": {
            "name": [
        "object" : {
            "size": [{"numeric" :["<=", 1048576 ] }]

Or, I could use prefix matching to set up a pattern that looks for objects uploaded to a “subfolder” (which doesn’t really exist) of a bucket:

"object": {
  "key" : [{"prefix" : "uploads/"}]

You can use all of this in conjunction with all of the existing EventBridge features, including Archive/Replay. You can also access the CloudWatch metrics for each of your rules:

Available Now
This feature is available now and you can start using it today in all commercial AWS Regions. You pay $1 for every 1 million events that match a rule; check out the EventBridge Pricing page for more information.


New – Recycle Bin for EBS Snapshots

Post Syndicated from Jeff Barr original

It is easy to create EBS Snapshots, and just as easy to either delete them manually or to use the Data Lifecycle Manager to delete them automatically in accord with your organization’s retention model. Sometimes, as it turns out, it is a bit too easy to delete snapshots, and a well-intended cleanup effort or a wayward script can sometimes go a bit overboard!

New Recycle Bin
In order to give you more control over the deletion process, we are launching a Recycle Bin for EBS Snapshots. As you will see in a moment, you can now set up rules to retain deleted snapshots so that you can recover them after an accidental deletion. You can think of this as a two-level model, where individual AWS users are responsible for the initial deletion, and then a designated “Recycle Bin Administrator” (as specified by an IAM role) manages retention and recovery.

Rules can apply to all snapshots, or to snapshots that include a specified set of tag/value pairs. Each rule specifies a retention period (between one day and one year), after which the snapshot is permanently deleted.

Let’s Recycle!
I open the Recycle Bin Console, select the region of interest, and click Create retention rule to begin:

I call my first rule KeepAll, and set it to retain all deleted EBS snapshots for 4 days:

I add a tag (User) to the rule, and click Create retention rule:

Because Apply to all resources is checked, this is a general rule that applies when there are no applicable rules that specify one or more tags.

Then I create a second rule (KeepDev) that retains snapshots tagged with a Mode of Dev for just one day:

If two different tag-based rules match the same resource, then the one with the longer retention period applies.
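That precedence rule is easy to sketch. Assuming rules are represented as simple tag-to-value maps (the rule names and tags below are illustrative):

```python
# Among tag-based retention rules whose tags all appear on the snapshot,
# the longest retention period applies; None means no rule matched.
def applicable_retention_days(snapshot_tags, rules):
    matching = [r for r in rules
                if all(snapshot_tags.get(k) == v for k, v in r["tags"].items())]
    return max((r["days"] for r in matching), default=None)

rules = [
    {"name": "KeepDev", "tags": {"Mode": "Dev"}, "days": 1},
    {"name": "KeepDevWeb", "tags": {"Mode": "Dev", "Team": "Web"}, "days": 7},  # hypothetical
]
print(applicable_retention_days({"Mode": "Dev", "Team": "Web"}, rules))  # 7
```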

Here are my retention rules:

Here are my EBS snapshots. As you can see, the first three are tagged with a Mode of Dev:

In an effort to save several cents per month, I impulsively delete them all:

And they are gone:

Later in the day, a member of my developer team messages me in a panic and lets me know that they desperately need the latest snapshot of the development server’s code. I open the Recycle Bin and I locate the snapshot (DevServer_2021_10_6):

I select the snapshot and click Recover:

Then I confirm my intent:

And the snapshot is available once again:

As has always been the case, Fast Snapshot Restore is disabled when a snapshot is deleted. With this launch, it will remain disabled when a snapshot is restored.

All of this functionality (creating rules, listing resources in the Recycle Bin, and restoring them) is also available from the CLI and via the Recycle Bin APIs.

Things to Know
Here are a couple of things to know about the new Recycle Bin:

IAM Support – As I mentioned earlier, you can use AWS Identity and Access Management (IAM) to grant access to this feature, and should consider creating an empowered user known as the Recycle Bin Administrator.

Rule Changes – You can make changes to your retention rules at any time, but be aware that the rules are evaluated (and the retention period is set) when you delete a snapshot. Changing a rule after an item has been deleted will not alter the retention period for the item.

Pricing – Resources that are in the Recycle Bin are charged the usual price, but be aware that creating rules with long retention periods could increase your AWS bill. On a related note, be sure that keeping deleted snapshots around does not violate your organization’s data retention policies. There is no charge for deleting or recovering a resource.

In the Bin – Resources in the Recycle Bin are immutable. If a resource is recovered, all of its existing metadata (tags and so forth) is also recovered intact.

Recycling – We will do our best to recycle all of the zeroes and all of the ones when a resource in your Recycle Bin reaches the end of its retention period!


New – Real-User Monitoring for Amazon CloudWatch

Post Syndicated from Jeff Barr original

Way back in 2009 I wrote a blog post titled New Features for Amazon EC2: Elastic Load Balancing, Auto Scaling, and Amazon CloudWatch. In that post I talked about how Amazon CloudWatch helps you to build applications that are highly scalable and highly available, and noted that it gives you cost-effective real-time visibility into your metrics, with no deployment and no maintenance. Since that launch, we have added many new features to CloudWatch, all with that same goal in mind. For example, last year I showed you how you could Use CloudWatch Synthetics to Monitor Sites, API Endpoints, Web Workflows, and More.

Real-User Monitoring (RUM)
The next big challenge (and the one that we are addressing today) is monitoring web applications with the goal of understanding performance and providing an optimal experience for your users. Because of the number of variables involved—browser type, browser configuration, user location, connectivity, and so forth—synthetic testing can only go so far. What really matters to your users is the experience that they receive, and that’s what we want to help you to deliver!

Amazon CloudWatch RUM will help you to collect the metrics that give you the insights that will help you to identify, understand, and improve this experience. You simply register your application, add a snippet of JavaScript to the header of each page, and deploy. The snippet runs as your users step through each page of your application, and sends the data to RUM for consolidation and analysis. You can use this tool on its own, or in conjunction with both Amazon CloudWatch ServiceLens and AWS X-Ray.

CloudWatch RUM in Action
To get started, I open the CloudWatch Console and navigate to RUM. Then I click Add app monitor:

I give my monitor a name and specify the domain that hosts my application:

Then I choose the events that I want to monitor & collect, and specify the percentage of sessions. My personal blog does not get a lot of traffic, so I will collect all of the sessions. I can also choose to store data in Amazon CloudWatch Logs in order to keep it around for more than the 30 days provided by CloudWatch RUM:
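Percentage-based session sampling is typically implemented with a deterministic hash, so that a given session is consistently either collected or skipped. Here is an illustrative sketch; this is not CloudWatch RUM's actual algorithm:

```python
import hashlib

# Hash the session ID into [0, 100) and collect the session if the value
# falls below the configured sampling rate. The decision is stable per session.
def collect_session(session_id, sample_rate_percent):
    digest = hashlib.sha256(session_id.encode()).digest()
    bucket = int.from_bytes(digest[:8], "big") % 10000 / 100.0  # 0.00 .. 99.99
    return bucket < sample_rate_percent

print(collect_session("session-123", 100.0))  # True: a 100% rate collects every session
```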

Finally, I opt to create a new Cognito identity pool, and add a tag. If I want to use CloudWatch ServiceLens and X-Ray, I can expand Active tracing and enable X-Ray. My app does not make any API requests, so I will not do that. I finish by clicking Add app monitor:

The console then shows me the JavaScript code snippet that I need to insert into the <head> element of my application:

I save the snippet, click Done, and then edit my application (my somewhat neglected personal blog in this case) to add the code snippet. I am using Jekyll, and added the snippet to my blog template:

Then I wait for some traffic to arrive. When I return to the RUM Console, I can see all of my app monitors. I click MonitorMyBlog to learn more:

Then I can explore the aggregated timing data and the other information that has been collected. There’s far more than I have space to show today, so feel free to try this out on your own and do a deeper dive. Each of the tabs contains multiple filters and options to help you to zoom in on areas of interest: specific pages, locations, browsers, user journeys, and so forth.

The Performance tab shows the vital signs for my application, followed by additional information:

The vital signs are apportioned into three levels (Positive, Tolerable, and Frustrating):

The screen above contains a metric (largest contentful paint) that was new to me. As Philip Walton explains it, “Largest Contentful Paint (LCP) is an important user-centered metric for measuring perceived load speed because it marks the point in the page load timeline when the page’s main content has likely loaded.”

I can also see the time consumed by the steps that the browser takes when loading a page:

And I can see average load time by time of day:

I can also see all of this information on a page-by-page basis:

The Browsers & Devices tab also shows a lot of interesting and helpful data. For example, I can learn more about the browsers that are used to access my blog, again with the page-by-page option:

I can also view the user journeys (page sequences) through my blog. Based on this information, it looks like I need to do a better job of leading users from one page to another:

As I noted earlier, there’s a lot of interesting and helpful information here, and you should check it out on your own.

Available Now
CloudWatch RUM is available now and you can start using it today in ten AWS Regions: US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Ireland), Europe (London), Europe (Frankfurt), Europe (Stockholm), Asia Pacific (Sydney), Asia Pacific (Tokyo), and Asia Pacific (Singapore). You pay $1 for every 100K events that are collected.


re:Invent Session Preview – Under the Hood at Amazon Ads

Post Syndicated from Jeff Barr original

My colleagues have spent months creating, reviewing, and improving the content for their upcoming AWS re:Invent sessions. While I do my best not to play favorites, I would like to tell you about one that recently caught my eye!

Session ADM301 (Under the Hood at Amazon Ads) takes place on Tuesday, November 30th at 2 PM. In the session, my colleagues will introduce Amazon Ads, outline the challenges that come with building an advertising system at scale, and then show how they solved those challenges using multiple AWS services. I was able to review a near-final version of their presentation and this post is based on what I learned from that review.

Amazon Ads uses an omnichannel strategy with four elements: building awareness, increasing consideration, engaging shoppers, and driving purchases. Using the well-known “start from the customer and work backwards” model that we use at Amazon, they identified three distinct customer types and worked to design a system that would address their needs. The customer types were:

  • Advertisers running campaigns
  • Third-party partners who use Amazon Ads APIs to build tools & services
  • Shoppers on a purchase journey

Advertisers and third-party developers wanted an experience that spanned both UIs and programmatic interfaces, encompassing campaign management, budgeting, ad serving, a data lake for ad events, and machine learning to improve ad selection & relevance.

Scaling is a really interesting problem, with challenges around performance, storage, availability, cost, and effectiveness. In addition to handling hundreds of millions of ad requests per second (trillions of ads per day) within a latency budget of 120 ms, the ad server must be able to:

  • Track tens of billions of campaign objects, with overall storage measured in hundreds of petabytes
  • Deliver > 99.9999% availability
  • Handle peak events such as Prime Day automatically
  • Run economically and enforce advertiser budgets in near real-time
  • Deliver highly relevant ads using predictions from hundreds of machine learning models

As just one example of what it takes to handle a workload of this magnitude, they needed a caching system capable of handling 500 million requests per second!

As is often the case, the system went through multiple iterations before it reached its current form, and is still under active development. The presentation recaps the journey that the team went through, with architectural snapshots and performance metrics for each iteration.

The presentation wraps up by discussing some of the ways that they were able to apply machine learning at scale. For example, to select the right ad for each request, Amazon Ads uses deep learning models to predict relevant ads to show shoppers, predict whether a shopper will click or purchase, and allocate and price an ad. In order to do this, they needed to be able to score thousands of ads per request within a 20 ms window at over 100K transactions per second, all across hundreds of models that each required different hardware and software optimizations.
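Some quick arithmetic shows what that implies (the per-request ad count is an assumed figure, since the session only says "thousands"):

```python
# Back-of-the-envelope math for the scale described above.
requests_per_second = 100_000
ads_per_request = 2_000        # assumption; the session says "thousands"
latency_budget_ms = 20

scores_per_second = requests_per_second * ads_per_request
print(f"{scores_per_second:,} scores/sec, each request within {latency_budget_ms} ms")
```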

To handle this workload they built a micro-service inferencing architecture on top of Amazon Elastic Container Service (Amazon ECS) and AWS App Mesh with specific hardware and software optimizations for each type of inference model. For low-latency inferencing the Ads team began with a CPU-based solution and then moved to GPUs to reduce prediction time even as complexity and the number of models grew.

This looks like a very interesting session and I hope that you will be able to attend in person or to watch it online as part of virtual re:Invent.


AWS Free Tier Data Transfer Expansion – 100 GB From Regions and 1 TB From Amazon CloudFront Per Month

Post Syndicated from Jeff Barr original

The AWS Free Tier has been around since 2010 and allows you to use generous amounts of over 100 different AWS services. Some services offer free trials, others are free for the first 12 months after you sign up, and still others are always free, up to a per-service maximum. Our intent is to make it easy and cost-effective for you to gain experience with a wide variety of powerful services without having to pay any usage charges.

Free Tier Data Transfer Expansion
Today, as part of our long tradition of AWS price reductions, I am happy to share that we are expanding the Free Tier with additional data transfer out, as follows:

Data Transfer from AWS Regions to the Internet is now free for up to 100 GB of data per month (up from 1 GB per region). This includes Amazon EC2, Amazon S3, Elastic Load Balancing, and so forth. The expansion does not apply to the AWS GovCloud or AWS China Regions.

Data Transfer from Amazon CloudFront is now free for up to 1 TB of data per month (up from 50 GB), and is no longer limited to the first 12 months after signup. We are also raising the number of free HTTP and HTTPS requests from 2,000,000 to 10,000,000, and removing the 12 month limit on the 2,000,000 free CloudFront Function invocations per month. The expansion does not apply to data transfer from CloudFront PoPs in China.

This change is effective December 1, 2021 and takes effect with no effort on your part. As a result of this change, millions of AWS customers worldwide will no longer see a charge for these two categories of data transfer on their monthly AWS bill. Customers who go beyond one or both of these allocations will also see a reduction in their overall data transfer charges.
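The effect on a monthly bill can be sketched with simple arithmetic (the usage figures below are hypothetical, and no prices are shown because rates vary by Region and tier):

```python
# The new monthly free allocations described above.
FREE_REGION_GB = 100        # data transfer out from AWS Regions, up from 1 GB per Region
FREE_CLOUDFRONT_GB = 1024   # 1 TB out from CloudFront, up from 50 GB

def billable_gb(used_gb, free_gb):
    """Usage that remains billable after the free allocation is applied."""
    return max(0.0, used_gb - free_gb)

print(billable_gb(80, FREE_REGION_GB))          # 0.0: fully covered by the Free Tier
print(billable_gb(1500.0, FREE_CLOUDFRONT_GB))  # 476.0 GB billed
```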

Your applications can run in any of 21 AWS Regions with a total of 69 Availability Zones (with more of both on the way), and can make use of the full range of CloudFront features (including SSL support and media streaming), and over 300 CloudFront PoPs, all connected across a dedicated network backbone. The network was designed with performance as a key driver, and is expanded continuously in order to meet the ever-growing needs of our customers. It is global, fully redundant, and built from parallel 100 GbE metro fibers linked via trans-oceanic cables across the Atlantic, Pacific, and Indian Oceans, as well as the Mediterranean, Red Sea, and South China Seas.


AWS Cloud Adoption Framework (CAF) 3.0 is Now Available

Post Syndicated from Jeff Barr original

The AWS Cloud Adoption Framework (AWS CAF) is designed to help you to build and then execute a comprehensive plan for your digital transformation. Taking advantage of AWS best practices and lessons learned from thousands of customer engagements, the AWS CAF will help you to identify and prioritize transformation opportunities, evaluate and improve your cloud readiness, and iteratively evolve the roadmaps that you follow to guide your transformation.

Version 3.0 Now Available
I am happy to announce the version 3.0 of the AWS CAF is now available. This version represents what we have learned since we released version 2.0, with a focus on digital transformation and an emphasis on the use of data & analytics.

The framework starts by identifying six groups of foundational perspectives (Business, People, Governance, Platform, Security, and Operations), totaling 47 discrete capabilities, up from 31 in the previous version.

From there it identifies four transformation domains (Technology, Process, Organization, and Product) that must participate in a successful digital transformation.

With the capabilities and the transformation domains as a base, the AWS Cloud Adoption Framework then recommends a set of four iterative and incremental cloud transformation phases:

Envision – Demonstrate how the cloud will accelerate business outcomes. This phase is delivered as a facilitator-led interactive workshop that will help you to identify transformation opportunities and create a foundation for your digital transformation.

Align – Identify capability gaps across the foundational capabilities. This phase also takes the form of a facilitator-led workshop and results in an action plan.

Launch – Build and deliver pilot initiatives in production, while demonstrating incremental business value.

Scale – Expand pilot initiatives to the desired scale while realizing the anticipated & desired business benefits.

All in all, the AWS Cloud Adoption Framework is underpinned by hundreds of AWS offerings and programs that help you achieve specific business and technical outcomes.

Getting Started with the AWS Cloud Adoption Framework
You can use the following resources to learn more and to get started:

Web Page – Visit the AWS Cloud Adoption Framework web page.

White Paper – Download and read the AWS CAF Overview.

AWS Account Team – Your AWS account team stands ready to assist you with any and all of the phases of the AWS Cloud Adoption Framework.


New – EC2 Instances (G5) with NVIDIA A10G Tensor Core GPUs

Post Syndicated from Jeff Barr original

Two years ago I told you about the then-new G4 instances, which featured up to eight NVIDIA T4 Tensor Core GPUs. These instances were designed to give you cost-effective GPU power for machine learning inference and graphics-intensive applications.

Today I am happy to tell you about the new G5 instances, which feature up to eight NVIDIA A10G Tensor Core GPUs. Powered by second generation AMD EPYC processors, these instances deliver up to 40% better price-performance for inferencing and graphics-intensive operations in comparison to their predecessors.

On the GPU side, the A10G GPUs deliver up to 3.3x better ML training performance, up to 3x better ML inferencing performance, and up to 3x better graphics performance than the T4 GPUs in the G4dn instances. Each A10G GPU has 24 GB of memory, 80 RT (ray tracing) cores, 320 third-generation NVIDIA Tensor Cores, and can deliver up to 250 TOPS (Tera Operations Per Second) of compute power for your AI workloads.

Here are the specs:

Instance Name | Tensor Core GPUs | vCPUs | Memory  | Local Storage | EBS Bandwidth  | Network Bandwidth
g5.xlarge     | 1                | 4     | 16 GiB  | 250 GB        | Up to 3.5 Gbps | Up to 10 Gbps
g5.2xlarge    | 1                | 8     | 32 GiB  | 450 GB        | Up to 3.5 Gbps | Up to 10 Gbps
g5.4xlarge    | 1                | 16    | 64 GiB  | 600 GB        | 8 Gbps         | Up to 25 Gbps
g5.8xlarge    | 1                | 32    | 128 GiB | 1900 GB       | 16 Gbps        | 25 Gbps
g5.12xlarge   | 4                | 48    | 192 GiB | 3800 GB       | 16 Gbps        | 40 Gbps
g5.16xlarge   | 1                | 64    | 256 GiB | 1900 GB       | 16 Gbps        | 25 Gbps
g5.24xlarge   | 4                | 96    | 384 GiB | 3800 GB       | 19 Gbps        | 50 Gbps
g5.48xlarge   | 8                | 192   | 768 GiB | 7600 GB       | 19 Gbps        | 100 Gbps
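When choosing among these sizes programmatically, the table can be expressed as a small lookup. This is a sketch: `smallest_fit` is a hypothetical helper, and the storage and bandwidth columns are omitted for brevity:

```python
# GPU count, vCPUs, and memory transcribed from the table above.
G5_SPECS = {
    "g5.xlarge":   {"gpus": 1, "vcpus": 4,   "mem_gib": 16},
    "g5.2xlarge":  {"gpus": 1, "vcpus": 8,   "mem_gib": 32},
    "g5.4xlarge":  {"gpus": 1, "vcpus": 16,  "mem_gib": 64},
    "g5.8xlarge":  {"gpus": 1, "vcpus": 32,  "mem_gib": 128},
    "g5.12xlarge": {"gpus": 4, "vcpus": 48,  "mem_gib": 192},
    "g5.16xlarge": {"gpus": 1, "vcpus": 64,  "mem_gib": 256},
    "g5.24xlarge": {"gpus": 4, "vcpus": 96,  "mem_gib": 384},
    "g5.48xlarge": {"gpus": 8, "vcpus": 192, "mem_gib": 768},
}

def smallest_fit(min_gpus, min_mem_gib):
    """Return the smallest instance (by memory) meeting both requirements."""
    candidates = [(spec["mem_gib"], name) for name, spec in G5_SPECS.items()
                  if spec["gpus"] >= min_gpus and spec["mem_gib"] >= min_mem_gib]
    return min(candidates)[1] if candidates else None

print(smallest_fit(4, 192))  # g5.12xlarge
```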

Like their predecessors, these instances are a great fit for many interesting types of workloads. Here are a few examples:

Media and Entertainment – Customers can use G5 instances to support finishing and color grading tasks, generally with the aid of high-end pro-grade tools. These tasks can also support real-time playback, aided by the plentiful amount of EBS bandwidth allocated to each instance. Customers can also use the increased ray-tracing power of G5 instances to support game development tools.

Remote Workstations – Customers in many different industries including Media and Entertainment, Gaming, Education, Architecture, Engineering and Construction want to run high-end graphical workstations in the cloud, and are looking for instances that come in a broad array of sizes.

Machine & Deep Learning – G5 instances deliver high performance and significant value for training and inferencing workloads. They also offer access to NVIDIA CuDNN, NVIDIA TensorRT, NVIDIA Triton Inference Server, and other ML/DL software from the NVIDIA NGC catalog, which have all been optimized for use with NVIDIA GPUs.

Autonomous Vehicles – Several of our customers are designing and simulating autonomous vehicles that include multiple real-time sensors. The customers make use of ray tracing to simulate sensor input in real time, and also gather data from real-world tests using tools that benefit from powerful networking and large amounts of memory.

The instances support Linux and Windows, and are compatible with a very long list of graphical and machine learning libraries including CUDA, CuDNN, CuBLAS, NVENC, TensorRT, OpenCL, DirectX, Vulkan, and OpenGL.

Available Now
The new G5 instances are available now and you can start using them today in the US East (N. Virginia), US West (Oregon), and Europe (Ireland) Regions in On-Demand, Spot, Savings Plan, and Reserved Instance form. You can also launch them in Amazon Elastic Container Service (Amazon ECS) and Amazon Elastic Kubernetes Service (EKS) clusters.

To learn more, check out the G5 Instances page.


In The Works – AWS Canada West (Calgary) Region

Post Syndicated from Jeff Barr original

We launched the Canada (Central) Region in 2016 and added a third Availability Zone in 2020. Since that launch, tens of thousands of AWS customers have used AWS services in Canada to accelerate innovation, increase agility, and to drive cost savings. This includes enterprises such as Air Canada, BMO Financial Group, NHL, Porter Airlines, and Lululemon, as well as startups with global reach such as Benevity, D2L, and Hootsuite. AWS is also used by Athabasca University, Humber College, the Vancouver General Hospital, and the Canada Border Services Agency, to name a few.

Hello, Calgary
I am happy to announce that we will be opening an AWS region in Calgary, Canada in late 2023 or early 2024. This three-AZ region will reduce latency for end-users in Western Canada and will also support the development of advanced, distributed solutions that span multiple AWS regions. It will also provide additional flexibility for AWS customers that need to store and process data within Canada’s borders.

As part of our commitment to running our business in the most environmentally friendly way possible, we are also investing in renewable energy projects in Canada. We currently have two projects underway, both in Alberta: an 80 MW solar farm (announced in April 2021) and a 375 MW solar farm (announced in June 2021). Together, these projects will contribute more than one million MWh to the power grid when they come online in 2022.

This region is part of a planned investment of CAD $4.3 billion over the next 15 years, including data center construction, ongoing utilities and facilities costs, and purchases of goods & services from regional businesses. Our Economic Impact Study (EIS) estimates that the spending on infrastructure and construction over the next 15 years will increase Canada’s GDP by about CAD $4.9 billion, along with direct and indirect economic benefits including nearly 1,000 new full-time equivalent jobs in Canada.

And Then There Were Nine
With this announcement we now have a total of nine regions (Australia, Canada, India, Indonesia, Israel, New Zealand, Spain, Switzerland, and the United Arab Emirates) in the works. As always, you can find the full list of operational and planned regions on the AWS Global Infrastructure page.




New – EC2 Instances Powered by Gaudi Accelerators for Training Deep Learning Models

Post Syndicated from Jeff Barr original

There are more applications today for deep learning than ever before. Natural language processing, recommendation systems, image recognition, video recognition, and more can all benefit from high-quality, well-trained models.

The process of building such a model is iterative: construct an initial model, train it on the ground truth data, do some test inferences, refine the model and repeat. Deep learning models contain many layers (hence the name), each of which transforms outputs of the previous layer. The training process is math and processor intensive, and places demands on just about every part of the systems used for training including the GPU or other training accelerator, the network, and local or network storage. This sophistication and complexity increases training time and raises costs.

New DL1 Instances
Today I would like to tell you about our new DL1 instances. Powered by Gaudi accelerators from Habana Labs, the dl1.24xlarge instances have the following specs:

Gaudi Accelerators – Each instance is equipped with eight Gaudi accelerators, with a total of 256 GB of High Bandwidth Memory (HBM2) and high-speed, RDMA-powered communication between accelerators.

System Memory – 768 GB of system memory, enough to hold very large sets of training data in memory, as often requested by our customers.

Local Storage – 4 TB of local NVMe storage, configured as four 1 TB volumes.

Processor – Intel Cascade Lake processor with 96 vCPUs.

Network – 400 Gbps of network throughput.

As you can see, we have maxed out the specs in just about every dimension, with the goal of giving you a highly capable machine learning training platform with a low cost of entry and up to 40% better price-performance than current GPU-based EC2 instances.

Gaudi Inside
The Gaudi accelerators are custom-designed for machine learning training, and have a ton of cool & interesting features & attributes:

Data Types – Support for floating point (BF16 and FP32), signed integer (INT8, INT16, and INT32), and unsigned integer (UINT8, UINT16, and UINT32) data.

Generalized Matrix Multiplier Engine (GEMM) – Specialized hardware to accelerate matrix multiplication.

Tensor Processing Cores (TPCs) – Specialized VLIW SIMD (Very Long Instruction Word / Single Instruction Multiple Data) processing units designed for ML training. The TPCs are C-programmable, although most users will use higher-level tools and frameworks.

Getting Started with DL1 Instances
The Gaudi SynapseAI Software Suite for Training will help you to build new models and to migrate existing models from popular frameworks such as PyTorch and TensorFlow:

Here are some resources to get you started:

TensorFlow User Guide – Learn how to run your TensorFlow models on Gaudi.

PyTorch User Guide – Learn how to run your PyTorch models on Gaudi.

Gaudi Model Migration Guide – Learn how to port your PyTorch or TensorFlow models to Gaudi.

HabanaAI Repo – This large, active repo contains setup instructions, reference models, academic papers, and much more.

You can use the TPC Programming Tools to write, simulate, and debug code that runs directly on the TPCs, and you can use the Habana Communication Library (HCL) to build applications that harness the power of multiple accelerators. The Habana Collective Communications Library (HCCL) runs atop HCL and gives you access to collective primitives for Reduce, Broadcast, Gather, and Scatter operations.
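To make the collective primitives concrete, here is a tiny pure-Python simulation of their semantics across a group of workers. This illustrates what Reduce, Broadcast, Gather, and Scatter compute; it is not the HCL/HCCL API itself.

```python
# Toy simulation of collective-communication semantics across N workers,
# where each worker holds one value.

def reduce_sum(values):        # Reduce: combine every worker's value on a root
    return sum(values)

def broadcast(root_value, n):  # Broadcast: copy the root's value to all workers
    return [root_value] * n

def gather(values):            # Gather: collect every worker's value on the root
    return list(values)

def scatter(chunks):           # Scatter: hand each worker (rank) its own chunk
    return {rank: chunk for rank, chunk in enumerate(chunks)}

workers = [1.0, 2.0, 3.0, 4.0]
total = reduce_sum(workers)    # 10.0
print(broadcast(total, len(workers)))
```

In a real multi-accelerator training job, these operations run over the RDMA links between Gaudi accelerators rather than over Python lists, but the data movement patterns are the same.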

Now Available
DL1 instances are available today in the US East (N. Virginia) and US West (Oregon) Regions in On-Demand and Spot form. You can purchase Reserved Instances and Savings Plans as well.


New – AWS Data Exchange for Amazon Redshift

Post Syndicated from Jeff Barr original

Back in 2019 I told you about AWS Data Exchange and showed you how to Find, Subscribe To, and Use Data Products. Today, you can choose from over 3600 data products in ten categories:

In my introductory post I showed you how you could subscribe to data products and then download the data sets into an Amazon Simple Storage Service (Amazon S3) bucket. I then suggested various options for further processing, including AWS Lambda functions, an AWS Glue crawler, or an Amazon Athena query.

Today we are making it even easier for you to find, subscribe to, and use third-party data with the introduction of AWS Data Exchange for Amazon Redshift. As a subscriber, you can directly use data from providers without any further processing, and no need for an Extract Transform Load (ETL) process. Because you don’t have to do any processing, the data is always current and can be used directly in your Amazon Redshift queries. AWS Data Exchange for Amazon Redshift takes care of managing all entitlements and payments for you, with all charges billed to your AWS account.

As a provider, you now have a new way to license your data and make it available to your customers.

As I was writing this post, it was cool to realize just how many existing aspects of Redshift and Data Exchange played central roles. Because Redshift has a clean separation of storage and compute, along with built-in data sharing features, the data provider allocates and pays for storage, and the data subscriber does the same for compute. The provider does not need to scale their cluster in proportion to the size of their subscriber base, and can focus on acquiring and providing data.

Let’s take a look at this feature from two vantage points: subscribing to a data product, and publishing a data product.

AWS Data Exchange for Amazon Redshift – Subscribing to a Data Product
As a data subscriber I can browse through the AWS Data Exchange catalog and find data products that are relevant to my business, and subscribe to them.

Data providers can also create private offers and extend them to me for access via the AWS Data Exchange Console. I click My product offers, and review the offers that have been extended to me. I click on Continue to subscribe to proceed:

Then I complete my subscription by reviewing the offer and the subscription terms, noting the data sets that I will get, and clicking Subscribe:

Once the subscription is completed, I am notified and can move forward:

From the Redshift Console, I click Datashares, select From other accounts, and I can see the subscribed data set:

Next, I associate it with one or more of my Redshift clusters by creating a database that points to the subscribed datashare, and use the tables, views, and stored procedures to power my Redshift queries and my applications.
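If you prefer SQL to the console for this step, the association boils down to a single statement. Here's a sketch that builds it; the database name, share name, and producer namespace GUID are placeholders, and in practice you would run the statement in the Redshift query editor or via the redshift-data API.

```python
# Build the SQL a subscriber runs to turn a shared datashare into a
# local database. All identifiers below are illustrative placeholders.
def create_database_from_datashare(db_name, share_name, namespace):
    return (
        f"CREATE DATABASE {db_name} "
        f"FROM DATASHARE {share_name} OF NAMESPACE '{namespace}';"
    )

sql = create_database_from_datashare(
    "area_codes_db", "area_code_reference",
    "11111111-2222-3333-4444-555555555555")
print(sql)
```

Once the database exists, the shared tables and views can be referenced in queries just like local objects.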

AWS Data Exchange for Amazon Redshift – Publishing a Data Product
As a data provider I can include Redshift tables, views, schemas, and user-defined functions in my AWS Data Exchange product. To keep things simple, I’ll create a product that includes just one Redshift table.

I use the spiffy new Redshift Query Editor V2 to create a table that maps US area codes to a city and a state:

Then I examine the list of existing datashares for my Redshift cluster, and click Create datashare to make a new one:

Next, I go through the usual process for creating a datashare. I select AWS Data Exchange datashare, assign a name (area_code_reference), pick the database within the cluster, and make the datashare accessible to publicly accessible clusters:

Then I scroll down and click Add to move forward:

I choose my schema (public), opt to include only tables and views in my datashare, and then add the area_codes table:

At this point I can click Add to wrap up, or Add and repeat to make a more complex product that contains additional objects.

I confirm that the datashare contains the table, and click Create datashare to move forward:

Now I am ready to start publishing my data! I visit the AWS Data Exchange Console, expand the navigation on the left, and click Owned data sets:

I review the Data set creation steps, and click Create data set to proceed:

I select Amazon Redshift datashare, give my data set a name (United States Area Codes), enter a description, and click Create data set to proceed:

I create a revision called v1:

I select my datashare and click Add datashare(s):

Then I finalize the revision:

I showed you how to create a datashare and a dataset, and to publish a product using the console. If you are publishing multiple products and/or making regular revisions, you can automate all of these steps using the AWS Command Line Interface (CLI) and the AWS Data Exchange APIs.
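As a rough sketch of what that automation looks like, here are the request payloads you would hand to the boto3 `dataexchange` client. I'm assuming the `REDSHIFT_DATA_SHARE` asset type here, and the names are placeholders; in practice you would call `client.create_data_set(**data_set_req)` and then create a revision against the returned data set ID.

```python
# Build the request payloads for automating data set publication with
# the AWS Data Exchange API. Names and comments are illustrative.
def build_publish_requests(name, description):
    data_set_req = {
        "AssetType": "REDSHIFT_DATA_SHARE",  # Redshift datashare-backed data set
        "Name": name,
        "Description": description,
    }
    revision_req = {"Comment": "v1"}  # DataSetId is filled in after creation
    return data_set_req, revision_req

ds_req, rev_req = build_publish_requests(
    "United States Area Codes",
    "Maps US area codes to a city and a state")
print(ds_req["AssetType"])
```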

Initial Data Products
Multiple data providers are working to make their data products available to you through AWS Data Exchange for Amazon Redshift. Here are some of the initial offerings and the official descriptions:

  • FactSet Supply Chain Relationships – FactSet Revere Supply Chain Relationships data is built to expose business relationship interconnections among companies globally. This feed provides access to the complex networks of companies’ key customers, suppliers, competitors, and strategic partners, collected from annual filings, investor presentations, and press releases.
  • Foursquare Places 2021: New York City Sample – This trial dataset contains Foursquare’s integrated Places (POI) database for New York City, accessible as a Redshift Data Share. Instantly load Foursquare’s Places data into a Redshift table for further processing and analysis. Foursquare data is privacy-compliant, uniquely sourced, and trusted by top enterprises like Uber, Samsung, and Apple.
  • Mathematica Medicare Pilot Dataset – Aggregate Medicare HCC counts and prevalence by state, county, payer, and filtered to the diabetic population from 2017 to 2019.
  • COVID-19 Vaccination in Canada – This listing contains sample datasets for COVID-19 Vaccination in Canada data.
  • Revelio Labs Workforce Composition and Trends Data (Trial data) – Understand the workforce composition and trends of any company.
  • Facteus – US Card Consumer Payment – CPG Backtest – Historical sample from panel of SKU-level transaction detail from cash and card transactions across hundreds of Consumer-Packaged Goods sold at over 9,000 urban convenience stores and bodegas across the U.S.
  • Decadata Argo Supply Chain Trial Data – Supply chain data for CPG firms delivering products to US Grocery Retailers.


Amazon QuickSight Q – Business Intelligence Using Natural Language Questions

Post Syndicated from Jeff Barr original

Making sense of business data so that you can get value out of it is worthwhile yet still challenging. Even though the term Business Intelligence (BI) has been around since the mid-1800s (according to Wikipedia), adoption of contemporary BI tools within enterprises is still fairly low.

Amazon QuickSight was designed to make it easier for you to put BI to work in your organization. Announced in 2015 and launched in 2016, QuickSight is a scalable BI service built for the cloud. Since that 2016 launch, we have added many new features, including geospatial visualization and private VPC access in 2017, pay-per-session pricing in 2018, additional APIs (data, dashboard, SPICE, and permissions) in 2019, embedded authoring of dashboards & support for auto-narratives in 2020, and Dataset-as-a-Source in 2021.

QuickSight Q is Here
My colleague Harunobu Kameda announced Amazon QuickSight Q (or Q for short) last December and gave you a sneak peek. Today I am happy to announce the general availability of Q, and would like to show you how it works!

To recap, Q is a natural language query tool for the Enterprise Edition of QuickSight. Powered by machine learning, it makes your existing data more accessible, and therefore more valuable. Think of Q as your personal Business Intelligence Engineer or Data Analyst, one that is on call 24 hours a day and always ready to provide you with quick, meaningful results! You get high-quality results in seconds, always shown in an appropriate form.

Behind the scenes, Q uses Natural Language Understanding (NLU) to discover the intent of your question. Aided by models that have been trained to recognize vocabulary and concepts drawn from multiple domains (sales, marketing, retail, HR, advertising, financial services, health care, and so forth), Q is able to answer questions that refer to all data sources supported by QuickSight. This includes data from AWS sources such as Amazon Redshift, Amazon Relational Database Service (RDS), Amazon Aurora, Amazon Athena, and Amazon Simple Storage Service (Amazon S3) as well as third-party sources & SaaS apps such as Salesforce, Adobe Analytics, ServiceNow, and Excel.

Q in Action
Q is powered by topics, which are generally created by QuickSight Authors for use within an organization (if you are a QuickSight Author, you can learn more about getting started). Topics represent subject areas for questions, and are created interactively. To learn more about the five-step process that Authors use to create a topic, be sure to watch our new video, Tips to Create a Great Q Topic.

To use Q, I simply select a topic (B2B Sales in this case) and enter a question in the Q bar at the top of the page:

In addition to the actual results, Q gives me access to explanatory information that I can review to ensure that my question was understood and processed as desired. For example, I can click on sales and learn how Q handles the field:

Detailed information on the use of the sales field.

I can fine-tune each aspect as well; here I clicked Sorted by:

Changing sort order for sales field.

Q chooses an appropriate visual representation for each answer, but I can fine-tune that as well:

Select a new visual type.

Perhaps I want a donut chart instead:

Now that you have seen how Q processes a question and gives you control over how the question is processed & displayed, let’s take a look at a few more questions, starting with “which product sells best in south?”


Here’s “what is total sales by region and category?” using the vertical stacked bar chart visual:

Total sales by region and category.

Behind the Scenes – Q Topics
As I mentioned earlier, Q uses topics to represent a particular subject matter. I click Topics to see the list of topics that I have created or that have been shared with me:

I click B2B Sales to learn more. The Summary page is designed to provide QuickSight Authors with information that they can use to fine-tune the topic:

Info about the B2B Sales Topic.

I can click on the Data tab and learn more about the list of fields that Q uses to answer questions. Each field can have some synonyms or friendly names to make the process of asking questions simpler and more natural:

List of fields for the B2B Sales topic.

I can expand a field (row) to learn more about how Q “understands” and uses the field. I can make changes in order to exercise control over the types of aggregations that make sense for the field, and I can also provide additional semantic information:

Information about the Product Name field.

As an example of providing additional semantic information, if the field’s Semantic Type is Location, I can choose the appropriate sub-type:

The User Activity tab shows me the questions that users are asking of this topic:

User activity for the B2B Sales topic.

QuickSight Authors can use this tab to monitor user feedback, get a sense of the most common questions, and also use the common questions to drive improvements to the content provided on QuickSight dashboards.

Finally, the Verified answers tab shows the answers that have been manually reviewed and approved:

Things to Know
Here are a couple of things to know about Amazon QuickSight Q:

Pricing – There’s a monthly fee for each Reader and each Author; take a look at the QuickSight Pricing Page for more information.

Regions – Q is available in the US East (N. Virginia), US West (Oregon), US East (Ohio), Europe (Ireland), Europe (Frankfurt), and Europe (London) Regions.

Supported Languages – We are launching with question support in English.


AWS Cloud Builders – Career Transformation & Personal Growth

Post Syndicated from Jeff Barr original

Long-time readers of this blog know that I firmly believe in the power of education to improve lives. AWS Training and Certification equips people and organizations around the world with cloud computing education to build and validate cloud computing skills. With demand for cloud skills and experience at an all-time high, there’s never been a better time to get started.

On the training side you have a multitude of options for classroom and digital training, including offerings from AWS Training Partners. After you have been trained and have gained some experience, you can prepare for, schedule, and earn one or more of the eleven AWS Certifications.

I encourage you to spend some time watching our new AWS Cloud Builder Career Stories videos. In these videos you will hear some AWS Training and Certification success stories:

  • Uri Parush became a Serverless Architect and rode a wave of innovation.
  • David Webster became an AWS Technical Practice Lead after dreaming of becoming an inventor.
  • Karolina Boboli retrained as a Cloud Architect after a career as an accountant.
  • Florian Clanet reminisces about putting his first application into service and how it reminded him of designing lighting for a high school play.
  • Veliswa Boya trained for her AWS Certification and became the first female AWS Developer Advocate in Africa.
  • Karen Tovmasyan wrote his first book about the cloud and remembered his first boxing match.
  • Sara Alasfoor built her first AWS data analytics solution and learned that she could tackle any obstacle.
  • Bruno Amaro Almeida was happy to be thanked for publishing his first article about AWS after earning twelve AWS certifications.
  • Nicola Racco was terrified and exhilarated when he released his first serverless project.

I hope that you enjoy the stories, and that they inspire you to embark on a learning journey of your own!


In the Works – AWS Region in New Zealand

Post Syndicated from Jeff Barr original

We are currently working on regions in Australia, India, Indonesia, Israel, Spain, Switzerland, and the United Arab Emirates.

Auckland, New Zealand in the Works
Today I am happy to announce that the new AWS Asia Pacific (Auckland) Region is in the works and will open in 2024. This region will have three Availability Zones and will give AWS customers in New Zealand the ability to run workloads and store data that must remain in-country.

There are 81 Availability Zones within 25 AWS Regions in operation today, with 24 more Availability Zones and eight announced regions (including this one) underway.

Each of the Availability Zones will be physically independent of the others in the region, close enough to support applications that need low latency, yet sufficiently distant to significantly reduce the risk that an AZ-level event will have an impact on business continuity. The AZs in this region will be connected together via high-bandwidth, low-latency network connections over dedicated, fully redundant fiber. This connectivity supports applications that need synchronous replication between AZs for availability or redundancy; you can take a peek at the AWS Global Infrastructure page to learn more about how we design and build regions and AZs.

AWS in New Zealand
According to an economic impact study (EIS) that we released as part of this launch, we estimate that our NZ$ 7.5 billion (5.3 billion USD) investment will create 1,000 new jobs and will have an estimated economic impact of NZ$ 10.8 billion (7.7 billion USD) over the next fifteen years.

The first AWS office in New Zealand opened in 2013 and now employs over 100 solution architects, account managers, sales representatives, professional services consultants, and cloud experts.

Other AWS infrastructure includes a pair of Amazon CloudFront edge locations in Auckland along with access to the AWS global backbone through multiple, redundant submarine cables. For more information about connectivity options, be sure to check out New Zealand Internet Connectivity to AWS.

Stay Tuned
We’ll announce the opening of this and the other regions in future blog posts, so be sure to stay tuned!


PS – The Amazon Polly Aria voice (New Zealand English) was launched earlier this year and should be of interest to New Zealanders. Visit the Amazon Polly Console to get started!

New – Amazon FSx for NetApp ONTAP

Post Syndicated from Jeff Barr original

Back in 2018 I wrote about the first two members of the Amazon FSx family of fully-managed, highly-reliable, and highly-performant file systems, Amazon FSx for Lustre and Amazon FSx for Windows File Server. Both of these services give you the ability to use popular open source and commercially-licensed file systems without having to deal with hardware provisioning, software configuration, patching, backups, and so forth. Since those launches, we have added many new features to both services in response to your requests:

Amazon FSx for Lustre now supports Persistent file systems with SSD- and HDD-based storage for longer-term storage and workloads, storage capacity scaling, crash-consistent backups, data compression, and storage quotas.

Amazon FSx for Windows File Server now supports many enterprise-ready features including Multi-AZ file systems, self-managed Active Directories, fine-grained file restoration, file access auditing, storage size and capacity throughput scaling, and a low cost HDD storage option.

Because these services support the file access and storage paradigms that are already well understood by Lustre and Windows File Server users, it is easy to migrate existing applications and to fine-tune existing operational regimens when you put them to use. While migration is important, so are new applications! All of the Amazon FSx systems make it easy for you to build applications that need high-performance fully managed storage along with the rich set of features provided by the file systems.

Amazon FSx for NetApp ONTAP
As I often tell you, we are always looking for more ways to meet the needs of our customers. To this end, we are launching Amazon FSx for NetApp ONTAP today. You get the popular features, performance, and APIs of ONTAP file systems with the agility, scalability, security, and resiliency of AWS, making it easier for you to migrate on-premises applications that rely on network-attached storage (NAS) appliances to AWS.

ONTAP (a NetApp product) is an enterprise data management offering designed to provide high-performance storage suitable for use with Oracle, SAP, VMware, Microsoft SQL Server, and so forth. ONTAP is flexible and scalable, with support for multi-protocol access and file systems that can scale up to 176 PiB. It supports a wide variety of features that are designed to make data management cheaper and easier including inline data compression, deduplication, compaction, thin provisioning, replication (SnapMirror), and point-in-time cloning (FlexClone).

FSx for ONTAP is fully managed so you can start to enjoy all of these features in minutes. AWS provisions the file servers and storage volumes, manages replication, installs software updates & patches, replaces misbehaving infrastructure components, manages failover, and much more. Whether you are migrating data from your on-premises NAS environment or building brand-new cloud native applications, you will find a lot to like! If you are migrating, you can enjoy all of the benefits of a fully-managed file system while taking advantage of your existing tools, workflows, processes, and operational expertise. If you are building brand-new applications, you can create a cloud-native experience that makes use of ONTAP’s rich feature set. Either way, you can scale to support hundreds of thousands of IOPS and benefit from the continued, behind-the-scenes evolution of the compute, storage, and networking components.

There are two storage tiers, and you can enable intelligent tiering to move data back and forth between them on an as-needed basis:

Primary Storage is built on high performance solid state drives (SSD), and is designed to hold the part of your data set that is active and/or sensitive to latency. You can provision up to 192 TiB of primary storage per file system.

Capacity Pool Storage grows and shrinks as needed, and can scale to pebibytes. It is cost-optimized and designed to hold data that is accessed infrequently.

Within each Amazon FSx for NetApp ONTAP file system you can create one or more Storage Virtual Machines (SVMs), each of which supports one or more Volumes. Volumes can be accessed via NFS, SMB, or as iSCSI LUNs for shared block storage. As you can see from this diagram, you can access each volume from AWS compute services, VMware Cloud on AWS, and from your on-premises applications:

If your on-premises applications are already making use of ONTAP in your own data center, you can easily create an ONTAP file system in the cloud, replicate your data using NetApp SnapMirror, and take advantage of all that Amazon FSx for NetApp ONTAP has to offer.

Getting Started with Amazon FSx for NetApp ONTAP
I can create my first file system from the command line, AWS Management Console, or the NetApp Cloud Manager. I can also make an API call or use a CloudFormation template. I’ll use the Management Console.

Each file system runs within a Virtual Private Cloud (VPC), so I start by choosing a VPC and a pair of subnets (preferred and standby). Every SVM has an endpoint in the Availability Zones associated with both of the subnets, with continuous monitoring, automated failover, and automated failback to ensure high availability.

I open the Amazon FSx Console, click Create file system, select Amazon FSx for NetApp ONTAP, and click Next:

I can choose Quick create and use a set of best practices, or Standard create and set all of the options myself. I’ll go for the first option, since I can change all of the configuration options later if necessary. I select Quick create, enter a name for my file system (jb-fsx-ontap-1), and set the storage capacity in GiB. I also choose the VPC, and enable ONTAP’s storage efficiency features:

I confirm all of my choices, and note that this option will also create a Storage Virtual Machine (fsx) and a volume (vol1) for me. Then I click Create file system to “make it so”:

The file system Status starts out as Creating, then transitions to Available within 20 minutes or so:
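If you'd rather script the creation step than click through the console, here is a sketch of the CreateFileSystem request for the boto3 `fsx` client. The subnet IDs are placeholders and the throughput value is just one of the supported settings; in practice you would call `boto3.client("fsx").create_file_system(**req)`.

```python
# Build an FSx for NetApp ONTAP CreateFileSystem request.
# Subnet IDs and the name tag are illustrative placeholders.
def build_ontap_request(name, capacity_gib, subnets):
    return {
        "FileSystemType": "ONTAP",
        "StorageCapacity": capacity_gib,     # SSD primary storage, in GiB
        "SubnetIds": subnets,                # preferred + standby subnets
        "OntapConfiguration": {
            "DeploymentType": "MULTI_AZ_1",  # spans two Availability Zones
            "PreferredSubnetId": subnets[0],
            "ThroughputCapacity": 512,       # MB/s
        },
        "Tags": [{"Key": "Name", "Value": name}],
    }

req = build_ontap_request("jb-fsx-ontap-1", 1024,
                          ["subnet-0aaa1111", "subnet-0bbb2222"])
print(req["FileSystemType"])
```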

My first SVM transitions from Pending to Created shortly thereafter, and my first volume transitions from Pending to Created as well. I can click on the SVM to learn more about it and to see the full set of management and access endpoints that it provides:

I can click Volumes in the left-side navigation and see all of my volumes. The root volume (fsx_root) is created automatically and represents all of the storage on the SVM:

I can select a volume and click Attach to get customized instructions for attaching it to an EC2 instance running Linux or Windows:

I can select a volume and then choose Update volume from the Action menu to change the volume’s path, size, storage efficiency, or tiering policy:

To learn more about the tiering policy, read about Amazon FSx for NetApp ONTAP Storage.
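The tiering policy change from the console can also be scripted. Here's a sketch of the payload for the boto3 `fsx` `update_volume` call, assuming the documented policy names (`AUTO`, `SNAPSHOT_ONLY`, `ALL`, `NONE`) and a `CoolingPeriod` expressed in days; the volume ID is a placeholder.

```python
# Build an UpdateVolume payload that changes a volume's tiering policy.
# The volume ID is an illustrative placeholder.
def build_tiering_update(volume_id, policy, cooling_days):
    assert policy in {"AUTO", "SNAPSHOT_ONLY", "ALL", "NONE"}
    tiering = {"Name": policy}
    if policy in {"AUTO", "SNAPSHOT_ONLY"}:
        # Days before infrequently accessed data moves to capacity pool storage.
        tiering["CoolingPeriod"] = cooling_days
    return {
        "VolumeId": volume_id,
        "OntapConfiguration": {"TieringPolicy": tiering},
    }

update = build_tiering_update("fsvol-0123456789abcdef0", "AUTO", 31)
print(update["OntapConfiguration"]["TieringPolicy"]["Name"])
```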

I can click Create volume and create additional volumes within any of my file systems:

There’s a lot more than I have space to show you, so be sure to open up the Console and try it out for yourself.

Things to Know
Here are a couple of things to know about Amazon FSx for NetApp ONTAP:

Regions – The new file system is available in most AWS regions and in GovCloud; check out the AWS Regional Service list for more information.

Pricing – Pricing is based on multiple usage dimensions including the Primary Storage, Capacity Pool Storage, throughput capacity, additional SSD IOPS, and backup storage consumption; consult the Amazon FSx for NetApp ONTAP Pricing page for more information.

Connectivity – You can use AWS Direct Connect to connect your on-premises applications to your new file systems. You can use Transit Gateway to connect to VPCs in other accounts and/or regions.

Availability – As I mentioned earlier, each file system is powered by AWS infrastructure in a pair of Availability Zones. Amazon FSx for NetApp ONTAP automatically replicates data between the zones and monitors the AWS infrastructure, initiating a failover (typically within 60 seconds), and then replacing infrastructure components as necessary. There’s a 99.99% availability SLA for each file system.