All posts by Jeff Barr

New – NVMe Reservations for Amazon Elastic Block Store io2 Volumes

Post Syndicated from Jeff Barr original

Amazon Elastic Block Store (Amazon EBS) io2 and io2 Block Express volumes now support storage fencing using NVMe reservations. As I learned while writing this post, storage fencing is used to regulate access to storage for a compute or database cluster, ensuring that just one host in the cluster has permission to write to the volume at any given time. For example, you can set up SQL Server Failover Cluster Instances (FCI) and get higher application availability within a single Availability Zone without the need for database replication.

As a quick refresher, io2 Block Express volumes are designed to meet the needs of the most demanding I/O-intensive applications running on Nitro-based Amazon Elastic Compute Cloud (Amazon EC2) instances. Volumes can be as big as 64 TiB, and deliver SAN-like performance with up to 256,000 IOPS/volume and 4,000 MB/second of throughput, all with 99.999% durability and sub-millisecond latency. The volumes support other advanced EBS features including encryption and Multi-Attach, and can be reprovisioned online without downtime. To learn more, you can read Amazon EBS io2 Block Express Volumes with Amazon EC2 R5b Instances Are Now Generally Available.

Using Reservations
To make use of reservations, you simply create an io2 volume with Multi-Attach enabled, and then attach it to one or more Nitro-based EC2 instances (see Provisioned IOPS Volumes for a full list of supported instance types):

If you have existing io2 Block Express volumes, you can enable reservations by detaching the volumes from all of the EC2 instances, and then reattaching them. Reservations will be enabled as soon as you make the first attachment. If you are running Windows Server using AMIs date-stamped 2023.08 or earlier, you will need to install the aws_multi_attach driver as described in AWS NVMe Drivers for Windows Instances.
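As a sketch of the workflow described above (sizes, IOPS, Availability Zone, and resource IDs are illustrative placeholders; running this requires the AWS CLI and valid credentials):

```shell
# Create an io2 volume with Multi-Attach enabled; NVMe reservations
# become available once the volume is attached to an instance.
aws ec2 create-volume \
    --volume-type io2 \
    --size 100 \
    --iops 1000 \
    --availability-zone us-east-1a \
    --multi-attach-enabled

# Attach the volume to each Nitro-based instance in the cluster
# (volume ID, instance ID, and device name are placeholders).
aws ec2 attach-volume \
    --volume-id vol-0123456789abcdef0 \
    --instance-id i-0123456789abcdef0 \
    --device /dev/sdf
```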

Things to Know
Here are a couple of things to keep in mind regarding NVMe reservations:

Operating System Support – You can use NVMe reservations with Windows Server (2012 R2 and above, 2016, 2019, and 2022), SUSE SLES 12 SP3 and above, RHEL 8.3 and above, and Amazon Linux 2 & later (read NVMe reservations to learn more).

Cluster and Volume Managers – Windows Server Failover Clustering is supported; we are currently working to qualify other cluster and volume managers.

Charges – There are no additional charges for this feature. Each reservation counts as an I/O operation.


AWS Weekly Roundup: R7iz Instances, Amazon Connect, CloudWatch Logs, and Lots More (Sept. 11, 2023)


Looks like it is my turn once again to write the AWS Weekly Roundup. I wrote and published the first one on April 16, 2012 — just 4,165 short days ago!
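For the curious, that day count checks out; here is a quick sketch, assuming GNU date is available (both timestamps taken at midnight UTC so the difference divides evenly):

```shell
# Days between April 16, 2012 and September 11, 2023:
# difference in epoch seconds divided by 86,400 seconds per day.
echo $(( ($(date -u -d 2023-09-11 +%s) - $(date -u -d 2012-04-16 +%s)) / 86400 ))
```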

Last Week’s Launches
Here are some of the launches that caught my eye last week:

R7iz Instances – Optimized for high CPU performance and designed for your memory-intensive workloads, these instances are powered by the fastest 4th Generation Intel Xeon Scalable (Sapphire Rapids) processors in the cloud. They are available in eight sizes, with 2 to 128 vCPUs and 16 to 1024 GiB of memory, along with generous allocations of network and EBS bandwidth:

Instance Name    vCPUs   Memory (GiB)   Network Bandwidth   EBS Bandwidth
r7iz.large       2       16             Up to 12.5 Gbps     Up to 10 Gbps
r7iz.xlarge      4       32             Up to 12.5 Gbps     Up to 10 Gbps
r7iz.2xlarge     8       64             Up to 12.5 Gbps     Up to 10 Gbps
r7iz.4xlarge     16      128            Up to 12.5 Gbps     Up to 10 Gbps
r7iz.8xlarge     32      256            12.5 Gbps           10 Gbps
r7iz.12xlarge    48      384            25 Gbps             19 Gbps
r7iz.16xlarge    64      512            25 Gbps             20 Gbps
r7iz.32xlarge    128     1024           50 Gbps             40 Gbps

As Veliswa shared in her post, the R7iz instances also include four built-in accelerators, and are available in two AWS regions.

Amazon Connect APIs for View Resources – A new set of View APIs allows you to programmatically create and manage the view resources (UI templates) used in the step-by-step guides that are displayed in the agent’s UI.

Daily Disbursements to Marketplace Sellers – Sellers can now set disbursement preferences and opt in to receiving outstanding balances on a daily basis for increased flexibility, including the ability to match payments to existing accounting processes.

Enhanced Error Handling for AWS Step Functions – You can now construct detailed error messages in Step Functions Fail states, and you can set a maximum limit on retry intervals.

Amazon CloudWatch Logs RegEx Filtering – You can now use regular expressions in your Amazon CloudWatch Logs filter patterns. You can, for example, define a single filter that matches multiple IP subnets or HTTP status codes instead of having to use multiple filters, as was previously the case. Each filter pattern can have up to two regular expression patterns.
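The consolidation described above can be sketched with a plain extended regex: a single pattern stands in for several per-code filters. (This uses grep for illustration; in CloudWatch Logs filter patterns, the regular expression is wrapped in %...% delimiters.)

```shell
# One regex matches any 4xx or 5xx HTTP status code, replacing
# separate filters for 404, 500, 503, and so forth.
printf '200\n301\n404\n503\n' | grep -E '^[45][0-9]{2}$'
```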

Amazon SageMaker – There’s a new (and quick) Studio setup experience, support for Multi Model Endpoints for PyTorch, and the ability to use SageMaker’s geospatial capabilities on GPU-based instances when using Notebooks.

X in Y – We launched existing services and instance types in new regions:

Other AWS News
Here are some other AWS updates and news:

AWS Fundamentals – The second edition of this awesome book, AWS for the Real World, Not for Certifications, is now available. In addition to more than 400 pages that cover 16 vital AWS services, each chapter includes a detailed and attractive infographic. Here’s a small-scale sample:

More posts from AWS blogs  – Here are a few posts from some of the other AWS and cloud blogs that I follow:

Upcoming AWS Events
Check your calendars and sign up for these AWS events:

AWS End User Computing Innovation Day, Sept. 13 – The one-day virtual event is designed to help IT teams tasked with providing the tools employees need to do their jobs, especially in today’s challenging times. Learn more.

AWS Global Summits, Sept. 26 – The last in-person AWS Summit will be held in Johannesburg on Sept. 26th. You can also watch on-demand videos of the latest Summit events such as Berlin, Bogotá, Paris, Seoul, Sydney, Tel Aviv, and Washington DC in the AWS YouTube channels.

CDK Day, Sept. 29 – A community-led fully virtual event with tracks in English and Spanish about CDK and related projects. Learn more at the website.

AWS re:Invent, Nov. 27 – Dec. 1 – Ready to start planning your re:Invent? Browse the session catalog now. Join us to hear the latest from AWS, learn from experts, and connect with the global cloud community.

AWS Community Days, multiple dates – Join a community-led conference run by AWS user group leaders in your region: Munich (Sept. 14), Argentina (Sept. 16), Spain (Sept. 23), Peru (Sept. 30), and Chile (Sept. 30). Visit the landing page to check out all the upcoming AWS Community Days.

You can browse all upcoming AWS-led in-person and virtual events, and developer-focused events such as AWS DevDay.


This post is part of our Weekly Roundup series. Check back each week for a quick roundup of interesting news and announcements from AWS!

Mountpoint for Amazon S3 – Generally Available and Ready for Production Workloads


Mountpoint for Amazon S3 is an open source file client that makes it easy for your file-aware Linux applications to connect directly to Amazon Simple Storage Service (Amazon S3) buckets. Announced earlier this year as an alpha release, it is now generally available and ready for production use on your large-scale read-heavy applications: data lakes, machine learning training, image rendering, autonomous vehicle simulation, ETL, and more. It supports file-based workloads that perform sequential and random reads, sequential (append only) writes, and that don’t need full POSIX semantics.

Why Files?
Many AWS customers use the S3 APIs and the AWS SDKs to build applications that can list, access, and process the contents of an S3 bucket. However, many customers have existing applications, commands, tools, and workflows that know how to access files in UNIX style: reading directories, opening & reading existing files, and creating & writing new ones. These customers have asked us for an official, enterprise-ready client that supports performant access to S3 at scale. After speaking with these customers and asking lots of questions, we learned that performance and stability were their primary concerns, and that POSIX compliance was not a necessity.

When I first wrote about Amazon S3 back in 2006 I was very clear that it was intended to be used as an object store, not as a file system. While you would not want to use the Mountpoint / S3 combo to store your Git repositories or the like, using it in conjunction with tools that can read and write files, while taking advantage of S3’s scale and durability, makes sense in many situations.

All About Mountpoint
Mountpoint is conceptually very simple. You create a mount point and mount an Amazon S3 bucket (or a path within a bucket) at the mount point, and then access the bucket using shell commands (ls, cat, dd, find, and so forth), library functions (open, close, read, write, creat, opendir, and so forth) or equivalent commands and functions as supported in the tools and languages that you already use.

Under the covers, the Linux Virtual Filesystem (VFS) translates these operations into calls to Mountpoint, which in turn translates them into calls to S3: LIST, GET, PUT, and so forth. Mountpoint strives to make good use of network bandwidth, increasing throughput and allowing you to reduce your compute costs by getting more work done in less time.

Mountpoint can be used from an Amazon Elastic Compute Cloud (Amazon EC2) instance, or within an Amazon Elastic Container Service (Amazon ECS) or Amazon Elastic Kubernetes Service (EKS) container. It can also be installed on your existing on-premises systems, with access to S3 either directly or over an AWS Direct Connect connection via AWS PrivateLink for Amazon S3.

Installing and Using Mountpoint for Amazon S3
Mountpoint is available in RPM format and can easily be installed on an EC2 instance running Amazon Linux. I simply fetch the RPM and install it using yum:

$ wget
$ sudo yum install ./mount-s3.rpm

For the last couple of years I have been regularly fetching images from several of the Washington State Ferry webcams and storing them in my wsdot-ferry bucket:

I collect these images in order to track the comings and goings of the ferries, with a goal of analyzing them at some point to find the best times to ride. My goal today is to create a movie that combines an entire day’s worth of images into a nice time lapse. I start by creating a mount point and mounting the bucket:

$ mkdir wsdot-ferry
$  mount-s3 wsdot-ferry wsdot-ferry

I can traverse the mount point and inspect the bucket:

$ cd wsdot-ferry
$ ls -l | head -10
total 0
drwxr-xr-x 2 jeff jeff 0 Aug  7 23:07 2020_12_30
drwxr-xr-x 2 jeff jeff 0 Aug  7 23:07 2020_12_31
drwxr-xr-x 2 jeff jeff 0 Aug  7 23:07 2021_01_01
drwxr-xr-x 2 jeff jeff 0 Aug  7 23:07 2021_01_02
drwxr-xr-x 2 jeff jeff 0 Aug  7 23:07 2021_01_03
drwxr-xr-x 2 jeff jeff 0 Aug  7 23:07 2021_01_04
drwxr-xr-x 2 jeff jeff 0 Aug  7 23:07 2021_01_05
drwxr-xr-x 2 jeff jeff 0 Aug  7 23:07 2021_01_06
drwxr-xr-x 2 jeff jeff 0 Aug  7 23:07 2021_01_07
$  cd 2020_12_30
$ ls -l
total 0
drwxr-xr-x 2 jeff jeff 0 Aug  7 23:07 fauntleroy_holding
drwxr-xr-x 2 jeff jeff 0 Aug  7 23:07 fauntleroy_way
drwxr-xr-x 2 jeff jeff 0 Aug  7 23:07 lincoln
drwxr-xr-x 2 jeff jeff 0 Aug  7 23:07 trenton
drwxr-xr-x 2 jeff jeff 0 Aug  7 23:07 vashon_112_north
drwxr-xr-x 2 jeff jeff 0 Aug  7 23:07 vashon_112_south
drwxr-xr-x 2 jeff jeff 0 Aug  7 23:07 vashon_bunker_north
drwxr-xr-x 2 jeff jeff 0 Aug  7 23:07 vashon_bunker_south
drwxr-xr-x 2 jeff jeff 0 Aug  7 23:07 vashon_holding
$ cd fauntleroy_holding
$  ls -l | head -10
total 2680
-rw-r--r-- 1 jeff jeff  19337 Feb 10  2021 17-12-01.jpg
-rw-r--r-- 1 jeff jeff  19380 Feb 10  2021 17-15-01.jpg
-rw-r--r-- 1 jeff jeff  19080 Feb 10  2021 17-18-01.jpg
-rw-r--r-- 1 jeff jeff  17700 Feb 10  2021 17-21-01.jpg
-rw-r--r-- 1 jeff jeff  17016 Feb 10  2021 17-24-01.jpg
-rw-r--r-- 1 jeff jeff  16638 Feb 10  2021 17-27-01.jpg
-rw-r--r-- 1 jeff jeff  16713 Feb 10  2021 17-30-01.jpg
-rw-r--r-- 1 jeff jeff  16647 Feb 10  2021 17-33-02.jpg
-rw-r--r-- 1 jeff jeff  16750 Feb 10  2021 17-36-01.jpg

I can create my animation with a single command:

$ ffmpeg -framerate 10 -pattern_type glob -i "*.jpg" ferry.gif

And here’s what I get:

As you can see, I used Mountpoint to access the existing image files and to write the newly created animation back to S3. While this is a fairly simple demo, it does show how you can use your existing tools and skills to process objects in an S3 bucket. Given that I have collected several million images over the years, being able to process them without explicitly syncing them to my local file system is a big win.

Mountpoint for Amazon S3 Facts
Here are a few things to keep in mind when using Mountpoint:

Pricing – There are no new charges for the use of Mountpoint; you pay only for the underlying S3 operations. You can also use Mountpoint to access requester-pays buckets.

Performance – Mountpoint is able to take advantage of the elastic throughput offered by S3, including data transfer at up to 100 Gb/second between each EC2 instance and S3.

Credentials – Mountpoint accesses your S3 buckets using the AWS credentials that are in effect when you mount the bucket. See the CONFIGURATION doc for more information on credentials, bucket configuration, use of requester pays, some tips for the use of S3 Object Lambda, and more.

Operations & Semantics – Mountpoint supports basic file operations, and can read files up to 5 TB in size. It can list and read existing files, and it can create new ones. It cannot modify existing files or delete directories, and it does not support symbolic links or file locking (if you need POSIX semantics, take a look at Amazon FSx for Lustre). For more information about the supported operations and their interpretation, read the SEMANTICS document.

Storage Classes – You can use Mountpoint to access S3 objects in all storage classes except S3 Glacier Flexible Retrieval, S3 Glacier Deep Archive, S3 Intelligent-Tiering Archive Access Tier, and S3 Intelligent-Tiering Deep Archive Access Tier.

Open Source – Mountpoint is open source and has a public roadmap. Your contributions are welcome; be sure to read our Contributing Guidelines and our Code of Conduct first.

Hop On
As you can see, Mountpoint is really cool and I am guessing that you are going to find some awesome ways to put it to use in your applications. Check it out and let me know what you think!


New Seventh-Generation General Purpose Amazon EC2 Instances (M7i-Flex and M7i)


Today we are launching Amazon Elastic Compute Cloud (Amazon EC2) M7i-Flex and M7i instances powered by custom 4th generation Intel Xeon Scalable processors available only on AWS that offer the best performance among comparable Intel processors in the cloud – up to 15% faster than Intel processors utilized by other cloud providers. M7i-Flex instances are available in the five most common sizes, and are designed to give you up to 19% better price/performance than M6i instances for many workloads. The M7i instances are available in nine sizes (with two sizes of bare metal instances in the works), and offer 15% better price/performance than the previous generation of Intel-powered instances.

M7i-Flex Instances
The M7i-Flex instances are a lower-cost variant of the M7i instances, with 5% better price/performance and 5% lower prices. They are great for applications that don’t fully utilize all compute resources. The M7i-Flex instances deliver a baseline of 40% CPU performance, and can scale up to full CPU performance 95% of the time. M7i-Flex instances are ideal for running general purpose workloads such as web and application servers, virtual desktops, batch processing, microservices, databases, and enterprise applications. If you are currently using earlier generations of general-purpose instances, you can adopt M7i-Flex instances without having to make changes to your application or your workload.

Here are the specs for the M7i-Flex instances:

Instance Name      vCPUs   Memory    Network Bandwidth   EBS Bandwidth
m7i-flex.large     2       8 GiB     up to 12.5 Gbps     up to 10 Gbps
m7i-flex.xlarge    4       16 GiB    up to 12.5 Gbps     up to 10 Gbps
m7i-flex.2xlarge   8       32 GiB    up to 12.5 Gbps     up to 10 Gbps
m7i-flex.4xlarge   16      64 GiB    up to 12.5 Gbps     up to 10 Gbps
m7i-flex.8xlarge   32      128 GiB   up to 12.5 Gbps     up to 10 Gbps

M7i Instances
For workloads such as large application servers and databases, gaming servers, CPU based machine learning, and video streaming that need the largest instance sizes or high CPU continuously, you can get price/performance benefits by using M7i instances.

Here are the specs for the M7i instances:

Instance Name   vCPUs   Memory    Network Bandwidth   EBS Bandwidth
m7i.large       2       8 GiB     up to 12.5 Gbps     up to 10 Gbps
m7i.xlarge      4       16 GiB    up to 12.5 Gbps     up to 10 Gbps
m7i.2xlarge     8       32 GiB    up to 12.5 Gbps     up to 10 Gbps
m7i.4xlarge     16      64 GiB    up to 12.5 Gbps     up to 10 Gbps
m7i.8xlarge     32      128 GiB   12.5 Gbps           10 Gbps
m7i.12xlarge    48      192 GiB   18.75 Gbps          15 Gbps
m7i.16xlarge    64      256 GiB   25.0 Gbps           20 Gbps
m7i.24xlarge    96      384 GiB   37.5 Gbps           30 Gbps
m7i.48xlarge    192     768 GiB   50 Gbps             40 Gbps

You can attach up to 128 EBS volumes to each M7i instance; by way of comparison, the M6i instances allow you to attach up to 28 volumes.

We are also getting ready to launch two sizes of bare metal M7i instances:

Instance Name    vCPUs   Memory    Network Bandwidth   EBS Bandwidth
m7i.metal-24xl   96      384 GiB   37.5 Gbps           30 Gbps
m7i.metal-48xl   192     768 GiB   50.0 Gbps           40 Gbps

Built-In Accelerators
The Sapphire Rapids processors include four built-in accelerators, each providing hardware acceleration for a specific workload:

  • Advanced Matrix Extensions (AMX) – This set of extensions to the x86 instruction set improves deep learning and inferencing, and supports workloads such as natural language processing, recommendation systems, and image recognition. The extensions provide high-speed multiplication operations on 2-dimensional matrices of INT8 or BF16 values. To learn more, read Chapter 3 of the Intel AMX Instruction Set Reference.
  • Intel Data Streaming Accelerator (DSA) – This accelerator drives high performance for storage, networking, and data-intensive workloads by offloading common data movement tasks between CPU, memory, caches, network devices, and storage devices, improving streaming data movement and transformation operations. Read Introducing the Intel Data Streaming Accelerator (Intel DSA) to learn more.
  • Intel In-Memory Analytics Accelerator (IAA) – This accelerator runs database and analytics workloads faster, with the potential for greater power efficiency. In-memory compression, decompression, and encryption at very high throughput, plus a suite of analytics primitives, support in-memory databases, open source databases, and data stores like RocksDB and ClickHouse. To learn more, read the Intel In-Memory Analytics Accelerator (Intel IAA) Architecture Specification.
  • Intel QuickAssist Technology (QAT) – This accelerator offloads encryption, decryption, and compression, freeing up processor cores and reducing power consumption. It also supports merged compression and encryption in a single data flow. To learn more, start at the Intel QuickAssist Technology (Intel QAT) Overview.

Some of these accelerators require the use of specific kernel versions, drivers, and/or compilers.

The Advanced Matrix Extensions are available on all sizes of M7i and M7i-Flex instances. The Intel QAT, Intel IAA, and Intel DSA accelerators will be available on the m7i.metal-24xl and m7i.metal-48xl instances.

Here are a couple of things to keep in mind about the M7i-Flex and M7i instances:

Regions – The new instances are available in the US East (Ohio, N. Virginia), US West (Oregon), and Europe (Ireland) AWS Regions, and we plan to expand to additional regions throughout the rest of 2023.

Purchasing Options – M7i-Flex and M7i instances are available in On-Demand, Reserved Instance, Savings Plan, and Spot form. M7i instances are also available in Dedicated Host and Dedicated Instance form.


Prime Day 2023 Powered by AWS – All the Numbers


As part of my annual tradition to tell you about how AWS makes Prime Day possible, I am happy to be able to share some chart-topping metrics (check out my 2016, 2017, 2019, 2020, 2021, and 2022 posts for a look back).

This year I bought all kinds of stuff for my hobbies including a small drill press, filament for my 3D printer, and irrigation tools. I also bought some very nice Alphablock books for my grandkids. According to our official release, the first day of Prime Day was the single largest sales day ever on Amazon and for independent sellers, with more than 375 million items purchased.

Prime Day by the Numbers
As always, Prime Day was powered by AWS. Here are some of the most interesting and/or mind-blowing metrics:

Amazon Elastic Block Store (Amazon EBS) – The Amazon Prime Day event resulted in an incremental 163 petabytes of EBS storage capacity allocated, generating a peak of 15.35 trillion requests and 764 petabytes of data transfer per day. Compared to the previous year, Amazon increased peak EBS usage by only 7% year over year yet delivered 35% more traffic per day, thanks to efficiency efforts including workload optimization on AWS Graviton-based Amazon Elastic Compute Cloud (Amazon EC2) instances. Here’s a visual comparison:

AWS CloudTrail – AWS CloudTrail processed over 830 billion events in support of Prime Day 2023.

Amazon DynamoDB – DynamoDB powers multiple high-traffic Amazon properties and systems including Alexa, the Amazon.com sites, and all Amazon fulfillment centers. Over the course of Prime Day, these sources made trillions of calls to the DynamoDB API. DynamoDB maintained high availability while delivering single-digit millisecond responses and peaking at 126 million requests per second.

Amazon Aurora – On Prime Day, 5,835 database instances running the PostgreSQL-compatible and MySQL-compatible editions of Amazon Aurora processed 318 billion transactions, stored 2,140 terabytes of data, and transferred 836 terabytes of data.

Amazon Simple Email Service (SES) – Amazon SES sent 56% more emails during Prime Day 2023 than during Prime Day 2022, delivering 99.8% of those emails to customers.

Amazon CloudFront – Amazon CloudFront handled a peak load of over 500 million HTTP requests per minute, for a total of over 1 trillion HTTP requests during Prime Day.

Amazon SQS – During Prime Day, Amazon SQS set a new traffic record by processing 86 million messages per second at peak. This is a 22% increase over Prime Day 2022, when SQS supported 70.5 million messages per second.
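That growth figure is simple arithmetic on the two peak rates:

```shell
# Year-over-year growth in peak SQS throughput: (86 - 70.5) / 70.5.
awk 'BEGIN { printf "%.0f%%\n", (86 - 70.5) / 70.5 * 100 }'
```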

Amazon Elastic Compute Cloud (EC2) – During Prime Day 2023, Amazon used tens of millions of normalized AWS Graviton-based Amazon EC2 instances, 2.7x more than in 2022, to power over 2,600 services. By using more Graviton-based instances, Amazon was able to get the compute capacity needed while using up to 60% less energy.

Amazon Pinpoint – Amazon Pinpoint sent tens of millions of SMS messages to customers during Prime Day 2023 with a delivery success rate of 98.3%.

Prepare to Scale
Every year I reiterate the same message: rigorous preparation is key to the success of Prime Day and our other large-scale events. If you are preparing for a similar chart-topping event of your own, I strongly recommend that you take advantage of AWS Infrastructure Event Management (IEM). As part of an IEM engagement, my colleagues will provide you with architectural and operational guidance that will help you to execute your event with confidence!


New – AWS Public IPv4 Address Charge + Public IP Insights


We are introducing a new charge for public IPv4 addresses. Effective February 1, 2024 there will be a charge of $0.005 per IP per hour for all public IPv4 addresses, whether attached to a service or not (there is already a charge for public IPv4 addresses you allocate in your account but don’t attach to an EC2 instance).

Public IPv4 Charge
As you may know, IPv4 addresses are an increasingly scarce resource and the cost to acquire a single public IPv4 address has risen more than 300% over the past 5 years. This change reflects our own costs and is also intended to encourage you to be a bit more frugal with your use of public IPv4 addresses and to think about accelerating your adoption of IPv6 as a modernization and conservation measure.

This change applies to all AWS services including Amazon Elastic Compute Cloud (Amazon EC2), Amazon Relational Database Service (RDS) database instances, Amazon Elastic Kubernetes Service (EKS) nodes, and other AWS services that can have a public IPv4 address allocated and attached, in all AWS regions (commercial, AWS China, and GovCloud). Here’s a summary in tabular form:

  • In-use public IPv4 address (including Amazon-provided public IPv4 and Elastic IP) assigned to resources in your VPC, Amazon Global Accelerator, or an AWS Site-to-Site VPN tunnel – currently no charge; $0.005 per hour effective February 1, 2024.
  • Additional (secondary) Elastic IP address on a running EC2 instance – currently $0.005 per hour; unchanged at $0.005 per hour.
  • Idle Elastic IP address in your account – currently $0.005 per hour; unchanged at $0.005 per hour.

The AWS Free Tier for EC2 will include 750 hours of public IPv4 address usage per month for the first 12 months, effective February 1, 2024. You will not be charged for IP addresses that you own and bring to AWS using Amazon BYOIP.
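To put the new rate in everyday terms, here is a back-of-the-envelope estimate (assuming roughly 730 hours in a month):

```shell
# Approximate monthly cost of one public IPv4 address at $0.005/hour.
awk 'BEGIN { printf "$%.2f/month\n", 0.005 * 730 }'
```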

Starting today, your AWS Cost and Usage Reports automatically include public IPv4 address usage. When this price change goes into effect next year, you will also be able to use AWS Cost Explorer to see and better understand your usage.

As I noted earlier in this post, I would like to encourage you to consider accelerating your adoption of IPv6. A new blog post shows you how to use Elastic Load Balancers and NAT Gateways for ingress and egress traffic, while avoiding the use of a public IPv4 address for each instance that you launch. Here are some resources to show you how you can use IPv6 with widely used services such as EC2, Amazon Virtual Private Cloud (Amazon VPC), Amazon Elastic Kubernetes Service (EKS), Elastic Load Balancing, and Amazon Relational Database Service (RDS):

Earlier this year we enhanced EC2 Instance Connect and gave it the ability to connect to your instances using private IPv4 addresses. As a result, you no longer need to use public IPv4 addresses for administrative purposes (generally using SSH or RDP).

Public IP Insights
In order to make it easier for you to monitor, analyze, and audit your use of public IPv4 addresses, today we are launching Public IP Insights, a new feature of Amazon VPC IP Address Manager that is available to you at no cost. In addition to helping you to make efficient use of public IPv4 addresses, Public IP Insights will give you a better understanding of your security profile. You can see the breakdown of public IP types and EIP usage, with multiple filtering options:

You can also see, sort, filter, and learn more about each of the public IPv4 addresses that you are using:

Using IPv4 Addresses Efficiently
By using the new IP Insights tool and following the guidance that I shared above, you should be ready to update your application to minimize the effect of the new charge. You may also want to consider using AWS Direct Connect to set up a dedicated network connection to AWS.

Finally, be sure to read our new blog post, Identify and Optimize Public IPv4 Address Usage on AWS, for more information on how to make the best use of public IPv4 addresses.


New: AWS Local Zone in Phoenix, Arizona – More Instance Types, More EBS Storage Classes, and More Services


I am happy to announce that a new AWS Local Zone in Phoenix, Arizona is now open and ready for you to use, with more instance types, storage classes, and services than ever before.

We launched the first AWS Local Zone in 2019 (AWS Now Available from a Local Zone in Los Angeles) with the goal of making a select set of EC2 instance types, EBS volume types, and other AWS services available with single-digit millisecond latency when accessed from Los Angeles and other locations in Southern California. Since then, we have launched a second Local Zone in Los Angeles, along with 15 more in other parts of the United States and another 17 around the world, 34 in all. We are also planning to build 19 more Local Zones outside of the US (see the Local Zones Locations page for a complete list).

Local Zones In Action
Our customers make use of Local Zones in many different ways. Popular use cases include real-time gaming, hybrid migrations, content creation for media & entertainment, live video streaming, engineering simulations, and AR/VR at the edge. Here are a couple of great examples that will give you a taste of what is possible:

Arizona State University (ASU) – Known for its innovation and research, ASU is among the largest universities in the U.S. with 173,000 students and 20,000 faculty and staff. Local Zones help them to accelerate the delivery of online services and storage, giving them a level of performance that is helping them to transform the educational experience for students and staff.

DISH Wireless – Two years ago they began to build a cloud-native, fully virtualized 5G network on AWS, making use of Local Zones to support latency-sensitive real-time 5G applications and workloads at the network edge (read Telco Meets AWS Cloud to learn more). The new Local Zone in Phoenix will allow them to further enhance the strength and reliability of their network by extending their 5G core to the edge.

We work closely with these and many other customers to make sure that the Local Zone(s) that they use are a great fit for their use cases. In addition to the already-strong set of instance types, storage classes, and services that are part-and-parcel of every Local Zone, we add others on an as-needed basis.

For example, Local Zones in Los Angeles, Miami, and other locations have additional instance types; several Local Zones have additional Amazon Elastic Block Store (Amazon EBS) storage classes, and others have extra services such as Application Load Balancer, Amazon FSx, Amazon EMR, Amazon ElastiCache, Amazon Relational Database Service (RDS), Amazon GameLift, and AWS Application Migration Service (AWS MGN). You can see this first-hand on the Local Zones Features page.

And Now, Phoenix
As I mentioned earlier, this Local Zone has more instance types, storage classes, and services than earlier Local Zones. Here’s what’s inside:

Instance Types – In addition to the T3, C5(d), R5(d), and G4dn instance types available in all other Local Zones, the Phoenix Local Zone includes C6i, M6i, R6i, and C6gn instances.

EBS Volume Types – In addition to the gp2 volumes that are available in all Local Zones, the Phoenix Local Zone includes gp3 (General Purpose SSD), io1 (Provisioned IOPS SSD), st1 (Throughput Optimized HDD), and sc1 (Cold HDD) storage.

Services – In addition to Amazon Elastic Compute Cloud (Amazon EC2), Amazon Elastic Block Store (Amazon EBS), AWS Shield, Amazon Virtual Private Cloud (Amazon VPC), Amazon Elastic Container Service (Amazon ECS), Amazon Elastic Kubernetes Service (EKS), Application Load Balancer, and AWS Direct Connect, the Phoenix Local Zone includes NAT Gateway.

Pricing Models – In addition to On-Demand and Savings Plans, the Phoenix Local Zone includes Spot.

Going forward, we plan to launch more Local Zones that are similarly equipped.

Opting-In to the Phoenix Local Zone
The original Phoenix Local Zone was launched in 2022 and remains available to customers who have already enabled it. The Zone that we are announcing today can be enabled by new and existing customers.

To get started with this or any other Local Zone, I must first enable it. To do this, I open the EC2 Console, select the parent region (US West (Oregon)) from the menu, and then click EC2 Dashboard in the left-side navigation:

Then I click on Zones in the Account attributes box:

Next, I scroll down to the new Phoenix Local Zone (us-west-2-phx-2), and click Manage:

I click Enabled, and then Update zone group:

I confirm that I want to enable the Zone Group, and click Ok:

And I am all set. I can create EBS volumes, launch EC2 instances, and make use of the other services in this Local Zone.


AWS Week in Review – Redshift+Forecast, CodeCatalyst+GitHub, Lex Analytics, Llama 2, and Much More – July 24, 2023

Post Syndicated from Jeff Barr original

Summer is in full swing here in Seattle and we are spending more time outside and less at the keyboard. Nevertheless, the launch machine is running at full speed and I have plenty to share with you today. Let’s dive in and take a look!

Last Week’s Launches
Here are some launches that caught my eye:

Amazon Redshift – Amazon Redshift ML can now make use of an integrated connection to Amazon Forecast. You can now use SQL statements of the form CREATE MODEL to create and train forecasting models from your time series data stored in Redshift, and then use these models to make forecasts for revenue, inventory, demand, and so forth. You can also define probability metrics and use them to generate forecasts. To learn more, read the What’s New and the Developer’s Guide.

Amazon CodeCatalyst – You can now trigger Amazon CodeCatalyst workflows from pull request events in linked GitHub repositories. The workflows can perform build, test, and deployment operations, and can be triggered when the pull requests in the linked repositories are opened, revised, or closed. To learn more, read Using GitHub Repositories with CodeCatalyst.

Amazon Lex – You can now use the Analytics on Amazon Lex dashboard to review data-driven insights that will help you to improve the performance of your Lex bots. You get a snapshot of your key metrics, and the ability to drill down for more. You can use conversational flow visualizations to see how users navigate across intents, and you can review individual conversations to make qualitative assessments. To learn more, read the What’s New and the Analytics Overview.

Llama2 Foundation Models – The brand-new Llama 2 foundation models from Meta are now available in Amazon SageMaker JumpStart. The Llama 2 model is available in three parameter sizes (7B, 13B, and 70B) with pretrained and fine-tuned variations. You can deploy and use the models with a few clicks in Amazon SageMaker Studio, and you can also use the SageMaker Python SDK (code and docs) to access them programmatically. To learn more, read Llama 2 Foundation Models from Meta are Now Available in Amazon SageMaker JumpStart and the What’s New.

X in Y – We launched some existing services and instances types in additional AWS Regions:

For a full list of AWS announcements, be sure to keep an eye on the What’s New at AWS page.

Other AWS News
Here are some additional blog posts and news items that you might find interesting:

AWS Open Source News and Updates – My colleague Ricardo has published issue 166 of his legendary and highly informative AWS Open Source Newsletter!

CodeWhisperer in Action – My colleague Danilo wrote an interesting post to show you how to Reimagine Software Development With CodeWhisperer as Your AI Coding Companion.

News Blog Survey – If you have read this far, please consider taking the AWS Blog Customer Survey. Your responses will help us to gauge your satisfaction with this blog, and will help us to do a better job in the future. This survey is hosted by an external company, so the link does not lead to our web site. AWS handles your information as described in the AWS Privacy Notice.

CDK Integration Tests – The AWS Application Management Blog wrote a post to show you How to Write and Execute Integration Tests for AWS CDK Applications.

Event-Driven Architectures – The AWS Architecture Blog shared some Best Practices for Implementing Event-Driven Architectures in Your Organization.

Amazon Connect – The AWS Contact Center Blog explained how to Manage Prompts Programmatically with Amazon Connect.

Rodents – The AWS Machine Learning Blog showed you how to Analyze Rodent Infestation Using Amazon SageMaker Geospatial Capabilities.

Secrets Migration – The AWS Security Blog published a two-part series that discusses migrating your secrets to AWS Secrets Manager (Part 1: Discovery and Design, Part 2: Implementation).

Upcoming AWS Events
Check your calendar and sign up for these AWS events:

AWS Storage Day – Join us virtually on August 9th to learn about how to prepare for AI/ML, deliver holistic data protection, and optimize storage costs for your on-premises and cloud data. Register now.

AWS Global Summits – Attend the upcoming AWS Summits in New York (July 26), Taiwan (August 2 & 3), São Paulo (August 3), and Mexico City (August 30).

AWS Community Days – Attend upcoming AWS Community Days in The Philippines (July 29-30), Colombia (August 12), and West Africa (August 19).

re:Invent – Register now for re:Invent 2023 in Las Vegas (November 27 to December 1).

That’s a Wrap
And that’s about it for this week. I’ll be sharing additional news this coming Friday on AWS on Air – tune in and say hello!


New – Amazon FSx for NetApp ONTAP Now Supports WORM Protection for Regulatory Compliance and Ransomware Protection

Post Syndicated from Jeff Barr original

Amazon FSx for NetApp ONTAP was launched in late 2021. With FSx for ONTAP you get the popular features, performance, and APIs of ONTAP file systems, with the agility, scalability, security, and resilience of AWS, all as a fully managed service.

Today we are adding support for SnapLock, an ONTAP feature that gives you the power to create volumes that provide Write Once Read Many (WORM) functionality. SnapLock volumes prevent modification or deletion of files within a specified retention period, and can be used to meet regulatory requirements and to protect business-critical data from ransomware attacks and other malicious attempts at alteration or deletion. FSx for ONTAP is the only cloud-based file system that supports SnapLock Compliance mode. FSx for ONTAP also supports tiering of WORM data to lower-cost storage for all SnapLock volumes.

Protecting Data with SnapLock
SnapLock gives you an additional layer of data protection, and can be thought of as part of your organization’s overall data protection strategy. When you create a volume and enable SnapLock, you choose one of the following retention modes:

Compliance – This mode is used to address mandates such as SEC Rule 17a-4(f), FINRA Rule 4511 and CFTC Regulation 1.31. You can use this mode to ensure a WORM file cannot be deleted by any user until after its retention period expires. Volumes in this mode cannot be renamed and cannot be deleted until the retention periods of all WORM files on the volume have expired.

Enterprise – This mode is used to enforce organizational data retention policies or to test retention settings before creating volumes in Compliance mode. You can use this mode to prevent most users from deleting WORM data, while allowing authorized users to perform deletions, if necessary. Volumes in this mode can be deleted even if they contain WORM files under an active retention period.

You also choose a default retention period. This period indicates the length of time that each file must be retained after it is committed to the WORM state; it can be as long as 100 years, and there’s also an Infinite option. You can also set a custom retention period for specific files or specific trees of files, and it will apply to those files at the time that they are committed to the WORM state.

Files are committed to the WORM state when they become read-only (chmod -w on Linux or attrib +r on Windows). You can configure a per-volume autocommit period (5 minutes to 10 years) to automatically commit files that have remained as-is for the period, and you can also initiate a Legal Hold in Compliance mode in order to retain specific files for legal purposes.
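On Linux, the commit step is nothing more than removing a file’s write permission bits. Here is a minimal Python sketch of that step (the helper names are my own for illustration, not part of any AWS SDK; run against an FSx for ONTAP SnapLock volume, the read-only transition is what triggers the WORM commit):

```python
import os
import stat
import tempfile

def commit_to_worm(path: str) -> None:
    """Remove all write-permission bits (the effect of `chmod -w`).
    On a SnapLock volume, this transition commits the file to WORM."""
    mode = os.stat(path).st_mode
    os.chmod(path, mode & ~(stat.S_IWUSR | stat.S_IWGRP | stat.S_IWOTH))

def is_read_only(path: str) -> bool:
    """True if no user, group, or other write bit is set."""
    return not (os.stat(path).st_mode
                & (stat.S_IWUSR | stat.S_IWGRP | stat.S_IWOTH))

# Example: write a record, then commit it.
with tempfile.NamedTemporaryFile(mode="w", suffix=".log", delete=False) as f:
    f.write("audit record\n")
    record = f.name

commit_to_worm(record)
print(is_read_only(record))  # True
```

On an ordinary file system this merely makes the file read-only; SnapLock is what turns the same signal into an enforced, unalterable retention state.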

You also have another interesting data protection and compliance option. You can create one volume without SnapLock enabled, and another one with it enabled, and then periodically replicate from the first one to the second using NetApp SnapVault. This will give you snapshot copies of entire volumes that you can retain for months, years, or decades as needed.

Speaking of interesting options, you can make use of FSx for ONTAP volume data tiering to keep active files on high-performance SSD storage and the other files on storage that is cost-optimized for data that is accessed infrequently.

Creating SnapLock Volumes
I can create new volumes and enable SnapLock with a couple of clicks. I enter the volume name, size, and path as usual:

As I mentioned earlier, I can also make use of a capacity pool (this is set to Auto by default, and I set a 10-day cooling period):

I scroll down to the Advanced section and click Enabled, then select Enterprise retention mode. I also set up my retention periods, enable autocommit after 9 days, and leave the other options as-is:

I add a tag, and click Create volume to move ahead:

I take a quick break, and when I come back my volume is ready to use:

At this point I can mount it in the usual way, create files, and allow SnapLock to do its thing!

Things to Know
Here are a couple of things that you should know about this powerful new feature:

Existing Volumes – You cannot enable this feature for an existing volume, but you can create a new, SnapLock-enabled volume, and copy or migrate the data to it.

Volume Deletion – As I noted earlier, you cannot delete a SnapLock Compliance volume if it contains WORM files with an unexpired retention period. Take care when setting retention periods to avoid creating volumes that will last longer than needed.

Pricing – There’s an additional GB/month license charge for the use of SnapLock volumes; check out the Amazon FSx for NetApp ONTAP Pricing page for more information.

Regions – This feature is available in all AWS Regions where Amazon FSx for NetApp ONTAP is available.


New Amazon EC2 C7gn Instances: Graviton3E Processors and Up To 200 Gbps Network Bandwidth

Post Syndicated from Jeff Barr original

The C7gn instances that we previewed last year are now available and you can start using them today. The instances are designed for your most demanding network-intensive workloads (firewalls, virtual routers, load balancers, and so forth), data analytics, and tightly-coupled cluster computing jobs. They are powered by AWS Graviton3E processors and support up to 200 Gbps of network bandwidth.

Here are the specs:

Instance Name  vCPUs  Memory   Network Bandwidth  EBS Bandwidth
c7gn.medium    1      2 GiB    up to 25 Gbps      up to 10 Gbps
c7gn.large     2      4 GiB    up to 30 Gbps      up to 10 Gbps
c7gn.xlarge    4      8 GiB    up to 40 Gbps      up to 10 Gbps
c7gn.2xlarge   8      16 GiB   up to 50 Gbps      up to 10 Gbps
c7gn.4xlarge   16     32 GiB   50 Gbps            up to 10 Gbps
c7gn.8xlarge   32     64 GiB   100 Gbps           up to 20 Gbps
c7gn.12xlarge  48     96 GiB   150 Gbps           up to 30 Gbps
c7gn.16xlarge  64     128 GiB  200 Gbps           up to 40 Gbps

The increased network bandwidth is made possible by the new 5th generation AWS Nitro Card. As another benefit, these instances deliver the lowest Elastic Fabric Adapter (EFA) latency of any current EC2 instance.

Here’s a quick infographic that shows you how the C7gn instances and the Graviton3E processors compare to previous instances and processors:

As you can see, the Graviton3E processors deliver substantially higher memory bandwidth and compute performance than the Graviton2 processors, along with higher vector instruction performance than the Graviton3 processors.

C7gn instances are available in the US East (Ohio, N. Virginia), US West (Oregon), and Europe (Ireland) AWS Regions in On-Demand, Reserved Instance, Spot, and Savings Plan form. Dedicated Instances and Dedicated Hosts are also available.


New – Snowball Edge Storage Optimized Devices with More Storage and Bandwidth

Post Syndicated from Jeff Barr original

AWS Snow Family devices are used to cost-effectively move data to the cloud and to process data at the edge. The enhanced Snowball Edge Storage Optimized devices are designed for your petabyte-scale data migration projects, with 210 terabytes of NVMe storage and the ability to transfer up to 1.5 gigabytes of data per second. The devices also include several connectivity options: 10GBASE-T, SFP48, and QSFP28.
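To put those numbers in perspective, here is a back-of-the-envelope estimate of how long it would take to fill one device at the stated maximum rate (my own arithmetic using decimal units; real-world throughput varies with your network setup, file sizes, and parallelism):

```python
# Rough transfer-time estimate for a full 210 TB Snowball Edge
# Storage Optimized device at the stated 1.5 GB/s maximum.
CAPACITY_TB = 210
RATE_GB_PER_S = 1.5

seconds = (CAPACITY_TB * 1_000) / RATE_GB_PER_S  # TB -> GB, then GB / (GB/s)
hours = seconds / 3600
print(f"{hours:.1f} hours")  # ~38.9 hours
```

In other words, a single device can in principle be filled in under two days of sustained transfer, which is why running several devices concurrently (as the migration plan below supports) matters for petabyte-scale projects.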

Large Data Migration
In order to make your migration as smooth and efficient as possible, we now have a well-defined Large Data Migration program. As part of this program, we will work with you to make sure that your site is able to support rapid data transfer, and to set up a proof-of-concept migration. If necessary, we will also recommend services and solutions from our AWS Migration Competency Partners. After successful completion of the proof-of-concept you will be familiar with the Snow migration process, and you will be ready to order devices using the process outlined below.

You can make use of the Large Data Migration program by contacting AWS Sales Support.

Ordering Devices
While you can order and manage devices individually, you can save time and reduce complexity by using a large data migration plan. Let’s walk through the process of creating one. I open the AWS Snow Family Console and click Create your large data migration plan:

I enter a name for my migration plan (MediaMigrationPlan), and select or enter the shipping address of my data center:

Then I specify the amount of data that I plan to migrate, and the number of devices that I want to use concurrently (taking into account space, power, bandwidth, and logistics within my data center):

When everything looks good I click Create data migration plan to proceed and my plan becomes active:

I can review the Monitoring section of my plan to see how my migration is going (these are simply Amazon CloudWatch metrics and I can add them to a dashboard, set alarms, and so forth):

The Jobs section includes a recommended job ordering schedule that takes the maximum number of concurrent devices into account:
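That schedule boils down to simple arithmetic; here is a rough sketch of the idea (my own helper, not the console’s exact logic, assuming the 210 TB device capacity described above):

```python
import math

def migration_waves(total_tb: float, concurrent_devices: int,
                    device_tb: float = 210.0) -> tuple[int, int]:
    """Back-of-the-envelope device count and shipping waves for a
    Snowball migration plan (illustrative helper, not an AWS API)."""
    devices = math.ceil(total_tb / device_tb)       # devices needed overall
    waves = math.ceil(devices / concurrent_devices) # orders, given on-site limit
    return devices, waves

# A 2 PB migration with room for 3 devices on site at a time:
print(migration_waves(2_000, 3))  # (10, 4)
```

Ten devices shipped in four waves of up to three; the console’s recommended ordering schedule automates this kind of planning for you.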

When I am ready to start transferring data, I visit the Jobs ordered tab and create a Snow job:

As the devices arrive, I connect them to my network and copy data to them via S3 (read Managing AWS Storage) or NFS (read Using NFS File Shares to Manage File Storage), then return them to AWS for ingestion!

Things to Know
Here are a couple of fun facts about this enhanced device:

Regions – Snowball Edge Storage Optimized Devices with 210 TB of storage are available in the US East (N. Virginia) and US West (Oregon) AWS Regions.

Pricing – You pay for the use of the device and for data transfer in and out of AWS, with on-demand and committed upfront pricing available. To learn more about pricing for Snowball Edge Storage Optimized 210 TB devices contact your AWS account team or AWS Sales Support.


Retiring the AWS Documentation on GitHub

Post Syndicated from Jeff Barr original

About five years ago I announced that AWS Documentation is Now Open Source and on GitHub. After a prolonged period of experimentation we will archive most of the repos starting the week of June 5th, and will devote all of our resources to directly improving the AWS documentation and website.

The primary source for most of the AWS documentation is on internal systems that we had to manually sync with the GitHub repos. Despite the best efforts of our documentation team, keeping the public repos in sync with our internal ones has proven to be very difficult and time-consuming, with several manual steps and some parallel editing. With 262 separate repos and thousands of feature launches every year, the overhead was very high and actually consumed precious time that could have been put to use in ways that more directly improved the quality of the documentation.

Our intent was to increase value to our customers through openness and collaboration, but we learned through customer feedback that this wasn’t necessarily the case. After carefully considering many options we decided to retire the repos and to invest all of our resources in making the content better.

Repos containing code samples, sample apps, CloudFormation templates, configuration files, and other supplementary resources will remain as-is since those repos are primary sources and get a high level of engagement.

To help us improve the documentation, we’re also focusing more resources on your feedback:

We watch the thumbs-up and thumbs-down metrics on a weekly basis, and use the metrics as top-level pointers to areas of the documentation that could be improved. The incoming feedback creates tickets that are routed directly to the person or the team that is responsible for the page. I strongly encourage you to make frequent use of both feedback mechanisms.


New Storage-Optimized Amazon EC2 I4g Instances: Graviton Processors and AWS Nitro SSDs

Post Syndicated from Jeff Barr original

Today we are launching I4g instances powered by AWS Graviton2 processors that deliver up to 15% better compute performance than our other storage-optimized instances.

With up to 64 vCPUs, 512 GiB of memory, and 15 TB of NVMe storage, one of the six instance sizes is bound to be a great fit for your storage-intensive workloads: relational and non-relational databases, search engines, file systems, in-memory analytics, batch processing, streaming, and so forth. These workloads are generally very sensitive to I/O latency, and require plenty of random read/write IOPS along with high CPU performance.

Here are the specs:

Instance Name  vCPUs  Memory   NVMe Storage              Network Bandwidth  EBS Bandwidth
i4g.large      2      16 GiB   468 GB                    up to 10 Gbps      up to 40 Gbps
i4g.xlarge     4      32 GiB   937 GB                    up to 10 Gbps      up to 40 Gbps
i4g.2xlarge    8      64 GiB   1.875 TB                  up to 12 Gbps      up to 40 Gbps
i4g.4xlarge    16     128 GiB  3.750 TB                  up to 25 Gbps      up to 40 Gbps
i4g.8xlarge    32     256 GiB  7.500 TB (2 x 3.750 TB)   18.750 Gbps        40 Gbps
i4g.16xlarge   64     512 GiB  15.000 TB (4 x 3.750 TB)  37.500 Gbps        80 Gbps

The I4g instances make use of AWS Nitro SSDs (read AWS Nitro SSD – High Performance Storage for your I/O-Intensive Applications to learn more) for NVMe storage. Each storage volume can deliver the following performance (all measured using 4 KiB blocks):

  • Up to 800K random write IOPS
  • Up to 1 million random read IOPS
  • Up to 5600 MB/second of sequential writes
  • Up to 8000 MB/second of sequential reads

Torn Write Protection is supported for 4 KiB, 8 KiB, and 16 KiB blocks.
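Since the larger sizes carry multiple volumes, it is natural to ask what the instance-level figures look like. Scaling the per-volume numbers by the volume count is my own back-of-the-envelope estimate, not an AWS specification, but it gives a feel for the ceiling:

```python
# Per-volume figures quoted above (measured with 4 KiB blocks).
PER_VOLUME = {
    "write_iops": 800_000,
    "read_iops": 1_000_000,
    "seq_write_mbps": 5_600,
    "seq_read_mbps": 8_000,
}

def aggregate(volumes: int) -> dict:
    """Naive linear scaling across a size's NVMe volumes."""
    return {metric: value * volumes for metric, value in PER_VOLUME.items()}

# i4g.16xlarge has four 3.750 TB volumes:
print(aggregate(4)["read_iops"])  # 4000000
```

Under that (optimistic) linear assumption, a fully striped i4g.16xlarge could approach four million random read IOPS; actual results depend on your striping configuration and workload.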

Available Now
I4g instances are available today in the US East (Ohio, N. Virginia), US West (Oregon), and Europe (Ireland) AWS Regions in On-Demand, Spot, Reserved Instance, and Savings Plan form.


AWS Week in Review – April 24, 2023: Amazon CodeCatalyst, Amazon S3 on Snowball Edge, and More…

Post Syndicated from Jeff Barr original

As always, there’s plenty to share this week: Amazon CodeCatalyst is now generally available, Amazon S3 is now available on Snowball Edge devices, version 1.0.0 of AWS Amplify Flutter is here, and a lot more. Let’s dive in!

Last Week’s Launches
Here are some of the launches that caught my eye this past week:

Amazon CodeCatalyst – First announced at re:Invent in preview form (Announcing Amazon CodeCatalyst, a Unified Software Development Service), this unified software development and delivery service is now generally available. As Steve notes in the post that he wrote for the preview, “Amazon CodeCatalyst enables software development teams to quickly and easily plan, develop, collaborate on, build, and deliver applications on AWS, reducing friction throughout the development lifecycle.” During the preview we added the ability to use AWS Graviton2 for CI/CD workflows and deployment environments, along with other new features, as detailed in the What’s New.

Amazon S3 on Snowball Edge – You have had the power to create S3 buckets on AWS Snow Family devices for a couple of years, and to PUT and GET objects. With this new launch you can, as Channy says, “…use an expanded set of Amazon S3 APIs to easily build applications on AWS and deploy them on Snowball Edge Compute Optimized devices.” This launch allows you to manage the storage using AWS OpsHub, and to address multiple Denied, Disrupted, Intermittent, and Limited Impact (DDIL) use cases. To learn more, read Amazon S3 Compatible Storage on AWS Snowball Edge Compute Optimized Devices Now Generally Available.

Amazon Redshift Updates – We announced multiple updates to Amazon Redshift including the MERGE SQL command so that you can combine a series of DML statements into a single statement, dynamic data masking to simplify the process of protecting sensitive data in your Amazon Redshift data warehouse, and centralized access control for data sharing with AWS Lake Formation.

AWS Amplify – You can now build cross-platform Flutter apps that target iOS, Android, Web, and desktop using a single codebase and with a consistent user experience. To learn more and to see how to get started, read Amplify Flutter announces general availability for web and desktop support. In addition to the GA, we also announced that AWS Amplify supports Push Notifications for Android, Swift, React Native, and Flutter apps.

X in Y – We made existing services available in additional regions and locations:

For a full list of AWS announcements, take a look at the What’s New at AWS page and consider subscribing to the page’s RSS feed. If you want even more detail, you can Subscribe to AWS Daily Feature Updates via Amazon SNS.

Interesting Blog Posts

Other AWS Blogs – Here are some fresh posts from a few of the other AWS Blogs:

AWS Open Source – My colleague Ricardo writes a weekly newsletter to highlight new open source projects, tools, and demos from the AWS Community. Read edition 154 to learn more.

AWS Graviton Weekly – Marcos Ortiz writes a weekly newsletter to highlight the latest developments in AWS custom silicon. Read AWS Graviton weekly #33 to see what’s up.

Upcoming Events
Here are some upcoming live and online events that may be of interest to you:

AWS Community Day Turkey will take place in Istanbul on May 6, and I will be there to deliver the keynote. Get your tickets and I will see you there!

AWS Summits are coming to Berlin (May 4), Washington, DC (June 7 and 8), London (June 7), and Toronto (June 14). These events are free but I highly recommend that you register ahead of time.

.NET Enterprise Developer Day EMEA is a free one-day virtual conference on April 25; register now.

AWS Developer Innovation Day is also virtual, and takes place on April 26 (read Discover Building without Limits at AWS Developer Innovation Day for more info). I’ll be watching all day and sharing a live recap at the end; learn more and see you there.

And that’s all for today!


Subscribe to AWS Daily Feature Updates via Amazon SNS

Post Syndicated from Jeff Barr original

Way back in 2015 I showed you how to Subscribe to AWS Public IP Address Changes via Amazon SNS. Today I am happy to tell you that you can now receive timely, detailed information about releases and updates to AWS via the same, simple mechanism.

Daily Feature Updates
Simply subscribe to topic arn:aws:sns:us-east-1:692768080016:aws-new-feature-updates using the email protocol and confirm the subscription in the usual way:
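If you prefer to subscribe programmatically rather than through the console, a short Python sketch follows. The topic ARN is the one above; the boto3 call is the standard SNS `subscribe` operation, shown commented out so the snippet runs anywhere, and `you@example.com` is a placeholder for your own address:

```python
# Daily Feature Updates topic (from this post):
TOPIC_ARN = "arn:aws:sns:us-east-1:692768080016:aws-new-feature-updates"

def parse_arn(arn: str) -> dict:
    """Split an ARN into its standard colon-delimited fields."""
    partition, service, region, account, resource = arn.split(":")[1:6]
    return {"partition": partition, "service": service,
            "region": region, "account": account, "resource": resource}

print(parse_arn(TOPIC_ARN)["region"])  # us-east-1

# With boto3 installed and credentials configured:
# import boto3
# sns = boto3.client("sns", region_name=parse_arn(TOPIC_ARN)["region"])
# sns.subscribe(TopicArn=TOPIC_ARN, Protocol="email",
#               Endpoint="you@example.com")
```

Note that the client must target the topic’s own region (us-east-1 here), and the email subscription still has to be confirmed by clicking the link in the confirmation message.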

You will receive daily emails that start off like this, with an introduction and a summary of the update:

After the introduction, the email contains a JSON representation of the daily feature updates:

As noted in the message, the JSON content is also available online at URLs that look like . You can also edit the date in the URL to access historical data going back up to six months.

The email message also includes detailed information about changes and additions to managed policies that will be of particular interest to AWS customers who currently manually track and then verify the impact that these changes may have on their security profile. Here’s a sample list of changes (additional permissions) to existing managed policies:

And here’s a new managed policy:

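If you track these policy changes yourself today, the core of the check these emails automate is just a set difference between two versions of a policy’s action list; a minimal sketch (the policy contents below are made up for illustration):

```python
# Action lists from two hypothetical versions of a managed policy.
old_actions = {"s3:GetObject", "s3:ListBucket"}
new_actions = {"s3:GetObject", "s3:ListBucket", "s3:PutObject"}

# Newly granted permissions are whatever appears only in the new version.
added = sorted(new_actions - old_actions)
removed = sorted(old_actions - new_actions)
print(added)  # ['s3:PutObject']
```

The daily email does this comparison for you across every updated managed policy, so changes like the added `s3:PutObject` above surface without any manual diffing.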
Even More Information
The header of the email contains a link to a treasure trove of additional information. Here are some examples:

AWS Regions and AWS Services – A pair of tables. The first one includes a row for each AWS Region and a column for each service, and the second one contains the transposed version:

AWS Regions and EC2 Instance Types – Again, a pair of tables. The first one includes a row for each AWS Region and a column for each EC2 instance type, and the second one contains the transposed version:

The EC2 Instance Types Configuration link leads to detailed information about each instance type:

Each page also includes a link to the same information in JSON form. For example, the EC2 Instance Types Configuration data starts like this:

    "a1.2xlarge": {
        "af-south-1": "-",
        "ap-east-1": "-",
        "ap-northeast-1": "a1.2xlarge",
        "ap-northeast-2": "-",
        "ap-northeast-3": "-",
        "ap-south-1": "a1.2xlarge",
        "ap-south-2": "-",
        "ap-southeast-1": "a1.2xlarge",
        "ap-southeast-2": "a1.2xlarge",
        "ap-southeast-3": "-",
        "ap-southeast-4": "-",
        "ca-central-1": "-",
        "eu-central-1": "a1.2xlarge",
        "eu-central-2": "-",
        "eu-north-1": "-",
        "eu-south-1": "-",
        "eu-south-2": "-",
        "eu-west-1": "a1.2xlarge",
        "eu-west-2": "-",
        "eu-west-3": "-",
        "me-central-1": "-",
        "me-south-1": "-",
        "sa-east-1": "-",
        "us-east-1": "a1.2xlarge",
        "us-east-2": "a1.2xlarge",
        "us-gov-east-1": "-",
        "us-gov-west-1": "-",
        "us-west-1": "-",
        "us-west-2": "a1.2xlarge"

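Data in this shape is straightforward to process. As a quick sketch, here is how you might list the Regions where an instance type is offered, using a trimmed copy of the mapping above (in the real data, “-” means not available):

```python
# Trimmed excerpt of the EC2 Instance Types Configuration JSON above.
instance_regions = {
    "a1.2xlarge": {
        "ap-northeast-1": "a1.2xlarge",
        "ap-northeast-2": "-",
        "eu-west-1": "a1.2xlarge",
        "us-east-1": "a1.2xlarge",
        "us-west-1": "-",
    }
}

def regions_with(instance_type: str, data: dict) -> list:
    """Regions where the given instance type is available."""
    return sorted(region for region, value in data[instance_type].items()
                  if value != "-")

print(regions_with("a1.2xlarge", instance_regions))
# ['ap-northeast-1', 'eu-west-1', 'us-east-1']
```

In practice you would fetch the full JSON from the linked URL (for example with `urllib.request` and `json.load`) rather than embedding it.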
Other information includes:

  • VPC Endpoints
  • AWS Services Integrated with Service Quotas
  • Amazon SageMaker Instance Types
  • RDS DB Engine Versions
  • Amazon Nimble Instance Types
  • Amazon MSK Apache Kafka Versions

Information Sources
The information is pulled from multiple public sources, cross-checked, and then issued. Here are some of the things that we look for:

Things to Know
Here are a couple of things that you should keep in mind about the AWS Daily Feature Updates:

Content – The content provided in the Daily Feature Updates and in the treasure trove of additional information will continue to grow as new features are added to AWS.

Region Coverage – The Daily Feature Updates cover all AWS Regions in the public partition. Where possible, they also provide information about GovCloud regions; this currently includes EC2 Instance Types, SageMaker Instance Types, and Amazon Nimble Instance Types.

Region Mappings – The internal data that drives all of the information related to AWS Regions is updated once a day if there are applicable new features, and also when new AWS Regions are enabled.

Updates – On days when there are no updates, there will not be an email notification.

Usage – Similar to the updates on the What’s New page and the associated RSS feed, the updates are provided for informational purposes, and you still need to do your own evaluation and testing before deploying to production.



AWS Week in Review – March 6, 2023

Post Syndicated from Jeff Barr original

It has been a week full of interesting launches and I am thrilled to be able to share them with you today. We’ve got a new region in the works, a new tool for researchers, updates to Amazon Timestream, Control Tower, and Amazon Inspector, Lambda Powertools for .NET, existing services in new locations, lots of posts from other AWS blogs, upcoming events, and more.

Last Week’s Launches
Here are some of the launches that caught my eye this past week:

AWS Region in Malaysia – We are working on an AWS Region in Malaysia, bringing the number of regions that are currently in the works to five. The upcoming region will include three Availability Zones, and represents our commitment to invest at least $6 Billion in Malaysia by 2037. You can read my post to learn about how our enterprise, startup, and public sector customers are already using AWS.

Amazon Lightsail for Research – You can get access to analytical applications such as Scilab, RStudio, and Jupyter with just a couple of clicks. Instead of processing large data sets on your laptop, you can get to work quickly without having to deal with hardware setup, software setup, or tech support.

Batch Loading of Data into Amazon Timestream – You can now batch-load time series data into Amazon Timestream. You upload the data to an Amazon Simple Storage Service (Amazon S3) bucket in CSV form, specify a target database and table, and a data model. The ingestion happens automatically and reliably, with parallel processes at work for efficiency.
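The CSV you upload for a batch load is nothing exotic; here is a small Python sketch that builds one (the column names are illustrative, not required by Timestream; the data model you supply with the load task maps each column to a dimension, measure, or timestamp):

```python
import csv
import io
import time

def build_csv(rows) -> str:
    """Assemble a CSV of time series records of the kind you would
    upload to S3 for a Timestream batch load (illustrative schema)."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["time", "device_id", "temperature"])  # header row
    writer.writerows(rows)
    return buf.getvalue()

now_ms = int(time.time() * 1000)  # Timestream timestamps are commonly epoch ms
csv_text = build_csv([(now_ms, "sensor-1", 21.5),
                      (now_ms, "sensor-2", 22.0)])
print(csv_text.splitlines()[0])  # time,device_id,temperature
```

From there you would upload the file to your S3 bucket and create the batch load task (via the console or the Timestream Write API), pointing it at the bucket, the target table, and your data model.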

Control Tower Progress Tracker – AWS Control Tower now includes a progress tracker that shows you the milestones (and their status) of the landing zone setup and upgrade process. Milestones such as updating shared accounts for logging, configuring Account Factory, and enabling mandatory controls are tracked so that you have additional visibility into the status of your setup or upgrade process.

Kinesis Data Streams Throughput Increase – Each Amazon Kinesis Data Stream now supports up to 1 GB/second of write throughput and 2 GB/second of read throughput, both in On-Demand capacity mode. To reach this level of throughput for your data streams you will need to submit a Support Ticket, as described in the What’s New.

Lambda Powertools for .NET – This open source developer library is now generally available. It helps you to incorporate Well-Architected serverless best practices into your code, with a focus on observability features including distributed tracing, structured logging, and asynchronous metrics (both business and applications).

Amazon Inspector Code Scans for Lambda Functions – This preview launch gives Amazon Inspector the power to scan your AWS Lambda functions for vulnerabilities such as injection flaws, data leaks, weak cryptography, or missing encryption. Findings are aggregated in the Amazon Inspector console, routed to AWS Security Hub, and pushed to Amazon EventBridge.

X in Y – We made existing services and features available in additional regions and locations:

For a full list of AWS announcements, take a look at the What’s New at AWS page, and consider subscribing to the page’s RSS feed.

Interesting Blog Posts

Other AWS Blogs – Here are some fresh posts from a few of the other AWS Blogs:

AWS Open Source – My colleague Ricardo writes a weekly newsletter to highlight new open source projects, tools, and demos from the AWS Community. Read edition #147 to learn more.

Upcoming AWS Events
Check your calendar and be sure to attend these upcoming events:

AWSome Women Community Summit LATAM 2023 – Organized by members of the woman-led AWS communities in Perú, Chile, Argentina, Guatemala, and Colombia, this event will take place in Bogotá, Colombia, with an online option as well.

AWS Pi Day – Join us on March 14th for the third annual AWS Pi Day live, virtual event hosted on the AWS On Air channel on Twitch as we celebrate the 17th birthday of Amazon S3 and the cloud.

We will discuss the latest innovations across AWS Data services, from storage to analytics and AI/ML. If you are curious about how AI can transform your business, register here and join my session.

AWS Innovate Data and AI/ML edition – AWS Innovate is a free online event to learn the latest from AWS experts and get step-by-step guidance on using AI/ML to drive fast, efficient, and measurable results. Register now for EMEA (March 9) and the Americas (March 14th).

You can browse all upcoming AWS-led in-person, virtual events and developer focused events such as Community Days.

And that’s all for today!


In the Works – AWS Region in Malaysia

Post Syndicated from Jeff Barr original

We launched an AWS Region in Australia earlier this year, four more (Switzerland, Spain, the United Arab Emirates, and India) in 2022, and are working on regions in Canada, Israel, New Zealand, and Thailand. All told, we now have 99 Availability Zones spread across 31 geographic regions.

Malaysia in the Works
Today I am happy to announce that we are working on an AWS region in Malaysia. This region will give AWS customers the ability to run workloads and store data that must remain in-country.

The region will include three Availability Zones (AZs), each one physically independent of the others in the region yet far enough apart to minimize the risk that an AZ-level event will affect business continuity. The AZs will be connected to each other by high-bandwidth, low-latency network connections over dedicated, fully-redundant fiber.

AWS in Malaysia
We are planning to invest at least $6 billion (25.5 billion Malaysian ringgit) in Malaysia by 2037.

Many organizations in Malaysia are already making use of the existing AWS Regions. This includes enterprise and public sector organizations such as Axiata Group, Baba Products, Bank Islam Malaysia, Celcom Digi, PayNet, PETRONAS, Tenaga Nasional Berhad (TNB), Asia Pacific University of Technology & Innovation, Cybersecurity Malaysia, Department of Statistics Malaysia, Ministry of Higher Education Malaysia, and Pos Malaysia, and startups like Baba’s, BeEDucation Adventures, CARSOME, and StoreHub.

Here’s a small sample of some of the exciting and innovative work that our customers are doing in Malaysia:

Johor Corporation (JCorp) is the principal development institution that drives the growth of the state of Johor’s economy through its operations in the agribusiness, wellness, food and restaurants, and real estate and infrastructure sectors. To power JCorp’s digital transformation and achieve the JCorp 3.0 reinvention plan goals, the company is leveraging the AWS cloud to manage its data and applications, serving as a single source of truth for its business and operational knowledge, and paving the way for the company to tap artificial intelligence, machine learning, and blockchain technologies in the future.

Radio Televisyen Malaysia (RTM), established in 1946, is the national public broadcaster of Malaysia, bringing news, information, and entertainment programs through its six free-to-air channels and 34 radio stations to millions of Malaysians daily. Bringing cutting-edge AWS technologies closer to RTM in Malaysia will accelerate the time it takes to develop new media services, while delivering a better viewer experience with lower latency.

Bank Islam, Malaysia’s first listed Islamic banking institution, provides end-to-end financial solutions that meet the diverse needs of their customers. The bank taps AWS’ expertise to power its digital transformation and the development of Be U digital bank through its Centre of Digital Experience, a stand-alone division that creates cutting-edge financial services on AWS to enhance customer experiences.

Malaysian Administrative Modernisation and Management Planning Unit (MAMPU) encourages public sector agencies to adopt cloud in all ICT projects in order to accelerate the application of emerging technologies and increase the efficiency of public service. MAMPU believes the establishment of the AWS Region in Malaysia will further accelerate digitalization of the public sector, and bolster efforts for public sector agencies to deliver advanced citizen services seamlessly.

Malaysia is also home to both Independent Software Vendors (ISVs) and Systems Integrators that are members of the AWS Partner Network (APN). The ISV partners build innovative solutions on AWS and the SIs provide business, technical, marketing, and go-to-market support to customers. AWS Partners based in Malaysia include Axrail, eCloudvalley, Exabytes, G-AsiaPacific, GHL, Maxis, Radmik Solutions Sdn Bhd, Silverlake, Tapway, Fourtitude, and Wavelet.

New Explainer Video
To learn more about our global infrastructure, be sure to watch our new AWS Global Infrastructure Explainer video:

Stay Tuned
As usual, subscribe to this blog so that you will be among the first to know when the new region is open!


New: AWS Telco Network Builder – Deploy and Manage Telco Networks

Post Syndicated from Jeff Barr original

Over the course of more than one hundred years, the telecom industry has become standardized and regulated, and has developed methods, technologies, and an entire vocabulary (chock full of interesting acronyms) along the way. As an industry, they need to honor this tremendous legacy while also taking advantage of new technology, all in the name of delivering the best possible voice and data services to their customers.

Today I would like to tell you about AWS Telco Network Builder (TNB). This new service is designed to help Communications Service Providers (CSPs) deploy and manage public and private telco networks on AWS. It uses existing standards, practices, and data formats, and makes it easier for CSPs to take advantage of the power, scale, and flexibility of AWS.

Today, CSPs often deploy their code to virtual machines. However, as they look to the future they are looking for additional flexibility and are increasingly making use of containers. AWS TNB is intended to be a part of this transition, and makes use of Kubernetes and Amazon Elastic Kubernetes Service (EKS) for packaging and deployment.

Concepts and Vocabulary
Before we dive in to the service, let’s take a look at some concepts and vocabulary that are unique to this industry and relevant to AWS TNB:

European Telecommunications Standards Institute (ETSI) – A European organization that defines specifications suitable for global use. AWS TNB supports multiple ETSI specifications including ETSI SOL001 through ETSI SOL005, and ETSI SOL007.

Communications Service Provider (CSP) – An organization that offers telecommunications services.

Topology and Orchestration Specification for Cloud Applications (TOSCA) – A standardized grammar that is used to describe service templates for telecommunications applications.

Network Function (NF) – A software component that performs a specific core or value-added function within a telco network.

Virtual Network Function Descriptor (VNFD) – A specification of the metadata needed to onboard and manage a Network Function.

Cloud Service Archive (CSAR) – A ZIP file that contains a VNFD, references to container images that hold Network Functions, and any additional files needed to support and manage the Network Function.

Network Service Descriptor (NSD) – A specification of the compute, storage, networking, and location requirements for a set of Network Functions along with the information needed to assemble them to form a telco network.

Network Core – The heart of a network. It uses control plane and data plane operations to manage authentication, authorization, data, and policies.

Service Orchestrator (SO) – An external, high-level network management tool.

Radio Access Network (RAN) – The components (base stations, antennas, and so forth) that provide wireless coverage over a specific geographic area.
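Of the terms above, the CSAR is the first artifact you actually build. Since a CSAR is just a ZIP archive that bundles a VNFD with supporting files, it can be assembled with nothing more than Python's standard library. This is a minimal sketch; the `vnfd.yaml` file name and the VNFD contents are illustrative, not a TNB requirement:

```python
import io
import zipfile

# Minimal VNFD stand-in; a real one describes the Network Function in full.
VNFD = "tosca_definitions_version: tnb_simple_yaml_1_0\n"

def build_csar(vnfd_text: str) -> bytes:
    """Pack a VNFD (and, in practice, Helm charts and scripts) into an
    in-memory CSAR, which is simply a ZIP archive."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as z:
        z.writestr("vnfd.yaml", vnfd_text)
        # Additional entries (Helm chart directories, hook scripts) would
        # be added here with more z.writestr()/z.write() calls.
    return buf.getvalue()

csar = build_csar(VNFD)
print(zipfile.ZipFile(io.BytesIO(csar)).namelist())  # → ['vnfd.yaml']
```

The same bytes could then be written to disk (or uploaded) as a `.zip`/`.csar` file.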

Using AWS Telco Network Builder (TNB)
I don’t happen to be a CSP, but I will do my best to walk you through the getting-started experience anyway! The primary steps are:

  1. Creating a function package for each Network Function by uploading a CSAR.
  2. Creating a network package for the network by uploading a Network Service Descriptor (NSD).
  3. Creating a network by selecting and instantiating an NSD.

To begin, I open the AWS TNB Console and click Get started:

Initially, I have no networks, no function packages, and no network packages:

My colleagues supplied me with sample CSARs and an NSD for use in this blog post (the network functions are from Free 5G Core):

Each CSAR is a fairly simple ZIP file with a VNFD and other items inside. For example, the VNFD for the Free 5G Core Session Management Function (smf) looks like this:

tosca_definitions_version: tnb_simple_yaml_1_0

topology_template:
  node_templates:
    Free5gcSMF:
      type: tosca.nodes.AWS.VNF
      properties:
        descriptor_id: "4b2abab6-c82a-479d-ab87-4ccd516bf141"
        descriptor_version: "1.0.0"
        descriptor_name: "Free5gc SMF 1.0.0"
        provider: "Free5gc"
      requirements:
        helm: HelmImage
    HelmImage:
      type: tosca.nodes.AWS.Artifacts.Helm
      properties:
        implementation: "./free5gc-smf"

The final section (HelmImage) of the VNFD points to the Kubernetes Helm Chart that defines the implementation.

I click Function packages in the console, then click Create function package. Then I upload the first CSAR and click Next:

I review the details and click Create function package (each VNFD can include a set of parameters that have default values which can be overwritten with values that are specific to a particular deployment):

I repeat this process for the nine remaining CSARs, and all ten function packages are ready to use:

Now I am ready to create a Network Package. The Network Service Descriptor is also fairly simple, and I will show you several excerpts. First, the NSD establishes a mapping from descriptor_id to namespace for each Network Function so that the functions can be referenced by name:

  - descriptor_id: "aa97cf70-59db-4b13-ae1e-0942081cc9ce"
    namespace: "amf"
  - descriptor_id: "86bd1730-427f-480a-a718-8ae9dcf3f531"
    namespace: "ausf"

Then it defines the input variables, including default values (this reminds me of an AWS CloudFormation template):

    vpc_cidr_block:
      type: String
      description: "CIDR Block for Free5GCVPC"
      default: ""

    eni_subnet_01_cidr_block:
      type: String
      description: "CIDR Block for Free5GCENISubnet01"
      default: ""

Next, it uses the variables to create a mapping to the desired AWS resources (a VPC and a subnet in this case):

    Free5GCVPC:
      type: tosca.nodes.AWS.Networking.VPC
      properties:
        cidr_block: { get_input: vpc_cidr_block }
        dns_support: true

    Free5GCENISubnet01:
      type: tosca.nodes.AWS.Networking.Subnet
      properties:
        type: "PUBLIC"
        availability_zone: { get_input: subnet_01_az }
        cidr_block: { get_input: eni_subnet_01_cidr_block }
        route_table: Free5GCRouteTable
        vpc: Free5GCVPC
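Before instantiating a network, it can be worth sanity-checking that the subnet CIDR blocks you supply as inputs actually fall inside the VPC CIDR block. A small sketch using Python's standard `ipaddress` module (the input names mirror the NSD excerpt; the example CIDR values are made up for illustration):

```python
from ipaddress import ip_network

def invalid_subnets(vpc_cidr_block: str, subnet_cidrs: list) -> list:
    """Return the subnet CIDRs that do NOT fit inside the VPC CIDR."""
    vpc = ip_network(vpc_cidr_block)
    return [c for c in subnet_cidrs if not ip_network(c).subnet_of(vpc)]

# The second block lies outside the 10.100.0.0/16 range, so it is flagged.
bad = invalid_subnets("10.100.0.0/16", ["10.100.50.0/24", "10.200.1.0/24"])
print(bad)  # → ['10.200.1.0/24']
```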

Then it defines an AWS Internet Gateway within the VPC:

    Free5GCIGW:
      type: tosca.nodes.AWS.Networking.InternetGateway
      properties:
        dest_cidr: { get_input: igw_dest_cidr }
        route_table: Free5GCRouteTable
        vpc: Free5GCVPC

Finally, it specifies deployment of the Network Functions to an EKS cluster; the functions are deployed in the specified order:

    Free5gcHelmDeploy:
      type: tosca.nodes.AWS.Deployment.VNFDeployment
      requirements:
        cluster: Free5GCEKS
        deployment: Free5GCNRFHelmDeploy
        vnfs:
          - amf.Free5gcAMF
          - ausf.Free5gcAUSF
          - nssf.Free5gcNSSF
          - pcf.Free5gcPCF
          - smf.Free5gcSMF
          - udm.Free5gcUDM
          - udr.Free5gcUDR
          - upf.Free5gcUPF
          - webui.Free5gcWEBUI
      interfaces:
        Hook:
          pre_create: Free5gcSimpleHook

I click Create network package, select the NSD, and click Next to proceed. AWS TNB asks me to review the list of function packages and the NSD parameters. I do so, and click Create network package:

My network package is created and ready to use within seconds:

Now I am ready to create my network instance! I select the network package and choose Create network instance from the Actions menu:

I give my network a name and a description, then click Next:

I make sure that I have selected the desired network package, review the list of function packages that will be deployed, and click Next:

Then I do one final review, and click Create network instance:

I select the new network instance and choose Instantiate from the Actions menu:

I review the parameters, and enter any desired overrides, then click Instantiate network:

AWS Telco Network Builder (TNB) begins to instantiate my network (behind the scenes, the service creates an AWS CloudFormation template, uses the template to create a stack, and performs other tasks, including installing Helm charts and running custom scripts). When the instantiation step is complete, my network is ready to go. Instantiating a network creates a deployment, and the same network (perhaps with some parameters overridden) can be deployed more than once. I can see all of the deployments at a glance:

I can return to the dashboard to see my networks, function packages, network packages, and recent deployments:

Inside an AWS TNB Deployment
Let’s take a quick look inside my deployment. Here’s what AWS TNB set up for me:

Network – An Amazon Virtual Private Cloud (Amazon VPC) with three subnets, a route table, a route, and an Internet Gateway.

Compute – An Amazon Elastic Kubernetes Service (EKS) cluster.

CI/CD – An AWS CodeBuild project that is triggered every time a node is added to the cluster.

Things to Know
Here are a couple of things to know about AWS Telco Network Builder (TNB):

Access – In addition to the console access that I showed you above, you can access AWS TNB from the AWS Command Line Interface (AWS CLI) and the AWS SDKs.
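For example, the console walkthrough above maps to a handful of SDK calls. Here is a sketch using the AWS SDK for Python (boto3) and its `tnb` client; the operation names follow TNB's ETSI SOL-based API, but treat the exact parameter shapes as assumptions to verify against the SDK documentation. The client is passed in as an argument so the flow can be exercised without AWS credentials:

```python
def onboard_and_instantiate(tnb, csar_bytes, nsd_bytes, network_name):
    """Onboard a function package and a network package, then create and
    instantiate a network instance. `tnb` is assumed to be a boto3 client
    created with boto3.client("tnb")."""
    # 1. Create a function package and upload the CSAR.
    fp = tnb.create_sol_function_package()
    tnb.put_sol_function_package_content(vnfPkgId=fp["id"], file=csar_bytes)

    # 2. Create a network package and upload the NSD.
    np = tnb.create_sol_network_package()
    tnb.put_sol_network_package_content(nsdInfoId=np["id"], file=nsd_bytes)

    # 3. Create a network instance from the NSD, then instantiate it.
    ni = tnb.create_sol_network_instance(nsdInfoId=np["id"], nsName=network_name)
    tnb.instantiate_sol_network_instance(nsInstanceId=ni["id"])
    return ni["id"]
```

In practice you would repeat step 1 for each CSAR (ten of them in the walkthrough above) before uploading the NSD that references them.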

Deployment Options – We are launching with the ability to create a network that spans multiple Availability Zones in a single AWS Region. Over time we expect to add additional deployment options such as Local Zones and Outposts.

Pricing – Pricing is based on the number of Network Functions that are managed by AWS TNB and on calls to the AWS TNB APIs, but the first 45,000 API requests per month in each AWS Region are not charged. There are also additional charges for the AWS resources that are created as part of the deployment. To learn more, read the TNB Pricing page.

Getting Started
To learn more and to get started, visit the AWS Telco Network Builder (TNB) home page.


Behind the Scenes at AWS – DynamoDB UpdateTable Speedup

Post Syndicated from Jeff Barr original

We often talk about the Pace of Innovation at AWS, and share the results in this blog, in the AWS What’s New page, and in our weekly AWS on Air streams. Today I would like to talk about a slightly different kind of innovation, the kind that happens behind the scenes.

Each AWS customer uses a different mix of services, and uses those services in unique ways. Every service is instrumented and monitored, and the team responsible for designing, building, running, scaling, and evolving the service pays continuous attention to all of the resulting metrics. The metrics provide insights into how the service is being used, how it performs under load, and in many cases highlights areas for optimization in pursuit of higher availability, better performance, and lower costs.

Once an area for improvement has been identified, a plan is put into place, changes are made and tested in pre-production environments, and then deployed to multiple AWS Regions. This happens routinely, and (to date) without fanfare. Each part of AWS gets better and better, with no action on your part.

DynamoDB UpdateTable
In late 2021 we announced the Standard-Infrequent Access table class for Amazon DynamoDB. As Marcia noted in her post, using this class can reduce your storage costs by 60% compared to the existing (Standard) class. She also showed you how you could modify a table to use the new class. The modification operation calls the UpdateTable function, and that function is the topic of this post!
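The class change that Marcia showed in the console corresponds to a single UpdateTable call, whose `TableClass` parameter is part of the public DynamoDB API. A minimal sketch; the client is passed in so the function can be exercised without AWS credentials (in real use it would be `boto3.client("dynamodb")`, and the table name here is a placeholder):

```python
def change_table_class(client, table_name):
    """Move a DynamoDB table to the Standard-Infrequent Access table
    class with a single UpdateTable call. `client` is assumed to be a
    boto3 DynamoDB client."""
    return client.update_table(
        TableName=table_name,
        TableClass="STANDARD_INFREQUENT_ACCESS",
    )
```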

As is the case with just about every AWS launch, customers began to make use of the new table class right away. They created new tables and modified existing ones, benefiting from the lower pricing as soon as the modification was complete.

DynamoDB uses a highly distributed storage architecture. Each table is split into multiple partitions; operations such as changing the storage class are done in parallel across the partitions. After looking at a lot of metrics, the DynamoDB team found ways to increase parallelism and to reduce the amount of time spent managing the parallel operations.
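To see why increased parallelism pays off, here is a toy illustration (in no way DynamoDB's actual implementation) of per-partition work done sequentially versus in parallel; the partition count and simulated work time are made up:

```python
import concurrent.futures
import time

def update_partition(partition_id):
    """Stand-in for the per-partition storage-class change."""
    time.sleep(0.01)  # simulated work
    return partition_id

partitions = list(range(32))

# Sequential: total time is roughly the sum of all partition updates.
start = time.perf_counter()
for p in partitions:
    update_partition(p)
sequential = time.perf_counter() - start

# Parallel: total time approaches the single slowest partition update.
start = time.perf_counter()
with concurrent.futures.ThreadPoolExecutor(max_workers=32) as ex:
    done = list(ex.map(update_partition, partitions))
parallel = time.perf_counter() - start

print(f"sequential: {sequential:.2f}s  parallel: {parallel:.2f}s")
```

With 32 partitions the parallel pass finishes in roughly the time of one update instead of 32, which is the intuition behind the large reductions described below.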

This change had a dramatic effect for Amazon DynamoDB tables over 500 GB in size, reducing the time to update the table class by up to 97%.

Each time we make a change like this, we capture the “before” and “after” metrics, and share the results internally so that other teams can learn from the experience while they are in the process of making similar improvements of their own. Even better, each change that we make opens the door to other ones, creating a positive feedback loop that (once again) benefits everyone that uses a particular service or feature.

Every DynamoDB user can take advantage of this increased performance right away without the need for a version upgrade or downtime for maintenance (DynamoDB does not even have maintenance windows).

Incremental performance and operational improvements like this one are made routinely and without much fanfare. However, it is always good to hear back from our customers when their own measurements indicate that some part of AWS became better or faster.

Leadership Principles
As I was thinking about this change while getting ready to write this post, several Amazon Leadership Principles came to mind. The DynamoDB team showed Customer Obsession by implementing a change that would benefit any DynamoDB user with tables over 500 GB in size. To do this they had to Invent and Simplify, coming up with a better way to implement the UpdateTable function.

While you, as an AWS customer, get the benefits with no action needed on your part, this does not mean that you have to wait until we decide to pay special attention to your particular use case. If you are pushing any aspect of AWS to the limit (or want to), I recommend that you make contact with the appropriate service team and let them know what’s going on. You might be running into a quota or other limit, or pushing bandwidth, memory, or other resources to extremes. Whatever the case, the team would love to hear from you!

Stay Tuned
I have a long list of other internal improvements that we have made, and will be working with the teams to share more of them throughout the year.


New Graviton3-Based General Purpose (m7g) and Memory-Optimized (r7g) Amazon EC2 Instances

Post Syndicated from Jeff Barr original

We’ve come a long way since the launch of the m1.small instance in 2006, adding instances with additional memory, compute power, and your choice of Intel, AMD, or Graviton processors. The original general-purpose “one size fits all” instance has evolved into six families, each one optimized for specific use cases, with over 600 generally available instances in all.

New M7g and R7g
Today I am happy to tell you about the newest Amazon EC2 instance types, the M7g and the R7g. Both types are powered by the latest generation AWS Graviton3 processors, and are designed to deliver up to 25% better performance than the equivalent sixth-generation (M6g and R6g) instances, making them the best performers in EC2.

The M7g instances are for general purpose workloads such as application servers, microservices, gaming servers, mid-sized data stores, and caching fleets. The R7g instances are a great fit for memory-intensive workloads such as open-source databases, in-memory caches, and real-time big data analytics.

Here are the specs for the M7g instances:

Instance Name  vCPUs  Memory  Network Bandwidth  EBS Bandwidth
m7g.medium 1 4 GiB up to 12.5 Gbps up to 10 Gbps
m7g.large 2 8 GiB up to 12.5 Gbps up to 10 Gbps
m7g.xlarge 4 16 GiB up to 12.5 Gbps up to 10 Gbps
m7g.2xlarge 8 32 GiB up to 15 Gbps up to 10 Gbps
m7g.4xlarge 16 64 GiB up to 15 Gbps up to 10 Gbps
m7g.8xlarge 32 128 GiB 15 Gbps 10 Gbps
m7g.12xlarge 48 192 GiB 22.5 Gbps 15 Gbps
m7g.16xlarge 64 256 GiB 30 Gbps 20 Gbps
m7g.metal 64 256 GiB 30 Gbps 20 Gbps
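One practical way to use a spec table like this is to pick the smallest size that meets your requirements, since within a family the smaller sizes cost less. A small helper, with the vCPU and memory figures copied from the M7g table above (the function name and selection policy are my own, not an AWS API):

```python
# (name, vCPUs, memory in GiB) per size, copied from the M7g table above.
M7G_SPECS = [
    ("m7g.medium", 1, 4),
    ("m7g.large", 2, 8),
    ("m7g.xlarge", 4, 16),
    ("m7g.2xlarge", 8, 32),
    ("m7g.4xlarge", 16, 64),
    ("m7g.8xlarge", 32, 128),
    ("m7g.12xlarge", 48, 192),
    ("m7g.16xlarge", 64, 256),
]

def smallest_m7g(vcpus_needed, mem_gib_needed):
    """Return the smallest M7g size meeting both requirements, or None."""
    for name, vcpus, mem in M7G_SPECS:  # ordered smallest to largest
        if vcpus >= vcpus_needed and mem >= mem_gib_needed:
            return name
    return None

print(smallest_m7g(6, 20))   # → m7g.2xlarge
print(smallest_m7g(100, 8))  # → None (no size has 100 vCPUs)
```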

And here are the specs for the R7g instances:

Instance Name  vCPUs  Memory  Network Bandwidth  EBS Bandwidth
r7g.medium 1 8 GiB up to 12.5 Gbps up to 10 Gbps
r7g.large 2 16 GiB up to 12.5 Gbps up to 10 Gbps
r7g.xlarge 4 32 GiB up to 12.5 Gbps up to 10 Gbps
r7g.2xlarge 8 64 GiB up to 15 Gbps up to 10 Gbps
r7g.4xlarge 16 128 GiB up to 15 Gbps up to 10 Gbps
r7g.8xlarge 32 256 GiB 15 Gbps 10 Gbps
r7g.12xlarge 48 384 GiB 22.5 Gbps 15 Gbps
r7g.16xlarge 64 512 GiB 30 Gbps 20 Gbps
r7g.metal 64 512 GiB 30 Gbps 20 Gbps

Both types of instances are equipped with DDR5 memory, which provides up to 50% higher memory bandwidth than the DDR4 memory used in previous generations. Here’s an infographic that I created to highlight the principal performance and capacity improvements that we have made available with the new instances:

If you are not yet running your application on Graviton instances, be sure to take advantage of the AWS Graviton Ready Program. The partners in this program provide services and solutions that will help you to migrate your application and to take full advantage of all that the Graviton instances have to offer. Other helpful resources include the Porting Advisor for Graviton and the Graviton Fast Start program.

The instances are built on the AWS Nitro System, and benefit from multiple features that enhance security: always-on memory encryption, a dedicated cache for each vCPU, and support for pointer authentication. They also support encrypted EBS volumes, which protect data at rest on the volume, data moving between the instance and the volume, snapshots created from the volume, and volumes created from those snapshots. To learn more about these and other Nitro-powered security features, be sure to read The Security Design of the AWS Nitro System.

On the network side, the instances are EBS-Optimized, with dedicated networking between the instances and the EBS volumes, and they also support Enhanced Networking (read How do I enable and configure enhanced networking on my EC2 instances? for more info). The 16xlarge and metal instances also support Elastic Fabric Adapter (EFA) for applications that need a high level of inter-node communication.

Pricing and Regions
M7g and R7g instances are available today in the US East (N. Virginia), US East (Ohio), US West (Oregon), and Europe (Ireland) AWS Regions in On-Demand, Spot, Reserved Instance, and Savings Plan form.


PS – Launch one today and let me know what you think!