All posts by Jeff Barr

Subscribe to AWS Daily Feature Updates via Amazon SNS

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/subscribe-to-aws-daily-feature-updates-via-amazon-sns/

Way back in 2015 I showed you how to Subscribe to AWS Public IP Address Changes via Amazon SNS. Today I am happy to tell you that you can now receive timely, detailed information about releases and updates to AWS via the same simple mechanism.

Daily Feature Updates
Simply subscribe to topic arn:aws:sns:us-east-1:692768080016:aws-new-feature-updates using the email protocol and confirm the subscription in the usual way:
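If you would rather script the subscription than use the console, a short boto3 sketch might look like this (the `topic_region` helper is mine, not part of the post; the `subscribe` call parameters follow the documented SNS API):

```python
TOPIC_ARN = "arn:aws:sns:us-east-1:692768080016:aws-new-feature-updates"

def topic_region(arn: str) -> str:
    """Pull the region out of a topic ARN (arn:aws:sns:REGION:ACCOUNT:NAME)."""
    return arn.split(":")[3]

def subscribe(email: str) -> None:
    """Subscribe an email address; SNS then sends the confirmation message."""
    import boto3  # imported lazily so the ARN helper is usable without boto3
    sns = boto3.client("sns", region_name=topic_region(TOPIC_ARN))
    sns.subscribe(TopicArn=TOPIC_ARN, Protocol="email", Endpoint=email)
```

Calling `subscribe("you@example.com")` triggers the confirmation email described above.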

You will receive daily emails that start off like this, with an introduction and a summary of the update:

After the introduction, the email contains a JSON representation of the daily feature updates:

As noted in the message, the JSON content is also available online at URLs that look like https://aws-new-features.s3.us-east-1.amazonaws.com/update/2023-02-27.json. You can edit the date in the URL to access historical data going back up to six months.
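Since the object keys are simply ISO dates, building these URLs is easy to script; here is a small sketch (the function names are mine):

```python
from datetime import date, timedelta

BASE_URL = "https://aws-new-features.s3.us-east-1.amazonaws.com/update/{}.json"

def update_url(day: date) -> str:
    """URL of the feature-update JSON for a given day (object key is the ISO date)."""
    return BASE_URL.format(day.isoformat())

def recent_urls(days: int, today: date) -> list[str]:
    """URLs for the most recent `days` days; history goes back up to six months."""
    return [update_url(today - timedelta(days=n)) for n in range(days)]
```

For example, `update_url(date(2023, 2, 27))` produces the sample URL shown above.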

The email message also includes detailed information about changes and additions to managed policies. This will be of particular interest to AWS customers who manually track these changes and verify their potential impact on their security posture. Here’s a sample list of changes (additional permissions) to existing managed policies:

And here’s a new managed policy:

Even More Information
The header of the email contains a link to a treasure trove of additional information. Here are some examples:

AWS Regions and AWS Services – A pair of tables. The first one includes a row for each AWS Region and a column for each service, and the second one contains the transposed version:

AWS Regions and EC2 Instance Types – Again, a pair of tables. The first one includes a row for each AWS Region and a column for each EC2 instance type, and the second one contains the transposed version:

The EC2 Instance Types Configuration link leads to detailed information about each instance type:

Each page also includes a link to the same information in JSON form. For example, the JSON for the EC2 Instance Types Configuration starts like this:

{
    "a1.2xlarge": {
        "af-south-1": "-",
        "ap-east-1": "-",
        "ap-northeast-1": "a1.2xlarge",
        "ap-northeast-2": "-",
        "ap-northeast-3": "-",
        "ap-south-1": "a1.2xlarge",
        "ap-south-2": "-",
        "ap-southeast-1": "a1.2xlarge",
        "ap-southeast-2": "a1.2xlarge",
        "ap-southeast-3": "-",
        "ap-southeast-4": "-",
        "ca-central-1": "-",
        "eu-central-1": "a1.2xlarge",
        "eu-central-2": "-",
        "eu-north-1": "-",
        "eu-south-1": "-",
        "eu-south-2": "-",
        "eu-west-1": "a1.2xlarge",
        "eu-west-2": "-",
        "eu-west-3": "-",
        "me-central-1": "-",
        "me-south-1": "-",
        "sa-east-1": "-",
        "us-east-1": "a1.2xlarge",
        "us-east-2": "a1.2xlarge",
        "us-gov-east-1": "-",
        "us-gov-west-1": "-",
        "us-west-1": "-",
        "us-west-2": "a1.2xlarge"
    },
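Because the per-type maps use "-" for regions where a type is not offered, filtering them takes only a few lines; for example (working against a trimmed excerpt of the data above):

```python
import json

# Trimmed excerpt of the instance-type availability JSON ("-" means not offered).
AVAILABILITY = json.loads("""
{
    "a1.2xlarge": {
        "ap-northeast-1": "a1.2xlarge",
        "ap-northeast-2": "-",
        "us-east-1": "a1.2xlarge",
        "us-west-1": "-"
    }
}
""")

def regions_offering(instance_type: str, data: dict) -> list[str]:
    """Sorted list of regions where the given instance type is offered."""
    return sorted(r for r, v in data.get(instance_type, {}).items() if v != "-")
```

Running `regions_offering("a1.2xlarge", AVAILABILITY)` against the excerpt returns the two offering regions.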

Other information includes:

  • VPC Endpoints
  • AWS Services Integrated with Service Quotas
  • Amazon SageMaker Instance Types
  • RDS DB Engine Versions
  • Amazon Nimble Instance Types
  • Amazon MSK Apache Kafka Versions

Information Sources
The information is pulled from multiple public sources, cross-checked, and then published.

Things to Know
Here are a few things that you should keep in mind about the AWS Daily Feature Updates:

Content – The content provided in the Daily Feature Updates and in the treasure trove of additional information will continue to grow as new features are added to AWS.

Region Coverage – The Daily Feature Updates cover all AWS Regions in the public partition. Where possible, they also provide information about GovCloud regions; this currently includes EC2 Instance Types, SageMaker Instance Types, and Amazon Nimble Instance Types.

Region Mappings – The internal data that drives all of the information related to AWS Regions is updated once a day if there are applicable new features, and also when new AWS Regions are enabled.

Updates – On days when there are no updates, there will not be an email notification.

Usage – Similar to the updates on the What’s New page and the associated RSS feed, the updates are provided for informational purposes, and you still need to do your own evaluation and testing before deploying to production.

Jeff;

AWS Week in Review – March 6, 2023

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-week-in-review-march-6-2023/

It has been a week full of interesting launches and I am thrilled to be able to share them with you today. We’ve got a new region in the works, a new tool for researchers, updates to Amazon Timestream, Control Tower, and Amazon Inspector, Lambda Powertools for .NET, existing services in new locations, lots of posts from other AWS blogs, upcoming events, and more.

Last Week’s Launches
Here are some of the launches that caught my eye this past week:

AWS Region in Malaysia – We are working on an AWS Region in Malaysia, bringing the number of regions that are currently in the works to five. The upcoming region will include three Availability Zones, and represents our commitment to invest at least $6 billion in Malaysia by 2037. You can read my post to learn how our enterprise, startup, and public sector customers are already using AWS.

Amazon Lightsail for Research – You can get access to analytical applications such as Scilab, RStudio, and Jupyter with just a couple of clicks. Instead of processing large data sets on your laptop, you can get to work quickly without having to deal with hardware setup, software setup, or tech support.

Batch Loading of Data into Amazon Timestream – You can now batch-load time series data into Amazon Timestream. You upload the data to an Amazon Simple Storage Service (Amazon S3) bucket in CSV form, specify a target database and table, and a data model. The ingestion happens automatically and reliably, with parallel processes at work for efficiency.

Control Tower Progress Tracker – AWS Control Tower now includes a progress tracker that shows you the milestones (and their status) of the landing zone setup and upgrade process. Milestones such as updating shared accounts for logging, configuring Account Factory, and enabling mandatory controls are tracked so that you have additional visibility into the status of your setup or upgrade process.

Kinesis Data Streams Throughput Increase – Each Amazon Kinesis Data Stream now supports up to 1 GB/second of write throughput and 2 GB/second of read throughput, both in On-Demand capacity mode. To reach this level of throughput for your data streams you will need to submit a Support Ticket, as described in the What’s New.

Lambda Powertools for .NET – This open source developer library is now generally available. It helps you to incorporate Well-Architected serverless best practices into your code, with a focus on observability features including distributed tracing, structured logging, and asynchronous metrics (both business and application).

Amazon Inspector Code Scans for Lambda Functions – This preview launch gives Amazon Inspector the power to scan your AWS Lambda functions for vulnerabilities such as injection flaws, data leaks, weak cryptography, or missing encryption. Findings are aggregated in the Amazon Inspector console, routed to AWS Security Hub, and pushed to Amazon EventBridge.

X in Y – We made existing services and features available in additional regions and locations:

For a full list of AWS announcements, take a look at the What’s New at AWS page, and consider subscribing to the page’s RSS feed.

Interesting Blog Posts

Other AWS Blogs – Here are some fresh posts from a few of the other AWS Blogs:

AWS Open Source – My colleague Ricardo writes a weekly newsletter to highlight new open source projects, tools, and demos from the AWS Community. Read edition #147 to learn more.

Upcoming AWS Events
Check your calendar and be sure to attend these upcoming events:

AWSome Women Community Summit LATAM 2023 – Organized by members of the women-led AWS communities in Perú, Chile, Argentina, Guatemala, and Colombia, this event will take place in Bogotá, Colombia, with an online option as well.

AWS Pi Day – Join us on March 14th for the third annual AWS Pi Day live, virtual event hosted on the AWS On Air channel on Twitch as we celebrate the 17th birthday of Amazon S3 and the cloud.

We will discuss the latest innovations across AWS Data services, from storage to analytics and AI/ML. If you are curious about how AI can transform your business, register here and join my session.

AWS Innovate Data and AI/ML edition – AWS Innovate is a free online event to learn the latest from AWS experts and get step-by-step guidance on using AI/ML to drive fast, efficient, and measurable results. Register now for EMEA (March 9) and the Americas (March 14th).

You can browse all upcoming AWS-led in-person and virtual events, as well as developer-focused events such as Community Days.

And that’s all for today!

Jeff;

In the Works – AWS Region in Malaysia

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/in-the-works-aws-region-in-malaysia/

We launched an AWS Region in Australia earlier this year, four more (Switzerland, Spain, the United Arab Emirates, and India) in 2022, and are working on regions in Canada, Israel, New Zealand, and Thailand. All told, we now have 99 Availability Zones spread across 31 geographic regions.

Malaysia in the Works
Today I am happy to announce that we are working on an AWS region in Malaysia. This region will give AWS customers the ability to run workloads and store data that must remain in-country.

The region will include three Availability Zones (AZs), each one physically independent of the others in the region yet far enough apart to minimize the risk that an AZ-level event will have on business continuity. The AZs will be connected to each other by high-bandwidth, low-latency network connections over dedicated, fully-redundant fiber.

AWS in Malaysia
We are planning to invest at least $6 Billion (25.5 billion Malaysian ringgit) in Malaysia by 2037.

Many organizations in Malaysia are already making use of the existing AWS Regions. This includes enterprise and public sector organizations such as Axiata Group, Baba Products, Bank Islam Malaysia, Celcom Digi, PayNet, PETRONAS, Tenaga Nasional Berhad (TNB), Asia Pacific University of Technology & Innovation, Cybersecurity Malaysia, Department of Statistics Malaysia, Ministry of Higher Education Malaysia, and Pos Malaysia, and startups like Baba’s, BeEDucation Adventures, CARSOME, and StoreHub.

Here’s a small sample of some of the exciting and innovative work that our customers are doing in Malaysia:

Johor Corporation (JCorp) is the principal development institution that drives the growth of the state of Johor’s economy through its operations in the agribusiness, wellness, food and restaurants, and real estate and infrastructure sectors. To power JCorp’s digital transformation and achieve the goals of the JCorp 3.0 reinvention plan, the company is leveraging the AWS Cloud to manage its data and applications, serving as a single source of truth for its business and operational knowledge, and paving the way for the company to tap into artificial intelligence, machine learning, and blockchain technologies in the future.

Radio Televisyen Malaysia (RTM), established in 1946, is the national public broadcaster of Malaysia, bringing news, information, and entertainment programs through its six free-to-air channels and 34 radio stations to millions of Malaysians daily. Bringing cutting-edge AWS technologies closer to RTM in Malaysia will accelerate the time it takes to develop new media services, while delivering a better viewer experience with lower latency.

Bank Islam, Malaysia’s first listed Islamic banking institution, provides end-to-end financial solutions that meet the diverse needs of their customers. The bank taps AWS’ expertise to power its digital transformation and the development of Be U digital bank through its Centre of Digital Experience, a stand-alone division that creates cutting-edge financial services on AWS to enhance customer experiences.

Malaysian Administrative Modernisation and Management Planning Unit (MAMPU) encourages public sector agencies to adopt the cloud in all ICT projects in order to accelerate the application of emerging technologies and increase the efficiency of public service. MAMPU believes the establishment of the AWS Region in Malaysia will further accelerate the digitalization of the public sector, and bolster efforts for public sector agencies to deliver advanced citizen services seamlessly.

Malaysia is also home to both Independent Software Vendors (ISVs) and Systems Integrators that are members of the AWS Partner Network (APN). The ISV partners build innovative solutions on AWS and the SIs provide business, technical, marketing, and go-to-market support to customers. AWS Partners based in Malaysia include Axrail, eCloudvalley, Exabytes, G-AsiaPacific, GHL, Maxis, Radmik Solutions Sdn Bhd, Silverlake, Tapway, Fourtitude, and Wavelet.

New Explainer Video
To learn more about our global infrastructure, be sure to watch our new AWS Global Infrastructure Explainer video:

Stay Tuned
As usual, subscribe to this blog so that you will be among the first to know when the new region is open!

Jeff;

New: AWS Telco Network Builder – Deploy and Manage Telco Networks

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-aws-telco-network-builder-deploy-and-manage-telco-networks/

Over the course of more than one hundred years, the telecom industry has become standardized and regulated, and has developed methods, technologies, and an entire vocabulary (chock full of interesting acronyms) along the way. As an industry, they need to honor this tremendous legacy while also taking advantage of new technology, all in the name of delivering the best possible voice and data services to their customers.

Today I would like to tell you about AWS Telco Network Builder (TNB). This new service is designed to help Communications Service Providers (CSPs) deploy and manage public and private telco networks on AWS. It uses existing standards, practices, and data formats, and makes it easier for CSPs to take advantage of the power, scale, and flexibility of AWS.

Today, CSPs often deploy their code to virtual machines. However, as they look to the future, they are seeking additional flexibility and are increasingly making use of containers. AWS TNB is intended to be a part of this transition, and makes use of Kubernetes and Amazon Elastic Kubernetes Service (EKS) for packaging and deployment.

Concepts and Vocabulary
Before we dive into the service, let’s take a look at some concepts and vocabulary that are unique to this industry and relevant to AWS TNB:

European Telecommunications Standards Institute (ETSI) – A European organization that defines specifications suitable for global use. AWS TNB supports multiple ETSI specifications including ETSI SOL001 through ETSI SOL005, and ETSI SOL007.

Communications Service Provider (CSP) – An organization that offers telecommunications services.

Topology and Orchestration Specification for Cloud Applications (TOSCA) – A standardized grammar that is used to describe service templates for telecommunications applications.

Network Function (NF) – A software component that performs a specific core or value-added function within a telco network.

Virtual Network Function Descriptor (VNFD) – A specification of the metadata needed to onboard and manage a Network Function.

Cloud Service Archive (CSAR) – A ZIP file that contains a VNFD, references to container images that hold Network Functions, and any additional files needed to support and manage the Network Function.
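To make the CSAR definition concrete, a package for a single Network Function might be laid out like this (the file names are illustrative, not prescribed by the specification):

```
smf.csar (ZIP archive)
├── smf-vnfd.yaml        # the VNFD (TOSCA metadata for onboarding/management)
└── free5gc-smf/         # Helm chart referenced by the VNFD
    ├── Chart.yaml
    ├── values.yaml
    └── templates/
        └── deployment.yaml
```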

Network Service Descriptor (NSD) – A specification of the compute, storage, networking, and location requirements for a set of Network Functions along with the information needed to assemble them to form a telco network.

Network Core – The heart of a network. It uses control plane and data plane operations to manage authentication, authorization, data, and policies.

Service Orchestrator (SO) – An external, high-level network management tool.

Radio Access Network (RAN) – The components (base stations, antennas, and so forth) that provide wireless coverage over a specific geographic area.

Using AWS Telco Network Builder (TNB)
I don’t happen to be a CSP, but I will do my best to walk you through the getting-started experience anyway! The primary steps are:

  1. Creating a function package for each Network Function by uploading a CSAR.
  2. Creating a network package for the network by uploading a Network Service Descriptor (NSD).
  3. Creating a network by selecting and instantiating an NSD.

To begin, I open the AWS TNB Console and click Get started:

Initially, I have no networks, no function packages, and no network packages:

My colleagues supplied me with sample CSARs and an NSD for use in this blog post (the network functions are from Free 5G Core):

Each CSAR is a fairly simple ZIP file with a VNFD and other items inside. For example, the VNFD for the Free 5G Core Session Management Function (smf) looks like this:

tosca_definitions_version: tnb_simple_yaml_1_0

topology_template:

  node_templates:

    Free5gcSMF:
      type: tosca.nodes.AWS.VNF
      properties:
        descriptor_id: "4b2abab6-c82a-479d-ab87-4ccd516bf141"
        descriptor_version: "1.0.0"
        descriptor_name: "Free5gc SMF 1.0.0"
        provider: "Free5gc"
      requirements:
        helm: HelmImage

    HelmImage:
      type: tosca.nodes.AWS.Artifacts.Helm
      properties:
        implementation: "./free5gc-smf"

The final section (HelmImage) of the VNFD points to the Kubernetes Helm Chart that defines the implementation.

I click Function packages in the console, then click Create function package. Then I upload the first CSAR and click Next:

I review the details and click Create function package (each VNFD can include a set of parameters that have default values which can be overwritten with values that are specific to a particular deployment):

I repeat this process for the nine remaining CSARs, and all ten function packages are ready to use:

Now I am ready to create a Network Package. The Network Service Descriptor is also fairly simple, and I will show you several excerpts. First, the NSD establishes a mapping from descriptor_id to namespace for each Network Function so that the functions can be referenced by name:

vnfds:
  - descriptor_id: "aa97cf70-59db-4b13-ae1e-0942081cc9ce"
    namespace: "amf"
  - descriptor_id: "86bd1730-427f-480a-a718-8ae9dcf3f531"
    namespace: "ausf"
...

Then it defines the input variables, including default values (this reminds me of an AWS CloudFormation template):

  inputs:
    vpc_cidr_block:
      type: String
      description: "CIDR Block for Free5GCVPC"
      default: "10.100.0.0/16"

    eni_subnet_01_cidr_block:
      type: String
      description: "CIDR Block for Free5GCENISubnet01"
      default: "10.100.50.0/24"
...

Next, it uses the variables to create a mapping to the desired AWS resources (a VPC and a subnet in this case):

    Free5GCVPC:
      type: tosca.nodes.AWS.Networking.VPC
      properties:
        cidr_block: { get_input: vpc_cidr_block }
        dns_support: true

    Free5GCENISubnet01:
      type: tosca.nodes.AWS.Networking.Subnet
      properties:
        type: "PUBLIC"
        availability_zone: { get_input: subnet_01_az }
        cidr_block: { get_input: eni_subnet_01_cidr_block }
      requirements:
        route_table: Free5GCRouteTable
        vpc: Free5GCVPC

Then it defines an AWS Internet Gateway within the VPC:

    Free5GCIGW:
      type: tosca.nodes.AWS.Networking.InternetGateway
      capabilities:
        routing:
          properties:
            dest_cidr: { get_input: igw_dest_cidr }
      requirements:
        route_table: Free5GCRouteTable
        vpc: Free5GCVPC

Finally, it specifies deployment of the Network Functions to an EKS cluster; the functions are deployed in the specified order:

    Free5GCHelmDeploy:
      type: tosca.nodes.AWS.Deployment.VNFDeployment
      requirements:
        cluster: Free5GCEKS
        deployment: Free5GCNRFHelmDeploy
        vnfs:
          - amf.Free5gcAMF
          - ausf.Free5gcAUSF
          - nssf.Free5gcNSSF
          - pcf.Free5gcPCF
          - smf.Free5gcSMF
          - udm.Free5gcUDM
          - udr.Free5gcUDR
          - upf.Free5gcUPF
          - webui.Free5gcWEBUI
      interfaces:
        Hook:
          pre_create: Free5gcSimpleHook

I click Create network package, select the NSD, and click Next to proceed. AWS TNB asks me to review the list of function packages and the NSD parameters. I do so, and click Create network package:

My network package is created and ready to use within seconds:

Now I am ready to create my network instance! I select the network package and choose Create network instance from the Actions menu:

I give my network a name and a description, then click Next:

I make sure that I have selected the desired network package, review the list of functions packages that will be deployed, and click Next:

Then I do one final review, and click Create network instance:

I select the new network instance and choose Instantiate from the Actions menu:

I review the parameters, and enter any desired overrides, then click Instantiate network:

AWS Telco Network Builder (TNB) begins to instantiate my network (behind the scenes, the service creates an AWS CloudFormation template, uses the template to create a stack, and executes other tasks including Helm charts and custom scripts). When the instantiation step is complete, my network is ready to go. Instantiating a network creates a deployment, and the same network (perhaps with some parameters overridden) can be deployed more than once. I can see all of the deployments at a glance:

I can return to the dashboard to see my networks, function packages, network packages, and recent deployments:

Inside an AWS TNB Deployment
Let’s take a quick look inside my deployment. Here’s what AWS TNB set up for me:

Network – An Amazon Virtual Private Cloud (Amazon VPC) with three subnets, a route table, a route, and an Internet Gateway.

Compute – An Amazon Elastic Kubernetes Service (EKS) cluster.

CI/CD – An AWS CodeBuild project that is triggered every time a node is added to the cluster.

Things to Know
Here are a few things to know about AWS Telco Network Builder (TNB):

Access – In addition to the console access that I showed you above, you can access AWS TNB from the AWS Command Line Interface (AWS CLI) and the AWS SDKs.

Deployment Options – We are launching with the ability to create a network that spans multiple Availability Zones in a single AWS Region. Over time we expect to add additional deployment options such as Local Zones and Outposts.

Pricing – Pricing is based on the number of Network Functions that are managed by AWS TNB and on calls to the AWS TNB APIs, but the first 45,000 API requests per month in each AWS Region are not charged. There are also additional charges for the AWS resources that are created as part of the deployment. To learn more, read the TNB Pricing page.

Getting Started
To learn more and to get started, visit the AWS Telco Network Builder (TNB) home page.

Jeff;

Behind the Scenes at AWS – DynamoDB UpdateTable Speedup

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/behind-the-scenes-at-aws-dynamodb-updatetable-speedup/

We often talk about the Pace of Innovation at AWS, and share the results in this blog, in the AWS What’s New page, and in our weekly AWS on Air streams. Today I would like to talk about a slightly different kind of innovation, the kind that happens behind the scenes.

Each AWS customer uses a different mix of services, and uses those services in unique ways. Every service is instrumented and monitored, and the team responsible for designing, building, running, scaling, and evolving the service pays continuous attention to all of the resulting metrics. The metrics provide insights into how the service is being used, how it performs under load, and in many cases highlights areas for optimization in pursuit of higher availability, better performance, and lower costs.

Once an area for improvement has been identified, a plan is put into place, changes are made and tested in pre-production environments, and then deployed to multiple AWS Regions. This happens routinely, and (to date) without fanfare. Each part of AWS gets better and better, with no action on your part.

DynamoDB UpdateTable
In late 2021 we announced the Standard-Infrequent Access table class for Amazon DynamoDB. As Marcia noted in her post, using this class can reduce your storage costs by 60% compared to the existing (Standard) class. She also showed you how you could modify a table to use the new class. The modification operation calls the UpdateTable function, and that function is the topic of this post!

As is the case with just about every AWS launch, customers began to make use of the new table class right away. They created new tables and modified existing ones, benefiting from the lower pricing as soon as the modification was complete.

DynamoDB uses a highly distributed storage architecture. Each table is split into multiple partitions; operations such as changing the storage class are done in parallel across the partitions. After looking at a lot of metrics, the DynamoDB team found ways to increase parallelism and to reduce the amount of time spent managing the parallel operations.

This change had a dramatic effect for Amazon DynamoDB tables over 500 GB in size, reducing the time to update the table class by up to 97%.
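To see why parallelism matters here, consider a toy model in which partitions are processed in waves. The numbers below are invented purely for illustration; AWS has not published the actual partition counts or timings:

```python
import math

def update_seconds(partitions: int, parallelism: int, secs_per_partition: float) -> float:
    """Total time if partitions are processed `parallelism` at a time, in waves."""
    return math.ceil(partitions / parallelism) * secs_per_partition

before = update_seconds(partitions=1000, parallelism=10, secs_per_partition=30)   # 100 waves
after = update_seconds(partitions=1000, parallelism=250, secs_per_partition=30)   # 4 waves
reduction = 1 - after / before  # a 96% reduction in this made-up scenario
```

In this model, raising parallelism from 10 to 250 cuts the wall-clock time by 96%, in the same ballpark as the improvement the DynamoDB team achieved.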

Each time we make a change like this, we capture the “before” and “after” metrics, and share the results internally so that other teams can learn from the experience while they are in the process of making similar improvements of their own. Even better, each change that we make opens the door to other ones, creating a positive feedback loop that (once again) benefits everyone that uses a particular service or feature.

Every DynamoDB user can take advantage of this increased performance right away without the need for a version upgrade or downtime for maintenance (DynamoDB does not even have maintenance windows).

Incremental performance and operational improvements like this one are made routinely and without much fanfare. However, it is always good to hear back from our customers when their own measurements indicate that some part of AWS became better or faster.

Leadership Principles
As I was thinking about this change while getting ready to write this post, several Amazon Leadership Principles came to mind. The DynamoDB team showed Customer Obsession by implementing a change that would benefit any DynamoDB user with tables over 500 GB in size. To do this they had to Invent and Simplify, coming up with a better way to implement the UpdateTable function.

While you, as an AWS customer, get the benefits with no action needed on your part, this does not mean that you have to wait until we decide to pay special attention to your particular use case. If you are pushing any aspect of AWS to the limit (or want to), I recommend that you make contact with the appropriate service team and let them know what’s going on. You might be running into a quota or other limit, or pushing bandwidth, memory, or other resources to extremes. Whatever the case, the team would love to hear from you!

Stay Tuned
I have a long list of other internal improvements that we have made, and will be working with the teams to share more of them throughout the year.

Jeff;

New Graviton3-Based General Purpose (m7g) and Memory-Optimized (r7g) Amazon EC2 Instances

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-graviton3-based-general-purpose-m7g-and-memory-optimized-r7g-amazon-ec2-instances/

We’ve come a long way since the launch of the m1.small instance in 2006, adding instances with additional memory, compute power, and your choice of Intel, AMD, or Graviton processors. The original general-purpose “one size fits all” instance has evolved into six families, each one optimized for specific use cases, with over 600 generally available instances in all.

New M7g and R7g
Today I am happy to tell you about the newest Amazon EC2 instance types, the M7g and the R7g. Both types are powered by the latest generation AWS Graviton3 processors, and are designed to deliver up to 25% better performance than the equivalent sixth-generation (M6g and R6g) instances, making them the best performers in EC2.

The M7g instances are for general purpose workloads such as application servers, microservices, gaming servers, mid-sized data stores, and caching fleets. The R7g instances are a great fit for memory-intensive workloads such as open-source databases, in-memory caches, and real-time big data analytics.

Here are the specs for the M7g instances:

Instance Name    vCPUs    Memory     Network Bandwidth    EBS Bandwidth
m7g.medium       1        4 GiB      up to 12.5 Gbps      up to 10 Gbps
m7g.large        2        8 GiB      up to 12.5 Gbps      up to 10 Gbps
m7g.xlarge       4        16 GiB     up to 12.5 Gbps      up to 10 Gbps
m7g.2xlarge      8        32 GiB     up to 15 Gbps        up to 10 Gbps
m7g.4xlarge      16       64 GiB     up to 15 Gbps        up to 10 Gbps
m7g.8xlarge      32       128 GiB    15 Gbps              10 Gbps
m7g.12xlarge     48       192 GiB    22.5 Gbps            15 Gbps
m7g.16xlarge     64       256 GiB    30 Gbps              20 Gbps
m7g.metal        64       256 GiB    30 Gbps              20 Gbps

And here are the specs for the R7g instances:

Instance Name    vCPUs    Memory     Network Bandwidth    EBS Bandwidth
r7g.medium       1        8 GiB      up to 12.5 Gbps      up to 10 Gbps
r7g.large        2        16 GiB     up to 12.5 Gbps      up to 10 Gbps
r7g.xlarge       4        32 GiB     up to 12.5 Gbps      up to 10 Gbps
r7g.2xlarge      8        64 GiB     up to 15 Gbps        up to 10 Gbps
r7g.4xlarge      16       128 GiB    up to 15 Gbps        up to 10 Gbps
r7g.8xlarge      32       256 GiB    15 Gbps              10 Gbps
r7g.12xlarge     48       384 GiB    22.5 Gbps            15 Gbps
r7g.16xlarge     64       512 GiB    30 Gbps              20 Gbps
r7g.metal        64       512 GiB    30 Gbps              20 Gbps

Both types of instances are equipped with DDR5 memory, which provides up to 50% higher memory bandwidth than the DDR4 memory used in previous generations. Here’s an infographic that I created to highlight the principal performance and capacity improvements that we have made available with the new instances:

If you are not yet running your application on Graviton instances, be sure to take advantage of the AWS Graviton Ready Program. The partners in this program provide services and solutions that will help you to migrate your application and to take full advantage of all that the Graviton instances have to offer. Other helpful resources include the Porting Advisor for Graviton and the Graviton Fast Start program.

The instances are built on the AWS Nitro System, and benefit from multiple features that enhance security: always-on memory encryption, a dedicated cache for each vCPU, and support for pointer authentication. They also support encrypted EBS volumes, which protect data at rest on the volume, data moving between the instance and the volume, snapshots created from the volume, and volumes created from those snapshots. To learn more about these and other Nitro-powered security features, be sure to read The Security Design of the AWS Nitro System.

On the network side the instances are EBS-Optimized with dedicated networking between the instances and the EBS volumes, and also support Enhanced Networking (read How do I enable and configure enhanced networking on my EC2 instances? for more info). The 16xlarge and metal instances also support Elastic Fabric Adapter (EFA) for applications that need a high level of inter-node communication.

Pricing and Regions
M7g and R7g instances are available today in the US East (N. Virginia), US East (Ohio), US West (Oregon), and Europe (Ireland) AWS Regions in On-Demand, Spot, Reserved Instance, and Savings Plan form.

Jeff;

PS – Launch one today and let me know what you think!

AWS Week in Review – January 23, 2023

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-week-in-review-january-23-2023/

Welcome to my first AWS Week in Review of 2023. As usual, it has been a busy week, so let’s dive right in:

Last Week’s Launches
Here are some launches that caught my eye last week:

Amazon Connect – You can now deliver long-lasting, persistent chat experiences for your customers, with the ability to resume previous conversations, including context, metadata, and transcripts. Learn more.

Amazon RDS for MariaDB – You can now enforce the use of encrypted (SSL/TLS) connections to your database instances that are running Amazon RDS for MariaDB. Learn more.

Amazon CloudWatch – You can now use Metric Streams to send metrics across AWS accounts on a continuous, near real-time basis, within a single AWS Region. Learn more.

AWS Serverless Application Model – You can now run CloudFormation Linter from the SAM CLI to validate your SAM templates. The default rules check template size, Fn::GetAtt parameters, Fn::If syntax, and more. Learn more.

EC2 Auto Scaling – You can now see (and take advantage of) recommendations for activating a predictive scaling policy to optimize the capacity of your Auto Scaling groups. Recommendations can make use of up to 8 weeks of past data. Learn more.

Service Limit Increases – Service limits for several AWS services were raised, and other services now have additional quotas that can be raised upon request:

X In Y – Existing AWS services became available in additional regions:

Other AWS News
Here are some other news items and blog posts that may be of interest to you:

AWS Open Source News and Updates – My colleague Ricardo Sueiras highlights the latest open source projects, tools, and demos from the open source community around AWS. Read edition #142 here.

AWS Fundamentals – This new book is designed to teach you about AWS in a real-world context. It covers the fundamental AWS services (compute, database, networking, and so forth), and helps you to make use of Infrastructure as Code using AWS CloudFormation, CDK, and Serverless Framework. As an add-on purchase you can also get access to a set of gorgeous, high-resolution infographics.

Upcoming AWS Events
Check your calendars and sign up for these AWS events:

AWS on Air – Every Friday at Noon PT we discuss the latest news and go in-depth on several of the most recent launches. Learn more.

#BuildOnLive – Build On AWS Live events are a series of technical streams on twitch.tv/aws that focus on technology topics related to challenges hands-on practitioners face today:

  • Join the Build On Live Weekly show about the cloud, the community, the code, and everything in between, hosted by AWS Developer Advocates. The show streams every Thursday at 9:00 PT on twitch.tv/aws.
  • Join the new The Big Dev Theory show, co-hosted with AWS partners, discussing various topics such as data and AI, AIOps, integration, and security. The show streams every Tuesday at 8:00 PT on twitch.tv/aws.

Check the AWS Twitch schedule for all shows.

AWS Community Days – AWS Community Day events are community-led conferences that deliver a peer-to-peer learning experience, providing developers with a venue to acquire AWS knowledge in their preferred way: from one another.

AWS Innovate Data and AI/ML edition – AWS Innovate is a free online event to learn the latest from AWS experts and get step-by-step guidance on using AI/ML to drive fast, efficient, and measurable results.

  • AWS Innovate Data and AI/ML edition for Asia Pacific and Japan is taking place on February 22, 2023. Register here.
  • Registrations for AWS Innovate EMEA (March 9, 2023) and the Americas (March 14, 2023) will open soon. Check the AWS Innovate page for updates.

You can browse all upcoming in-person and virtual events.

And that’s all for this week!

Jeff;

Heads-Up: Amazon S3 Security Changes Are Coming in April of 2023

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/heads-up-amazon-s3-security-changes-are-coming-in-april-of-2023/

Starting in April of 2023 we will be making two changes to Amazon Simple Storage Service (Amazon S3) to put our latest best practices for bucket security into effect automatically. The changes will begin to go into effect in April and will be rolled out to all AWS Regions within weeks.

Once the changes are in effect for a target Region, all newly created buckets in the Region will by default have S3 Block Public Access enabled and access control lists (ACLs) disabled. Both of these options are already console defaults and have long been recommended as best practices. The options will become the default for buckets that are created using the S3 API, S3 CLI, the AWS SDKs, or AWS CloudFormation templates.

As a bit of history, S3 buckets and objects have always been private by default. We added Block Public Access in 2018 and the ability to disable ACLs in 2021 in order to give you more control, and have long been recommending the use of AWS Identity and Access Management (IAM) policies as a modern and more flexible alternative.

In light of this change, we recommend a deliberate and thoughtful approach to the creation of new buckets that rely on public access or ACLs, and believe that most applications do not need either one. If your application turns out to be one that does, then you will need to make the changes that I outline below (be sure to review your code, scripts, AWS CloudFormation templates, and any other automation).

What’s Changing
Let’s take a closer look at the changes that we are making:

S3 Block Public Access – All four of the bucket-level settings described in this post will be enabled for newly created buckets:

A subsequent attempt to set a bucket policy or an access point policy that grants public access will be rejected with a 403 Access Denied error. If you need public access for a new bucket you can create it as usual and then delete the public access block by calling DeletePublicAccessBlock (you will need s3:PutBucketPublicAccessBlock permission in order to call this function; read Block Public Access to learn more about the functions and the permissions).
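To make the flow concrete, here is a minimal sketch of that step, assuming a boto3-style S3 client (the function name and bucket name are placeholders of my own, not part of the launch):

```python
# Sketch: re-enable public access for a newly created bucket by deleting
# its default public access block. Works with any boto3-style S3 client;
# the bucket name is a placeholder.
def allow_public_access(s3_client, bucket_name):
    # The caller needs the s3:PutBucketPublicAccessBlock permission.
    s3_client.delete_public_access_block(Bucket=bucket_name)
```

With boto3 you would invoke this as `allow_public_access(boto3.client("s3"), "my-bucket")` after creating the bucket as usual.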

ACLs Disabled – The Bucket owner enforced setting will be enabled for newly created buckets, making bucket ACLs and object ACLs ineffective, and ensuring that the bucket owner is the object owner no matter who uploads the object. If you want to enable ACLs for a bucket, you can set the ObjectOwnership parameter to ObjectWriter in your CreateBucket request or you can call DeleteBucketOwnershipControls after you create the bucket. You will need s3:PutBucketOwnershipControls permission in order to use the parameter or to call the function; read Controlling Ownership of Objects and Creating a Bucket to learn more.
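Here is a small sketch of the CreateBucket path, again assuming a boto3-style client (the helper, bucket name, and Region are illustrative placeholders):

```python
# Sketch: create a bucket with ACLs left enabled by setting the
# ObjectOwnership parameter to ObjectWriter at creation time.
# Requires the s3:PutBucketOwnershipControls permission.
def create_bucket_with_acls(s3_client, bucket_name, region="us-east-1"):
    params = {"Bucket": bucket_name, "ObjectOwnership": "ObjectWriter"}
    if region != "us-east-1":
        # Outside of us-east-1, CreateBucket needs a location constraint.
        params["CreateBucketConfiguration"] = {"LocationConstraint": region}
    return s3_client.create_bucket(**params)
```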

Stay Tuned
We will publish an initial What’s New post when we start to deploy this change and another one when the deployment has reached all AWS Regions. You can also run your own tests to detect the change in behavior.

Jeff;

Join the Preview – AWS Glue Data Quality

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/join-the-preview-aws-glue-data-quality/

Back in 1980, at my second professional programming job, I was working on a project that analyzed driver’s license data from a bunch of US states. At that time data of that type was generally stored in fixed-length records, with values carefully (or not) encoded into each field. Although we were given schemas for the data, we would invariably find that the developers had resorted to tricks in order to represent values that were not anticipated up front, such as a code for a driver with heterochromia (eyes of different colors). We ended up doing a full scan of the data ahead of our actual time-consuming and expensive analytics run in order to make sure that we were dealing with known data. This was my introduction to data quality, or the lack thereof.

AWS makes it easier for you to build data lakes and data warehouses at any scale. We want to make it easier than ever before for you to measure and maintain the desired quality level of the data that you ingest, process, and share.

Introducing AWS Glue Data Quality
Today I would like to tell you about AWS Glue Data Quality, a new set of features for AWS Glue that we are launching in preview form. It can analyze your tables and recommend a set of rules automatically based on what it finds. You can fine-tune those rules if necessary and you can also write your own rules. In this blog post I will show you a few highlights, and will save the details for a full post when these features progress from preview to generally available.

Each data quality rule references a Glue table or selected columns in a Glue table and checks for specific types of properties: timeliness, accuracy, integrity, and so forth. For example, a rule can indicate that a table must have the expected number of columns, that the column names match a desired pattern, and that a specific column is usable as a primary key.
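As a sketch of what those checks look like in practice, here is a small ruleset in Glue's Data Quality Definition Language; the column names and the column count are illustrative placeholders, not values from the post:

```
Rules = [
    ColumnCount = 10,
    IsPrimaryKey "order_id",
    IsComplete "order_date",
    RowCount > 0
]
```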

Getting Started
I can open the new Data quality tab on one of my Glue tables to get started. From there I can create a ruleset manually, or I can click Recommend ruleset to get started:

Then I enter a name for my Ruleset (RS1), choose an IAM Role that has permission to access it, and click Recommend ruleset:

My click initiates a Glue Recommendation task (a specialized type of Glue job) that scans the data and makes recommendations. Once the task has run to completion I can examine the recommendations:

I click Evaluate ruleset to check on the quality of my data.

The data quality task runs and I can examine the results:

In addition to creating Rulesets that are attached to tables, I can use them as part of a Glue job. I create my job as usual and then add an Evaluate Data Quality node:

Then I use the Data Quality Definition Language (DQDL) builder to create my rules. I can choose from 20 different rule types:

For this blog post, I made these rules more strict than necessary so that I could show you what happens when the data quality evaluation fails.

I can set the job options, and choose the original data or the data quality results as the output of the transform. I can also write the data quality results to an S3 bucket:

After I have created my Ruleset, I set any other desired options for the job, save it, and then run it. After the job completes I can find the results in the Data quality tab. Because I made some overly strict rules, the evaluation correctly flagged my data with a 0% score:

There’s a lot more, but I will save that for the next blog post!

Things to Know
Preview Regions – This is an open preview and you can access it today in the US East (Ohio, N. Virginia), US West (Oregon), Asia Pacific (Tokyo), and Europe (Ireland) AWS Regions.

Pricing – Evaluating data quality consumes Glue Data Processing Units (DPU) in the same manner and at the same per-DPU pricing as any other Glue job.

Jeff;

New – Accelerate Your Lambda Functions with Lambda SnapStart

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-accelerate-your-lambda-functions-with-lambda-snapstart/

Our customers tell me that they love AWS Lambda for many reasons. On the development side they appreciate the simple programming model and ease with which their functions can make use of other AWS services. On the operations side they benefit from the ability to build powerful applications that can respond quickly to changing usage patterns.

As you might know if you are already using Lambda, your functions are run inside of a secure and isolated execution environment. The lifecycle of each environment consists of three main phases: Init, Invoke, and Shutdown. Among other things, the Init phase bootstraps the runtime for the function and runs the function’s static code. In many cases, these operations are completed within milliseconds and do not lengthen the phase in any appreciable way. In the remaining cases, they can take a considerable amount of time, for several reasons. First, initializing the runtime for some languages can be expensive. For example, the Init phase for a Lambda function that uses one of the Java runtimes in conjunction with a framework such as Spring Boot, Quarkus, or Micronaut can sometimes take as long as ten seconds (this includes dependency injection, compilation of the code for the function, and classpath component scanning). Second, the static code might download some machine learning models, pre-compute some reference data, or establish network connections to other AWS services.

Introducing Lambda SnapStart
In order to allow you to put Lambda to use in even more ways, we are introducing Lambda SnapStart today.

After you enable Lambda SnapStart for a particular Lambda function, publishing a new version of the function will trigger an optimization process. The process launches your function and runs it through the entire Init phase. Then it takes an immutable, encrypted snapshot of the memory and disk state, and caches it for reuse. When the function is subsequently invoked, the state is retrieved from the cache in chunks on an as-needed basis and used to populate the execution environment. This optimization makes invocation time faster and more predictable, since creating a fresh execution environment no longer requires a dedicated Init phase.

We are launching with support for Java functions that make use of the Corretto (java11) runtime, and expect to see Lambda SnapStart put to use right away for applications that make use of Spring Boot, Quarkus, Micronaut, and other Java frameworks. Enabling Lambda SnapStart for Java functions can make them start up to 10x faster, at no extra cost.

Using Lambda SnapStart
Because my last actual encounter with Java took place in the last century, I used the Serverless Spring Boot 2 example from the AWS Labs repo as a starting point. I installed the AWS SAM CLI and did a test build & deploy to establish a baseline. I invoked the function and saw that the Init duration was slightly more than 6 seconds:

Then I added two lines to template.yml to configure the SnapStart property:
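The post doesn't reproduce those lines, but in a SAM template the SnapStart property presumably looks something like this (the function's logical ID, handler, and other properties here are placeholders):

```yaml
Resources:
  MyJavaFunction:
    Type: AWS::Serverless::Function
    Properties:
      Runtime: java11
      Handler: example.Handler::handleRequest   # placeholder
      SnapStart:
        ApplyOn: PublishedVersions              # the two added lines
```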

I rebuilt and redeployed, published a fresh version of the function to set up SnapStart, and ran another test:

With SnapStart, the initialization phase (represented by the Init duration that I showed you earlier) happens when I publish a new version of the function. When I invoke a function that has SnapStart enabled, Lambda restores the snapshot (represented by the Restore duration) before invoking the function handler. As a result, the total cold invoke with SnapStart is now Restore duration + Duration. SnapStart has reduced the cold start duration from over 6 seconds to less than 200 ms.

Becoming Snap-Resilient
Lambda SnapStart speeds up applications by reusing a single initialized snapshot to resume multiple execution environments. This has a few interesting implications for your code:

Uniqueness – When using SnapStart, any unique content that used to be generated during initialization must now be generated after initialization in order to maintain uniqueness. If your code (or a library that it references) uses a pseudo-random number generator, it should not be based on a seed that is obtained during the Init phase. We have updated OpenSSL’s RAND_bytes to ensure randomness when used in conjunction with SnapStart, and we have verified that java.security.SecureRandom is already snap-resilient. Amazon Linux’s /dev/random and /dev/urandom are also snap-resilient.
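To make the pattern concrete, here is a sketch (in Python purely for brevity; SnapStart itself launches with Java support) of where unique content does and does not belong:

```python
import uuid

# Anti-pattern under SnapStart: this value is computed during the Init
# phase, captured in the snapshot, and therefore identical in every
# execution environment resumed from that snapshot.
SNAPSHOT_ID = str(uuid.uuid4())

def handler(event, context):
    # Snap-resilient: unique content is generated after restore, once per
    # invocation, so resumed environments do not share it.
    return {"snapshot_id": SNAPSHOT_ID, "request_id": str(uuid.uuid4())}
```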

Network Connections – If your code creates long-term connections to network services during the Init phase and uses them during the Invoke phase, make sure that it can re-establish the connection if necessary. The AWS SDKs have already been updated to do this.

Ephemeral Data – This is effectively a more general form of the above items. If your code downloads or computes reference information during the Init phase, consider doing a quick check to make sure that it has not gone stale during the caching period.

Lambda provides a pair of runtime hooks to help you to maintain uniqueness, as well as a scanning tool to help detect possible issues.

Things to Know
Here are a couple of other things to know about Lambda SnapStart:

Caching – Cached snapshots are removed after 14 days of inactivity. Lambda will automatically refresh the cache if a snapshot depends on a runtime that has been updated or patched.

Pricing – There is no extra charge for the use of Lambda SnapStart.

Feature Compatibility – You cannot use Lambda SnapStart with larger ephemeral storage, Elastic File Systems, Provisioned Concurrency, or Graviton2. In general, we recommend using SnapStart for your general-purpose Lambda functions and Provisioned Concurrency for the subset of those functions that are exceptionally sensitive to latency.

Firecracker – This feature makes use of Firecracker Snapshotting.

Regions – Lambda SnapStart is available in the US East (Ohio, N. Virginia), US West (Oregon), Asia Pacific (Singapore, Sydney, Tokyo), and Europe (Frankfurt, Ireland, Stockholm) Regions.

Jeff;

New – ENA Express: Improved Network Latency and Per-Flow Performance on EC2

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-ena-express-improved-network-latency-and-per-flow-performance-on-ec2/

We know that you can always make great use of all available network bandwidth and network performance, and have done our best to supply it to you. Over the years, network bandwidth has grown from the 250 Mbps on the original m1 instance to 200 Gbps on the newest m6in instances. In addition to raw bandwidth, we have also introduced advanced networking features including Enhanced Networking, Elastic Network Adapters (ENAs), and (for tightly coupled HPC workloads) Elastic Fabric Adapters (EFAs).

Introducing ENA Express
Today we are launching ENA Express. Building on the Scalable Reliable Datagram (SRD) protocol that already powers Elastic Fabric Adapters, ENA Express reduces P99 latency of traffic flows by up to 50% and P99.9 latency by up to 85% (in comparison to TCP), while also increasing the maximum single-flow bandwidth from 5 Gbps to 25 Gbps. Bottom line, you get a lot more per-flow bandwidth and a lot less variability.

You can enable ENA Express on new and existing ENAs and take advantage of this performance right away for TCP and UDP traffic between c6gn instances running in the same Availability Zone.

Using ENA Express
I used a pair of c6gn instances to set up and test ENA Express. After I launched the instances I used the AWS Management Console to enable ENA Express for both instances. I find each ENI, select it, and choose Manage ENA Express from the Actions menu:

I enable ENA Express and ENA Express UDP and click Save:

Then I set the Maximum Transmission Unit (MTU) to 8900 on both instances:

$ sudo /sbin/ifconfig eth0 mtu 8900

I install iperf3 on both instances, and start the first one in server mode:

$ iperf3 -s
-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------

Then I run the second one in client mode and observe the results:

$ iperf3 -c 10.0.178.46
Connecting to host 10.0.178.46, port 5201
[  4] local 10.0.187.74 port 35622 connected to 10.0.178.46 port 5201
[ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
[  4]   0.00-1.00   sec  2.80 GBytes  24.1 Gbits/sec    0   1.43 MBytes
[  4]   1.00-2.00   sec  2.81 GBytes  24.1 Gbits/sec    0   1.43 MBytes
[  4]   2.00-3.00   sec  2.80 GBytes  24.1 Gbits/sec    0   1.43 MBytes
[  4]   3.00-4.00   sec  2.81 GBytes  24.1 Gbits/sec    0   1.43 MBytes
[  4]   4.00-5.00   sec  2.81 GBytes  24.1 Gbits/sec    0   1.43 MBytes
[  4]   5.00-6.00   sec  2.80 GBytes  24.1 Gbits/sec    0   1.43 MBytes
[  4]   6.00-7.00   sec  2.80 GBytes  24.1 Gbits/sec    0   1.43 MBytes
[  4]   7.00-8.00   sec  2.81 GBytes  24.1 Gbits/sec    0   1.43 MBytes
[  4]   8.00-9.00   sec  2.81 GBytes  24.1 Gbits/sec    0   1.43 MBytes
[  4]   9.00-10.00  sec  2.81 GBytes  24.1 Gbits/sec    0   1.43 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-10.00  sec  28.0 GBytes  24.1 Gbits/sec    0             sender
[  4]   0.00-10.00  sec  28.0 GBytes  24.1 Gbits/sec                  receiver

The ENA driver reports on metrics that I can review to confirm the use of SRD:

$ ethtool -S eth0 | grep ena_srd
     ena_srd_mode: 3
     ena_srd_tx_pkts: 25858313
     ena_srd_eligible_tx_pkts: 25858323
     ena_srd_rx_pkts: 2831267
     ena_srd_resource_utilization: 0

The metrics work as follows:

  • ena_srd_mode indicates that SRD is enabled for TCP and UDP.
  • ena_srd_tx_pkts denotes the number of packets that have been transmitted via SRD.
  • ena_srd_eligible_tx_pkts denotes the number of packets that were eligible for transmission via SRD. A packet is eligible for SRD if ENA Express is enabled on both ends of the connection, both instances reside in the same Availability Zone, and the packet is using either UDP or TCP.
  • ena_srd_rx_pkts denotes the number of packets that have been received via SRD.
  • ena_srd_resource_utilization denotes the percent of allocated Nitro network card resources that are in use, and is proportional to the number of open SRD connections. If this value is consistently approaching 100%, scaling out to more instances or scaling up to a larger instance size may be warranted.
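Since these counters are plain "name: value" pairs, checking how much of your traffic is actually riding SRD takes only a few lines. A sketch (the sample numbers in the usage note come from the run above; the function names are my own):

```python
def parse_ethtool_stats(output):
    # Turn `ethtool -S <iface>` output ("  name: value" lines) into a dict.
    stats = {}
    for line in output.splitlines():
        name, sep, value = line.partition(":")
        if sep and value.strip().isdigit():
            stats[name.strip()] = int(value.strip())
    return stats

def srd_tx_ratio(stats):
    # Fraction of SRD-eligible transmitted packets that actually went via SRD.
    eligible = stats.get("ena_srd_eligible_tx_pkts", 0)
    return stats.get("ena_srd_tx_pkts", 0) / eligible if eligible else 0.0
```

Feeding it the output shown above yields an SRD transmit ratio of essentially 1.0, confirming that nearly every eligible packet used SRD.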

Things to Know
Here are a couple of things to know about ENA Express and SRD:

Access – I used the Management Console to enable and test ENA Express; CLI, API, CloudFormation and CDK support is also available.

Fallback – If a TCP or UDP packet is not eligible for transmission via SRD, it will simply be transmitted in the usual way.

UDP – SRD takes advantage of multiple network paths and “sprays” packets across them. This would normally present a challenge for applications that expect packets to arrive more or less in order, but ENA Express helps out by putting the UDP packets back into order before delivering them to you, taking the burden off of your application. If you have built your own reliability layer over UDP, or if your application does not require packets to arrive in order, you can enable ENA Express for TCP but not for UDP.

Instance Types and Sizes – We are launching with support for the 16xlarge size of the c6gn instances, with additional instance families and sizes in the works.

Resource Utilization – As I hinted at above, ENA Express uses some Nitro card resources to process packets. This processing also adds a few microseconds of latency per packet processed, and also has a moderate but measurable effect on the maximum number of packets that a particular instance can process per second. In situations where high packet rates are coupled with small packet sizes, ENA Express may not be appropriate. In all other cases you can simply enable SRD to enjoy higher per-flow bandwidth and consistent latency.

Pricing – There is no additional charge for the use of ENA Express.

Regions – ENA Express is available in all commercial AWS Regions.

All About SRD
I could write an entire blog post about SRD, but my colleagues beat me to it! Here are some great resources to help you to learn more:

A Cloud-Optimized Transport for Elastic and Scalable HPC – This paper reviews the challenges that arise when trying to run HPC traffic across a TCP-based network, and points out that the variability (latency outliers) can have a profound effect on scaling efficiency, and includes a succinct overview of SRD:

Scalable reliable datagram (SRD) is optimized for hyper-scale datacenters: it provides load balancing across multiple paths and fast recovery from packet drops or link failures. It utilizes standard ECMP functionality on the commodity Ethernet switches and works around its limitations: the sender controls the ECMP path selection by manipulating packet encapsulation.

There’s a lot of interesting detail in the full paper, and it is well worth reading!

In the Search for Performance, There’s More Than One Way to Build a Network – This 2021 blog post reviews our decision to build the Elastic Fabric Adapter, and includes some important data (and cool graphics) to demonstrate the impact of packet loss on overall application performance. One of the interesting things about SRD is that it keeps track of the availability and performance of multiple network paths between transmitter and receiver, and sprays packets across up to 64 paths at a time in order to take advantage of as much bandwidth as possible and to recover quickly in case of packet loss.

Jeff;

New General Purpose, Compute Optimized, and Memory-Optimized Amazon EC2 Instances with Higher Packet-Processing Performance

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-general-purpose-compute-optimized-and-memory-optimized-amazon-ec2-instances-with-higher-packet-processing-performance/

Today I would like to tell you about the next generation of Intel-powered general purpose, compute-optimized, and memory-optimized instances. All three of these instance families are powered by 3rd generation Intel Xeon Scalable processors (Ice Lake) running at 3.5 GHz, and are designed to support your data-intensive workloads with up to 200 Gbps of network bandwidth, the highest EBS performance in EC2 (up to 80 Gbps of bandwidth and up to 350,000 IOPS), and the ability to handle up to twice as many packets per second (PPS) as earlier instances.

New General Purpose (M6in/M6idn) Instances
The original general purpose EC2 instance (m1.small) was launched in 2006 and was the one and only instance type for a little over a year, until we launched the m1.large and m1.xlarge in late 2007. After that, we added the m3 in 2012, m4 in 2015, and the first in a very long line of m5 instances starting in 2017. The family tree branched in 2018 with the addition of the m5d instances with local NVMe storage.

And that brings us to today, and to the new m6in and m6idn instances, both available in 9 sizes:

Name vCPUs Memory Local Storage (m6idn only) Network Bandwidth EBS Bandwidth EBS IOPS
m6in.large / m6idn.large 2 8 GiB 118 GB Up to 25 Gbps Up to 20 Gbps Up to 87,500
m6in.xlarge / m6idn.xlarge 4 16 GiB 237 GB Up to 30 Gbps Up to 20 Gbps Up to 87,500
m6in.2xlarge / m6idn.2xlarge 8 32 GiB 474 GB Up to 40 Gbps Up to 20 Gbps Up to 87,500
m6in.4xlarge / m6idn.4xlarge 16 64 GiB 950 GB Up to 50 Gbps Up to 20 Gbps Up to 87,500
m6in.8xlarge / m6idn.8xlarge 32 128 GiB 1900 GB 50 Gbps 20 Gbps 87,500
m6in.12xlarge / m6idn.12xlarge 48 192 GiB 2950 GB (2 x 1425) 75 Gbps 30 Gbps 131,250
m6in.16xlarge / m6idn.16xlarge 64 256 GiB 3800 GB (2 x 1900) 100 Gbps 40 Gbps 175,000
m6in.24xlarge / m6idn.24xlarge 96 384 GiB 5700 GB (4 x 1425) 150 Gbps 60 Gbps 262,500
m6in.32xlarge / m6idn.32xlarge 128 512 GiB 7600 GB (4 x 1900) 200 Gbps 80 Gbps 350,000

The m6in and m6idn instances are available in the US East (Ohio, N. Virginia) and Europe (Ireland) regions in On-Demand and Spot form. Savings Plans and Reserved Instances are available.

New C6in Instances
Back in 2008 we launched the first in what would prove to be a very long line of Amazon Elastic Compute Cloud (Amazon EC2) instances designed to give you high compute performance and a higher ratio of CPU power to memory than the general purpose instances. Starting with those initial c1 instances, we went on to launch cluster computing instances in 2010 (cc1) and 2011 (cc2), and then (once we got our naming figured out), multiple generations of compute-optimized instances powered by Intel processors: c3 (2013), c4 (2015), and c5 (2016). As our customers put these instances to use in environments where networking performance was starting to become a limiting factor, we introduced c5n instances with 100 Gbps networking in 2018. We also broadened the c5 instance lineup by adding additional sizes (including bare metal), and instances with blazing-fast local NVMe storage.

Today I am happy to announce the latest in our lineup of Intel-powered compute-optimized instances, the c6in, available in 9 sizes:

Name vCPUs Memory Network Bandwidth EBS Bandwidth EBS IOPS
c6in.large 2 4 GiB Up to 25 Gbps Up to 20 Gbps Up to 87,500
c6in.xlarge 4 8 GiB Up to 30 Gbps Up to 20 Gbps Up to 87,500
c6in.2xlarge 8 16 GiB Up to 40 Gbps Up to 20 Gbps Up to 87,500
c6in.4xlarge 16 32 GiB Up to 50 Gbps Up to 20 Gbps Up to 87,500
c6in.8xlarge 32 64 GiB 50 Gbps 20 Gbps 87,500
c6in.12xlarge 48 96 GiB 75 Gbps 30 Gbps 131,250
c6in.16xlarge 64 128 GiB 100 Gbps 40 Gbps 175,000
c6in.24xlarge 96 192 GiB 150 Gbps 60 Gbps 262,500
c6in.32xlarge 128 256 GiB 200 Gbps 80 Gbps 350,000

The c6in instances are available in the US East (Ohio, N. Virginia), US West (Oregon), and Europe (Ireland) Regions.

As I noted earlier, these instances are designed to handle up to twice as many packets per second (PPS) as their predecessors. This allows them to deliver increased performance in situations where they need to handle a large number of small-ish network packets, which will accelerate many applications and use cases, including network virtual appliances (firewalls, virtual routers, load balancers, and appliances that detect and protect against DDoS attacks), telecommunications (Voice over IP (VoIP) and 5G communication), build servers, caches, in-memory databases, and gaming hosts. With more network bandwidth and PPS on tap, heavy-duty analytics applications that retrieve and store massive amounts of data and objects from Amazon Simple Storage Service (Amazon S3) or data lakes will benefit. For workloads that benefit from low-latency local storage, the disk versions of the new instances offer twice as much instance storage as the previous generation.

New Memory-Optimized (R6in/R6idn) Instances
The first memory-optimized instance was the m2, launched in 2009 with the now-quaint Double Extra Large and Quadruple Extra Large names, and a higher ratio of memory to CPU power than the earlier m1 instances. We had yet to learn our naming lesson and launched the High Memory Cluster Eight Extra Large (aka cr1.8xlarge) in 2013, before settling on the r prefix and launching r3 instances in 2014, followed by r4 instances in 2016, and r5 instances in 2018.

And again that brings us to today, and to the new r6in and r6idn instances, also available in 9 sizes:

Name vCPUs Memory Local Storage (r6idn only) Network Bandwidth EBS Bandwidth EBS IOPS
r6in.large / r6idn.large 2 16 GiB 118 GB Up to 25 Gbps Up to 20 Gbps Up to 87,500
r6in.xlarge / r6idn.xlarge 4 32 GiB 237 GB Up to 30 Gbps Up to 20 Gbps Up to 87,500
r6in.2xlarge / r6idn.2xlarge 8 64 GiB 474 GB Up to 40 Gbps Up to 20 Gbps Up to 87,500
r6in.4xlarge / r6idn.4xlarge 16 128 GiB 950 GB Up to 50 Gbps Up to 20 Gbps Up to 87,500
r6in.8xlarge / r6idn.8xlarge 32 256 GiB 1900 GB 50 Gbps 20 Gbps 87,500
r6in.12xlarge / r6idn.12xlarge 48 384 GiB 2950 GB (2 x 1425) 75 Gbps 30 Gbps 131,250
r6in.16xlarge / r6idn.16xlarge 64 512 GiB 3800 GB (2 x 1900) 100 Gbps 40 Gbps 175,000
r6in.24xlarge / r6idn.24xlarge 96 768 GiB 5700 GB (4 x 1425) 150 Gbps 60 Gbps 262,500
r6in.32xlarge / r6idn.32xlarge 128 1024 GiB 7600 GB (4 x 1900) 200 Gbps 80 Gbps 350,000

The r6in and r6idn instances are available in the US East (Ohio, N. Virginia), US West (Oregon), and Europe (Ireland) regions in On-Demand and Spot form. Savings Plans and Reserved Instances are available.

Inside the Instances
As you can probably guess from these specs and from the blog post that I wrote to launch the c6in instances, all of these new instance types have a lot in common. I’ll do a rare cut-and-paste from that post in order to reiterate all of the other cool features that are available to you:

Ice Lake Processors – The 3rd generation Intel Xeon Scalable processors run at 3.5 GHz, and (according to Intel) offer a 1.46x average performance gain over the prior generation. All-core Intel Turbo Boost mode is enabled on all instance sizes up to and including the 12xlarge. On the larger sizes, you can control the C-states. Intel Total Memory Encryption (TME) is enabled, protecting instance memory with a single, transient 128-bit key generated at boot time within the processor.

NUMA – Short for Non-Uniform Memory Access, this important architectural feature gives you the power to optimize for workloads where the majority of requests for a particular block of memory come from one of the processors, and that block is “closer” (architecturally speaking) to one of the processors. You can control processor affinity (and take advantage of NUMA) on the 24xlarge and 32xlarge instances.

Networking – Elastic Network Adapter (ENA) is available on all sizes of m6in, m6idn, c6in, r6in, and r6idn instances, and Elastic Fabric Adapter (EFA) is available on the 32xlarge instances. In order to make use of these adapters, you will need to make sure that your AMI includes the latest NVMe and ENA drivers. You can also make use of Cluster Placement Groups.

io2 Block Express – You can use all types of EBS volumes with these instances, including the io2 Block Express volumes that we launched earlier this year. As Channy shared in his post (Amazon EBS io2 Block Express Volumes with Amazon EC2 R5b Instances Are Now Generally Available), these volumes can be as large as 64 TiB, and can deliver up to 256,000 IOPS. As you can see from the tables above, you can use a 24xlarge or 32xlarge instance to achieve this level of performance.
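Provisioning one of these volumes is a single EC2 API call. Here is a minimal boto3 sketch; the Availability Zone, size, and IOPS values are illustrative placeholders, and the actual call requires AWS credentials:

```python
def io2_volume_spec(availability_zone, size_gib, iops):
    # io2 Block Express volumes can be as large as 64 TiB and deliver up to
    # 256,000 IOPS (the highest levels require a 24xlarge or 32xlarge instance).
    return {"AvailabilityZone": availability_zone,
            "VolumeType": "io2",
            "Size": size_gib,
            "Iops": iops}

def create_volume(spec):
    import boto3  # requires AWS credentials
    return boto3.client("ec2").create_volume(**spec)

spec = io2_volume_spec("us-east-1a", 1000, 64000)  # placeholder AZ and sizing
```

Attach the resulting volume to one of the new instances as you would any other EBS volume.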

Choosing the Right Instance
Prior to today’s launch, you could choose a c5n, m5n, or r5n instance to get the highest network bandwidth on an EC2 instance, or an r5b instance for the highest EBS IOPS performance and high EBS bandwidth. Now, customers who need high networking or EBS performance can choose from a full portfolio of instances with different memory-to-vCPU ratios and instance storage options by selecting one of the c6in, m6in, m6idn, r6in, or r6idn instances.

The higher performance of the c6in instances will allow you to scale your network-intensive workloads that need a low memory-to-vCPU ratio, such as network virtual appliances, caching servers, and gaming hosts.

The higher performance of m6in instances will allow you to scale your network and/or EBS intensive workloads such as data analytics, and telco applications including 5G User Plane Functions (UPF). You have the option to use the m6idn instance for workloads that benefit from low-latency local storage, such as high-performance file systems, or distributed web-scale in-memory caches.

Similarly, the higher network and EBS performance of the r6in instances will allow you to scale your network-intensive SQL, NoSQL, and in-memory database workloads, with the option to use the r6idn when you need low-latency local storage.

Jeff;

New Amazon EC2 Instance Types In the Works – C7gn, R7iz, and Hpc7g

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-amazon-ec2-instance-types-in-the-works-c7gn-r7iz-and-hpc7g/

We are getting ready to launch three new Amazon Elastic Compute Cloud (Amazon EC2) instance types and I am happy to be able to give you a sneak peek at them today.

C7gn Instances are designed for your most demanding network-intensive workloads: network virtual appliances (firewalls, virtual routers, load balancers, and so forth), data analytics, and tightly-coupled cluster computing jobs. They are powered by AWS Graviton3E processors and will support up to 200 Gbps of network bandwidth, along with 50% higher packet processing performance. The c7gn instances will be available in multiple sizes with up to 64 vCPUs and 128 GiB of memory. We are launching the preview today and you can Sign Up Today to join in.

Hpc7g Instances are also powered by AWS Graviton3E processors, with up to 35% higher vector instruction processing performance than the Graviton3. They are designed to give you the best price/performance for tightly coupled compute-intensive HPC and distributed computing workloads, and deliver 200 Gbps of dedicated network bandwidth that is optimized for traffic between instances in the same VPC. The hpc7g instances will be available in multiple sizes with up to 64 vCPUs and 128 GiB of memory. I’ll have more information to share on these instances in early 2023.

R7iz Instances are powered by the latest 4th generation Intel Xeon Scalable Processors (code named Sapphire Rapids) and run at a sustained all-core turbo frequency of 3.9 GHz. With high performance and DDR5 memory, these instances are a perfect match for your Electronic Design Automation (EDA), financial, actuarial, and simulation workloads. They are also great hosts for relational databases and other commercial software that is licensed on a per-core basis. The r7iz instances will be available in multiple sizes with up to 128 vCPUs and 1 TiB of memory. We are launching the instances in preview today and you can Sign up Today to participate.

Jeff;

New – Failover Controls for Amazon S3 Multi-Region Access Points

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-failover-controls-for-amazon-s3-multi-region-access-points/

We launched Amazon S3 Multi-Region Access Points to give you a global endpoint that spans S3 buckets in multiple AWS Regions. With S3 Multi-Region Access Points, you can build multi-region applications with the same simple architecture used in a single Region. This cool and powerful feature uses AWS Global Accelerator to monitor network congestion and connectivity, and to route traffic to the closest copy of your data. In the event that connectivity between a client and a bucket in a particular Region is lost, the Multi-Region Access Point will automatically route all traffic to the closest bucket (synchronized via S3 Replication) in another Region.

In addition to the use case that I just described, customers have told us that they want to build highly available multi-region apps and need explicit control over failover and failback.

New Failover Controls
Today we are adding failover controls for Multi-Region Access Points. These controls let you shift S3 data access request traffic routed through an Amazon S3 Multi-Region Access Point to an alternate AWS Region within minutes to test and build highly available applications for business continuity.

The existing Multi-Region Access Point model treats all of the Regions as active and can send traffic to any of them. The model that we are introducing today lets you designate Regions as either active or passive. Buckets in active Regions receive traffic (GET, PUT, and other requests) from the Multi-Region Access Point, buckets in passive Regions don’t. Amazon S3 Cross-Region Replication operates regardless of the active or passive status of a Region with respect to a particular Multi-Region Access Point.

To get started, I create a new Multi-Region Access Point that refers to two or more S3 buckets in distinct AWS Regions. I enter a name for my Multi-Region Access Point (jbarr-mrap-1), and choose the buckets:

I leave the Amazon S3 Block Public Access settings as-is, and click Create Multi-Region Access Point:

Then I wait until my Multi-Region Access Point is ready (generally just a few minutes):

By default, my new Multi-Region Access Point routes traffic to all of the buckets, and behaves as it did before we launched this new feature. However, I can now exercise control over routing and failover. I click on the Multi-Region Access Point, and on the Replication and failover tab (which used to be just a Replication tab). The map now allows me to see my replication rules and my failover status:

I can scroll down to view, create, and modify my replication rules:

As you can see, the replication rules that I created for this demo preserve the storage class. S3 Intelligent-Tiering is generally a better choice, since I would get automatic cost savings without increased data retrieval costs after a failover. I can use S3 Replication metrics to make sure that my replication rules are proceeding as expected. Also, S3 Replication Time Control provides a predictable replication time (backed by an SLA), and should also be considered.

The tab also includes the failover configuration:

To change my failover configuration, I select the buckets of interest and click Edit failover configuration. My application runs in the Asia Pacific (Tokyo) Region and makes use of a bucket there, so I leave the Tokyo Region active and make the others passive:

All is well until one fine day Godzilla wakes up and eats all of the submarine cables in and around Tokyo. I quickly pull up the console, return to the Failover configuration, select the active Tokyo Region and the passive Osaka Region, and click Failover:

I confirm my intent, click Failover again, and the failover is complete within two minutes:

Later, after Godzilla has been subdued and the cables have been repaired, I can fail back to the original bucket in the Tokyo Region:

Things to Know
Here are a couple of things to keep in mind as you start to make use of this important new AWS feature:

Active/Passive – There must be at least one active Region at all times.

CLI & API Access – You can initiate a failover programmatically by calling SubmitMultiRegionAccessPointRoutes. You can retrieve the current set of routes by calling GetMultiRegionAccessPointRoutes. The endpoints for these APIs are available in the US East (N. Virginia), US West (Oregon), Asia Pacific (Sydney, Tokyo), and Europe (Ireland) Regions.
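Here is a minimal boto3 sketch of a scripted failover using those two operations. The account ID, Multi-Region Access Point alias, and bucket names are placeholders, and the calls themselves require AWS credentials and one of the supported endpoint Regions:

```python
def build_route_updates(active_bucket, active_region, passive_bucket, passive_region):
    # TrafficDialPercentage of 100 makes a Region active; 0 makes it passive.
    return [
        {"Bucket": active_bucket, "Region": active_region, "TrafficDialPercentage": 100},
        {"Bucket": passive_bucket, "Region": passive_region, "TrafficDialPercentage": 0},
    ]

def fail_over(account_id, mrap_alias, routes, endpoint_region="us-west-2"):
    import boto3  # requires AWS credentials; endpoint must be a supported Region
    client = boto3.client("s3control", region_name=endpoint_region)
    client.submit_multi_region_access_point_routes(
        AccountId=account_id, Mrap=mrap_alias, RouteUpdates=routes)
    return client.get_multi_region_access_point_routes(
        AccountId=account_id, Mrap=mrap_alias)

# Fail over from Tokyo to Osaka (bucket names are placeholders).
routes = build_route_updates("my-bucket-osaka", "ap-northeast-3",
                             "my-bucket-tokyo", "ap-northeast-1")
```

Calling `fail_over("123456789012", "my-mrap-alias", routes)` would shift traffic and then return the updated route set for verification.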

Pricing – There is no extra charge for this feature beyond the use of the new APIs, which are billed as standard S3 GET and PUT requests. For S3 Multi-Region Access Point usage prices, see the Pricing tab of the Amazon S3 Pricing page.

Regions – This feature is available in all AWS Regions where Multi-Region Access Points are currently available.

Jeff;

New AWS Glue 4.0 – New and Updated Engines, More Data Formats, and More

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-aws-glue-4-0-new-and-updated-engines-more-data-formats-and-more/

AWS Glue is a scalable, serverless tool that helps you to accelerate the development and execution of your data integration and ETL workloads. Today we are launching Glue 4.0, with updated engines, support for additional data formats, Ray support, and a lot more.

Before I dive in, just a word about versioning. Unlike most AWS services, where the service team owns and has full control over the APIs, Glue includes a collection of libraries, engines, and tools developed by the open source community. Some of these components do not maintain strict backward compatibility, often in pursuit of efficiency. In order to make sure that changes to the components do not impact your Glue jobs, you must select a particular Glue version when you create the job.
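As a sketch of what that version pinning looks like in practice, here is a hedged boto3 example that creates a job pinned to Glue 4.0; the job name, role ARN, and script location are placeholders:

```python
def glue_job_definition(name, role_arn, script_location):
    # Pinning GlueVersion prevents engine upgrades (Spark, Python) from
    # changing the behavior of an existing job.
    return {
        "Name": name,
        "Role": role_arn,
        "Command": {"Name": "glueetl",
                    "ScriptLocation": script_location,
                    "PythonVersion": "3"},
        "GlueVersion": "4.0",  # Python 3.10 + Apache Spark 3.3.0
    }

def create_job(job_def):
    import boto3  # requires AWS credentials
    return boto3.client("glue").create_job(**job_def)

job = glue_job_definition("nightly-etl",
                          "arn:aws:iam::123456789012:role/MyGlueRole",
                          "s3://my-bucket/scripts/etl.py")
```

Upgrading a job later is then an explicit, deliberate change to the `GlueVersion` field rather than something that happens underneath you.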

Each version of Glue includes performance and reliability benefits in addition to the added features, and you should plan to upgrade your jobs over time to take advantage of all that Glue has to offer.

Dive in to Glue
Let’s take a look at what’s new in Glue 4.0:

Updated Engines – This version of Glue includes Python 3.10 and Apache Spark 3.3.0. Both engines include bug fixes and performance enhancements; Spark includes new features such as row-level runtime filtering, improved error messages, additional built-in functions, and much more. Glue and Amazon EMR share the same performance-optimized Spark runtime, which is tuned for the AWS cloud and can be 2-3 times faster than the basic open source version.

New Engine Plugins – Glue 4.0 adds native support for the Cloud Shuffle Service Plugin for Spark to help you scale your disk usage, and Adaptive Query Execution to dynamically optimize your queries as they run.

Pandas Support – Glue 4.0 adds support for Pandas, an open source data analysis and manipulation tool that is built on top of Python. It is easy to learn and includes all kinds of interesting and useful data manipulation functions.

New Data Formats – Whether you are building a data lake or a data warehouse, Glue 4.0 now handles new open source data formats for sources and targets, with support for Apache Hudi, Apache Iceberg, and Delta Lake. To learn more about these new options and formats, read Get Started with Apache Hudi using AWS Glue by Implementing Key Design Concepts.

Everything Else – In addition to the above items, Glue 4.0 also includes the Parquet vectorized reader, with support for additional data types and encodings. It has been upgraded to use log4j 2 and is no longer dependent on log4j 1.

Available Now
Glue 4.0 is available today in the US East (Ohio, N. Virginia), US West (N. California, Oregon), Africa (Cape Town), Asia Pacific (Hong Kong, Jakarta, Mumbai, Osaka, Seoul, Singapore, Sydney, Tokyo), Canada (Central), Europe (Frankfurt, Ireland, London, Milan, Paris, Stockholm), Middle East (Bahrain), and South America (Sao Paulo) AWS Regions.

Jeff;

New – Amazon RDS Optimized Reads and Optimized Writes

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-amazon-rds-optimized-reads-and-optimized-writes/

Way back in 2009 I wrote Introducing Amazon RDS – The Amazon Relational Database Service and told you that:

RDS makes it easier for you to set up, operate, and scale a relational database in the cloud. You get direct database access without worrying about infrastructure provisioning, software maintenance, or common database management tasks.

Since that launch we have continued to do our best to help you to avoid all of those items, while also working to make RDS ever-more cost effective. For example, we recently launched Graviton2 DB Instances that deliver up to 52% better price/performance and a new Multi-AZ Deployment Option that delivers up to 33% better price/performance along with 2x faster transaction commit latency.

Today I would like to tell you about two new features that will accelerate your Amazon RDS for MySQL workloads:

Amazon RDS Optimized Reads achieve faster query processing by placing temporary tables generated by MySQL on NVMe-based SSD block storage that is physically connected to the host server. Queries that use temporary tables, such as those involving sorts, hash aggregations, high-load joins, and Common Table Expressions (CTEs) can execute up to 50% faster with Optimized Reads.

Amazon RDS Optimized Writes deliver an improvement of up to 2x in write transaction throughput at no extra charge, and with the same level of provisioned IOPS. Optimized Writes are a great fit for write-heavy workloads that generate lots of concurrent transactions. This includes digital payments, financial trading platforms, and online games.

Amazon RDS Optimized Reads
Amazon RDS for MySQL without Optimized Reads places temporary tables on Amazon Elastic Block Store (Amazon EBS) volumes. Optimized Reads offload the operations on temporary objects from EBS to the instance store attached to r5d, m5d, r6gd and m6gd instances. As a result EBS volumes can be more efficiently utilized for reads and writes on persistent data, as well as background operations such as flushes, insert buffer merges, and so forth. This increased efficiency is (of course) always nice to have, but it is particularly beneficial for certain use cases:

  • Analytical Queries that include Common Table Expressions (CTEs), derived tables, and grouping operations.
  • Read Replicas that handle the unoptimized queries for an application.
  • On-Demand or Dynamic Reporting Queries with complex operations such as GROUP BY and ORDER BY that can’t always use appropriate indexes.
  • Other Workloads that use internal temporary tables.

You can monitor the MySQL status variable created_tmp_files to observe the rate of creation for temporary tables.
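For example, you can sample the counter twice and compute a rate. This is a minimal sketch with made-up counter values for illustration; the related Created_tmp_tables and Created_tmp_disk_tables counters are worth watching too:

```python
def tmp_files_per_second(before, after, interval_seconds):
    # Counters come from:  SHOW GLOBAL STATUS LIKE 'Created_tmp%';
    return (after["Created_tmp_files"] - before["Created_tmp_files"]) / interval_seconds

before = {"Created_tmp_files": 1200}  # first sample (illustrative values)
after = {"Created_tmp_files": 1800}   # second sample, 60 seconds later
rate = tmp_files_per_second(before, after, 60)  # 10.0 temp files/second
```

A consistently high rate is a good signal that a workload will benefit from Optimized Reads.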

The amount of instance storage available on the instance varies by instance family and size. Here’s a guide:

Instance Family | Minimum Storage | Maximum Storage
----------------|-----------------|----------------
m5d | 75 GB | 3.6 TB
m6gd | 237 GB | 3.8 TB
r5d | 75 GB | 3.6 TB
r6gd | 59 GB | 3.8 TB

Using Optimized Reads
To take advantage of this new feature, choose MySQL engine version 8.0.28 or newer and launch Amazon RDS for MySQL on one of the instance types listed above:

You can monitor the use of instance storage by watching new CloudWatch metrics including FreeLocalStorage, ReadIOPSLocalStorage, WriteIOPSLocalStorage, and so forth (see the User Guide for a complete list of new and existing metrics).
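As a sketch, here is how you might query the new FreeLocalStorage metric with boto3; the DB instance identifier is a placeholder, and the actual CloudWatch call requires AWS credentials:

```python
from datetime import datetime, timedelta, timezone

def free_local_storage_query(db_instance_id, minutes=60):
    # Build a GetMetricStatistics request for the FreeLocalStorage metric.
    now = datetime.now(timezone.utc)
    return {
        "Namespace": "AWS/RDS",
        "MetricName": "FreeLocalStorage",
        "Dimensions": [{"Name": "DBInstanceIdentifier", "Value": db_instance_id}],
        "StartTime": now - timedelta(minutes=minutes),
        "EndTime": now,
        "Period": 300,              # 5-minute datapoints
        "Statistics": ["Minimum"],  # worst-case free space in each period
    }

def fetch(params):
    import boto3  # requires AWS credentials
    return boto3.client("cloudwatch").get_metric_statistics(**params)

params = free_local_storage_query("my-mysql-instance")  # placeholder identifier
```

The same request shape works for ReadIOPSLocalStorage and WriteIOPSLocalStorage by swapping the MetricName.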

Optimized Reads are available in all AWS Regions where the eligible database instance types are available.

Amazon RDS Optimized Writes
By default, MySQL uses an on-disk doublewrite buffer that serves as an intermediate stop between memory and the final on-disk storage. Each page of the buffer is 16 KiB but is written to the final on-disk storage in 4 KiB chunks. This extra step maintains data integrity, but also consumes additional I/O bandwidth. When running those write-heavy workloads that I described earlier, this might require provisioning of additional IOPS to meet your performance and throughput requirements.

Optimized Writes uses uniform 16 KiB database pages, file system blocks, and operating system pages, and writes them to storage atomically (all or nothing), resulting in the performance improvement of up to 2x that I mentioned earlier.
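A quick back-of-the-envelope calculation shows where the "up to 2x" figure comes from, under the simplifying assumption that the doublewrite buffer roughly doubles the physical writes per modified page:

```python
def physical_writes_per_page(doublewrite):
    # With the doublewrite buffer, each modified 16 KiB page is written twice:
    # once to the buffer, then again to the final tablespace location.
    # Optimized Writes issues a single atomic 16 KiB write instead.
    return 2 if doublewrite else 1

speedup = physical_writes_per_page(True) / physical_writes_per_page(False)  # 2.0, i.e. "up to 2x"
```

Real workloads mix reads and writes, so the observed gain depends on how write-heavy the transaction mix is.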

Using Optimized Writes
You must create a new DB Instance from scratch on a db.r5b or db.r6i instance with the latest version of MySQL 8.0 in order to make use of Optimized Writes:

This setting affects the format of DB snapshots, with two important consequences:

  1. You cannot restore an existing non-optimized snapshot to a new, optimized one in order to enable Optimized Writes.
  2. Restoring a snapshot that was made with optimization enabled will enable Optimized Writes in the new instance.

If you scale to an instance type that does not support Optimized Writes, Amazon RDS will enable MySQL’s doublewrite mode on the instance as a fallback. If you scale into an instance that supports Optimized Writes from one that does not, Amazon RDS will launch MySQL in doublewrite mode, wait for the recovery and log replay to complete, and then relaunch MySQL with doublewrite disabled.

Optimized Writes are now available in the US East (Ohio, N. Virginia), US West (Oregon), Asia Pacific (Singapore, Tokyo), and Europe (Frankfurt, Ireland, Paris) Regions and you can start to benefit from them today!

Jeff;

AWS Week in Review – November 7, 2022

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-week-in-review-november-7-2022/

With three weeks to go until AWS re:Invent opens in Las Vegas, the AWS News Blog Team is hard at work creating blog posts to share the latest launches and previews with you. As usual, we have a strong mix of new services, new features, and a surprise or two.

Last Week’s Launches
Here are some launches that caught my eye last week:

Amazon SNS Data Protection and Masking – After a quick public preview, this cool feature is now generally available. It uses pattern matching, machine learning models, and content policies to help protect data at scale. You can find many different kinds of personally identifiable information (PII) and protected health information (PHI) in message bodies and either block message delivery or mask (de-identify) the sensitive data, all in real-time and on a per-topic basis. To learn more, read the blog post or the message data protection documentation.

Amazon Textract Updates – This service extracts text, handwriting, and data from any document or image. This past week we updated the AnalyzeID function so that it can now extract the machine readable zone (MRZ) on passports issued by the United States, and we added the entire OCR output to the API response. We also updated the machine learning models that power the AnalyzeDocument function, with a focus on single-character boxed forms commonly found on tax and immigration documents. Finally, we updated the AnalyzeExpense function with support for new fields and higher accuracy for existing fields, bringing the total field count to more than 40.

Another Amazon Braket Processor – Our quantum computing service now supports Aquila, a new 256-qubit quantum computer from QuEra that is based on a programmable array of neutral Rubidium atoms. According to the What’s New, Aquila supports the Analog Hamiltonian Simulation (AHS) paradigm, allowing it to solve for the static and dynamic properties of quantum systems composed of many interacting particles.

Amazon S3 on Outposts – This service now lets you use additional S3 Lifecycle rules to optimize capacity management. You can expire objects as they age or are replaced with newer versions, with control at the bucket level, or for subsets defined by prefixes, object tags, or object sizes. There’s more info in the What’s New and in the S3 documentation.
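As an illustration of what such a rule looks like, here is a hedged sketch of a Lifecycle configuration that expires objects by prefix and minimum object size (the rule ID, prefix, and thresholds are placeholders):

```python
def expiration_rule(rule_id, prefix, days, min_size_bytes=None):
    # One Lifecycle rule: expire objects under `prefix` after `days`,
    # optionally restricted to objects larger than `min_size_bytes`.
    rule = {
        "ID": rule_id,
        "Status": "Enabled",
        "Filter": {"Prefix": prefix},
        "Expiration": {"Days": days},
    }
    if min_size_bytes is not None:
        rule["Filter"] = {"And": {"Prefix": prefix,
                                  "ObjectSizeGreaterThan": min_size_bytes}}
    return rule

lifecycle = {"Rules": [expiration_rule("expire-old-logs", "logs/", 30,
                                       min_size_bytes=128 * 1024)]}
```

For S3 on Outposts buckets, a configuration like this is applied through the S3 Control API rather than the regular S3 bucket API.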

AWS CloudFormation – There were two big updates last week: support for Amazon RDS Multi-AZ deployments with two readable standbys, and better access to detailed information on failed stack instances for operations on CloudFormation StackSets.

Amazon MemoryDB for Redis – You can now use data tiering as a lower cost way to scale your clusters up to hundreds of terabytes of capacity. This new option uses a combination of instance memory and SSD storage in each cluster node, with all data stored durably in a multi-AZ transaction log. There’s more information in the What’s New and the blog post.

Amazon EC2 – You can now remove launch permissions for Amazon Machine Images (AMIs) that are directly shared with your AWS account.

X in Y – We launched existing AWS services and instance types in additional Regions:

For a full list of AWS announcements, be sure to keep an eye on the What’s New at AWS page.

Other AWS News
Here are some additional news items that you may find interesting:

AWS Open Source News and Updates – My colleague Ricardo Sueiras highlights new open source projects, tools, and demos from the AWS Community. Read Installment 134 to see what’s going on!

New Case Study – A new AWS case study describes how Taggle (a company focused on smart water solutions in Australia) created an IoT platform that runs on AWS and uses Amazon Kinesis Data Streams to store & ingest data in real time. Using AWS allowed them to scale to accommodate 80,000 additional sensors that will roll out in 2022.

Upcoming AWS Events
re:Invent 2022AWS re:Invent is just three weeks away! Join us live from November 28th to December 2nd for keynotes, training and certification opportunities, and over 1,500 technical sessions. If you cannot make it to Las Vegas you can also join us online to watch the keynotes and leadership sessions live. Be sure to check out the re:Invent 2022 Attendee Guides, each curated by an AWS Hero, AWS industry team, or AWS partner.

PeerTalk – If you will be attending re:Invent in person and are interested in meeting with me or any of our featured experts, be sure to check out PeerTalk, our new onsite networking program.

That’s all for this week!

Jeff;

This post is part of our Week in Review series. Check back each week for a quick roundup of interesting news and announcements from AWS.

AWS Local Zones Expansion: Taipei and Delhi

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-local-zones-expansion-taipei-and-delhi/

In late 2019 I told you about the AWS Local Zone in Los Angeles, California. In that post I described Local Zones as a new type of AWS infrastructure deployment that brings select AWS services very close to a particular geographic area. A year after that launch, I announced our plans to add 3 more Local Zones in 2020, and 12 more in 2021. Right now, we are working to bring Local Zones to 33 cities in 27 countries including 6 in Latin America.

Applications hosted in a Local Zone benefit from very low (single-digit millisecond) latency access to EC2 instances and other AWS services. Local Zones also give AWS customers additional choices regarding data residency, giving them the ability to store and process sensitive data (often financial or personal in nature) in-country.

Going Global
Today I am happy to announce the launch of Local Zones in Taipei (Taiwan) and Delhi (India). Like the existing Local Zones in the US, you start by enabling them in the AWS Management Console:

After you do this, you can launch Amazon Elastic Compute Cloud (Amazon EC2) instances, create Amazon Elastic Block Store (Amazon EBS) volumes, and make use of other services including Amazon Elastic Container Service (Amazon ECS), Amazon Elastic Kubernetes Service (Amazon EKS), and Amazon Virtual Private Cloud (Amazon VPC). The new Local Zones include T3, C5, M5, R5, and G4dn instances in select sizes, along with General Purpose SSD (gp2) EBS volumes.
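Enabling a Local Zone can also be scripted. Here is a minimal boto3 sketch using the ModifyAvailabilityZoneGroup API; the group name shown is a placeholder (use DescribeAvailabilityZones to find the real one), and the call requires AWS credentials:

```python
def opt_in_request(group_name):
    # Request parameters for the ModifyAvailabilityZoneGroup API.
    return {"GroupName": group_name, "OptInStatus": "opted-in"}

def opt_in_local_zone(group_name, parent_region):
    import boto3  # requires AWS credentials
    ec2 = boto3.client("ec2", region_name=parent_region)
    return ec2.modify_availability_zone_group(**opt_in_request(group_name))

# Group name is a placeholder; list the real ones with describe_availability_zones
# (AllAvailabilityZones=True) in the parent Region.
req = opt_in_request("ap-northeast-1-tpe-1")
```

Once the group is opted in, the Local Zone shows up as an additional Availability Zone when you launch instances in the parent Region.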

Local Zones in Action
AWS customers are working to put Local Zones to use. Here are a few use cases:

AdvaHealth Solutions – This digital health care & life sciences company supports radiology, oncology, ophthalmology and other medical imaging applications with AdvaPACS, a cloud-native image archive. The new Local Zones will allow them to deliver diagnostic image data with low latency in order to improve the point-of-care experience for patients and health care providers across Asia, and will also support expansion into new markets.

M2P Fintech specializes in building financial infrastructure and is an API infrastructure platform for Banking, Lending and Payments products. More than 600 Fintechs, 100 Banks & 100 Financial Institutions across MENA and APAC regions rely on M2P’s platform to power their own branded products including category leaders across ride-hailing, food delivery, and credit cards. M2P uses Local Zones instead of bearing the burden of setting up their own data centers and to meet local requirements for data processing and storage.

NaranjaX – This financial services company is the primary credit card issuer in Argentina. They are engaged in a digital transformation with the target of delivering an improved financial solution to their commercial customers, and believe that using Local Zones will give these customers a strategic advantage.

Pluto XR is the developer of the PlutoSphere OS that enables gamers, developers, and operators to live stream XR applications to any XR device. In order to deliver a high quality streaming experience, they run their application as close to their end users as possible. The new Local Zones will allow them to serve millions of users in new metro areas.

Riot Games is an American video game developer, publisher and entertainment company based in Los Angeles, California. Their games deliver an optimal player experience through ultra low latency for their MOBA (Multiplayer Online Battle Arena) League of Legends and their first-person tactical shooter VALORANT. By deploying their games into Local Zones, Riot is able to serve players at low latency without the need for operating on-premises compute.

Zenga Media is one of the largest media-tech companies in India. They provide live streaming and over-the-top distribution of entertainment content to millions of users globally, while using cloud-based video editing and sharing to process content destined for TV shows, sports broadcasts, news, and movies. They will use Local Zones to provide local connectivity to their editors and customers, thereby speeding processing and delivering a superior video streaming experience to customers.

Local Zones Resources
Here are a few resources to help you learn more about designing, building, and using Local Zones:

I am always interested in hearing about how our customers are making use of Local Zones. Leave me a comment or track me down online and let me know what you are working on!

Jeff;

AWS Week in Review – September 19, 2022

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-week-in-review-september-19-2022/

Things are heating up in Seattle, with preparation for AWS re:Invent 2022 well underway. Later this month the entire News Blog team will participate in our now-legendary “speed storming” event. Over the course of three or four days, each of the AWS service teams with a launch in the works for re:Invent will give us an overview and share their PRFAQ (Press Release + FAQ) with us. After the meetings conclude, we’ll divvy up the launches and get to work on our blog posts!

Last Week’s Launches
Here are some of the launches that caught my eye last week:

Amazon Lex Visual Conversation Builder – This new tool makes bot design easier than ever. You get a complete view of the conversation in one place, and you can manage complex conversations that have dynamic paths. To learn more and see the builder in action, read Announcing Visual Conversation Builder for Amazon Lex on the AWS Machine Learning Blog.

AWS Config Conformance Pack Price Reduction – We have reduced the price for evaluation of AWS Config Conformance Packs by up to 58%. These packs contain AWS Config rules and remediation actions that can be deployed as a single entity in an account and a Region, or across an entire organization. The price reduction took effect September 14, 2022; it lowers the cost per evaluation and decreases the number of evaluations needed to reach each pricing tier.

CDK (Cloud Development Kit) Tree View – The AWS CloudFormation console now includes a Constructs tree view that automatically organizes the resources that were synthesized by AWS CDK constructs. The top level of the tree view includes the named constructs and the second level includes all of the resources generated by the named construct. Read the What’s New to learn more!

AWS Incident Detection and ResponseAWS Enterprise Support customers now have access to proactive monitoring and incident management for selected workloads running on AWS. As part of the onboarding process, AWS experts review workloads for reliability and operational excellence, and work with the customer to identify critical metrics and associated alarms. Incident Management Engineers then monitor the workloads, detect critical incidents, and initiate a call bridge to accelerate recovery. Read the AWS Incident Detection and Response page and the What’s New to learn more.

ECS Cluster Scale-In Speed – Auto-Scaled ECS clusters can now scale-in (reduce capacity) faster than ever before. Previously, each scale-in would reduce the capacity within an Auto Scaling Group (ASG) by 5% at a time. Now, capacity can be reduced by up to 50%. This change makes scaling more responsive to workload changes while still maintaining availability for spiky traffic patterns. Read Faster Scaling-In for Amazon ECS Cluster Auto Scaling and the What’s New to learn more.
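To see why this matters, a little arithmetic helps: shrinking a large cluster 5% at a time takes dozens of iterations, while 50% steps converge in a handful. A quick sketch:

```python
import math

def scale_in_steps(current, target, max_step_fraction):
    # Iterations needed to shrink from `current` to `target` capacity when each
    # scale-in step can remove at most `max_step_fraction` of current capacity.
    steps = 0
    while current > target:
        current = max(target, math.floor(current * (1 - max_step_fraction)))
        steps += 1
    return steps

old_steps = scale_in_steps(1000, 100, 0.05)  # 5% per step: dozens of iterations
new_steps = scale_in_steps(1000, 100, 0.50)  # 50% per step: just a few
```

Fewer steps means idle capacity (and its cost) disappears much sooner after a traffic spike subsides.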

AWS Outposts Rack Networking – AWS Outposts racks now support local gateway ingress routing to redirect incoming traffic to an Elastic Network Interface (ENI) attached to an EC2 instance before traffic reaches workloads running on the Outpost; read Deploying Local Gateway Ingress Routing on AWS Outposts to learn more. Outposts racks now also support direct VPC routing to simplify the process of communicating with your on-premises network; read the What’s New to learn more.

Amazon SWF Console Experience Updated – The new console experience for Amazon Simple Workflow Service (SWF) gives you better visibility of your SWF domains along with additional information about your workflow executions and events. You can efficiently manage high-volume workloads and quickly find the detailed information that helps you to operate at peak efficiency. Read the What’s New to learn more.

Dynamic Intermediate Certificate Authorities – According to a post on the AWS Security Blog, public certificates issued through AWS Certificate Manager (ACM) will soon (October 11, 2022) be issued from one of several intermediate certificate authorities managed by Amazon. This change will be transparent to most customers and applications, except those that make use of certificate pinning. In some cases, older browsers will need to be updated in order to properly trust the Amazon Trust Services CAs.

X in Y – We launched existing AWS services and instance types in additional regions:

Other AWS News
AWS Open Source – Check out Installment #127 of the AWS Open Source News and Updates Newsletter to learn about new tools for AWS CloudFormation, AWS Lambda, Terraform / EKS, AWS Step Functions, AWS Identity and Access Management (IAM), and more.

New Case Study – Read this new case study to learn how the Deep Data Research Computing Center at Stanford University is creating tools designed to bridge the gap between biology and computer science in order to help researchers in precision medicine deliver tangible medical solutions.

Application Management – The AWS DevOps Blog showed you how to Implement Long-Running Deployments with AWS CloudFormation Custom Resources Using AWS Step Functions.

Architecture – The AWS Architecture Blog showed you how to Maintain Visibility Over the Use of Cloud Architecture Patterns.

Big Data – The AWS Big Data Blog showed you how to Optimize Amazon EMR Costs for Legacy and Spark Workloads.

Migration – In a two-part series on the AWS Compute Blog, Marcia showed you how to Lift and Shift a Web Application to AWS Serverless (Part 1, Part 2).

Mobile – The AWS Mobile Blog showed you how to Build Your Own Application for Route Optimization and Tracking using AWS Amplify and Amazon Location Service.

Security – The AWS Security Blog listed 10 Reasons to Import a Certificate into AWS Certificate Manager and 154 AWS Services that have achieved HITRUST Certification.

Training and Certification – The AWS Training and Certification Blog talked about The Value of Data and Pursuing the AWS Certified Data Analytics – Specialty Certification.

Containers – The AWS Containers Blog encouraged you to Achieve Consistent Application-Level Tagging for Cost Tracking in AWS.

Upcoming AWS Events
Check your calendar and sign up for an AWS event in your locale:

AWS Summits – Come together to connect, collaborate, and learn about AWS. Registration is open for the following in-person AWS Summits: Mexico City (September 21–22), Bogotá (October 4), and Singapore (October 6).

AWS Community Days – AWS Community Day events are community-led conferences to share and learn with one another. In September, the AWS community in the US will run events in Arlington, Virginia (September 30). In Europe, Community Day events will be held in October. Join us in Amersfoort, Netherlands (October 3), Warsaw, Poland (October 14), and Dresden, Germany (October 19).

AWS Fest – This third-party event will feature AWS influencers, community heroes, industry leaders, and AWS customers, all sharing AWS optimization secrets (September 29); register here.

Stay Informed
I hope that you have enjoyed this look back at some of what took place in AWS-land last week! To better keep up with all of this news, please check out the following resources:

Jeff;

A Decade of Ever-Increasing Provisioned IOPS for Amazon EBS

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/a-decade-of-ever-increasing-provisioned-iops-for-amazon-ebs/

Progress is often best appreciated in retrospect. A steady stream of incremental improvements, made over a long period of time, ultimately adds up to a significant level of change. Today, ten years after we first launched the Provisioned IOPS feature for Amazon Elastic Block Store (EBS), I strongly believe that to be the case.

All About the IOPS
Let’s start with a quick review of IOPS, which is short for Input/Output Operations per Second. This number is commonly used to characterize the performance of a storage device, and higher numbers mean better performance. In many cases, applications that generate high IOPS values will use threads, asynchronous I/O operations, and/or other forms of parallelism.
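The link between parallelism and IOPS follows from a back-of-envelope application of Little's Law: achievable IOPS is roughly the number of outstanding operations divided by the per-operation latency. This sketch (an idealized estimate, ignoring device limits) shows why asynchronous, parallel I/O drives higher numbers than a single synchronous thread:

```python
# Back-of-envelope estimate via Little's Law: IOPS ~= queue depth / latency.
# An idealized model -- real devices impose their own ceilings.
def estimated_iops(queue_depth: int, latency_seconds: float) -> float:
    return queue_depth / latency_seconds

# One synchronous thread, 1 ms per operation:
print(estimated_iops(1, 0.001))    # 1000.0 -- about 1,000 IOPS
# 64 operations in flight at the same per-operation latency:
print(estimated_iops(64, 0.001))   # 64000.0 -- about 64,000 IOPS
```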

The Road to Provisioned IOPS
When we launched Amazon Elastic Compute Cloud (Amazon EC2) back in 2006 (Amazon EC2 Beta), the m1.small instances had a now-paltry 160 GiB of local disk storage. This storage had the same lifetime as the instance, and disappeared if the instance crashed or was terminated. In the run-up to the beta, potential customers told us that they could build applications even without persistent storage. During the two years between the EC2 beta and the 2008 launch of Amazon EBS, those customers were able to gain valuable experience with EC2 and to deploy powerful, scalable applications. As a reference point, these early volumes were able to deliver an average of about 100 IOPS, with bursting beyond that on a best-effort basis.

Evolution of Provisioned IOPS
As our early customers gained experience with EC2 and EBS, they asked us for more I/O performance and more flexibility. In my 2012 post (Fast Forward – Provisioned IOPS for EBS Volumes), I first told you about the then-new Provisioned IOPS (PIOPS) volumes and also introduced the concept of EBS-Optimized instances. These new volumes found a ready audience and enabled even more types of applications.

Over the years, as our customer base has become increasingly diverse, we have added new features and volume types to EBS, while also pushing forward on performance, durability, and availability. Here’s a family tree to help put some of this into context:

Today, EBS handles trillions of input/output operations daily, and supports seven distinct volume types, each with a specific set of performance characteristics, maximum volume sizes, use cases, and prices. From that 2012 starting point where a single PIOPS volume could deliver up to 1000 IOPS, today’s high-end io2 Block Express volumes can deliver up to 256,000 IOPS.

Inside io2 Block Express
Let’s dive in a bit and take a closer look at io2 Block Express. These volumes make use of multiple Nitro System components including AWS Nitro SSD storage and the Nitro Card for EBS. The io2 Block Express volumes can be as large as 64 TiB, and can deliver up to 256,000 IOPS with 99.999% durability and up to 4,000 MiB/s of throughput. This performance makes them suitable for the most demanding mission-critical workloads, those that require sustained high performance and sub-millisecond latency. On the network side, the io2 Block Express volumes make use of a Scalable Reliable Datagram (SRD) protocol that is designed to deliver consistent high performance on complex, multipath networks (read A Cloud-Optimized Transport Protocol for Elastic and Scalable HPC to learn a lot more). You can use these volumes with X2idn, X2iedn, R5b, and C7g instances today, with support for additional instance types in the works.
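The headline IOPS and throughput figures line up neatly if you assume a 16 KiB I/O size (a common size for IOPS accounting; the actual accounting rules have more nuance):

```python
# Rough arithmetic connecting the io2 Block Express headline numbers,
# assuming each operation moves 16 KiB. Illustrative only.
MAX_IOPS = 256_000
IO_SIZE_KIB = 16

throughput_mib_s = MAX_IOPS * IO_SIZE_KIB / 1024  # KiB/s -> MiB/s
print(throughput_mib_s)  # 4000.0 -- matches the stated 4,000 MiB/s ceiling
```

In other words, at the maximum IOPS with 16 KiB operations, an io2 Block Express volume is already at its throughput ceiling; larger I/O sizes hit the throughput limit before the IOPS limit.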

Your Turn
Here are some resources to help you to learn more about EBS and Provisioned IOPS:

I can’t wait to see what the second decade holds for EBS and Provisioned IOPS!

Jeff;