All posts by Jeff Barr

New EC2 M5zn Instances – Fastest Intel Xeon Scalable CPU in the Cloud

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-ec2-m5zn-instances-fastest-intel-xeon-scalable-cpu-in-the-cloud/

We launched the compute-intensive z1d instances in mid-2018 for customers who asked us for extremely high per-core performance and a high memory-to-core ratio to power their front-end Electronic Design Automation (EDA), actuarial, and CPU-bound relational database workloads.

In order to address a complementary set of use cases, customers have asked us for an EC2 instance that will give them high per-core performance like z1d, with no local NVMe storage, higher networking throughput, and a reduced memory-to-vCPU ratio. They have indicated that if we built an instance with this set of attributes, it would be an excellent fit for workloads such as gaming, financial applications, simulation modeling applications such as those used in the automobile, aerospace, energy, and telecommunications industries, and High Performance Computing (HPC).

Introducing M5zn
Building on the success of the z1d instances, we are launching M5zn instances in seven sizes today. These instances use custom 2nd generation Intel® Xeon® Scalable (Cascade Lake) processors with a sustained all-core turbo clock frequency of up to 4.5 GHz. M5zn instances are a high-frequency variant of the general-purpose M5 instances and are built on the AWS Nitro System. They also feature low-latency 100 Gbps networking and support for the Elastic Fabric Adapter (EFA), improving performance for HPC and other communication-intensive applications.

Here are the M5zn instances (all VPC-only, HVM-only, and EBS-Optimized, with support for Optimize CPUs). As you can see, the memory-to-vCPU ratio on these instances is half that of the existing z1d instances:

Instance Name   vCPUs   RAM       Network Bandwidth   EBS-Optimized Bandwidth
m5zn.large      2       8 GiB     Up to 25 Gbps       Up to 3.170 Gbps
m5zn.xlarge     4       16 GiB    Up to 25 Gbps       Up to 3.170 Gbps
m5zn.2xlarge    8       32 GiB    Up to 25 Gbps       3.170 Gbps
m5zn.3xlarge    12      48 GiB    Up to 25 Gbps       4.750 Gbps
m5zn.6xlarge    24      96 GiB    50 Gbps             9.500 Gbps
m5zn.12xlarge   48      192 GiB   100 Gbps            19 Gbps
m5zn.metal      48      192 GiB   100 Gbps            19 Gbps

The Nitro Hypervisor allows M5zn instances to deliver performance that is just about indistinguishable from bare metal. Other AWS Nitro System components such as the Nitro Security Chip and hardware-based processing for EBS increase performance, while VPC encryption provides greater security.

Things To Know
Here are a couple of “fun facts” about the M5zn instances:

Placement Groups – M5zn instances can be used in Cluster (for low latency and high network throughput), Spread (to keep critical instances separate from each other), and Partition (to reduce correlated failures) placement groups.
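
For example, here is a quick sketch (the placement group name, AMI ID, and instance count are placeholders) of creating a cluster placement group and launching a pair of M5zn instances into it:

# Create a cluster placement group, then launch instances into it
$ aws ec2 create-placement-group --group-name my-m5zn-cluster --strategy cluster

$ aws ec2 run-instances --instance-type m5zn.12xlarge --count 2 \
  --image-id ami-0123456789abcdef0 \
  --placement "GroupName=my-m5zn-cluster"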

Networking – M5zn instances support the Elastic Network Adapter (ENA) with dedicated 100 Gbps network connections and a dedicated 19 Gbps connection to EBS. If you are building distributed ML or HPC applications for use on a cluster of M5zn instances, be sure to take a look at the Elastic Fabric Adapter (EFA). Your HPC applications can use the Message Passing Interface (MPI) to communicate efficiently at high speed while scaling to thousands of nodes.
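
As an illustration (the AMI, subnet, security group, and placement group values are placeholders), you can request an EFA at launch by setting the interface type on the primary network interface:

$ aws ec2 run-instances --instance-type m5zn.12xlarge \
  --image-id ami-0123456789abcdef0 \
  --network-interfaces "DeviceIndex=0,InterfaceType=efa,SubnetId=subnet-0123456789abcdef0,Groups=sg-0123456789abcdef0" \
  --placement "GroupName=my-m5zn-cluster"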

C-State Control – You can configure CPU Power Management on m5zn.6xlarge and m5zn.12xlarge instances. This is definitely an advanced feature, but one worth exploring in those situations where you need to squeeze every possible cycle of available performance from the instance.
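
Here's a minimal sketch of what that tuning can look like on a GRUB-based Linux AMI (package names and boot configuration vary by distribution, so treat this as a starting point):

# Inspect the C-states that the kernel is currently allowed to use
# (cpupower is part of the kernel-tools package on Amazon Linux 2)
$ cpupower idle-info

# Keep the cores out of deeper C-states, then reboot so the kernel
# parameter takes effect
$ sudo grubby --update-kernel=ALL --args="intel_idle.max_cstate=1"
$ sudo reboot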

NUMA – You can make use of Non-Uniform Memory Access on m5zn.12xlarge instances. This is also an advanced feature, but worth exploring in situations where you have an in-depth understanding of your application’s memory access patterns.
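
For example, a short sketch using the standard numactl tool (the application binary is a placeholder):

# Show the NUMA nodes, their CPUs, and their memory
$ numactl --hardware

# Bind a process and its memory allocations to NUMA node 0
$ numactl --cpunodebind=0 --membind=0 ./my-simulation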

To learn more about these and other features, visit the EC2 M5 Instances page.

Available Now
As you can see, the M5zn instances are a great fit for gaming, HPC and simulation modeling workloads such as those used by the financial, automobile, aerospace, energy, and telecommunications industries.

You can launch M5zn instances today in the US East (N. Virginia), US East (Ohio), US West (N. California), US West (Oregon), Europe (Ireland), Europe (Frankfurt), and Asia Pacific (Tokyo) Regions in On-Demand, Reserved Instance, Savings Plan, and Spot form. Dedicated Instances and Dedicated Hosts are also available.

Support is available in the EC2 Forum or via your usual AWS Support contact. The EC2 team is interested in your feedback and you can contact them at [email protected].

Jeff;


EC2 Update – D3 / D3en Dense Storage Instances

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/ec2-update-d3-d3en-dense-storage-instances/

We have launched several generations of EC2 instances with dense storage including the HS1 in 2012 and the D2 in 2015. As you can guess from the name, our customers use these instances when they need massive amounts of very economical on-instance storage for their data warehouses, data lakes, network file systems, Hadoop clusters, and the like. These workloads demand plenty of I/O and network throughput, but work fine with a high ratio of storage to compute power.

New D3 and D3en Instances
Today we are launching the D3 and D3en instances. Like their predecessors, they give you access to massive amounts of low-cost on-instance HDD storage. The D3 instances are available in four sizes, with up to 32 vCPUs and 48 TB of storage. Here are the specs:

Instance Name   vCPUs   RAM       HDD Storage         Aggregate Disk Throughput (128 KiB Blocks)   Network Bandwidth   EBS-Optimized Bandwidth
d3.xlarge       4       32 GiB    6 TB (3 x 2 TB)     580 MiBps                                     Up to 15 Gbps       850 Mbps
d3.2xlarge      8       64 GiB    12 TB (6 x 2 TB)    1,100 MiBps                                   Up to 15 Gbps       1,700 Mbps
d3.4xlarge      16      128 GiB   24 TB (12 x 2 TB)   2,300 MiBps                                   Up to 15 Gbps       2,800 Mbps
d3.8xlarge      32      256 GiB   48 TB (24 x 2 TB)   4,600 MiBps                                   25 Gbps             5,000 Mbps

As you can see from the table above, the D3 instances are available in the same configurations as the D2 instances for easy migration. You’ll get 5% more memory per vCPU, a 30% boost in compute power, and 2.5x higher network performance if you migrate from D2 to D3. The instances provide low-cost dense storage that delivers high performance sequential access to large data sets. They are perfect for distributed file systems such as HDFS and MapR FS, big data analytical workloads, data warehouses, log processing, and data processing.

The D3en instances are available in six sizes, with up to 48 vCPUs and 336 TB of storage. Here are the specs:

Instance Name   vCPUs   RAM       HDD Storage           Aggregate Disk Throughput (128 KiB Blocks)   Network Bandwidth   EBS-Optimized Bandwidth
d3en.xlarge     4       16 GiB    28 TB (2 x 14 TB)     500 MiBps                                     Up to 25 Gbps       850 Mbps
d3en.2xlarge    8       32 GiB    56 TB (4 x 14 TB)     1,000 MiBps                                   Up to 25 Gbps       1,700 Mbps
d3en.4xlarge    16      64 GiB    112 TB (8 x 14 TB)    2,000 MiBps                                   25 Gbps             2,800 Mbps
d3en.6xlarge    24      96 GiB    168 TB (12 x 14 TB)   3,100 MiBps                                   40 Gbps             4,000 Mbps
d3en.8xlarge    32      128 GiB   224 TB (16 x 14 TB)   4,100 MiBps                                   50 Gbps             5,000 Mbps
d3en.12xlarge   48      192 GiB   336 TB (24 x 14 TB)   6,200 MiBps                                   75 Gbps             7,000 Mbps

The D3en instances have a high ratio of storage to vCPU, and are optimized for high throughput and high sequential I/O to very large data sets, with a cost-per-TB that is 80% lower than on D2 instances. D3en instances can host Lustre, BeeGFS, GPFS, and other distributed file systems, they can store your data lakes, and they can run your Amazon EMR, Spark, and Hadoop analytical workloads.
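
If you are rolling your own storage layer rather than installing one of those file systems, a minimal sketch looks like this (device names and counts are illustrative; match them to what lsblk reports on your instance):

# On Nitro-based instances the HDDs appear as NVMe block devices
$ lsblk

# Stripe a few of the HDDs into a single RAID 0 array, then create and mount a filesystem
$ sudo mdadm --create /dev/md0 --level=0 --raid-devices=3 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1
$ sudo mkfs.xfs /dev/md0
$ sudo mkdir -p /data && sudo mount /dev/md0 /data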

Both of the instance types are built on the AWS Nitro System and are powered by custom 2nd generation Intel® Xeon® Scalable (Cascade Lake) processors that can deliver all-core turbo performance of up to 3.1 GHz. The HDD storage is encrypted at rest using AES-256-XTS; traffic between D3 or D3en instances in the same VPC or within peered VPCs is encrypted using a 256-bit key.

Things to Know
Here are a couple of things that you should keep in mind regarding the D3 and D3en instances:

Regions – D3 instances are available in the US East (N. Virginia), US West (Oregon), and Europe (Ireland) Regions; D3en instances are available in all of those Regions and also in the US East (Ohio) Region, with more Regions coming soon.

Purchase Options – You can purchase D3 and D3en instances in On-Demand, Savings Plan, Reserved Instance, Spot, and Dedicated Instance form.

AMIs – You must use AMIs that include the Elastic Network Adapter (ENA) and NVMe drivers.
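
You can check an AMI before you launch; for example (the AMI ID is a placeholder):

# Returns true if the AMI advertises ENA support
$ aws ec2 describe-images --image-ids ami-0123456789abcdef0 \
  --query 'Images[0].EnaSupport'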

Now Available
D3 and D3en instances are available now and you can start using them today!

Jeff;

New – Use Amazon EC2 Mac Instances to Build & Test macOS, iOS, iPadOS, tvOS, and watchOS Apps

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-use-mac-instances-to-build-test-macos-ios-ipados-tvos-and-watchos-apps/

Throughout the course of my career I have done my best to stay on top of new hardware and software. As a teenager I owned an Altair 8800 and an Apple II. In my first year of college someone gave me a phone number and said “call this with modem.” I did, it answered “PENTAGON TIP,” and I had access to ARPANET!

I followed the emerging PC industry with great interest, voraciously reading every new issue of Byte, InfoWorld, and several other long-gone publications. In early 1983, rumor had it that Apple Computer would soon introduce a new system that was affordable, compact, self-contained, and very easy to use. Steve Jobs unveiled the Macintosh in January 1984 and my employer ordered several right away, along with a pair of the Apple Lisa systems that were used as cross-development hosts. As a developer, I was attracted to the Mac’s rich collection of built-in APIs and services, and still treasure my phone book edition of the Inside Macintosh documentation!

New Mac Instance
Over the last couple of years, AWS users have told us that they want to be able to run macOS on Amazon Elastic Compute Cloud (EC2). We’ve asked a lot of questions to learn more about their needs, and today I am pleased to introduce you to the new Mac instance!


The original (128 KB) Mac

Powered by Mac mini hardware and the AWS Nitro System, Amazon EC2 Mac instances let you build, test, package, and sign Xcode applications for Apple platforms including macOS, iOS, iPadOS, tvOS, watchOS, and Safari. The instances feature an 8th generation, 6-core Intel Core i7 (Coffee Lake) processor running at 3.2 GHz, with Turbo Boost up to 4.6 GHz. There’s 32 GiB of memory and access to other AWS services including Amazon Elastic Block Store (EBS), Amazon Elastic File System (EFS), Amazon FSx for Windows File Server, Amazon Simple Storage Service (S3), AWS Systems Manager, and so forth.

On the networking side, the instances run in a Virtual Private Cloud (VPC) and include ENA networking with up to 10 Gbps of throughput. With EBS-Optimization, and the ability to deliver up to 55,000 IOPS (16KB block size) and 8 Gbps of throughput for data transfer, EBS volumes attached to the instances can deliver the performance needed to support I/O-intensive build operations.

Mac instances run macOS 10.14 (Mojave) and 10.15 (Catalina) and can be accessed via command line (SSH) or remote desktop (VNC). The AMIs (Amazon Machine Images) for EC2 Mac instances are EC2-optimized and include the AWS goodies that you would find on other AWS AMIs: An ENA driver, the AWS Command Line Interface (CLI), the CloudWatch Agent, CloudFormation Helper Scripts, support for AWS Systems Manager, and the ec2-user account. You can use these AMIs as-is, or you can install your own packages and create custom AMIs (the homebrew-aws repo contains the additional packages and documentation on how to do this).

You can use these instances to create build farms, render farms, and CI/CD farms that target all of the Apple environments that I mentioned earlier. You can provision new instances in minutes, giving you the ability to quickly & cost-effectively build code for multiple targets without having to own & operate your own hardware. You pay only for what you use, and you get to benefit from the elasticity, scalability, security, and reliability provided by EC2.

EC2 Mac Instances in Action
As always, I asked the EC2 team for access to an instance in order to put it through its paces. The instances are available in Dedicated Host form, so I started by allocating a host:

$ aws ec2 allocate-hosts --instance-type mac1.metal \
  --availability-zone us-east-1a --auto-placement on \
  --quantity 1 --region us-east-1

Then I launched my Mac instance from the command line (console, API, and CloudFormation can also be used):

$ aws ec2 run-instances --region us-east-1 \
  --instance-type mac1.metal \
  --image-id  ami-023f74f1accd0b25b \
  --key-name keys-jbarr-us-east  --associate-public-ip-address

I took Luna for a very quick walk, and returned to find that my instance was ready to go. I used the console to give it an appropriate name:

Then I connected to my instance:
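
Here is a representative SSH command, using the key pair that I specified at launch (the public DNS name is a placeholder for your instance's own address):

$ ssh -i keys-jbarr-us-east.pem ec2-user@ec2-203-0-113-25.compute-1.amazonaws.com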

From here I can install my development tools, clone my code onto the instance, and initiate my builds.

I can also start a VNC server on the instance and use a VNC client to connect to it:

Note that the VNC protocol is not considered secure, and this feature should be used with care. I used a security group that allowed access only from my desktop’s IP address:

I can also tunnel the VNC traffic over SSH; this is more secure and would not require me to open up port 5900.
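
A sketch of that tunnel (the host name is again a placeholder; once the tunnel is up, point your VNC client at localhost:5900):

$ ssh -i keys-jbarr-us-east.pem -L 5900:localhost:5900 \
  ec2-user@ec2-203-0-113-25.compute-1.amazonaws.com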

Things to Know
Here are a couple of fast-facts about the Mac instances:

AMI Updates – We expect to make new AMIs available each time Apple releases major or minor versions of each supported OS. We also plan to produce AMIs with updated Amazon packages every quarter.

Dedicated Hosts – The instances are launched as EC2 Dedicated Hosts with a minimum tenancy of 24 hours. This is largely transparent to you, but it does mean that the instances cannot be used as part of an Auto Scaling Group.

Purchase Models – You can run Mac instances On-Demand and you can also purchase a Savings Plan.

Apple M1 Chip – EC2 Mac instances with the Apple M1 chip are already in the works, and planned for 2021.

Launch one Today
You can start using Mac instances in the US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Ireland), and Asia Pacific (Singapore) Regions today, and check out this video for more information!

Jeff;

 

re:Invent 2020 Liveblog: Andy Jassy Keynote

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/reinvent-2020-liveblog-andy-jassy-keynote/

I’m always ready to try something new! This year, I am going to liveblog Andy Jassy‘s AWS re:Invent keynote address, which takes place from 8 a.m. to 11 a.m. on Tuesday, December 1 (PST). I’ll be updating this post every couple of minutes as I watch Andy’s address from the comfort of my home office. Stay tuned!

Jeff;



In the Works – AWS Region in Hyderabad, India

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/in-the-works-aws-region-in-hyderabad-india/

We opened the AWS Regions in South Africa and Italy earlier this year and are currently working on regions in Indonesia, Japan, Spain, and Switzerland.

Second AWS Region in India
We launched the Asia Pacific (Mumbai) Region in June 2016, giving enterprises, public sector organizations, startups, and SMBs access to state-of-the-art public cloud infrastructure. In […]

New – GPU-Equipped EC2 P4 Instances for Machine Learning & HPC

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-gpu-equipped-ec2-p4-instances-for-machine-learning-hpc/

The Amazon EC2 team has been providing our customers with GPU-equipped instances for nearly a decade. The first-generation Cluster GPU instances were launched in late 2010, followed by the G2 (2013), P2 (2016), P3 (2017), G3 (2017), P3dn (2018), and G4 (2019) instances. Each successive generation incorporates increasingly-capable GPUs, along with enough CPU power, memory, […]

AWS Nitro Enclaves – Isolated EC2 Environments to Process Confidential Data

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-nitro-enclaves-isolated-ec2-environments-to-process-confidential-data/

When I first told you about the AWS Nitro System, I said: The Nitro system is a rich collection of building blocks that can be assembled in many different ways, giving us the flexibility to design and rapidly deliver EC2 instance types with an ever-broadening selection of compute, storage, memory, and networking options. To date, […]

Amazon Prime Day 2020 – Powered by AWS

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/amazon-prime-day-2020-powered-by-aws/

Tipped off by a colleague in Denmark, I bought the LEGO Star Wars Stormtrooper Helmet, which turned out to be a Prime Day best-seller!

As I like to do every year, I would like to share a few of the many ways that AWS helped to make Prime Day a reality for our customers. Back in 2016 I wrote How AWS Powered Amazon’s Biggest Day Ever to describe how we plan for Prime Day and that post is still informative and relevant.

This time around I would like to focus on four ways that AWS helped to support Prime Day: Amazon Live and IVS, Infrastructure Event Management, Storage, and Content Delivery.

Amazon Live and IVS on Prime Day
Throughout Prime Day 2020, Amazon customers were able to shop from livestreams through Amazon Live. Shoppers were also able to use live chat to interact with influencers and hosts in real time. They were able to ask questions, share their experiences, and get a better feel for products of interest to them.

Amazon Live helped customers learn more about products and take advantage of top deals by counting down to Deal Reveals and sharing live product demonstrations. Anitta, Russell Wilson, and Ciara curated Prime Day deals as did author Elizabeth Gilbert. In addition, influencers including @SheaWhitney, @ShopDandy, and @TheDealGuy shared their top product picks with customers. In total, there were over 1,200 live streams and tens of thousands of chat messages on Amazon Live during Prime Day.

To deliver these enhanced shopping experiences for customers and for creators, low latency video is essential. It enables Amazon Live to synchronize the products featured in the live video with the products displayed in the carousel at the bottom of the video player. Low latency also allows the livestream hosts to answer customer questions in real-time. And, of course, on Prime Day in particular, all of this needed to happen at scale.

In order to do this, the Amazon Live team made use of the newly launched Amazon Interactive Video Service (IVS). As Martin explains in his recent post (Amazon Interactive Video Service – Add Live Video to Your Apps and Websites), this is a managed live streaming service that supports the creation of interactive, low-latency video experiences. It uses the same technology that powers Twitch, and allows you to deliver live content with very low latency, often three seconds or less, compared with the 20 to 30 seconds that is typical of traditional live streaming.

Infrastructure Event Management
AWS Infrastructure Event Management (IEM) helps our customers to plan and run large-scale business-critical events. This program is included in the Enterprise Support plan and is available to Business Support customers for an additional fee. IEM includes an assessment of operational readiness, identification and mitigation of risks, and the confidence to run an event with AWS experts standing by and ready to help.

This year, the TAMs (Technical Account Managers) that support the IEM program created a Control Room that was 100% virtual. A combination of Slack channels and Amazon Chime bridges empowered AWS service teams, AWS support, IT support, and Amazon Customer Reliability Engineering (thousands of people in all) to communicate and collaborate in real time.

Storage for Prime Day
Amazon DynamoDB powers multiple high-traffic Amazon properties and systems including Alexa, the Amazon.com sites, and all Amazon fulfillment centers. Over the course of the 66-hour Prime Day, these sources made 16.4 trillion calls to the DynamoDB API, peaking at 80.1 million requests per second.

On the block storage side, Amazon Elastic Block Store (EBS) added 241 petabytes of storage in preparation for Prime Day; the resulting fleet handled 6.2 trillion requests per day and transferred 563 petabytes per day.

Content Delivery for Prime Day
Amazon CloudFront played an important role as always, serving up web and streamed content to a world-wide audience. CloudFront handled over 280 million HTTP requests per minute, a total of 450 billion requests across all of the Amazon.com sites.

Jeff;

Public Preview – AWS Distro for OpenTelemetry

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/public-preview-aws-distro-open-telemetry/

It took me a while to figure out what observability was all about. A year or two ago I asked around, and my colleagues told me that I needed to follow Charity Majors and to read her blog (done, and done). Just this week, Charity tweeted:

Kislay’s tweet led to his blog post, Observing is not Debugging, which I found very helpful. As Charity noted, Kislay tells us that Observability is a study of the system in motion.

Today’s large-scale distributed applications and systems are effectively always in motion. Whether serving web requests, processing streams of data or handling events, something is always happening. At world-scale, looking at individual requests or events is not always feasible. Instead, it is necessary to take a statistical approach and to watch how well a system is working, instead of simply waiting for a total failure.

New AWS Distro for OpenTelemetry
Today we are launching a preview of AWS Distro for OpenTelemetry. We are part of the Cloud Native Computing Foundation (CNCF)’s OpenTelemetry community, working to define an open standard for the collection of distributed traces and metrics. AWS Distro for OpenTelemetry is a secure and supported distribution of the APIs, libraries, agents, and collectors defined in the OpenTelemetry Specification.

One of the coolest features of the toolkit is auto instrumentation. Starting with Java and in the works for other languages and environments (.NET and JavaScript are next), the auto-instrumentation agent identifies the frameworks and languages used by your application and automatically instruments them to collect and forward metrics and traces.
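
Here's a sketch of attaching the Java agent to an existing service (the agent jar, service name, and application jar are placeholders; no application code changes are required):

$ java -javaagent:./aws-opentelemetry-agent.jar \
       -Dotel.resource.attributes=service.name=my-service \
       -jar my-service.jar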

Here’s how all of the pieces fit together:

The AWS Observability Collector runs within your environment. It can be launched as a sidecar or daemonset for EKS, a sidecar for ECS, or an agent on EC2. You configure the metrics and traces that you want to collect, and also which AWS services to forward them to. You can set up a central account for monitoring complex multi-account applications, and you can also control the sampling rate (what percentage of the raw data is forwarded and ultimately stored).
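
For local experimentation, one option is to run the collector as a container (the image tag, port, and paths shown here are illustrative; supply your own configuration file):

$ docker run --rm -p 4317:4317 \
    -v $(pwd)/collector-config.yaml:/etc/otel-config.yaml \
    amazon/aws-otel-collector:latest --config /etc/otel-config.yaml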

Partners in Action
You can make use of AWS and partner tools and applications to observe, analyze, and act on what you see. We’re working with Cisco AppDynamics, Datadog, New Relic, Splunk, and other partners and will have more information to share during the preview.

Things to Know
The preview of the AWS Distro for OpenTelemetry is available now and you can start using it today. In addition to the .NET and JavaScript support that I mentioned earlier, we plan to support Python, Ruby, Go, C++, Erlang, and Rust as well.

This is an open source project and we welcome your pull requests! We will be tracking the upstream repository and plan to release a fresh version of the toolkit quarterly.

Jeff;

PS – Be sure to sign up for our upcoming webinar, Observability at AWS and AWS Distro for OpenTelemetry Deep Dive.

 

Amazon S3 Update – Three New Security & Access Control Features

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/amazon-s3-update-three-new-security-access-control-features/

A year or so after we launched Amazon S3, I was in an elevator at a tech conference and heard a couple of developers use “just throw it into S3” as the answer to their data storage challenge. I remember that moment well because the comment was made so casually, and it was one of the first times that I fully grasped just how quickly S3 had caught on.

Since that launch, we have added hundreds of features and multiple storage classes to S3, while also reducing the cost to store a gigabyte of data for a month by almost 85% (from $0.15 to $0.023 for S3 Standard, and as low as $0.00099 for S3 Glacier Deep Archive). Today, our customers use S3 to support many different use cases including data lakes, backup and restore, disaster recovery, archiving, and cloud-native applications.

Security & Access Control
As the set of use cases for S3 has expanded, our customers have asked us for new ways to regulate access to their mission-critical buckets and objects. We added IAM policies many years ago, and Block Public Access in 2018. Last year we added S3 Access Points (Easily Manage Shared Data Sets with Amazon S3 Access Points) to help you manage access in large-scale environments that might encompass hundreds of applications and petabytes of storage.

Today we are launching S3 Object Ownership as a follow-on to two other S3 security & access control features that we launched earlier this month. All three features are designed to give you even more control and flexibility:

Object Ownership – You can now ensure that newly created objects within a bucket have the same owner as the bucket.

Bucket Owner Condition – You can now confirm the ownership of a bucket when you create a new object or perform other S3 operations.

Copy API via Access Points – You can now access S3’s Copy API through an Access Point.

You can use all of these new features in all AWS regions at no additional charge. Let’s take a look at each one!

Object Ownership
With the proper permissions in place, S3 already allows multiple AWS accounts to upload objects to the same bucket, with each account retaining ownership and control over the objects. This many-to-one upload model can be handy when using a bucket as a data lake or another type of data repository. Internal teams or external partners can all contribute to the creation of large-scale centralized resources. With this model, the bucket owner does not have full control over the objects in the bucket and cannot use bucket policies to share objects, which can lead to confusion.

You can now use a new per-bucket setting to enforce uniform object ownership within a bucket. This will simplify many applications, and will obviate the need for the Lambda-powered self-COPY that has become a popular way to do this up until now. Because this setting changes the behavior seen by the account that is uploading, the PUT request must include the bucket-owner-full-control ACL. You can also choose to use a bucket policy that requires the inclusion of this ACL.
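
For example, an uploading account would include the ACL like this (the bucket and key names are hypothetical):

$ aws s3api put-object --bucket shared-data-lake \
    --key partner-feed/2020-11-23.csv --body 2020-11-23.csv \
    --acl bucket-owner-full-control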

To get started, open the S3 Console, locate the bucket and view its Permissions, click Object Ownership, and Edit:

Then select Bucket owner preferred and click Save:

As I mentioned earlier, you can use a bucket policy to enforce object ownership (read About Object Ownership and this Knowledge Center Article to learn more).
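
If you prefer to script the setting itself rather than use the console, the bucket ownership controls API offers an equivalent (the bucket name is hypothetical):

$ aws s3api put-bucket-ownership-controls --bucket shared-data-lake \
    --ownership-controls 'Rules=[{ObjectOwnership=BucketOwnerPreferred}]'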

Many AWS services deliver data to the bucket of your choice, and are now equipped to take advantage of this feature. S3 Server Access Logging, S3 Inventory, S3 Storage Class Analysis, AWS CloudTrail, and AWS Config now deliver data that you own. You can also configure Amazon EMR to use this feature by setting fs.s3.canned.acl to BucketOwnerFullControl in the cluster configuration (learn more).

Keep in mind that this feature does not change the ownership of existing objects. Also, note that you will now own more S3 objects than before, which may cause changes to the numbers you see in your reports and other metrics.

AWS CloudFormation support for Object Ownership is under development and is expected to be ready before AWS re:Invent.

Bucket Owner Condition
This feature lets you confirm that you are writing to a bucket that you own.

You simply pass a numeric AWS Account ID to any of the S3 Bucket or Object APIs using the expectedBucketOwner parameter or the x-amz-expected-bucket-owner HTTP header. The ID indicates the AWS Account that you believe owns the subject bucket. If there’s a match, then the request will proceed as normal. If not, it will fail with a 403 status code.
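
Here's an example using the AWS CLI (the bucket, key, and account ID are placeholders):

# Fails with a 403 if shared-data-lake is not owned by account 111122223333
$ aws s3api put-object --bucket shared-data-lake \
    --key reports/2020-11-23.json --body report.json \
    --expected-bucket-owner 111122223333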

To learn more, read Bucket Owner Condition.

Copy API via Access Points
S3 Access Points give you fine-grained control over access to your shared data sets. Instead of managing a single and possibly complex policy on a bucket, you can create an access point for each application, and then use an IAM policy to regulate the S3 operations that are made via the access point (read Easily Manage Shared Data Sets with Amazon S3 Access Points to see how they work).

You can now use S3 Access Points in conjunction with the S3 CopyObject API by using the ARN of the access point instead of the bucket name (read Using Access Points to learn more).
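
For example (the access point ARN, source bucket, and keys are hypothetical):

$ aws s3api copy-object \
    --copy-source shared-data-lake/raw/object.csv \
    --bucket arn:aws:s3:us-east-1:111122223333:accesspoint/analytics-ap \
    --key processed/object.csv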

Use Them Today
As I mentioned earlier, you can use all of these new features in all AWS regions at no additional charge.

Jeff;

 

New EBS Volume Type (io2) – 100x Higher Durability and 10x More IOPS/GiB

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-ebs-volume-type-io2-more-iops-gib-higher-durability/

We launched EBS Volumes with Provisioned IOPS way back in 2012. These volumes are a great fit for your most I/O-hungry and latency-sensitive applications because you can dial in the level of performance that you need, and then (with the launch of Elastic Volumes in 2017) change it later.

Over the years, we have increased the ratio of IOPS per gibibyte (GiB) of SSD-backed storage several times, most recently in August 2016. This ratio started out at 10 IOPS per GiB, and has grown steadily to 50 IOPS per GiB. In other words, the bigger the EBS volume, the more IOPS it can be provisioned to deliver, with a per-volume upper bound of 64,000 IOPS. This change in ratios has reduced storage costs by a factor of 5 for throughput-centric workloads.

Also, based on your requests and your insatiable desire for more performance, we have raised the maximum number of IOPS per EBS volume multiple times:

The August 2014 change in the I/O request size made EBS 16x more cost-effective for throughput-centric workloads.

Bringing the various numbers together, you can think of Provisioned IOPS volumes as being defined by capacity, IOPS, and the ratio of IOPS per GiB. You should also think about durability, which is expressed in percentage terms. For example, io1 volumes are designed to deliver 99.9% durability, which is 20x more reliable than typical commodity disk drives.

Higher Durability & More IOPS
Today we are launching the io2 volume type, with two important benefits, at the same price as the existing io1 volumes:

Higher Durability – The io2 volumes are designed to deliver 99.999% durability, making them 2000x more reliable than a commodity disk drive, further reducing the possibility of a storage volume failure and helping to improve the availability of your application. By the way, in the past we expressed durability in terms of an Annual Failure Rate, or AFR. The new, percentage-based model is consistent with our other storage offerings, and also communicates expectations for success, rather than for failure.

More IOPS – We are increasing the IOPS per GiB ratio yet again, this time to 500 IOPS per GiB. You can get higher performance from your EBS volumes, and you can reduce or outright eliminate any over-provisioning that you might have done in the past to achieve the desired level of performance.

Taken together, these benefits make io2 volumes a perfect fit for your high-performance, business-critical databases and workloads. This includes SAP HANA, Microsoft SQL Server, and IBM DB2.

You can create new io2 volumes and you can easily change the type of an existing volume to io2:

Or:

$ aws ec2 modify-volume --volume-id vol-0b3c663aeca5aabb7 --volume-type io2
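
You can also specify io2 when creating a brand new volume; for example (the size, IOPS, and Availability Zone are placeholders):

$ aws ec2 create-volume --volume-type io2 --size 100 --iops 10000 \
    --availability-zone us-east-1a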

io2 volumes support all features of io1 volumes with the exception of Multi-Attach, which is on the roadmap.

Available Now
You can make use of io2 volumes in the US East (Ohio), US East (N. Virginia), US West (N. California), Canada (Central), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Stockholm), Asia Pacific (Hong Kong), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), and Middle East (Bahrain) Regions today.

Jeff;