All posts by Jeff Barr

AWS Supply Chain update: Three new modules supporting upstream activities

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-supply-chain-update-three-new-modules-supporting-upstream-activities/

We are launching three new modules for AWS Supply Chain today. These modules are designed to help you collaborate with your suppliers across all tiers of your supply chain, with the goal of helping you to maintain optimum inventory levels at each site in the chain. Here’s an overview:

Supply Planning – This module helps you to accurately forecast and plan purchases of raw materials, components, and finished goods. It uses multiple algorithms to create supply plans that include purchase orders and inventory transfer requirements.

N-Tier Visibility – This module extends visibility and collaboration beyond your enterprise’s internal systems to multiple external tiers of trading partners.

Sustainability – This module creates a more secure and efficient way for you to request, collect, and review data on carbon emissions, as well as reports on hazardous materials used in the acquisition, manufacturing, transportation, and disposal of goods. You can now send data requests to multiple tiers of trading partners, track responses, send reminders to absentees, and provide a central repository to store and view responses.

Let’s take a look at each one…

Supply Planning
AWS Supply Chain already includes a Demand Planning module which uses proprietary machine learning algorithms to forecast demand and generate a demand plan that is based on two or more years of historical order line data. The forecasts are granular and specific, including distribution centers and retail outlets.

The new Supply Planning module uses the demand plan as an input. It looks at existing inventory, accounts for uncertainties, and supports additional business input including stocking strategies, ultimately generating purchase orders for components and raw materials, ready for review and approval. Here is the main page of the Supply Planning module:

The module also supports auto replenishment and manufacturing plans. The manufacturing plans work backward from a Bill of Materials (BoM) which is broken down (exploded) into individual components that are sourced from multiple direct and indirect upstream sources.

Supply Planning is done with respect to a planning horizon and on a plan schedule, both of which are defined in the organization profile:

The settings for this module also allow for customization of purchase order review and approval:

N-Tier Visibility
This module helps you to work in a collaborative fashion with your vendors, the vendors that supply your vendors, and so forth. It automatically detects vendors and sets them up for on-boarding into AWS Supply Chain. The module supports manual and EDI-powered collaboration on purchase orders, while also helping to identify discrepancies and risks, and to find substitute vendors if necessary.

The main page of the module displays an overview of my trading partners:

The Portal status column indicates that some of these partners have already onboarded, others have been invited (and one let the invite expire), and others have yet to be invited. I can click Invite partners to extend invitations. I select the partners (these have generally been auto-discovered using data in the Supply Chain Data Lake), and click Continue:

Then I enter the contact information for each partner that I selected, and click Send invites:

The contact receives an invitation via email and can then accept the invite. After they have accepted, they can receive and respond to supply plans and purchase orders electronically (via email or EDI).

Sustainability
The Sustainability module helps you to request, receive, and review compliance and sustainability data from your partners. It builds on the partner network that I already described, and tracks requests for data:

To request data, I select the type of data that I need and the partners that I need it from, then click Continue:

Then I enter the details that define my request, including a due date. I can ask the chosen partners for a text response and/or a file response:

The responses and files provided by each partner are written to the Supply Chain Data Lake and can also be exported to an Amazon Simple Storage Service (Amazon S3) bucket.

AWS Supply Chain Resources
If you are new to AWS Supply Chain and would like to learn more, here are some resources to get you started:

Jeff;

Use AWS Fault Injection Service to demonstrate multi-region and multi-AZ application resilience

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/use-aws-fault-injection-service-to-demonstrate-multi-region-and-multi-az-application-resilience/

AWS Fault Injection Service (FIS) helps you to put chaos engineering into practice at scale. Today we are launching new scenarios that will let you demonstrate that your applications perform as intended if an AWS Availability Zone experiences a full power interruption or if connectivity from one AWS Region to another is lost.

You can use the scenarios to conduct experiments that will build confidence that your application (whether single-region or multi-region) works as expected when something goes wrong, help you to gain a better understanding of direct and indirect dependencies, and test recovery time. After you have put your application through its paces and know that it works as expected, you can use the results of the experiment for compliance purposes. When used in conjunction with other parts of AWS Resilience Hub, FIS can help you to fully understand the overall resilience posture of your applications.

Intro to Scenarios
We launched FIS in 2021 to help you perform controlled experiments on your AWS applications. In the post that I wrote to announce that launch, I showed you how to create experiment templates and to use them to conduct experiments. The experiments are built using powerful, low-level actions that affect specified groups of AWS resources of a particular type. For example, the following actions operate on EC2 instances and Auto Scaling Groups:

With these actions as building blocks, we recently launched the AWS FIS Scenario Library. Each scenario in the library defines events or conditions that you can use to test the resilience of your applications:

Each scenario is used to create an experiment template. You can use the scenarios as-is, or you can take any template as a starting point and customize or enhance it as desired.
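If you prefer to work from the command line, here is a minimal sketch of running an experiment from one of these templates with the AWS CLI; the template and experiment IDs shown are placeholders:

# List the experiment templates in the account (IDs and descriptions)
$ aws fis list-experiment-templates \
  --query 'experimentTemplates[].{Id:id,Description:description}' --output table

# Start an experiment from one of the templates and note the experiment ID
$ aws fis start-experiment --experiment-template-id EXTaaaabbbbccccdddd

# Check on the experiment's progress
$ aws fis get-experiment --id EXPaaaabbbbccccdddd --query 'experiment.state'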

The scenarios can target resources in the same AWS account or in other AWS accounts:

New Scenarios
With all of that as background, let’s take a look at the new scenarios.

AZ Availability: Power Interruption – This scenario temporarily “pulls the plug” on a targeted set of your resources in a single Availability Zone, including EC2 instances (including those in EKS and ECS clusters), EBS volumes, Auto Scaling Groups, VPC subnets, Amazon ElastiCache for Redis clusters, and Amazon Relational Database Service (RDS) clusters. In most cases you will run it on an application that has resources in more than one Availability Zone, but you can run it on a single-AZ app with an outage as the expected outcome. It targets a single AZ, and also lets you prevent a specified set of IAM roles or Auto Scaling Groups from launching fresh instances or starting stopped instances during the experiment.

The New actions and targets experience makes it easy to see everything at a glance — the actions in the scenario and the types of AWS resources that they affect:

The scenarios include parameters that are used to customize the experiment template:

The Advanced parameters – targeting tags lets you control the tag keys and values that will be used to locate the resources targeted by experiments:

Cross-Region: Connectivity – This scenario prevents your application in a test region from being able to access resources in a target region. This includes traffic from EC2 instances, ECS tasks, EKS pods, and Lambda functions attached to a VPC. It also includes traffic flowing across Transit Gateways and VPC peering connections, as well as cross-region S3 and DynamoDB replication. The scenario looks like this out of the box:

This scenario runs for 3 hours (unless you change the disruptionDuration parameter), and isolates the test region from the target region in the specified ways, with advanced parameters to control the tags that are used to select the affected AWS resources in the isolated region:

You might also find the Disrupt and Pause actions used in this scenario useful on their own:

For example, the aws:s3:bucket-pause-replication action can be used to pause replication within a region.

Things to Know
Here are a couple of things to know about the new scenarios:

Regions – The new scenarios are available in all commercial AWS Regions where FIS is available, at no additional cost.

Pricing – You pay for the action-minutes consumed by the experiments that you run; see the AWS Fault Injection Service Pricing Page for more info.

Naming – This service was formerly called AWS Fault Injection Simulator.

Jeff;

Introducing highly durable Amazon OpenSearch Service clusters with 30% price/performance improvement

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/introducing-highly-durable-amazon-opensearch-service-clusters-with-30-price-performance-improvement/

You can use the new OR1 instances to create Amazon OpenSearch Service clusters that use Amazon Simple Storage Service (Amazon S3) for primary storage. You can ingest, store, index, and access just about any imaginable amount of data, while also enjoying a 30% price/performance improvement over existing instance types, eleven nines of data durability, and a zero-time Recovery Point Objective (RPO). You can use this to perform interactive log analytics, monitor applications in real time, and more.

New OR1 Instances
These benefits are all made possible by the new OR1 instances, which are available in eight sizes and used for the data nodes of the cluster:

Instance Name        vCPUs  Memory   EBS Storage Max (gp3)
or1.medium.search    1      8 GiB    400 GiB
or1.large.search     2      16 GiB   800 GiB
or1.xlarge.search    4      32 GiB   1.5 TiB
or1.2xlarge.search   8      64 GiB   3 TiB
or1.4xlarge.search   16     128 GiB  6 TiB
or1.8xlarge.search   32     256 GiB  12 TiB
or1.12xlarge.search  48     384 GiB  18 TiB
or1.16xlarge.search  64     512 GiB  24 TiB

To choose a suitable instance size, read Sizing Amazon OpenSearch Service domains.

The Amazon Elastic Block Store (Amazon EBS) volumes are used for primary storage, with data copied synchronously to S3 as it arrives. The data in S3 is used to create replicas and to rehydrate EBS after shards are moved between instances as a result of a node failure or a routine rebalancing operation. This is made possible by the remote-backed storage and segment replication features that were recently released for OpenSearch.

Creating a Domain
To create a domain I open the Amazon OpenSearch Service Console, select Managed clusters, and click Create domain:

I enter a name for my domain (my-domain), select Standard create, and use the Production template:

Then I choose the Domain with standby deployment option. This option will create active data nodes in two Availability Zones and a standby one in a third. I also choose the latest engine version:

Then I select the OR1 instance family and (for my use case) configure 500 GiB of EBS storage per data node:

I set the other settings as needed, and click Create to proceed:

I take a quick lunch break and when I come back my domain is ready:
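If you prefer the AWS CLI to the console, here is a minimal sketch of creating a similar OR1-based domain. The instance size, node counts, and EBS volume size are illustrative only, and other settings that a production domain needs (access policies, encryption options, and so on) are omitted:

$ aws opensearch create-domain \
  --domain-name my-domain \
  --engine-version OpenSearch_2.11 \
  --cluster-config 'InstanceType=or1.2xlarge.search,InstanceCount=3,ZoneAwarenessEnabled=true,ZoneAwarenessConfig={AvailabilityZoneCount=3},MultiAZWithStandbyEnabled=true,DedicatedMasterEnabled=true,DedicatedMasterType=m6g.large.search,DedicatedMasterCount=3' \
  --ebs-options 'EBSEnabled=true,VolumeType=gp3,VolumeSize=500'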

Things to Know
Here are a couple of things to know about this new storage option:

Engine Versions – Amazon OpenSearch Service engine versions 2.11 and above support OR1 instances.

Regions – The OR1 instance family is available for use with OpenSearch in the US East (Ohio, N. Virginia), US West (N. California, Oregon), Asia Pacific (Mumbai, Singapore, Sydney, Tokyo), and Europe (Frankfurt, Ireland, Spain, Stockholm) AWS Regions.

Pricing – You pay On-Demand or Reserved prices for data nodes, and you also pay for EBS storage. See the Amazon OpenSearch Service Pricing page for more information.

Jeff;

Join the preview for new memory-optimized, AWS Graviton4-powered Amazon EC2 instances (R8g)

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/join-the-preview-for-new-memory-optimized-aws-graviton4-powered-amazon-ec2-instances-r8g/

We are opening up a preview of the next generation of Amazon Elastic Compute Cloud (Amazon EC2) instances. Equipped with brand-new Graviton4 processors, the new R8g instances will deliver better price performance than any existing memory-optimized instance. The R8g instances are suitable for your most demanding memory-intensive workloads: big data analytics, high-performance databases, in-memory caches and so forth.

Graviton history
Let’s take a quick look back in time and recap the evolution of the Graviton processors:

November 2018 – The Graviton processor made its debut in the A1 instances, optimized for both performance and cost, and delivering cost reductions of up to 45% for scale-out workloads.

December 2019 – The Graviton2 processor debuted with the announcement of M6g, M6gd, C6g, C6gd, R6g, and R6gd instances with up to 40% better price performance than equivalent non-Graviton instances. The second-generation processor delivered up to 7x the performance of the first one, including twice the floating point performance.

November 2021 – The Graviton3 processor made its debut with the announcement of the compute-optimized C7g instances. In addition to up to 25% better compute performance, this generation of processors once again doubled floating point and cryptographic performance when compared to the previous generation.

November 2022 – The Graviton3E processor was announced, for use in the Hpc7g and C7gn instances, with up to 35% higher vector instruction processing performance than the Graviton3.

Today, every one of the top 100 Amazon Elastic Compute Cloud (Amazon EC2) customers makes use of Graviton, choosing from more than 150 Graviton-powered instance types.

New Graviton4
I’m happy to be able to tell you about the latest in our series of innovative custom chip designs, the energy-efficient AWS Graviton4 processor.

96 Neoverse V2 cores, 2 MB of L2 cache per core, and 12 DDR5-5600 channels work together to make the Graviton4 up to 40% faster for databases, 30% faster for web applications, and 45% faster for large Java applications than the Graviton3.

Graviton4 processors also support all of the security features from the previous generations, and add some important new ones, including encrypted high-speed hardware interfaces and Branch Target Identification (BTI).

R8g instance sizes
The 8th generation R8g instances will be available in multiple sizes with up to triple the number of vCPUs and triple the amount of memory of the 7th generation (R7g) of memory-optimized, Graviton3-powered instances.

Join the preview
R8g instances with Graviton4 processors

Jeff;

Announcing the new Amazon S3 Express One Zone high performance storage class

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-amazon-s3-express-one-zone-high-performance-storage-class/

The new Amazon S3 Express One Zone storage class is designed to deliver up to 10x better performance than the S3 Standard storage class while handling hundreds of thousands of requests per second with consistent single-digit millisecond latency, making it a great fit for your most frequently accessed data and your most demanding applications. Objects are stored and replicated on purpose built hardware within a single AWS Availability Zone, allowing you to co-locate storage and compute (Amazon EC2, Amazon ECS, and Amazon EKS) resources to further reduce latency.

Amazon S3 Express One Zone
With very low latency between compute and storage, the Amazon S3 Express One Zone storage class can help to deliver a significant reduction in runtime for data-intensive applications, especially those that use hundreds or thousands of parallel compute nodes to process large amounts of data for AI/ML training, financial modeling, media processing, real-time ad placement, high performance computing, and so forth. These applications typically keep the data around for a relatively short period of time, but access it very frequently during that time.

This new storage class can handle objects of any size, but is especially awesome for smaller objects. This is because for smaller objects the time to first byte is very close to the time to last byte. In all storage systems, larger objects take longer to stream because there is more data to download during the transfer, and therefore the storage latency has less impact on the total time to read the object. As a result, smaller objects receive an outsized benefit from lower storage latency compared to large objects. Because of S3 Express One Zone’s consistently very low latency, small objects can be read up to 10x faster compared to S3 Standard.

The extremely low latency provided by Amazon S3 Express One Zone, combined with request costs that are 50% lower than for the S3 Standard storage class, means that your Spot and On-Demand compute resources are used more efficiently and can be shut down earlier, leading to an overall reduction in processing costs.

Each Amazon S3 Express One Zone directory bucket exists in a single Availability Zone that you choose, and can be accessed using the usual set of S3 API functions: CreateBucket, PutObject, GetObject, ListObjectsV2, and so forth. The buckets also support a carefully chosen set of S3 features including byte-range fetches, multi-part upload, multi-part copy, presigned URLs, and Access Analyzer for S3. You can upload objects directly, write code that uses CopyObject, or use S3 Batch Operations.

In order to reduce latency and to make this storage class as efficient & scalable as possible, we are introducing a new bucket type, a new authentication model, and a bucket naming convention:

New bucket type – The new directory buckets are specific to this storage class, and support hundreds of thousands of requests per second. They have a hierarchical namespace and store object key names in a directory-like manner. The path delimiter must be “/”, and any prefixes that you supply to ListObjectsV2 must end with a delimiter. Also, list operations return results without first sorting them, so you cannot do a “start after” retrieval.

New authentication model – The new CreateSession function returns a session token that grants access to a specific bucket for five minutes. You must include this token in the requests that you make to other S3 API functions that operate on the bucket or the objects in it, with the exception of CopyObject, which requires IAM credentials. The newest versions of the AWS SDKs handle session creation automatically.

Bucket naming – Directory bucket names must be unique within their AWS Region, and must specify an Availability Zone ID in a specially formed suffix. If my base bucket name is jbarr and it exists in Availability Zone use1-az5 (Availability Zone 5 in the US East (N. Virginia) Region) the name that I supply to CreateBucket would be jbarr--use1-az5--x-s3. Although the bucket exists within a specific Availability Zone, it is accessible from the other zones in the region, and there are no data transfer charges for requests from compute resources in one Availability Zone to directory buckets in another one in the same region.
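Returning to the authentication model for a moment: the newest SDKs and CLI versions call CreateSession for you behind the scenes, but you can also invoke it directly to see the temporary credentials that it returns. Here is a minimal sketch using the bucket name from above:

# Returns temporary credentials (access key, secret key, session token) scoped to the bucket
$ aws s3api create-session --bucket jbarr--use1-az5--x-s3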

Amazon S3 Express One Zone in action
Let’s put this new storage class to use. I will focus on the command line, but AWS Management Console and API access are also available.

My EC2 instance is running in my us-east-1f Availability Zone. I use jq to map this value to an Availability Zone Id:

$ aws ec2 describe-availability-zones --output json | \
  jq -r  '.AvailabilityZones[] | select(.ZoneName == "us-east-1f") | .ZoneId'
use1-az5

I create a bucket configuration (s3express-bucket-config.json) and include the Id:

{
        "Location" :
        {
                "Type" : "AvailabilityZone",
                "Name" : "use1-az5"
        },
        "Bucket":
        {
                "DataRedundancy" : "SingleAvailabilityZone",
                "Type"           : "Directory"
        }
}

After installing the newest version of the AWS Command Line Interface (AWS CLI), I create my directory bucket:

$ aws s3api create-bucket --bucket jbarr--use1-az5--x-s3 \
  --create-bucket-configuration file://s3express-bucket-config.json \
  --region us-east-1
-------------------------------------------------------------------------------------------
|                                       CreateBucket                                      |
+----------+------------------------------------------------------------------------------+
|  Location|  https://jbarr--use1-az5--x-s3.s3express-use1-az5.us-east-1.amazonaws.com/   |
+----------+------------------------------------------------------------------------------+

Then I can use the directory bucket as the destination for other CLI commands as usual (the second aws is the directory where I unzipped the AWS CLI):

$ aws s3 sync aws s3://jbarr--use1-az5--x-s3

When I list the directory bucket’s contents, I see that the StorageClass is EXPRESS_ONEZONE:

$ aws s3api list-objects-v2 --bucket jbarr--use1-az5--x-s3 --output json | \
  jq -r '.Contents[] | {Key: .Key, StorageClass: .StorageClass}'
...
{
  "Key": "install",
  "StorageClass": "EXPRESS_ONEZONE"
}
...

The Management Console for S3 shows General purpose buckets and Directory buckets on separate tabs:

I can import the contents of an existing bucket (or a prefixed subset of the contents) into a directory bucket using the Import button, as seen above. I select a source bucket, click Import, and enter the parameters that will be used to generate an inventory of the source bucket and to create an S3 Batch Operations job.

The job is created and begins to execute:

Things to know
Here are some important things to know about this new S3 storage class:

Regions – Amazon S3 Express One Zone is available in the US East (N. Virginia), US West (Oregon), Asia Pacific (Tokyo), and Europe (Stockholm) Regions, with plans to expand to others over time.

Other AWS Services – You can use Amazon S3 Express One Zone with other AWS services including Amazon SageMaker Model Training, Amazon Athena, Amazon EMR, and AWS Glue Data Catalog to accelerate your machine learning and analytics workloads. You can also use Mountpoint for Amazon S3 to process your S3 objects in file-oriented fashion.

Pricing – Pricing, like the other S3 storage classes, is on a pay-as-you-go basis. You pay $0.16/GB/month in the US East (N. Virginia) Region, with a one-hour minimum billing time for each object, and additional charges for certain request types. You pay an additional per-GB fee for the portion of any request that exceeds 512 KB. For more information, see the Amazon S3 Pricing page.

Durability – In the unlikely case of the loss or damage to all or part of an AWS Availability Zone, data in a One Zone storage class may be lost. For example, events like fire and water damage could result in data loss. Apart from these types of events, our One Zone storage classes use similar engineering designs as our Regional storage classes to protect objects from independent disk, host, and rack-level failures, and each are designed to deliver 99.999999999% data durability.

SLA – Amazon S3 Express One Zone is designed to deliver 99.95% availability with an availability SLA of 99.9%; for information see the Amazon S3 Service Level Agreement page.

This new storage class is available now and you can start using it today!

Learn more
Amazon S3 Express One Zone

Jeff;

Announcing new diagnostic tools for AWS Partner-Led Support (PLS) participants

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/announcing-new-diagnostic-tools-for-aws-partner-led-support-pls-participants/

We have added a set of diagnostic tools for participants in the AWS Partner-Led Support program that will empower them to do an even better job of supporting their customers.

Intro to AWS Partner-Led Support
This AWS Partner Network (APN) program enables AWS Partners to act as the customer’s sole point of contact for technical support. Customers contact their support partner for technical assistance instead of directly contacting AWS. In many cases the partner can resolve the issue directly. If the partner cannot do this, they get guidance from AWS via their AWS Support plan.

Diagnostic tools
These are the same tools that AWS Support Engineers use to assist AWS customers.

When a customer contacts their partner for support, the partner will federate into the customer’s AWS account. Then they will use the new diagnostic tools to access the customer metadata that will help them to identify and diagnose the issue.

The tools are enabled by a set of IAM roles set up by the customer. The tools can access and organize metadata and CloudWatch metrics, but they cannot access customer data and they cannot make any changes to any of the customer’s AWS resources. Here is a small sample of the types of information that partners will be able to access:

  • EC2 Capacity Reservations
  • Lambda Functions List
  • GuardDuty Findings
  • Load Balancer Responses
  • RDS and Redshift Clusters

Each tool operates on a list of regions selected when the tool is run, all invocations of each tool are logged and are easily accessible for review, and the output from each invocation can be directed to one of several different regions.

The tools can be invoked from the AWS Management Console, with API access available in order to support in-house tools, automation, and integration.

Learn more

The service is available today for partners that have joined the Partner-Led Support program. For more information, see the AWS Partner Led Support page.

If you are a current AWS Partner and would like to learn more about this program with an eye toward qualifying and participating, please visit AWS Partner Central.

Learn more about AWS Diagnostic Tools here.

Jeff;

FlexGroup Volume Management for Amazon FSx for NetApp ONTAP is now available

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/flexgroup-volume-management-for-amazon-fsx-for-netapp-ontap-is-now-available/

You can now create, manage, and back up your Amazon FSx for NetApp ONTAP FlexGroup volumes using the AWS Management Console, the Amazon FSx CLI, and the AWS SDK. FlexGroups can be as large as 20 petabytes and offer greater performance for demanding workloads. Before this launch you could only create them using the ONTAP CLI and the ONTAP REST API (these options remain available). Also new to this launch is the ability to create Amazon FSx backups of your FlexGroup volumes.

FlexVol and FlexGroup
FSx for ONTAP supports two volume styles:

FlexVol – Support for up to 300 TiB of storage, making these volumes a good fit for general-purpose workloads.

FlexGroup – Support for up to 20 PiB of storage and billions of files per volume, making these volumes a good fit for more demanding electronic design automation (EDA), seismic analysis, and software build/test workloads.
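Although I will use the console below, you can also create a FlexGroup volume on an existing file system with the Amazon FSx CLI. Here is a rough sketch; the storage virtual machine ID and size are placeholders, and the exact OntapConfiguration field names used here (VolumeStyle, SizeInBytes, AggregateConfiguration) are my reading of the API, so check them against the current CLI reference:

# Create a 100 TiB FlexGroup volume with 8 constituents per aggregate (values are illustrative)
$ aws fsx create-volume \
  --volume-type ONTAP \
  --name vol2 \
  --ontap-configuration 'StorageVirtualMachineId=svm-0123456789abcdef0,JunctionPath=/vol2,VolumeStyle=FLEXGROUP,SizeInBytes=109951162777600,AggregateConfiguration={ConstituentsPerAggregate=8}'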

Using FlexGroups
I will use the AWS Management Console to create a new file system. I select Amazon FSx for NetApp ONTAP, and click Next:

I select Standard create, enter a name for my file system (FS-Jeff-1), and select Single-AZ as the deployment type:

I can use the recommended throughput capacity, or I can specify it explicitly:

As you can surmise from the values above, the throughput is determined by the number of high availability (HA) pairs that will be used to host your file system. A single-AZ file system can be hosted on up to 6 such pairs; a multi-AZ file system must reside on a single pair. To learn more about these options visit New – Scale-out file systems for Amazon FSx for NetApp ONTAP.

After making my selections for Network & security, Encryption, and Default storage virtual machine configuration, I select the FlexGroup volume style, assign a name to the initial volume, and either accept the recommended number of constituents or specify it myself:

On the next page I review my choices and click Create file system:

The creation process is a good time for a lunch break. When I return the initial volume (Vol1) of my file system is ready to use. I can create additional FlexVol or FlexGroup volumes as needed:

Things to Know
Here are a couple of things to keep in mind about FlexGroup volumes:

Constituents – Although each FlexGroup volume can have as many as 200 constituents, we recommend 8 per HA pair. Given the 300 TiB per-constituent size limit, this allows you to create volumes with up to 2.4 PiB of storage per HA pair. ONTAP will balance your files across constituents automatically.

File Counts – If you are using NFSv3 and expect to store many billions of files on a FlexGroup volume, be sure to enable 64-bit identifiers on the storage virtual machine associated with the file system.

Backups – Starting today you can also create backups of FlexGroup volumes, giving you the same fully-managed built-in options that you already have for FlexVol volumes.

NetApp System Manager – You can use the ONTAP CLI and the browser-based NetApp System Manager to perform advanced operations on your ONTAP file systems, storage virtual machines, and volumes. The management endpoint and administrator credentials are available on the File system details page:

Regions – Both volume styles are available in all AWS Regions where Amazon FSx for NetApp ONTAP is supported.

Pricing – You pay for the SSD storage, SSD IOPS, and throughput capacity that you provision, with separate charges for capacity pool usage, backups, and SnapLock licensing; see the Amazon FSx for NetApp ONTAP Pricing page to learn more.

Jeff;

New – Scale-out file systems for Amazon FSx for NetApp ONTAP

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-scale-out-file-systems-for-amazon-fsx-for-netapp-ontap/

You can now create Amazon FSx for NetApp ONTAP file systems that are up to 9x faster than ever before. As is already the case with this service, the file systems are fully managed, with latency to primary storage at the sub-millisecond level and latency to the capacity pool in the tens of milliseconds. This new level of performance will allow you to bring even more of your most demanding electronic design automation (EDA), visual effects (VFX), and statistical computing workloads (to name just a few) to the cloud.

Existing FSx for ONTAP scale-up file systems are powered by a single pair of servers in an active/passive high availability (HA) configuration, and can support a maximum of 4 GBps of throughput and 192 TiB of SSD storage. The server pair can be deployed across distinct fault domains of a single Availability Zone, or in two separate Availability Zones to provide continuous availability even when an Availability Zone is unavailable.

With today’s launch you can now create scale-out FSx for ONTAP file systems that are powered by two to six HA pairs. Here are the specs for the scale-up and scale-out file systems (these are all listed as “up to” since you can specify your desired values for each one when you create your file system):

Deployment Type  Read Throughput  Write Throughput  SSD IOPS    SSD Storage    Availability Zones
Scale-up         Up to 4 GBps     Up to 1.8 GBps    Up to 160K  Up to 192 TiB  Single or Multiple
Scale-out        Up to 36 GBps    Up to 6.6 GBps    Up to 1.2M  Up to 1 PiB    Single

The amount of throughput that you specify for your file system will determine the actual server configuration as follows:

Specified Throughput  Deployment Type  HA Pairs  Throughput (Per Server)           SSD Storage (Per Server)  SSD IOPS (Per Server)
4 GBps or less        Scale-up         Single    Up to 4 GBps Read                 1-192 TiB                 Up to 160K
                                                 Up to 1.1 GBps Write (Single-AZ)
                                                 Up to 1.8 GBps Write (Multi-AZ)
More than 4 GBps      Scale-out        Up to 6   Up to 6 GBps Read                 1-512 TiB                 Up to 200K
                                                 Up to 1.1 GBps Write

To learn more about your choices, visit Amazon FSx for NetApp ONTAP performance.

Creating a Scale-Out File System
I can create a scale-out file system using the AWS Management Console, AWS Command Line Interface (AWS CLI), or by writing code that calls the Amazon FSx CreateFileSystem function. I will use the console, and start by choosing Amazon FSx for NetApp ONTAP:

I choose Standard create, enter a name, select a Single-AZ deployment, and enter my desired SSD storage capacity. I can accept the recommended throughput capacity, choose a value from the dropdown, or enter a value and see my options (more on that in a sec):

The dropdown has a helpful magical feature. If I type my desired throughput capacity it will show me one or more options:

The console will sometimes show me several options for the same amount of desired throughput capacity. Here are some guidelines to help you make a choice that will be a good fit for your workload:

Low Throughput – If you choose an option that provides 4 GBps or less, you will be running on a single HA pair. This is the simplest option to choose if you don’t need a high degree of throughput.

High Throughput and/or High Storage – Maximum throughput scales with the number of HA pairs that you provision. Also, choosing an option with more pairs will maximize your headroom for future growth in provisioned storage.

I make selections and enter values for the remaining options as usual, and click Next to review my settings. I check to make sure that I have made good choices for any of the attributes that can’t be edited after creation, and click Create file system.

I take a break to see what’s happening upstairs, feed my dog, and when I return my new 2 TB file system is ready to go:

I can increase the storage capacity and I can change provisioned IOPS as frequently as every 6 hours:

At the moment, the provisioned throughput capacity cannot be changed after creation if the file system uses more than one HA pair.
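If you would rather script the creation, here is a hedged AWS CLI sketch of a scale-out file system. The subnet ID and capacity values are placeholders, and the scale-out-specific OntapConfiguration fields (DeploymentType=SINGLE_AZ_2, HAPairs, ThroughputCapacityPerHAPair) reflect my reading of the API, so verify them against the current CLI reference:

# 256 TiB of SSD storage spread across 2 HA pairs, 6 GBps of throughput per pair
$ aws fsx create-file-system \
  --file-system-type ONTAP \
  --storage-capacity 262144 \
  --subnet-ids subnet-0123456789abcdef0 \
  --ontap-configuration 'DeploymentType=SINGLE_AZ_2,HAPairs=2,ThroughputCapacityPerHAPair=6144'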

Available Now
Scale-out file systems are available in the US East (Ohio, N. Virginia), US West (Oregon), Asia Pacific (Sydney), and Europe (Ireland) Regions and you can start to create and use them today.

Learn more

Jeff;

Introducing shared VPC support for Amazon FSx for NetApp ONTAP

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/introducing-shared-vpc-support-for-amazon-fsx-for-netapp-ontap/

You can now create Multi-AZ FSx for ONTAP file systems in VPCs that have been shared with you by other accounts in the same AWS Organization. This highly requested feature enables a clean separation of duties between network administrators and storage administrators, and makes it possible to create storage that’s durable, highly available, and accessible from multiple VPCs.

Shared VPC support
Before today’s launch, you had the ability to create Single-AZ FSx for ONTAP file systems in subnets that were shared with you by another AWS account, as well as both Single-AZ and Multi-AZ file systems in subnets that you own.

With today’s launch you can now do the same for file systems in multiple Availability Zones. Multi-AZ FSx for ONTAP file systems offer even higher availability than Single-AZ file systems, and are a great way to address and support large-scale enterprise storage needs. This new support for shared VPCs lets enterprises, many of which make use of multiple VPCs for technical and organizational reasons, use FSx for ONTAP in Multi-AZ deployments, while allowing network administrators and storage administrators to work independently.

This is easy to set up, but you do need to make sure that there are no IP address conflicts between subnets that are not shared between VPCs. I don’t have an AWS Organization set up, so I will hand-wave through part of this process. As a network administrator (the owner account), I use the AWS Resource Access Manager (RAM) to share the appropriate subnets of my VPC with the desired participant accounts in my Organization:
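Here is a minimal AWS CLI sketch of that sharing step; the subnet ARNs and the participant account ID are placeholders:

# Share two VPC subnets with a participant account in the Organization
$ aws ram create-resource-share \
  --name fsx-ontap-shared-subnets \
  --resource-arns arn:aws:ec2:us-east-1:111111111111:subnet/subnet-0aaaabbbbccccdddd \
                  arn:aws:ec2:us-east-1:111111111111:subnet/subnet-0eeeeffff11112222 \
  --principals 222222222222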

Then I (or the administrators for those accounts) accept the resource shares.

Next, I use the new FSx for ONTAP Settings to enable route table updates from participant accounts, and click Submit (this gives the FSx ONTAP service permission to modify route table entries in shared subnets on behalf of participant accounts):

At this point, the storage administrators for the participant accounts can create Multi-AZ FSx for ONTAP file systems in the subnets that have been shared with them by the owner accounts.

There is no additional charge for this feature and it is available in all AWS Regions where FSx for ONTAP is supported.

Jeff;

IAM Access Analyzer updates: Find unused access, check policies before deployment

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/iam-access-analyzer-updates-find-unused-access-check-policies-before-deployment/

We are launching two new features for AWS Identity and Access Management (IAM) Access Analyzer today:

Unused Access Analyzer – A new analyzer that continuously monitors roles and users looking for permissions that are granted but not actually used. Central security teams can take advantage of a dashboard view that will help them to find the accounts that can most benefit from a review of unused permissions, roles, and IAM users.

Custom Policy Checks – Validation that newly authored policies do not grant additional (and perhaps unintended) permissions. You can exercise tighter control over your IAM policies and accelerate the process of moving AWS applications from development to production by adding automated policy reviews to your CI/CD pipelines and custom policy tools.

Let’s take a look at today’s launches!

Unused Access Analyzer
You can already create an analyzer that monitors for external access. With today’s launch you can create one that looks for access permissions that are either overly generous or that have fallen into disuse. This includes unused IAM roles, unused access keys for IAM users, unused passwords for IAM users, and unused services and actions for active IAM roles and users.

After reviewing the findings generated by an organization-wide or account-specific analyzer, you can take action by removing permissions that you don’t need. You can create analyzers and analyze findings from the AWS Management Console, CLI, or API. Let’s start with the IAM Console. I click Analyzers and settings in the left-side navigation:

I can see my current analyzers (none, in this case). I click Create analyzer to proceed:

I specify Unused access analysis, leave the default tracking period of 90 days as-is, and opt to check my account rather than my Organization, then I click Create analyzer:

My analyzer is created, and I check back a little while later to see what it finds. My findings were available within a minute, but this will vary. Here are some of the findings:

As you can see, I have lots of unused IAM roles and permissions (clearly I am a bad Role model). I can click on a Finding to learn more:

If this is a role that I need, I can click Archive to remove it from the list of active findings. I can also create archive rules that will do the same for similar findings:

The external access analyzer works in a similar way, and is a perfect place to start when you are new to Access Analyzer and are ready to find and remove extra permissions:

The dashboard gives me an overview of all active findings:

If I create an analyzer and specify my Organization as the Zone of trust, I can also view a list that shows the accounts that have the largest number of active findings:

This feature is also available from the command line. I can create a new analyzer like this:

$ aws access-analyzer create-analyzer --type ACCOUNT_UNUSED_ACCESS \
  --analyzer-name OneWeek \
  --configuration '{"unusedAccess" : {"unusedAccessAge" : 90}}'
----------------------------------------------------------------------------
|                              CreateAnalyzer                              |
+-----+--------------------------------------------------------------------+
|  arn|  arn:aws:access-analyzer:us-east-1:348414629041:analyzer/OneWeek   |
+-----+--------------------------------------------------------------------+

I can list the findings; perhaps all I want to start with is the actual resource ARNs:

$ aws access-analyzer list-findings-v2 \
  --analyzer-arn arn:aws:access-analyzer:us-east-1:123456789012:analyzer/OneWeek \
  --output json | \
  jq -r '.findings[] | .resource'

arn:aws:iam::123456789012:role/MobileHub_Service_Role
arn:aws:iam::123456789012:role/EKSClusterRole
arn:aws:iam::123456789012:role/service-role/AWSDataSyncS3BucketAccess-jbarr-data
arn:aws:iam::123456789012:role/rds-monitoring-role
arn:aws:iam::123456789012:role/IsengardRoleForDependencyAssuranceIamAnalyzer
arn:aws:iam::123456789012:role/service-role/s3crr_role_for_rep-src_to_rep-dest
arn:aws:iam::123456789012:role/service-role/AWSDeepRacerServiceRole
...

I can archive findings by Id:

$ aws access-analyzer update-findings \
  --analyzer-arn arn:aws:access-analyzer:us-east-1:123456789012:analyzer/OneWeek \
  --status ARCHIVED --ids "f0492061-8638-48ac-b91a-f0583cc839bf"

And I can perform the same operations using the IAM Access Analyzer API.

This feature is priced based on the number of IAM roles analyzed each month and is available in all AWS Regions where IAM is available.

Custom Policy Checks
You can now validate that IAM policies adhere to your security standards ahead of deployments and proactively detect non-conformant updates to policies. This will help you to innovate more quickly, move apps from development to production more efficiently, and to have confidence that any changes you make represent your intent.

Let’s start with my allow-all-ssm policy:

For illustrative purposes, I edit it to add S3 access:

Then I click Check for new access, confirm that I understand that a charge will be made, and click Check policy:

The automated reasoning verifies the policy and tells me that I did enable new access. If that was my intent I click Next to proceed, otherwise I rethink my changes to the policy:

This is a very simple and contrived example, but I am confident that you can see how useful and valuable this can be to your security efforts. You can also access this from the CLI (check-no-new-access) and API (CheckNoNewAccess).

There’s also another command and function that is designed to be used in your CI/CD pipelines, AWS CloudFormation hooks, and custom policy tools. check-access-not-granted and CheckAccessNotGranted accept a policy document and a permission such as s3:Get*, and check to make sure that the policy does not grant the permission. You could use this, for example, to make sure that a policy that grants permission to disable Security Hub cannot be deployed. This will help you to move from development to production with the confidence that your policies adhere to your organization’s security standards.
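Here is a hedged command-line sketch of both checks, following the command naming used earlier in this post. The policy file names and the blocked action are placeholders, so check the current CLI reference for the exact parameter shapes:

# Did my edited policy grant access that the original did not?
$ aws access-analyzer check-no-new-access \
  --existing-policy-document file://allow-all-ssm.json \
  --new-policy-document file://allow-all-ssm-plus-s3.json \
  --policy-type IDENTITY_POLICY

# Does a candidate policy grant a permission that must never be granted?
$ aws access-analyzer check-access-not-granted \
  --policy-document file://candidate-policy.json \
  --access '[{"actions": ["securityhub:DisableSecurityHub"]}]' \
  --policy-type IDENTITY_POLICY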

This feature is priced based on the number of checks that are performed each month and is available in all AWS commercial and AWS GovCloud Regions.

Learn more
AWS Identity and Access Management (IAM) Access Analyzer

Jeff;

New Amazon WorkSpaces Thin Client provides cost-effective, secure access to virtual desktops

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-amazon-workspaces-thin-client/

The new Amazon WorkSpaces Thin Client improves end-user and IT staff productivity with cost-effective, secure, easy-to-manage access to virtual desktops. The devices are preconfigured and shipped directly to the end user, ready to deploy, connect, and use.

Here’s my testing setup:

The Thin Client is a small cube that connects directly to a monitor, keyboard, mouse, and other USB peripherals such as headsets, microphones, and cameras. With the optional hub it can also drive a second monitor. The administrator can create environments that give users access to Amazon WorkSpaces, Amazon WorkSpaces Web, or Amazon AppStream 2.0, with multiple options for managing user identities and credentials using Active Directory.

Thin Clients in action
As a very long-time user of Amazon WorkSpaces via a thin client, I am thrilled to be able to tell you about this device and the administrative service behind it. While my priority is the ease with which I can switch from client to client while maintaining my working context (running apps, browser tabs, and so forth), administrators will find it attractive for other reasons. For example:

Cost – The device itself is low cost ($195 in the United States), far less expensive than a laptop and the associated operating system. Because the working environments are centrally configured and administered, there’s less work to be done in the field, leading to further cost savings. Further, the devices are far simpler than laptops, with fewer parts to break, wear out, or replace.

Security – The devices are shipped with a secure “secret” that is used to establish a trust relationship with the administrative service. There’s no data storage on the device, and it cannot host rogue apps that could attempt to exfiltrate data. It also helps to reduce the risk of data leakage should a worker leave their job without returning their employer-supplied laptop.

Ease of Management – Administrators can easily create new environments for users or groups of users, distribute activation codes to them, and manage the environment via the AWS Management Console. They can set schedules for software updates and patches, verify compliance, and manage users over time.

Ease of Use – Users can unpack and connect the devices in minutes, enter their activation codes, log in to their virtual desktop environment, and start to work right away. They don’t have to take responsibility for installing software patches or updates, and can focus on their jobs.

There are lots of great use cases for these devices! First, there are situations where there’s a long-term need for regular access: call centers, task workers, training centers, and so forth. Second, there are other situations, where there’s a transient or short-term need for access: registration systems at large events, call centers stood up on a temporary basis for a special event or an emergency, disaster response, and the like. Given that some employees do not return laptops to their employers when they leave their job, providing them with inexpensive devices that do not have local storage makes a lot of sense.

Let’s walk through the process of getting set up, first as an administrator and then as a user.

Getting started as an administrator
The first step is to order some devices and have them shipped to my users, along with any desired peripherals.

Next, in my position as administrator, I open the Amazon WorkSpaces Thin Client Console, and click Get started:

Each Amazon WorkSpaces Thin Client environment provides access to a specific virtual desktop service (WorkSpaces, WorkSpaces Web, or Amazon AppStream 2.0). I click Create environment to move ahead:

I give my environment a name, indicate that I want patches applied automatically, and select WorkSpaces Web:

Next, I click Create WorkSpaces Web portal and go through those steps (not shown, basically choosing a VPC and two or more subnets, a security group, and an optional Private Certificate Authority):

I refresh, and my new portal is visible. I select it, enter a tag for tracking, and click Create environment:

My environment is ready right away. I keep a copy of the activation code (aci3a5yj) at hand to use later in the post:

I am using AWS IAM Identity Center as my identity provider. I already set up my first user, and assigned myself to the MyWebPortal app (the precise steps that you take to do this will vary depending on your choice of identity provider):

Finally, as my last step in this process in my role as administrator, I share the activation code with my users (that would be me, in this case).

Getting started as a user
In my role as a user I return to my testing setup, power-on, go through a couple of quick steps to select my keyboard and connect to my home Wi-Fi, and enter my activation code:

Then I sign in using my AWS IAM Identity Center user name and password:

And my WorkSpace is ready to use:

Administrator tools
As an administrator, I can manage environments, devices, and device software updates from the Thin Client Console. For example, I can review the list of devices that I manage:

Things to know
Here are a couple of things that are important to know:

Regions – The Thin Client Console is available in the US East (N. Virginia), US West (Oregon), Asia Pacific (Mumbai), Canada (Central), and Europe (Frankfurt, Ireland, London) Regions.

Device Sales – The Amazon WorkSpaces Thin Clients are available in the United States now, with availability in other countries in early 2024.

Pricing – Devices are priced at $195, or $280 with an optional hub that allows you to use a second monitor. There’s a $6 per month fee to manage, maintain, and monitor each device, and you also pay for the underlying virtual desktop service.

Learn more
Visit the WorkSpaces Thin Client web page and Amazon Business Marketplace to learn more.

Jeff;

Introducing Amazon EC2 high memory U7i Instances for large in-memory databases (preview)

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/introducing-amazon-ec2-high-memory-u7i-instances-for-large-in-memory-databases-preview/

The new U7i instances are designed to support large, in-memory databases including SAP HANA, Oracle, and SQL Server. Powered by custom fourth-generation Intel Xeon Scalable processors (Sapphire Rapids), the instances are now available in preview form in the US West (Oregon), Asia Pacific (Seoul), and Europe (Frankfurt) AWS Regions, as follows:

Instance Name        vCPUs  Memory (DDR5)  EBS Bandwidth  Network Bandwidth
u7in-16tb.224xlarge  896    16,384 GiB     100 Gbps       100 Gbps
u7in-24tb.224xlarge  896    24,576 GiB     100 Gbps       100 Gbps
u7in-32tb.224xlarge  896    32,768 GiB     100 Gbps       100 Gbps

We are also working on a smaller instance:

Instance Name       vCPUs  Memory (DDR5)  EBS Bandwidth  Network Bandwidth
u7i-12tb.224xlarge  896    12,288 GiB     60 Gbps        100 Gbps

Here’s what 32 TiB of memory looks like:

And here are the 896 vCPUs (and lots of other info):

When compared to the first generation of High Memory instances, the U7i instances offer up to 125% more compute performance and up to 120% more memory performance. They also provide 2.5x as much EBS bandwidth, giving you the ability to hydrate in-memory databases at a rate of up to 44 terabytes per hour.

Each U7i instance supports attachment of up to 128 General Purpose (gp2 and gp3) or Provisioned IOPS (io1 and io2 Block Express) EBS volumes. Each io2 Block Express volume can be as big as 64 TiB and can deliver up to 256K IOPS at up to 32 Gbps, making them a great match for the U7i instance.
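For example, here is a minimal sketch of provisioning a maximum-size io2 Block Express volume and attaching it to a U7i instance; the Availability Zone, volume ID, and instance ID are placeholders:

# Create a 64 TiB io2 Block Express volume with 256,000 provisioned IOPS
$ aws ec2 create-volume \
  --volume-type io2 \
  --size 65536 \
  --iops 256000 \
  --availability-zone us-west-2a

# Attach it to the U7i instance
$ aws ec2 attach-volume \
  --volume-id vol-0123456789abcdef0 \
  --instance-id i-0123456789abcdef0 \
  --device /dev/sdf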

On the network side, the instances support ENA Express and deliver up to 25 Gbps of bandwidth per network flow.

Supported operating systems include Red Hat Enterprise Linux and SUSE Enterprise Linux Server.

Join the Preview
If you are ready to put the U7i instances to the test in your environment, join the preview.

Jeff;

Use anomaly detection with AWS Glue to improve data quality (preview)

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/use-anomaly-detection-with-aws-glue-to-improve-data-quality-preview/

We are launching a preview of a new AWS Glue Data Quality feature that will help to improve your data quality by using machine learning to detect statistical anomalies and unusual patterns. You get deep insights into data quality issues, data quality scores, and recommendations for rules that you can use to continuously monitor for anomalies, all without having to write any code.

Data quality counts
AWS customers already build data integration pipelines to extract and transform data. They set up data quality rules to ensure that the resulting data is of high quality and can be used to make accurate business decisions. In many cases, these rules assess the data based on criteria that were chosen and locked in at a specific point in time, reflecting the current state of the business. However, as the business environment changes and the properties of the data shift, the rules are not always reviewed and updated.

For example, a rule could be set to verify that daily sales are at least ten thousand dollars for an early-stage business. As the business succeeds and grows, the rule should be checked and updated from time to time, but in practice this rarely happens. As a result, if there’s an unexpected drop in sales, the outdated rule does not activate, and no one is happy.

Anomaly detection in action
To detect unusual patterns and to gain deeper insights into data, organizations try to create their own adaptive systems or turn to costly commercial solutions that require specific technical skills and specialized business knowledge.

To address this widespread challenge, Glue Data Quality now makes use of machine learning (ML).

Once activated, this cool new addition to Glue Data Quality gathers statistics as fresh data arrives, using ML and dynamic thresholds to learn from past patterns while looking for outliers and unusual data patterns. This process produces observations and also visualizes trends so that you can quickly gain a better understanding of the anomaly.

You will also get rule recommendations as part of the Observations, and you can easily and progressively add them to your data pipelines. Rules can enforce an action such as stopping your data pipelines. In the past, you could only write static rules. Now, you can write Dynamic rules that have auto-adjusting thresholds and AnomalyDetection Rules that grasp recurring patterns and spot deviations. When you use rules as part of data pipelines, they can stop the data flow so that a data engineer can review, fix and resume.

To use anomaly detection, I add an Evaluate Data Quality node to my job:

I select the node and click Add analyzer to choose a statistic and the columns:

Glue Data Quality learns from the data to recognize patterns and then generates observations that will be shown in the Data quality tab:

And a visualization:

After I review the observations I add new rules. The first one sets adaptive thresholds that check that the row count is between the smallest of the last 10 runs and the largest of the last 20 runs. The second one looks for unusual patterns, for example RowCount being abnormally high on weekends:

Join the preview
This new capability is available in preview in the following AWS Regions: US East (Ohio), US East (N. Virginia), US West (Oregon), Asia Pacific (Tokyo), and Europe (Ireland). To learn more, read Data Quality Anomaly Detection.

Stay tuned for a detailed blog post when this feature launches!

Learn more

Data Quality Anomaly Detection

Jeff;

Use Amazon CloudWatch to consolidate hybrid, multicloud, and on-premises metrics

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-use-amazon-cloudwatch-to-consolidate-hybrid-multi-cloud-and-on-premises-metrics/

You can now consolidate metrics from your hybrid, multicloud, and on-premises data sources using Amazon CloudWatch and process them in a consistent, unified fashion. You can query, visualize, and alarm on any and all of the metrics, regardless of their source. In addition to giving you a unified view, this new feature will help you to identify trends and issues that span multiple parts and aspects of your infrastructure.

When I first heard about this new feature, I thought, “Wait, I can do that with PutMetricData, what’s the big deal?” Quite a bit, as it turns out. PutMetricData stores the metrics in CloudWatch, but this cool new feature fetches them on demand, directly from the source.

Instead of storing data, you select and configure connectors that pull data from Amazon Managed Service for Prometheus, generic Prometheus, Amazon OpenSearch Service, Amazon RDS for MySQL, Amazon RDS for PostgreSQL, CSV files stored in Amazon Simple Storage Service (Amazon S3), and Microsoft Azure Monitor. Each connector is an AWS Lambda function that is deployed from an AWS CloudFormation template. CloudWatch invokes the appropriate Lambda functions as needed and makes use of the returned metrics immediately; they are not buffered or kept around.

Creating and using connectors
To get started, I open the CloudWatch Console, click All metrics, activate the Multi source query tab, and then click Create and manage data sources:

And then I do it again:

Then I choose a data source type:

CloudWatch will then prompt me for the details that it needs to create and set up the connector for my data source. For example, if I select Amazon RDS – MySQL, I give my data source a name, choose the RDS database instance, and specify the connection info:

When I click Create data source, a Lambda function, a Lambda Permission, an IAM role, a Secrets Manager Secret, a Log Group, and an AWS CloudFormation Stack will be created in my account:

Then, when I am ready to reference the data source and make use of the metrics that it provides, I enter a SQL query that returns timestamps and values for the metric:

Inside the Lambda function
The code for the Custom – getting started template is short, simple, and easy to understand. It implements handlers for two events:

DescribeGetMetricData – This handler returns a string that includes the name of the connector, default values for the arguments to the other handler, and a text description in Markdown format that is displayed in the custom data source query builder in the CloudWatch console.

GetMetricData – This handler returns a metric name and one-dimensional arrays of timestamps and metric values, all for a time range that is provided as arguments to the handler.

If you spend a few minutes examining this code you should be able to see how to write functions to connect to your own data sources.
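Here is a minimal sketch of what such a function can look like. I am showing the general shape only; the actual event and response field names (EventType, StartTime, MetricName, and so forth) come from the getting-started template, so treat the ones below as assumptions:

def lambda_handler(event, context):
    # The connector is invoked for two kinds of requests; the field name used
    # to tell them apart here is an assumption based on the template.
    if event.get("EventType") == "DescribeGetMetricData":
        return {
            "Description": "Demo connector that returns a constant metric named demo_metric."
        }

    # Otherwise, treat the event as a GetMetricData request for a time range
    # (the start, end, and period field names are also assumptions).
    start = int(event["StartTime"])
    end = int(event["EndTime"])
    period = int(event.get("Period", 60))

    timestamps = list(range(start, end, period))
    values = [42.0] * len(timestamps)   # replace with a query against your own data source

    return {
        "MetricName": "demo_metric",
        "Timestamps": timestamps,
        "Values": values,
    }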

Things to know
Here are a couple of things to keep in mind about this powerful new feature:

Regions – You can create and use data connectors in all commercial AWS Regions; a connector that is running in one region can connect to and retrieve data from services and endpoints in other regions and other AWS accounts.

Pricing – There is no extra charge for the connectors. You pay for the invocations of the Lambda functions and for any other AWS infrastructure that you create.

Jeff;

Announcing cross-region data replication for Amazon WorkSpaces

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/cross-region-data-replication-for-amazon-workspaces/

You can now use cross-region data replication to provide business continuity for your Amazon WorkSpaces users. Snapshots are taken every 12 hours, replicated to the desired destination region, and are used to provide a recovery point objective (RPO) of 12-24 hours.

Multi-Region Resilience Review
In her 2022 post Advancing business continuity with Amazon WorkSpaces Multi-Region Resilience, my colleague Ariel introduced you to the initial version of this feature and showed you how to use it to set up standby virtual desktops for your users. After it has been set up, users log in with Amazon WorkSpaces registration codes that include fully qualified domain names (FQDNs). If the WorkSpaces in the primary region are unavailable, the users are redirected to standby WorkSpaces in the secondary region.

The standby WorkSpaces are available for a small, fixed monthly fee for infrastructure and storage, with a low flat rate charge for each hour of usage during the month. Together, this feature and this business model make it easy and economical for you to maintain a standby deployment.

Cross-Region Data Replication
Today we are making this feature even more powerful by adding one-way cross-region data replication. Applications, documents, and other resources stored on the primary WorkSpace are snapshotted every 12 hours and copied to the region hosting the secondary WorkSpace. You get an additional layer of redundancy and enhanced data protection, and you can minimize the productivity that would otherwise be lost to disruptions. This is particularly helpful if users have installed and configured applications on top of the base image, since they won't have to repeat these steps on the secondary WorkSpace.

Here’s how it all works:

Normal Operation – During normal operation, the users in your WorkSpaces fleet are using the primary region. EBS snapshots of the system (C:) and data (D:) drives are created every 12 hours. Multi-Region Resilience runs in the secondary region and checks for fresh snapshots regularly. When it finds them, it initiates a copy to the secondary region. As the copies arrive in the secondary region they are used to update the secondary WorkSpace.

Failover Detection – As part of the setup process, you will follow the steps in Configure your DNS service and set up DNS routing policies to create DNS routing policies and optional Amazon Route 53 health checks that manage cross-Region redirection.

Failover – If a large-scale event (LSE) affects the primary region and the primary WorkSpace, the failover detection that I just described goes into effect. When users try to reconnect, they are redirected to the secondary region, the latest snapshots are used to launch a WorkSpace for them, and they are back up and running with access to data and apps that are between 12 and 24 hours old.

Failback – At the conclusion of the LSE, the users manually back up any data that they have created on the secondary WorkSpace and log out of it. Then they log in again, and this time they will be directed to the primary region and WorkSpace, where they can restore their backups and continue to work.

Getting Set Up
As a WorkSpaces administrator, I start by locating the desired primary WorkSpace:

I select it and choose Create Standby WorkSpaces from the Actions menu:

I select the desired region for the secondary WorkSpace and click Next:

Then I choose the right directory in the region, and again click Next:

If the primary WorkSpace is encrypted, I must enter the ARN of the KMS key in the secondary region (or, even better, use a multi-Region key). I check Enable data replication and confirm that I am authorizing an additional monthly charge:

On the next page I review my choices and click Create to initiate the creation of the secondary WorkSpace in the region that I selected.
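This step can also be scripted. Here is a rough boto3 sketch of the equivalent call; the WorkSpace and directory IDs are placeholders, the sketch assumes the call is made in the standby Region, and the DataReplication value is my assumption about how to opt in to replication programmatically, so check the CreateStandbyWorkspaces API reference before using it:

import boto3

# Call the WorkSpaces API in the secondary (standby) Region; PrimaryRegion
# identifies the Region that hosts the primary WorkSpace.
workspaces = boto3.client("workspaces", region_name="us-west-2")

response = workspaces.create_standby_workspaces(
    PrimaryRegion="us-east-1",
    StandbyWorkspaces=[
        {
            "PrimaryWorkspaceId": "ws-0123456789abcdef0",   # placeholder primary WorkSpace ID
            "DirectoryId": "d-0123456789",                  # directory in the standby Region
            "DataReplication": "PRIMARY_AS_SOURCE",         # assumed value that enables replication
        }
    ],
)
print(response)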

As mentioned earlier I also need to set up Multi-Region Resilience. This includes setting up a domain name to use as a WorkSpaces registration code, setting up Route 53 health checks, and using them to power routing policies.

Things to Know
Here are a couple of important things to know about Cross-Region Data Replication:

Directories – You can use self-managed Active Directory, AWS Managed Microsoft AD, or AD Connector, configured as described in this post. Simple AD is not supported.

Snapshots – The first-ever EBS snapshot for a particular data volume is full, and subsequent snapshots are incremental. As a result, the first replication for a given WorkSpace will likely take longer than subsequent ones. Snapshots are initiated on a schedule that is internal to WorkSpaces and you cannot control the timing.

Encryption – You can use this feature with Encrypted WorkSpaces as long as you use the same AWS Key Management Service (AWS KMS) keys in the primary and secondary regions. You can also use multi-Region keys.

Bundles – You can use the Windows 10 and Windows 11 bundles, and you can also bring your own license (BYOL).

Accounts – Every AWS account has a fixed limit on the number of pending EBS snapshots. This may affect your ability to use this feature with large fleets of WorkSpaces.

Pricing – You pay a fixed monthly fee based on the amount of storage configured for each primary WorkSpace. See the Amazon WorkSpaces Pricing page for more information.

Jeff;

New – Long-Form voices for Amazon Polly

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-long-form-voices-for-amazon-polly/

We are launching three new voices for Polly. Powered by a new long-form engine, the voices are natural and expressive, with appropriate pauses, emphasis, and tone.

New Voices
The new long-form voices are perfect for blog posts, news articles, training videos, and marketing content. The long-form engine uses a deep learning text-to-speech (TTS) model that is trained to acquire a contextual understanding of the text: it extracts meaning, learning about speech segments, prosody (the pattern of rhythm and pauses), intonation, and other aspects of expressive speech. This allows the intention of the story to drive the vocal performance, with the correct emphasis, pauses, and tone of a realistic human voice, and lets the synthesized audio express emotion, especially in dialog.

Here are the new voices:

Name       Locale   Gender   Language
Danielle   en_US    Female   English (US)
Gregory    en_US    Male     English (US)
Ruth       en_US    Female   English (US)

Using the New Voices
You can access the new voices using the AWS Management Console, AWS Command Line Interface (AWS CLI), or the AWS SDKs. Using the CLI, I start by listing the voices that use the new long-form engine:

$ aws --region us-east-1 polly describe-voices --output json \
  | jq -r '.Voices[] | select(.SupportedEngines | index("long-form")) | .Name'
Danielle
Gregory
Ruth

I can pick one, or I can try all of them:

for v in `aws polly describe-voices --output json \
          | jq -r '.Voices[] | select(.SupportedEngines | index("long-form")) | .Name'`; do
    Text="Hello my name is $v and I can read blog posts, articles, \
and other long-form content for you. I am the best\!"
    aws polly synthesize-speech --output-format 'mp3' \
    --text "$Text" --voice-id $v $v.mp3 --engine long-form; \
    aws s3 cp $v.mp3 s3://jbarr-voices; \
done

My shell script had a small quoting bug, but the resulting audio was too funny not to include!

Programmatically, you can reproduce my example by writing code that calls the DescribeVoices and SynthesizeSpeech functions.
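For example, here is a short boto3 sketch along those lines (the voice and output file name are just examples):

import boto3

polly = boto3.client("polly", region_name="us-east-1")

# List the voices that support the new long-form engine.
voices = polly.describe_voices(Engine="long-form")["Voices"]
print([voice["Name"] for voice in voices])

# Synthesize a short passage with one of them and save the MP3 locally.
result = polly.synthesize_speech(
    Engine="long-form",
    VoiceId="Danielle",
    OutputFormat="mp3",
    Text="Hello, I can read blog posts, articles, and other long-form content for you.",
)
with open("danielle.mp3", "wb") as audio_file:
    audio_file.write(result["AudioStream"].read())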

Things to Know
Here are some interesting things that you should know about the new voices:

Pricing – Long-form voices are priced at $100 per million characters or Speech Marks requests. Check out the Amazon Polly Pricing page to learn more.

Engines & Voices – Some of the voices that I listed above can be used with more than one engine. For example, the Danielle voice can be used with the new long-form engine and the existing neural engine.

Regions – The new engine and voices are available in the US East (N. Virginia) Region.

Check out the new voices, build something awesome, and let me know what you think!

Jeff;

Build AI apps with PartyRock and Amazon Bedrock

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/build-ai-apps-with-partyrock-and-amazon-bedrock/

If you are ready to learn more about Generative AI while having fun and building cool stuff, check out PartyRock.aws. You can experiment, learn all about prompt engineering, build mini-apps, and share them with your friends — all without writing any code or creating an AWS account. You can also start with an app that has been shared with you and remix it to further enhance and customize it.

Using PartyRock
To get started, I visit https://partyrock.aws/, click Sign in, and log in using my Apple, Amazon, or Google account:

After authenticating, I am on the front page of PartyRock. I can review some sample apps, or I can click Build your own app to get started:

I can enter a description of the app that I want to build and use PartyRock’s Generative AI to get a running start, or I can build it myself on a widget-by-widget basis:

I am knee-deep in blog posts for AWS re:Invent right now. While most of my colleagues patiently wait for their draft to be ready, a few of them are impatient and keep asking me (this is the adult version of “Are we there yet?”). While I try to maintain a sense of humor about this, I sometimes get just a bit snarky with them. Let’s see if PartyRock can help. I enter my prompt and click Generate App:

My app (Snarky Patient Blogger) is ready within a few seconds, and I enter some input to see if the output has sufficient snark for my needs:

That looks great, so let’s take it apart and see how it works!

The application (Snarky Patient Blogger) has two widgets: User Input and Snarky Response. I click the Edit icon on the first widget, and see that it has a title, loading text, and a default value. The title allows widgets to reference each other by name:

The second widget, Snarky Response, encapsulates a call to the Amazon Bedrock InvokeModel function. It specifies the Claude v2 model and a simple prompt that references the User Input widget. I can experiment by changing either one, saving the change, and waiting a second or two for the result. For example, changing the model to Claude Instant gives me a slightly different response:
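Under the hood, that is roughly equivalent to invoking the model yourself with the Amazon Bedrock runtime API. Here is a hedged boto3 sketch; the prompt wording is made up, and the request body follows the Claude v2 text-completion format as I understand it:

import json
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

# Send the user input to Claude v2, wrapped in a prompt similar to the widget's.
user_input = "When will my blog post draft be ready?"
body = {
    "prompt": "\n\nHuman: Reply to this question with a snarky but good-natured answer: "
              + user_input + "\n\nAssistant:",
    "max_tokens_to_sample": 300,
}

response = bedrock.invoke_model(
    modelId="anthropic.claude-v2",
    body=json.dumps(body),
)
print(json.loads(response["body"].read())["completion"])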

Now I want a visual representation of the reply. I’ll use a Text Generation widget to find the most important nouns in the response, and an Image Generation widget to visualize the results. I add the first widget and use a simple prompt:

I test it by clicking the Retry icon and the output looks perfect:

I add the Image Generation widget, and fiddle with the prompt a bit. After a minute or two I have what I want:

The finished app looks like this:

Once I am happy with the app, I can Make public and Share:

The finished app is at https://partyrock.aws/u/jeffbarr/E-FXPUkO7/Snarky-Patient-Blogger and you are welcome to play with it. You can also log in and click Remix to use it as the starting point for an even better app of your own.

But Wait, There’s More
As is usually the case with my blog posts, I did not have room to show you every last feature in detail. Here are a few that I skipped:

Empty App – I used the App Builder in my example, but I can also choose Start from an Empty App, select my widgets, and set them up as desired:

Remix – I can start with an existing app (mine or another public one) and Remix it to customize or enhance it:

Chatbot Widget – I can interact with my app using a prompt as a starting point:

@ Referencing – I can use the “@” to reference other widgets by name while I am building my app:

Advanced Settings – Some of the widgets offer advanced settings. For example, the Text Generation widget gives me the option to control the Temperature and Top P parameters to the model:

Backstage – The PartyRock Backstage lets me see my apps and my cumulative consumption of PartyRock credits:

Things to Know
Here are a couple of things that you should know about PartyRock:

Pricing – For a limited time, AWS offers new PartyRock users a free trial without the need to provide a credit card or sign up for an AWS account, so that you can begin learning fundamental skills without the worry of incurring costs. You can track your credit consumption in the Backstage, as shown above. Credit usage is calculated based on your input tokens, output tokens, and generated images; for full details read the Billing and Support section of the PartyRock FAQ.

Model Access – We plan to give you access to additional models over time.

In the Works – We are working on even more widgets and features, so stay tuned for more information.

Learning Resources – To learn more, check out these resources:

Party Time
The next step is up to you. Log in to PartyRock, create something cool, and share it with everyone you know. Let me know what you come up with!

Jeff;

New – Amazon EBS Snapshot Lock

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-amazon-ebs-snapshot-lock/

You can now lock individual Amazon Elastic Block Store (Amazon EBS) snapshots in order to enforce better compliance with your data retention policies. Locked snapshots cannot be deleted until the lock is expired or released, giving you the power to keep critical backups safe from accidental or malicious deletion, including ransomware attacks.

The Need for Locking
AWS customers use EBS snapshots for backups, disaster recovery, data migration, and compliance. Customers in financial services and health care often need to meet specific compliance requirements, with prescribed time frames for retention, and also need to ensure that the snapshots are truly Write Once Read Many (WORM). In order to meet these requirements, customers have implemented solutions that use multiple AWS accounts with one-way “air gaps” between them.

EBS Snapshot Lock
The new EBS Snapshot Lock feature helps you to meet your retention and compliance requirements without the need for custom solutions. You can lock new and existing EBS snapshots using a lock duration that can range from one day to about 100 years. The snapshot is locked for the specified duration and cannot be deleted.

There are two lock modes:

Governance – This mode protects snapshots from deletions by all users. However, with the proper IAM permissions, the lock duration can be extended or shortened, the lock can be deleted, and the mode can be changed from Governance mode to Compliance mode.

Compliance – This mode protects snapshots from actions by the root user and all IAM users. After a cooling-off period of up to 72 hours, neither the snapshot nor the lock can be deleted until the lock duration expires, and the mode cannot be changed. With the proper IAM permissions the lock duration can be extended, but it cannot be shortened.

Snapshots in either mode can still be shared or copied. They can be archived to the low-cost Amazon EBS Snapshots Archive tier, and locks can be applied to snapshots that have already been archived.

Using Snapshot Lock
From the EBS Console I select a snapshot (Snap-Monthly-2023-09) and choose Manage snapshot lock from Snapshot Settings in the Actions menu:

This is a monthly snapshot and I want to lock it for one year. I choose Governance mode and select the duration, then click Save lock settings:

I try to delete it, and the deletion fails, as it should:

Now I would like to lock one of my annual snapshots for 5 years, using Compliance mode this time:

I set my cooling-off period to 24 hours, just in case I change my mind. Perhaps I have to run some kind of audit or final date validation on the snapshot before committing to keeping it around for five years.

Programmatically, I can use new API functions to establish and control locks on my EBS snapshots:

LockSnapshot – Lock a snapshot in governance or compliance mode, or modify the settings of a snapshot that is already locked.

UnlockSnapshot – Unlock a snapshot that is in governance mode, or is in compliance mode but still within the cooling-off period.

DescribeLockedSnapshots – Get information about the lock status of my snapshots, with optional filtering based on the state of the lock.

IAM users must have the appropriate permissions (ec2:LockSnapshot, ec2:UnlockSnapshot, and ec2:DescribeLockedSnapshots) in order to use these functions.
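Here is a hedged boto3 sketch of those calls; the snapshot ID is a placeholder and the parameter names reflect my reading of the API, so double-check them against the EC2 API reference:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Lock a snapshot in governance mode for one year (the duration is in days).
ec2.lock_snapshot(
    SnapshotId="snap-0123456789abcdef0",
    LockMode="governance",
    LockDuration=365,
)

# Governance-mode locks can be released before they expire.
ec2.unlock_snapshot(SnapshotId="snap-0123456789abcdef0")

# Review the lock status of my snapshots.
for snapshot in ec2.describe_locked_snapshots()["Snapshots"]:
    print(snapshot["SnapshotId"], snapshot.get("LockState"))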

Things to Know
Here are a couple of things to keep in mind about this new feature:

AWS Backup – AWS Backup independently manages retention for the snapshots that it creates. We do not recommend locking them.

Pricing – There is no extra charge for the use of this feature. You pay the usual rates for storage of snapshots and archived snapshots.

Regions – EBS Snapshot Locking is available in all commercial AWS Regions.

KMS Key Retention – If you are using customer-managed AWS Key Management Service (AWS KMS) keys to encrypt your EBS volumes and snapshots, you need to make sure that the key will remain valid for the lifetime of the snapshot.

Jeff;

New – Block Public Sharing of Amazon EBS Snapshots

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-block-public-sharing-of-amazon-ebs-snapshots/

You now have the ability to disable public sharing of new, and optionally existing, Amazon Elastic Block Store (Amazon EBS) snapshots on a per-region, per-account basis. This provides you with another level of protection against accidental or inadvertent data leakage.

EBS Snapshot Review
You have had the power to create EBS snapshots since the launch of EBS in 2008, and have been able to share them privately or publicly since 2009. The vast majority of snapshots are kept private and are used for periodic backups, data migration, and disaster recovery. Software vendors use public snapshots to share trial-use software and test data.

Block Public Sharing
EBS snapshots have always been private by default, with the option to make individual snaps public as needed. If you do not currently use and do not plan to use public snapshots, you can now disable public sharing using the AWS Management Console, AWS Command Line Interface (AWS CLI), or the new EnableSnapshotBlockPublicAccess function. Using the Console, I visit the EC2 Dashboard and click Data protection and security in the Account attributes box:

Then I scroll down to the new Block public access for EBS snapshots section, review the current status, and click Manage:

I click Block public access, choose Block all public sharing, and click Update:

This is a per-region setting, and it takes effect within minutes. I can see the updated status in the console:

I inspect one of my snapshots in the region, and see that I cannot share it publicly:

As you can see, I still have the ability to share the snapshot with specific AWS accounts.

If I have chosen Block all public sharing, any snapshots that I have previously shared will no longer be listed when another AWS customer calls DescribeSnapshots in pursuit of publicly accessible snapshots.
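If you prefer to automate the setting, here is a short boto3 sketch; the state values match what I see in the console, but verify them against the API reference:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Block all public sharing of EBS snapshots in this Region (this also hides
# snapshots that were previously shared publicly).
ec2.enable_snapshot_block_public_access(State="block-all-sharing")

# Confirm the Region-level setting.
print(ec2.get_snapshot_block_public_access_state()["State"])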

Things to Know
Here are a couple of really important things to know about this new feature:

Region-Level – This is a regional setting, and must be applied in each region where you want to block the ability to share snapshots publicly.

API Functions & IAM Permissions – In addition to EnableSnapshotBlockPublicAccess, other functions for managing this feature include DisableSnapshotBlockPublicAccess and GetSnapshotBlockPublicAccessState. To use these functions (or their console/CLI equivalents) you must have the ec2:EnableSnapshotBlockPublicAccess, ec2:DisableSnapshotBlockPublicAccess, and ec2:GetSnapshotBlockPublicAccessState IAM permissions.

AMIs – This does not affect the ability to share Amazon Machine Images (AMIs) publicly. To learn how to manage public sharing of AMIs, visit Block public access to your AMIs.

Jeff;

New – Create application-consistent snapshots using Amazon Data Lifecycle Manager and custom scripts

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-create-application-consistent-snapshots-using-amazon-data-lifecycle-manager-and-custom-scripts/

Amazon Data Lifecycle Manager now supports the use of pre-snapshot and post-snapshot scripts embedded in AWS Systems Manager documents. You can use these scripts to ensure that Amazon Elastic Block Store (Amazon EBS) snapshots created by Data Lifecycle Manager are application-consistent. Scripts can pause and resume I/O operations, flush buffered data to EBS volumes, and so forth. As part of this launch we are also publishing a set of detailed blog posts that show you how to use this feature with self-managed relational databases and Windows Volume Shadow Copy Service (VSS).

Data Lifecycle Manager (DLM) Recap
As a quick recap, Data Lifecycle Manager helps you to automate the creation, retention, and deletion of Amazon EBS volume snapshots. Once you have completed the prerequisite steps such as onboarding your EC2 instance to AWS Systems Manager, setting up an IAM role for DLM, and tagging your SSM documents, you simply create a lifecycle policy and indicate (via tags) the applicable Amazon Elastic Compute Cloud (Amazon EC2) instances, set a retention model, and let DLM do the rest. The policies specify when they are to be run, what is to be backed up, and how long the snapshots must be retained. For a full walk-through of DLM, read my 2018 blog post, New – Lifecycle Management for Amazon EBS Snapshots.

Application Consistent Snapshots
EBS snapshots are crash-consistent, meaning that they represent the state of the associated EBS volume at the time that the snapshot was created. This is sufficient for many types of applications, but not for those that use snapshots to capture the state of an active relational database. To make a snapshot that is application-consistent, it is necessary to take pending transactions into account (either waiting for them to finish or causing them to fail), momentarily pause further write operations, take the snapshot, and then resume normal operations.

And that’s where today’s launch comes in. DLM now has the ability to tell the instance to prepare for an application-consistent backup. The pre-snapshot script can manage pending transactions, flush in-memory data to persistent storage, freeze the filesystem, or even bring the application or database to a stop. Then the post-snapshot script can bring the application or database back to life, reload in-memory caches from persistent storage, thaw the filesystem, and so forth.

In addition to the base-level support for custom scripts, you can also use this feature to automate the creation of VSS Backup snapshots:

Pre and Post Scripts
The new scripts apply to DLM policies for instances. Let’s assume that I have created a policy that references SSM documents with pre-snapshot and post-snapshot scripts, and that it applies to a single instance. Here’s what happens when the policy is run per its schedule:

  1. The pre-snapshot script is started from the SSM document.
  2. Each command in the script is run and the script-level status (success or failure) is captured. If enabled in the policy, DLM will retry failed scripts.
  3. Multi-volume EBS snapshots are initiated for EBS volumes attached to the instance, with further control via the policy.
  4. The post-snapshot script is started from the SSM document.
  5. Each command in the script is run and the script-level status (success or failure) is captured.

The policy contains options that give you control over the actions that are taken (retry, continue, or skip) when either of the scripts times out or fails. The status is logged, Amazon CloudWatch metrics are published, Amazon EventBridge events are emitted, and the status is also encoded in tags that are automatically assigned to each snapshot.

The pre-snapshot and post-snapshot scripts can perform any of the actions that are allowed in a command document: running shell scripts, running PowerShell scripts, and so forth. The actions must complete within the timeout specified in the policy, with an allowable range of 10 seconds to 120 seconds.

Getting Started
You will need to have a detailed understanding of your application or database in order to build a robust pair of scripts. In addition to handling the “happy path” when all goes well, your scripts need to plan for several failure scenarios. For example, a pre-snapshot script should fork a background task that will serve as a failsafe in case the post-snapshot script does not work as expected. Each script must return a shell-level status code, as detailed here.
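As a deliberately simplified illustration, here is a boto3 sketch that packages freeze and thaw commands into a Systems Manager command document. The document name, filesystem path, and parameter handling are all made up for this example, and the real DLM integration expects the structure and parameters from the template that DLM provides, so treat this only as a sketch of the kind of work the scripts do:

import json
import boto3

# A simplified command document: freeze a filesystem before the snapshot
# ("pre") and thaw it afterward ("post"); a non-zero exit code signals failure.
document = {
    "schemaVersion": "2.2",
    "description": "Sketch of pre/post snapshot actions (freeze and thaw a filesystem).",
    "parameters": {
        "action": {"type": "String", "allowedValues": ["pre", "post"]}
    },
    "mainSteps": [
        {
            "action": "aws:runShellScript",
            "name": "freezeOrThaw",
            "inputs": {
                "runCommand": [
                    "set -e",
                    "if [ '{{ action }}' = 'pre' ]; then fsfreeze -f /data; else fsfreeze -u /data; fi",
                ]
            },
        }
    ],
}

ssm = boto3.client("ssm")
ssm.create_document(
    Name="sketch-pre-post-snapshot",
    DocumentType="Command",
    DocumentFormat="JSON",
    Content=json.dumps(document),
)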

Once I have written and tested my scripts and packaged them as SSM documents, I open the Data Lifecycle Manager page in the EC2 Console, select EBS snapshot policy, and click Next step:

I target all of my instances that are tagged with a Mode of Production, and use the default IAM role (if you use a different role, it must enable access to SSM), leave the rest of the values as-is, and proceed:

On the next page I scroll down to Pre and post scripts and expand the section. I click Enable pre and post scripts, choose Custom SSM document, and then select my SSM document from the menu. I also set the timeout and retry options, and choose to default to a crash-consistent backup if one of my scripts fails. I click Review policy, do one final check, and click Create policy on the following page:

My policy is created, and will take effect right away. After it has run at least once, I can inspect the CloudWatch metrics to check for starts, completions, and failures:

Additional Reading
Here are the first of the detailed blog posts that I promised you earlier:

We have more in the works for later this year and I will update the list above when they are published.

You can also read the documentation to learn more.

DLM Videos
While I’ve got your attention, I would like to share a couple of helpful videos with you:

This new feature is available now and you can start using it today!

Jeff;