All posts by Jeff Barr

In the Works – AWS Region in New Zealand

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/in-the-works-aws-region-in-new-zealand/

We are currently working on regions in Australia, India, Indonesia, Israel, Spain, Switzerland, and the United Arab Emirates.

Auckland, New Zealand in the Works
Today I am happy to announce that the new AWS Asia Pacific (Auckland) Region is in the works and will open in 2024. This region will have three Availability Zones and will give AWS customers in New Zealand the ability to run workloads and store data that must remain in-country.

There are 81 Availability Zones within 25 AWS Regions in operation today, with 24 more Availability Zones and eight announced regions (including this one) underway.

Each of the Availability Zones will be physically independent of the others in the region, close enough to support applications that need low latency, yet sufficiently distant to significantly reduce the risk that an AZ-level event will have an impact on business continuity. The AZs in this region will be connected together via high-bandwidth, low-latency network connections over dedicated, fully redundant fiber. This connectivity supports applications that need synchronous replication between AZs for availability or redundancy; you can take a peek at the AWS Global Infrastructure page to learn more about how we design and build regions and AZs.

AWS in New Zealand
According to an economic impact study (EIS) that we released as part of this launch, we estimate that our NZ$ 7.5 billion (5.3 billion USD) investment will create 1,000 new jobs and will have an estimated economic impact of NZ$ 10.8 billion (7.7 billion USD) over the next fifteen years.

The first AWS office in New Zealand opened in 2013 and now employs over 100 solution architects, account managers, sales representatives, professional services consultants, and cloud experts.

Other AWS infrastructure includes a pair of Amazon CloudFront edge locations in Auckland along with access to the AWS global backbone through multiple, redundant submarine cables. For more information about connectivity options, be sure to check out New Zealand Internet Connectivity to AWS.

Stay Tuned
We’ll announce the opening of this and the other regions in future blog posts, so be sure to stay tuned!

Jeff;

PS – The Amazon Polly Aria voice (New Zealand English) was launched earlier this year and should be of interest to New Zealanders. Visit the Amazon Polly Console to get started!

New – Amazon FSx for NetApp ONTAP

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-amazon-fsx-for-netapp-ontap/

Back in 2018 I wrote about the first two members of the Amazon FSx family of fully-managed, highly-reliable, and highly-performant file systems, Amazon FSx for Lustre and Amazon FSx for Windows File Server. Both of these services give you the ability to use popular open source and commercially-licensed file systems without having to deal with hardware provisioning, software configuration, patching, backups, and so forth. Since those launches, we have added many new features to both services in response to your requests:

Amazon FSx for Lustre now supports Persistent file systems with SSD- and HDD-based storage for longer-term storage and workloads, storage capacity scaling, crash-consistent backups, data compression, and storage quotas.

Amazon FSx for Windows File Server now supports many enterprise-ready features including Multi-AZ file systems, self-managed Active Directories, fine-grained file restoration, file access auditing, storage size and throughput capacity scaling, and a low cost HDD storage option.

Because these services support the file access and storage paradigms that are already well understood by Lustre and Windows File Server users, it is easy to migrate existing applications and to fine-tune existing operational regimens when you put them to use. While migration is important, so are new applications! All of the Amazon FSx systems make it easy for you to build applications that need high-performance fully managed storage along with the rich set of features provided by the file systems.

Amazon FSx for NetApp ONTAP
As I often tell you, we are always looking for more ways to meet the needs of our customers. To this end, we are launching Amazon FSx for NetApp ONTAP today. You get the popular features, performance, and APIs of ONTAP file systems with the agility, scalability, security, and resiliency of AWS, making it easier for you to migrate on-premises applications that rely on network-attached storage (NAS) appliances to AWS.

ONTAP (a NetApp product) is an enterprise data management offering designed to provide high-performance storage suitable for use with Oracle, SAP, VMware, Microsoft SQL Server, and so forth. ONTAP is flexible and scalable, with support for multi-protocol access and file systems that can scale up to 176 PiB. It supports a wide variety of features that are designed to make data management cheaper and easier including inline data compression, deduplication, compaction, thin provisioning, replication (SnapMirror), and point-in-time cloning (FlexClone).

FSx for ONTAP is fully managed so you can start to enjoy all of these features in minutes. AWS provisions the file servers and storage volumes, manages replication, installs software updates & patches, replaces misbehaving infrastructure components, manages failover, and much more. Whether you are migrating data from your on-premises NAS environment or building brand-new cloud native applications, you will find a lot to like! If you are migrating, you can enjoy all of the benefits of a fully-managed file system while taking advantage of your existing tools, workflows, processes, and operational expertise. If you are building brand-new applications, you can create a cloud-native experience that makes use of ONTAP’s rich feature set. Either way, you can scale to support hundreds of thousands of IOPS and benefit from the continued, behind-the-scenes evolution of the compute, storage, and networking components.

There are two storage tiers, and you can enable intelligent tiering to move data back and forth between them on an as-needed basis:

Primary Storage is built on high performance solid state drives (SSD), and is designed to hold the part of your data set that is active and/or sensitive to latency. You can provision up to 192 TiB of primary storage per file system.

Capacity Pool Storage grows and shrinks as needed, and can scale to pebibytes. It is cost-optimized and designed to hold data that is accessed infrequently.

Within each Amazon FSx for NetApp ONTAP file system you can create one or more Storage Virtual Machines (SVMs), each of which supports one or more Volumes. Volumes can be accessed via NFS, SMB, or as iSCSI LUNs for shared block storage. As you can see from this diagram, you can access each volume from AWS compute services, VMware Cloud on AWS, and from your on-premises applications:
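The same hierarchy is available through the API. Here’s a minimal boto3 sketch (the file system ID, names, and sizes are placeholders, and error handling is omitted) that creates an SVM and then an NFS-accessible volume inside an existing file system:

```python
import boto3

fsx = boto3.client("fsx")

# Create a Storage Virtual Machine inside an existing file system.
svm = fsx.create_storage_virtual_machine(
    FileSystemId="fs-0123456789abcdef0",   # placeholder file system ID
    Name="svm1",
    RootVolumeSecurityStyle="UNIX",
)
svm_id = svm["StorageVirtualMachine"]["StorageVirtualMachineId"]

# Create a volume in that SVM; JunctionPath is the path that NFS clients mount.
volume = fsx.create_volume(
    VolumeType="ONTAP",
    Name="vol2",
    OntapConfiguration={
        "StorageVirtualMachineId": svm_id,
        "JunctionPath": "/vol2",
        "SizeInMegabytes": 102400,          # 100 GiB
        "StorageEfficiencyEnabled": True,   # compression, deduplication, compaction
    },
)
print(volume["Volume"]["VolumeId"])
```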

If your on-premises applications are already making use of ONTAP in your own data center, you can easily create an ONTAP file system in the cloud, replicate your data using NetApp SnapMirror, and take advantage of all that Amazon FSx for NetApp ONTAP has to offer.

Getting Started with Amazon FSx for NetApp ONTAP
I can create my first file system from the command line, AWS Management Console, or the NetApp Cloud Manager. I can also make an API call or use a CloudFormation template. I’ll use the Management Console.

Each file system runs within a Virtual Private Cloud (VPC), so I start by choosing a VPC and a pair of subnets (preferred and standby). Every SVM has an endpoint in the Availability Zones associated with both of the subnets, with continuous monitoring, automated failover, and automated failback to ensure high availability.
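If you prefer the API route, here’s a hedged boto3 sketch of the equivalent call for a Multi-AZ file system; the subnet, security group, and capacity values below are placeholders rather than a recommended configuration:

```python
import boto3

fsx = boto3.client("fsx")

response = fsx.create_file_system(
    FileSystemType="ONTAP",
    StorageCapacity=1024,                               # GiB of SSD primary storage
    SubnetIds=["subnet-11111111", "subnet-22222222"],   # preferred and standby (placeholders)
    SecurityGroupIds=["sg-0123456789abcdef0"],          # placeholder
    OntapConfiguration={
        "DeploymentType": "MULTI_AZ_1",                 # endpoints in two Availability Zones
        "PreferredSubnetId": "subnet-11111111",
        "ThroughputCapacity": 512,                      # MB/s
    },
    Tags=[{"Key": "Name", "Value": "jb-fsx-ontap-1"}],
)
print(response["FileSystem"]["FileSystemId"])
```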

I open the Amazon FSx Console, click Create file system, select Amazon FSx for NetApp ONTAP, and click Next:

I can choose Quick create and use a set of best practices, or Standard create and set all of the options myself. I’ll go for the first option, since I can change all of the configuration options later if necessary. I select Quick create, enter a name for my file system (jb-fsx-ontap-1), and set the storage capacity in GiB. I also choose the VPC, and enable ONTAP’s storage efficiency features:

I confirm all of my choices, and note that this option will also create a Storage Virtual Machine (fsx) and a volume (vol1) for me. Then I click Create file system to “make it so”:

The file system Status starts out as Creating, then transitions to Available within 20 minutes or so:

My first SVM transitions from Pending to Created shortly thereafter, and my first volume transitions from Pending to Created as well. I can click on the SVM to learn more about it and to see the full set of management and access endpoints that it provides:

I can click Volumes in the left-side navigation and see all of my volumes. The root volume (fsx_root) is created automatically and represents all of the storage on the SVM:

I can select a volume and click Attach to get customized instructions for attaching it to an EC2 instance running Linux or Windows:

I can select a volume and then choose Update volume from the Action menu to change the volume’s path, size, storage efficiency, or tiering policy:

To learn more about the tiering policy, read about Amazon FSx for NetApp ONTAP Storage.
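The same change can be made through the API. Here’s a minimal boto3 sketch, with a placeholder volume ID, that switches a volume to the AUTO tiering policy so that cold data moves to capacity pool storage on its own:

```python
import boto3

fsx = boto3.client("fsx")

fsx.update_volume(
    VolumeId="fsvol-0123456789abcdef0",    # placeholder volume ID
    OntapConfiguration={
        "TieringPolicy": {
            "Name": "AUTO",                # tier cold blocks to capacity pool storage
            "CoolingPeriod": 31,           # days without access before data is tiered
        }
    },
)
```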

I can click Create volume and create additional volumes within any of my file systems:

There’s a lot more than I have space to show you, so be sure to open up the Console and try it out for yourself.

Things to Know
Here are a couple of things to know about Amazon FSx for NetApp ONTAP:

Regions – The new file system is available in most AWS regions and in GovCloud; check out the AWS Regional Service list for more information.

Pricing – Pricing is based on multiple usage dimensions including the Primary Storage, Capacity Pool Storage, throughput capacity, additional SSD IOPS, and backup storage consumption; consult the Amazon FSx for NetApp ONTAP Pricing page for more information.

Connectivity – You can use AWS Direct Connect to connect your on-premises applications to your new file systems. You can use Transit Gateway to connect to VPCs in other accounts and/or regions.

Availability – As I mentioned earlier, each file system is powered by AWS infrastructure in a pair of Availability Zones. Amazon FSx for NetApp ONTAP automatically replicates data between the zones and monitors the AWS infrastructure, initiating a failover (typically within 60 seconds), and then replacing infrastructure components as necessary. There’s a 99.99% availability SLA for each file system.

Jeff;

Happy 15th Birthday Amazon EC2

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/happy-15th-birthday-amazon-ec2/

Fifteen years ago today I wrote the blog post that launched the Amazon EC2 Beta. As I recall, the launch was imminent for quite some time as we worked to finalize the feature set, the pricing model, and innumerable other details. The launch date was finally chosen and it happened to fall in the middle of a long-planned family vacation to Cabo San Lucas, Mexico. Undaunted, I brought my laptop along on vacation, and had to cover it with a towel so that I could see the screen as I wrote. I am not 100% sure, but I believe that I actually clicked Publish while sitting on a lounge chair near the pool! I spent the remainder of the week offline, totally unaware of just how much excitement had been created by this launch.

Preparing for the Launch
When Andy Jassy formed the AWS group and began writing his narrative, he showed me a document that proposed the construction of something called the Amazon Execution Service and asked me if developers would find it useful, and if they would pay to use it. I read the document with great interest and responded with an enthusiastic “yes” to both of his questions. Earlier in my career I had built and run several projects hosted at various colo sites, and was all too familiar with the inflexible long-term commitments and the difficulties of scaling on demand; the proposed service would address both of these fundamental issues and make it easier for developers like me to address widely fluctuating changes in demand.

The EC2 team had to make a lot of decisions in order to build a service to meet the needs of developers, entrepreneurs, and larger organizations. While I was not part of the decision-making process, it seems to me that they had to make decisions in at least three principal areas: features, pricing, and level of detail.

Features – Let’s start by reviewing the features that EC2 launched with. There was one instance type, one region (US East (N. Virginia)), and we had not yet exposed the concept of Availability Zones. There was a small selection of prebuilt Linux kernels to choose from, and IP addresses were allocated as instances were launched. All storage was transient and had the same lifetime as the instance. There was no block storage and the root disk image (AMI) was stored in an S3 bundle. It would be easy to make the case that any or all of these features were must-haves for the launch, but none of them were, and our customers started to put EC2 to use right away. Over the years I have seen that this strategy of creating services that are minimal-yet-useful allows us to launch quickly and to iterate (and add new features) rapidly in response to customer feedback.

Pricing – While it was always obvious that we would charge for the use of EC2, we had to decide on the units that we would charge for, and ultimately settled on instance hours. This was a huge step forward when compared to the old model of buying a server outright and depreciating it over a 3 or 5 year term, or paying monthly as part of an annual commitment. Even so, our customers had use cases that could benefit from more fine-grained billing, and we launched per-second billing for EC2 and EBS back in 2017. Behind the scenes, the AWS team also had to build the infrastructure to measure, track, tabulate, and bill our customers for their usage.

Level of Detail – This might not be as obvious as the first two, but it is something that I regularly think about when I write my posts. At launch time I shared the fact that the EC2 instance (which we later called the m1.small) provided compute power equivalent to a 1.7 GHz Intel Xeon processor, but I did not share the actual model number or other details. I did share the fact that we built EC2 on Xen. Over the years, customers told us that they wanted to take advantage of specialized processor features and we began to share that information.

Some Memorable EC2 Launches
Looking back on the last 15 years, I think we got a lot of things right, and we also left a lot of room for the service to grow. While I don’t have one favorite launch, here are some of the more memorable ones:

EC2 Launch (2006) – This was the launch that started it all. One of our more notable early scaling successes took place in early 2008, when Animoto scaled their usage from less than 100 instances all the way up to 3400 in the course of a week (read Animoto – Scaling Through Viral Growth for the full story).

Amazon Elastic Block Store (2008) – This launch allowed our customers to make use of persistent block storage with EC2. If you take a look at the post, you can see some historic screen shots of the once-popular ElasticFox extension for Firefox.

Elastic Load Balancing / Auto Scaling / CloudWatch (2009) – This launch made it easier for our customers to build applications that were scalable and highly available. To quote myself, “Amazon CloudWatch monitors your Amazon EC2 capacity, Auto Scaling dynamically scales it based on demand, and Elastic Load Balancing distributes load across multiple instances in one or more Availability Zones.”

Virtual Private Cloud / VPC (2009) – This launch gave our customers the ability to create logically isolated sets of EC2 instances and to connect them to existing networks via an IPsec VPN connection. It gave our customers additional control over network addressing and routing, and opened the door to many additional networking features over the coming years.

Nitro System (2017) – This launch represented the culmination of many years of work to reimagine and rebuild our virtualization infrastructure in pursuit of higher performance and enhanced security (read more).

Graviton (2018) – This launch marked the debut of Amazon-built custom CPUs designed for cost-sensitive scale-out workloads. Since that launch we have continued this evolutionary line, launching general purpose, compute-optimized, memory-optimized, and burstable instances powered by Graviton2 processors.

Instance Types (2006 – Present) – We launched with one instance type and now have over four hundred, each one designed to empower our customers to address a particular use case.

Celebrate with Us
To celebrate the incredible innovation we’ve seen from our customers over the last 15 years, we’re hosting a 2-day live event on August 23rd and 24th covering a range of topics. We kick off the event today at 9 AM PDT with Vice President of Amazon EC2 Dave Brown’s keynote, “Lessons from 15 Years of Innovation.”

Event Agenda

August 23
  • Lessons from 15 Years of Innovation
  • 15 Years of AWS Silicon Innovation
  • Choose the Right Instance for the Right Workload
  • Optimize Compute for Cost and Capacity
  • The Evolution of Amazon Virtual Private Cloud
  • Accelerating AI/ML Innovation with AWS ML Infrastructure Services
  • Using Machine Learning and HPC to Accelerate Product Design

August 24
  • AWS Everywhere: A Fireside Chat on Hybrid Cloud
  • Deep Dive on Real-World AWS Hybrid Examples
  • AWS Outposts: Extending AWS On-Premises for a Truly Consistent Hybrid Experience
  • Connect Your Network to AWS with Hybrid Connectivity Solutions
  • Accelerating ADAS and Autonomous Vehicle Development on AWS
  • Accelerate AI/ML Adoption with AWS ML Silicon
  • Digital Twins: Connecting the Physical to the Digital World

Register here and join us starting at 9 AM PT to learn more about EC2 and to celebrate along with us!

Jeff;

Heads-Up: AWS Support for Internet Explorer 11 is Ending

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/heads-up-aws-support-for-internet-explorer-11-is-ending/

If you are using Internet Explorer 11 (IE 11) to access the AWS Management Console, web-based services such as Amazon Chime or Amazon Honeycode, or other parts of the AWS web site (AWS Documentation, AWS Marketing, AWS Marketplace, or AWS Support), it is time to upgrade to a more modern & secure browser such as Edge, Firefox, or Chrome.

Here are the dates that you need to know:

July 31, 2021 – Effective today, we will no longer ensure that new AWS Management Console features and web pages function properly on IE 11. We will continue to fix bugs that are specific to IE 11 for existing features and pages.

Late 2021 – Later this year you will receive a pop-up notification (and a reminder to upgrade your browser) if you use IE 11 to access any of the services or pages listed above.

July 31, 2022 – One year from today we will discontinue our support for IE 11 and you will need to use a browser that we support.

Jeff;

 

EC2-Classic is Retiring – Here’s How to Prepare

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/ec2-classic-is-retiring-heres-how-to-prepare/

Let’s go back to the summer of 2006 and the launch of EC2. We started out with one instance type (the venerable m1.small), security groups, and the US East (N. Virginia) Region. The EC2-Classic network model was flat, with public IP addresses that were assigned at launch time.

Our initial customers saw the value right away and started to put EC2 to use in many different ways. We hosted web sites, supported the launch of Justin.TV, and helped Animoto to scale to a then-amazing 3400 instances when their Facebook app went viral.

Some of the early enhancements to EC2 focused on networking. For example, we added Elastic IP addresses in early 2008 so that addresses could be long-lived and associated with different instances over time if necessary. After that we added Auto Scaling, Load Balancing, and CloudWatch to help you to build highly scalable applications.

Early customers wanted to connect their EC2 instances to their corporate networks, exercise additional control over IP address ranges, and to construct more sophisticated network topologies. We launched Amazon Virtual Private Cloud in 2009, and in 2013 we made the VPC model essentially transparent with Virtual Private Clouds for Everyone.

Retiring EC2-Classic
EC2-Classic has served us well, but we’re going to give it a gold watch and a well-deserved sendoff! This post will tell you what you need to know, what you need to do, and when you need to do it.

Before I dive in, rest assured that we are going to make this as smooth and as non-disruptive as possible. We are not planning to disrupt any workloads and we are giving you plenty of lead time so that you can plan, test, and perform your migration. In addition to this blog post, we have tools, documentation, and people that are all designed to help.

Timing
We are already notifying the remaining EC2-Classic customers via their account teams, and will soon start to issue notices in the Personal Health Dashboard. Here are the important dates for your calendar:

  • All AWS accounts created after December 4, 2013 are already VPC-only, unless EC2-Classic was enabled as a result of a support request.
  • On October 30, 2021 we will disable EC2-Classic in Regions for AWS accounts that have no active EC2-Classic resources in the Region, as listed below. We will also stop selling 1-year and 3-year Reserved Instances for EC2-Classic.
  • On August 15, 2022 we expect all migrations to be complete, with no remaining EC2-Classic resources present in any AWS account.

Again, we don’t plan to disrupt any workloads and will do our best to help you to meet these dates.

Affected Resources
In order to fully migrate from EC2-Classic to VPC, you need to find, examine, and migrate all of the following resources:

In preparation for your migration, be sure to read Migrate from EC2-Classic to a VPC.

You may need to create (or re-create, if you deleted it) the default VPC for your account. To learn how to do this, read Creating a Default VPC.

In some cases you will be able to modify the existing resources; in others you will need to create new and equivalent resources in a VPC.
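As a quick first pass, you can also ask each region which platforms it reports for your account; a region that returns “EC2” in addition to “VPC” still has EC2-Classic enabled. Here’s a minimal boto3 sketch of that check (it complements, but does not replace, the Resource Finder described below):

```python
import boto3

for region in boto3.session.Session().get_available_regions("ec2"):
    ec2 = boto3.client("ec2", region_name=region)
    try:
        attrs = ec2.describe_account_attributes(AttributeNames=["supported-platforms"])
    except Exception:
        continue  # region not enabled for this account
    platforms = [v["AttributeValue"] for v in attrs["AccountAttributes"][0]["AttributeValues"]]
    print(f"{region}: {', '.join(platforms)}")
```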

Finding EC2-Classic Resources
Use the EC2 Classic Resource Finder script to find all of the EC2-Classic resources in your account. You can run this directly in a single AWS account, or you can use the included multi-account-wrapper to run it against each account of an AWS Organization. The Resource Finder visits each AWS Region, looks for specific resources, and generates a set of CSV files. Here’s the first part of the output from my run:

The script takes a few minutes to run. I inspect the list of CSV files to get a sense of how much work I need to do:

And then I take a look inside to learn more:

Turns out that I have some stopped OpsWorks Stacks that I can either migrate or delete:

Migration Tools
Here’s an overview of the migration tools that you can use to migrate your AWS resources:

AWS Application Migration Service – Use AWS MGN to migrate your instances and your databases from EC2-Classic to VPC with minimal downtime. This service uses block-level replication and runs on multiple versions of Linux and Windows (read How to Use the New AWS Application Migration Service for Lift-and-Shift Migrations to learn more). The first 90 days of replication are free for each server that you migrate; see the AWS Application Migration Service Pricing page for more information.

Support Automation Workflow – This workflow supports simple, instance-level migration. It converts the source instance to an AMI, creates mirrors of the security groups, and launches new instances in the destination VPC.

After you have migrated all of the resources within a particular region, you can disable EC2-Classic by creating a support case. You can do this if you want to avoid accidentally creating new EC2-Classic resources in the region, but it is definitely not required.

Disabling EC2-Classic in a region is intended to be a one-way door, but you can contact AWS Support if you do this and then find that you need to re-enable EC2-Classic for a region. Be sure to run the Resource Finder that I mentioned earlier and make sure that you have not left any resources behind. These resources will continue to run and to accrue charges even after the account status has been changed.

IP Address Migration – If you are migrating an EC2 instance and any Elastic IP addresses associated with the instance, you can use move-address-to-vpc and then attach the Elastic IP to the migrated instance. This will allow you to continue to reference the instance by the original DNS name.
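Here’s a rough boto3 sketch of that flow; the Elastic IP address and instance ID are placeholders, and the move itself completes asynchronously:

```python
import time

import boto3

ec2 = boto3.client("ec2")

# Move the EC2-Classic Elastic IP to the VPC platform.
moved = ec2.move_address_to_vpc(PublicIp="198.51.100.7")    # placeholder EIP

# Wait until the address is no longer reported as moving.
while ec2.describe_moving_addresses(PublicIps=["198.51.100.7"])["MovingAddressStatuses"]:
    time.sleep(5)

# Associate the migrated address with the migrated instance.
ec2.associate_address(
    AllocationId=moved["AllocationId"],
    InstanceId="i-0123456789abcdef0",                       # placeholder instance ID
)
```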

Classic Load Balancers – If you plan to migrate a Classic Load Balancer and need to preserve the original DNS names, please contact AWS Support or your AWS account team.

Updating Instance Types
All of the instance types that are available in EC2-Classic are also available in VPC. However, many newer instance types are available only in VPC, and you may want to consider an update as part of your overall migration plan. Here’s a map to get you started:

EC2-Classic → VPC
  • M1, T1 → T2
  • M3 → M5
  • C1, C3 → C5
  • CC2 → AWS ParallelCluster
  • CR1, M2, R3 → R4
  • D2 → D3
  • HS1 → D2
  • I2 → I3
  • G2 → G3

Count on Us
My colleagues in AWS Support are ready to help you with your migration to VPC. I am also planning to update this post with additional information and other migration resources as soon as they become available.

Jeff;

 

Amazon Simple Queue Service (SQS) – 15 Years and Still Queueing!

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/amazon-sqs-15-years-and-still-queueing/

Time sure does fly! I wrote about the production launch of Amazon Simple Queue Service (SQS) back in 2006, with no inkling that I would still be blogging fifteen years later, or that this service would still be growing rapidly while remaining fundamental to the architecture of so many different types of web-scale applications.

The first beta of SQS was announced with little fanfare in late 2004. Even though we have added many features since that beta, the original description (“a reliable, highly scalable hosted queue for buffering messages between distributed application components”) still applies. Our customers think of SQS as an infinite buffer in the cloud and use SQS queues to implement the connections between the functional parts of their application architecture.

Over the years, we have worked hard to keep the SQS interface simple and easy to use, even though there’s a lot of complexity inside. In order to make SQS scalable, durable, and reliable, messages are stored in a fleet that consists of thousands of servers in each AWS Region. Within a region, we save three copies of each message, taking care to distribute the messages across storage nodes and Availability Zones. In addition to this built-in redundant storage, SQS is self-healing and resilient to host failures & network interruptions. Even though SQS is now 15 years old, we continue to improve our scaling and load management applications so that you can always count on SQS to handle your mission-critical workloads.
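That simple interface boils down to a handful of calls. Here’s a minimal boto3 sketch (the queue name and message body are placeholders) that creates a queue, buffers a message, and consumes it using long polling:

```python
import boto3

sqs = boto3.client("sqs")
queue_url = sqs.create_queue(QueueName="work-items")["QueueUrl"]

# Producer: buffer a message between two application components.
sqs.send_message(QueueUrl=queue_url, MessageBody='{"order_id": 12345}')

# Consumer: receive with long polling, process, then delete to acknowledge.
messages = sqs.receive_message(
    QueueUrl=queue_url,
    MaxNumberOfMessages=10,
    WaitTimeSeconds=20,      # long polling avoids empty, wasted requests
).get("Messages", [])

for message in messages:
    print(message["Body"])   # process the message here
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=message["ReceiptHandle"])
```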

SQS runs at an incredible scale. For example:

Amazon Product Fulfillment – During Prime Day 2021, traffic to SQS set a new record, peaking at 47.7 million messages per second.

Rapid7 – Amazon customer Rapid7 makes extensive use of SQS. According to Jeff Myers (Platform Software Architect):

Amazon SQS provides us with a simple to use, highly performant, and highly scalable messaging service without any management headaches. It is a crucial component of our architecture that has allowed us to scale to handle 10s of billions of messages per day.

You can visit the Amazon SQS home page to read about other high-scale use cases from NASA, BMW Group, Capital One, and other AWS customers.

Serverless Office Hours
Be sure to join us today (June 13th) for Serverless Office Hours at Noon PT on Twitch.tv. Rumor has it that there will be cake!

Back in Time
Here’s a quick recap of some of the major SQS milestones:

2006 – Production launch. An unlimited number of queues per account and items per queue, with each item up to 8 KB in length. Pay-as-you-go pricing based on messages sent and data transferred.

2007 – Additional functions including control of the visibility timeout and access to the approximate number of queued messages.

2009 – SQS in Europe, and additional control over queue access via AddPermission and RemovePermission. This launch also marked the debut of what we then called the Access Policy Language, which has evolved into today’s IAM Policies.

2010 – A new free tier (100K requests/month then, since expanded to 1 million requests/month), configurable maximum message length (up to 64 KB), and configurable message retention time.

2011 – Additional CloudWatch Metrics for each SQS queue. Later that year we added batch operations (SendMessageBatch and DeleteMessageBatch), delay queues, and message timers.

2012 – Support for long polling, along with SDK support for request batching and client-side buffering.

2013 – Support for even larger payloads (256 KB) and a 50% price reduction.

2014 – Support for dead letter queues, to accept messages that have become stuck due to a timeout or to a processing error within your code.

2015 – Support for extended payloads (up to 2 GB) using the Extended Client Library.

2016 – Support for FIFO queues with exactly-once processing and deduplication, and another price reduction (see the sketch after this list).

2017 – Support for server-side encryption of messages and cost allocation tags.

2018 – Support for Amazon VPC Endpoints using AWS PrivateLink and the ability to invoke Lambda functions.

2019 – Support for Tag-on-Create and X-Ray tracing.

2020 – Support for 1-minute metrics for more fine-grained queue monitoring, a new console experience, and result pagination for the ListQueues and ListDeadLetterSourceQueues functions.

2021 – Tiered pricing so that you save money as your usage grows, and a High Throughput mode for FIFO queues.
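Here’s the sketch promised above: a rough boto3 example (queue names and the message are placeholders) that creates a FIFO queue with content-based deduplication and wires in a dead letter queue via a redrive policy:

```python
import json

import boto3

sqs = boto3.client("sqs")

# Dead letter queue; a FIFO queue requires a FIFO dead letter queue.
dlq_url = sqs.create_queue(
    QueueName="orders-dlq.fifo",
    Attributes={"FifoQueue": "true"},
)["QueueUrl"]
dlq_arn = sqs.get_queue_attributes(
    QueueUrl=dlq_url, AttributeNames=["QueueArn"]
)["Attributes"]["QueueArn"]

# Main FIFO queue with deduplication and a redrive policy.
queue_url = sqs.create_queue(
    QueueName="orders.fifo",
    Attributes={
        "FifoQueue": "true",
        "ContentBasedDeduplication": "true",   # duplicate bodies are dropped
        "RedrivePolicy": json.dumps(
            {"deadLetterTargetArn": dlq_arn, "maxReceiveCount": "5"}
        ),
    },
)["QueueUrl"]

# Messages that share a MessageGroupId are delivered in order.
sqs.send_message(
    QueueUrl=queue_url,
    MessageBody='{"order_id": 12345}',
    MessageGroupId="customer-42",
)
```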

Today, SQS remains simple, scalable, cost-effective, and highly reliable.

From the AWS Heroes
We asked some of the AWS Heroes to reflect on the success of SQS and to share some of their success stories. Here’s what we learned:

Eric Hammond (Serverless Hero) uses AWS Lambda Dead Letter Queues. They put alarms on the queues and send internal emails as alerts when problems need to be investigated.

Tom McLaughlin (Serverless Hero) has been using SQS since 2015. He said “My favorite use case is anytime someone wants a queue and I don’t want to manage a queuing platform. Which is always.”

Ken Collins (Serverless Hero) was not entirely sure how long he had been using SQS, and offered a range of 2 to 8 years! He uses it to power the Lambdakiq gem, which implements ActiveJob on SQS & Lambda.

Kyuhyun Byun (Serverless Hero) has been using SQS to power a push system that must sustain massive amounts of traffic, and tells us “Thanks to SQS, I don’t consider building a queuing system anymore.”

Prashanth HN (Serverless Hero) has been using SQS since 2017, and considers himself “late to the party.” He used SQS as part of his first serverless application, and finds that it is ideal for connecting services with differing throughput.

Ben Kehoe (Serverless Hero) told us that “I first saw the power of SQS when a colleague showed me how to retain state in a fleet of EC2 Spot instances by storing the state in SQS when an instance was getting shut down, and having new instances check the queue for state on startup.”

Jeremy Daly (Serverless Hero) started using SQS in 2010 as a lightweight queue that fed a facial recognition process on user-supplied images. Today, he often uses it as a buffer to throttle requests to downstream services that are not yet serverless.

Casey Lee (Container Hero) also started to use SQS in 2010 as a replacement for ActiveMQ between loosely coupled Java services. Casey implements auto scaling based on queue depth, and has found it to be an accurate way to handle the load.

Vlad Ionescu (Container Hero) began his AWS journey with SQS back in 2014. Vlad found that the API was very easy to understand, and used SQS to power his thesis project.

Sheen Brisals (Serverless Hero) started to use SQS in 2018 while building a proof-of-concept that also used Lambda and S3. Sheen appreciates the ability to adjust the characteristics of each queue to create a good match for the message processing functions, and also makes use of both high and low priority queues.

Gojko Adzic (Serverless Hero) began to use SQS in 2013 as a task dispatch for exporters in MindMup. This online mind-mapping application allows large groups of users to collaborate, and requires strict ordering of updates to each document. Gojko used FIFO queues to allow them to process messages for different documents in parallel, with sequential order within each document.

Sebastian Müller (Serverless Hero) started to use SQS in 2016 by building a notification center for the website builder jimdo.com. The center ensures that customers are kept aware of events (orders, support messages, and contact requests) on a timely basis.

Luca Bianchi (Serverless Hero) first used SQS in 2012. He decoupled a pair of microservices running on AWS Elastic Beanstalk, and also created a fan-out processing system for a gamification platform. Today, his favorite SQS use case stores inference jobs and makes them available to a worker process running on Amazon SageMaker.

Peter Hanssens (Serverless Hero) uses SQS to offload tasks that do not need to be processed immediately. Several years ago, while assisting some data scientists, he created an event-driven batch-processing system that used a Lambda function to check a queue every 5 minutes and fire up EC2 instances to build models, while keeping strict control over the maximum number of running instances.

Serkan Ozal (Serverless Hero) has been using SQS since 2013 or so. He focuses on asynchronous message processing and counts on the ability of SQS to handle a huge number of events. He also makes use of the message visibility feature so that he can re-process messages if necessary.

Matthieu Napoli (Serverless Hero) has been using SQS for about five years, starting out with EC2, and as a replacement for other queues. As he says, “Paired with Lambda, it gives massive parallelism out of the box without having to think too much about it. Plus the built-in failure handling makes it very robust.”

As you can see, there are a multitude of use cases for SQS.

SQS Resources
If you are not already using SQS, then it is time to get in the queue and get started. Here are some resources to help you find your way:

Happy queueing!

Jeff;


Prime Day 2021 – Two Chart-Topping Days

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/prime-day-2021-two-chart-topping-days/

In what has now become an annual tradition (check out my 2016, 2017, 2019, and 2020 posts for a look back), I am happy to share some of the metrics from this year’s Prime Day and to tell you how AWS helped to make it happen.

This year I bought all sorts of useful goodies including a Toshiba 43 Inch Smart TV that I plan to use as a MagicMirror, some watering cans, and a Dremel Rotary Tool Kit for my workshop.

Powered by AWS
As in years past, AWS played a critical role in making Prime Day a success. A multitude of two-pizza teams worked together to make sure that every part of our infrastructure was scaled, tested, and ready to serve our customers. Here are a few examples:

Amazon EC2 – Our internal measure of compute power is an NI, or a normalized instance. We use this unit to allow us to make meaningful comparisons across different types and sizes of EC2 instances. For Prime Day 2021, we increased our number of NIs by 12.5%. Interestingly enough, due to increased efficiency (more on that in a moment), we actually used about 6,000 fewer physical servers than we did in Cyber Monday 2020.

Graviton2 Instances – Graviton2-powered EC2 instances supported 12 core retail services. This was our first peak event that was supported at scale by the AWS Graviton2 instances, and is a strong indicator that the Arm architecture is well-suited to the data center.

An internal service called Datapath is a key part of the Amazon site. It is highly optimized for our peculiar needs, and supports lookups, queries, and joins across structured blobs of data. After an in-depth evaluation and consideration of all of the alternatives, the team decided to port Datapath to Graviton and to run it on a three-Region cluster composed of over 53,200 C6g instances.

At this scale, the price-performance advantage of the Graviton2 of up to 40% versus the comparable fifth-generation x86-based instances, along with the 20% lower cost, turns into a big win for us and for our customers. As a bonus, the power efficiency of the Graviton2 helps us to achieve our goals for addressing climate change. If you are thinking about moving your workloads to Graviton2, be sure to study our very detailed Getting Started with AWS Graviton2 document, and also consider entering the Graviton Challenge! You can also use Graviton2 database instances on Amazon RDS and Amazon Aurora; read about Key Considerations in Moving to Graviton2 for Amazon RDS and Amazon Aurora Databases to learn more.

Amazon CloudFront – Fast, efficient content delivery is essential when millions of customers are shopping and making purchases. Amazon CloudFront handled a peak load of over 290 million HTTP requests per minute, for a total of over 600 billion HTTP requests during Prime Day.

Amazon Simple Queue Service – The fulfillment process for every order depends on Amazon Simple Queue Service (SQS). This year, traffic set a new record, processing 47.7 million messages per second at the peak.

Amazon Elastic Block Store – In preparation for Prime Day, the team added 159 petabytes of EBS storage. The resulting fleet handled 11.1 trillion requests per day and transferred 614 petabytes per day.

Amazon Aurora – Amazon Fulfillment Technologies (AFT) powers physical fulfillment for purchases made on Amazon. On Prime Day, 3,715 instances of AFT’s PostgreSQL-compatible edition of Amazon Aurora processed 233 billion transactions, stored 1,595 terabytes of data, and transferred 615 terabytes of data.

Amazon DynamoDB – DynamoDB powers multiple high-traffic Amazon properties and systems including Alexa, the Amazon.com sites, and all Amazon fulfillment centers. Over the course of the 66-hour Prime Day, these sources made trillions of API calls while maintaining high availability with single-digit millisecond performance, and peaking at 89.2 million requests per second.

Prepare to Scale
As I have detailed in my previous posts, rigorous preparation is key to the success of Prime Day and our other large-scale events. If you are preparing for a similar event of your own, I can heartily recommend AWS Infrastructure Event Management. As part of an IEM engagement, AWS experts will provide you with architectural and operational guidance that will help you to execute your event with confidence.

Jeff;

Heads Up – AWS News Blog RSS Feed Change

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/heads-up-aws-news-blog-rss-feed-change/

TL;DR – If you are using the ancient Feedburner feed for this blog, please change to the new one (https://aws.amazon.com/blogs/aws/feed/) ASAP.

Back in the early days of AWS, I paid a visit to the Chicago headquarters of a cool startup called FeedBurner. My friend Matt Shobe demo’ed their new product to me and showed me how “burning” a raw RSS feed could add value to it in various ways. Upon my return to Seattle I promptly burned the RSS feed of the original (TypePad-hosted) version of this blog, and shared that feed for many years.

FeedBurner has served us well over the years, but its time has passed. The company was acquired many years ago and the new owners are now putting the product into maintenance mode.

While the existing feed will continue to work for the foreseeable future, I would like to encourage you to use the one directly generated by this blog instead. Simply update your feed reader to refer to https://aws.amazon.com/blogs/aws/feed/ .

Jeff;

In the Works – AWS Region in Tel Aviv, Israel

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/in-the-works-aws-region-in-tel-aviv-israel/

We launched three AWS Regions (Italy, South Africa, and Japan) in the last 12 months, and are working on regions in Australia, Indonesia, Spain, India, Switzerland, and the United Arab Emirates.

Tel Aviv, Israel in the Works
Today I am happy to announce that the AWS Israel (Tel Aviv) Region is in the works and will open in the first half of 2023. This region will have three Availability Zones and will give AWS customers in Israel the ability to run workloads and store data that must remain in-country.

There are 81 Availability Zones within 25 AWS Regions in operation today, with 21 more Availability Zones and seven announced regions (including this one) underway.

As is always the case with an AWS Region, each of the Availability Zones will be a fully isolated part of the AWS infrastructure. The AZs in this region will be connected together via high-bandwidth, low-latency network connections over dedicated, fully redundant metro fiber. This connectivity supports applications that need synchronous replication between AZs for availability or redundancy. You can take a peek at the AWS Global Infrastructure page to learn more about how we design and build regions and AZs.

AWS in Israel
I first visited Israel in 2013 and have been back several (but definitely not enough) times since then. I have spoken at several AWS Summits and visited many of our early customers in the area. Today, AWS has the following resources on the ground in Israel:

Israel is also home to Annapurna Labs, an Amazon.com subsidiary that is responsible for developing much of the innovative hardware that powers AWS.

In addition, the government of Israel announced that it has selected AWS as the primary cloud provider for the Nimbus Project. As part of this project, government ministries and subsidiaries in Israel will use cloud computing to power a digital transformation and to provide new digital services for the citizens of Israel.

Stay Tuned
We’ll announce the opening of the AWS Israel (Tel Aviv) Region in a forthcoming blog post, so be sure to stay tuned!

Jeff;

 

Now Open – Third Availability Zone in the AWS China (Beijing) Region

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/now-open-third-availability-zone-in-the-aws-china-beijing-region/

I made my first trip to China in late 2008. I was able to speak to developers and entrepreneurs and to get a sense of the then-nascent market for cloud computing. With over 900 million Internet users as of 2020 (according to a recent report from China Internet Network Information Center), China now has the largest user base in the world.

A limited preview of the China (Beijing) Region was launched in 2013 and brought to general availability in 2016. A year later the AWS China (Ningxia) Region launched. In order to comply with China’s legal and regulatory requirements, we collaborated with local Chinese partners. These partners have the proper telecom licenses to provide cloud services in Mainland China. Today, developers can deploy cloud-based applications inside of Mainland China using the same APIs, protocols, services, and operational practices used by our customers in other parts of the world. This commonality has been particularly attractive to multinational companies that can take advantage of their existing AWS experience when they expand their cloud infrastructure into Mainland China.

Third Availability Zone in Beijing
Today I am happy to announce that we are adding a third Availability Zone (AZ) to the China (Beijing) Region operated by Sinnet in order to support the demands of our growing customer base in China. As is the case with all AWS Availability Zones, this one encompasses one or more discrete data centers in separate facilities, each with redundant power, networking, and connectivity. With this launch, both AWS Regions in China offer three AZs and allow customers to build applications that are scalable, fault-tolerant, and highly available.

AWS Customers in the Beijing Region
Many enterprise customers in China are using the China (Beijing) Region to support their digital transformations. For example:

Yantai Shinho was founded in 1992 and now manufactures 13 popular condiments. They now have a presence in over 100 countries and supply products that tens of millions of families enjoy. They are already using the region to support their front-end and big data efforts, and plan to make use of the additional architectural options made possible by the new AZ.

Kingdee International Software Group was founded in 1993 and now provides corporate management and cloud services for more than 6.8 million enterprises and government organizations. They now have over 8,000 employees and are committed to changing the way that hundreds of millions of people work.

As I noted earlier, our multinational customers are using the AWS Regions in China to expand their global presence. Here are a few examples:

Australian independent software vendor Canva offers its design-on-demand application to 150 million active users in 190 countries. They launched their Chinese products in August 2018, and have since built it into a first-class design platform that includes tens of millions of high-resolution pictures, Chinese fonts, original templates, and more. Chinese users have already created over 50 million designs on the Canva.cn platform.

Swire is a 200-year-old business group that spans the aviation, beverage, food, industrial, marine services, and property industries. Their Swire Coca-Cola division has the exclusive right to manufacture, market, and distribute Coca-Cola products in eleven Chinese provinces, the Shanghai Municipality, Hong Kong, Taiwan, and part of the Western United States — a total customer base of 728 million people. Swire Coca-Cola’s systems primarily operate in the China (Beijing) Region and will soon make use of the third AZ.

Finally, startups are using the region to power their fast-growing businesses:

CraiditX applies machine learning technology originally developed for search engines to the financial services industry. Established in 2015, they use behavioral language processing, natural language processing, neural networks, and integrated modeling to build risk management systems.

Founded in 2016, Momenta is a Chinese startup that is building a “brain” for autonomous vehicles. Powered by deep learning and data-driven path planning, they are working on autonomous driving for passenger vehicles and full autonomy for mobile service vehicles, all deployed in the China (Beijing) Region.

81 and 25
This launch raises the global AWS footprint to a total of 81 Availability Zones across 25 geographic regions, with plans to launch 18 additional Availability Zones and six more regions in Australia, India, Indonesia, Spain, Switzerland, and United Arab Emirates (UAE).

–Jeff, with lots of help from Lillian Shao;

PS – The operator and service provider for the AWS China (Beijing) Region is Beijing Sinnet Technology Co., Ltd. The operator and service provider for the AWS China (Ningxia) Region is Ningxia Western Cloud Data Technology Co., Ltd.

In the Works – AWS Region in the United Arab Emirates (UAE)

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/in-the-works-aws-region-in-the-united-arab-emirates-uae/

We are currently building AWS regions in Australia, Indonesia, Spain, India, and Switzerland.

UAE in the Works
I am happy to announce that the AWS Middle East (UAE) Region is in the works and will open in the first half of 2022. The new region is an extension of our existing investment, which already includes two AWS Direct Connect locations and two Amazon CloudFront edge locations, all of which have been in place since 2018. The new region will give AWS customers in the UAE the ability to run workloads and to store data that must remain in-country, in addition to the ability to serve local customers with even lower latency.

The new region will have three Availability Zones, and will be the second AWS Region in the Middle East, joining the existing AWS Region in Bahrain. There are 80 Availability Zones within 25 AWS Regions in operation today, with 15 more Availability Zones and five announced regions underway in the locations that I listed earlier.

As is always the case with an AWS Region, each of the Availability Zones will be a fully isolated part of the AWS infrastructure. The AZs in this region will be connected together via high-bandwidth, low-latency network connections to support applications that need synchronous replication between AZs for availability or redundancy.

AWS in the UAE
In addition to the upcoming AWS Region and the Direct Connect and CloudFront edge locations, we continue to build our team of account managers, partner managers, data center technicians, systems engineers, solutions architects, professional service providers, and more (check out our current positions).

We also plan to continue our on-going investments in education initiatives, training, and start-up enablement to support the UAE’s plans for economic development and digital transformation.

Our customers in the UAE are already using AWS to drive innovation! For example:

Mohammed Bin Rashid Space Centre (MBRSC) – Founded in 2006, MBRSC is home to the UAE’s National Space Program. The Hope Probe was launched last year and reached Mars in February of this year. Data from the probe’s instruments is processed and analyzed on AWS, and made available to the global scientific community in less than 20 minutes.

Anghami is the leading music platform in the Middle East and North Africa, giving over 70 million users access to 57 million songs. They have been hosting their infrastructure on AWS since their days as a tiny startup, and have benefited from the ability to scale up by as much as 300% when new music is launched.

Sarwa is an investment bank and personal finance platform that was born on the AWS cloud in 2017. They grew by a factor of four in 2020 while processing hundreds of thousands of transactions. Recent AWS-powered innovations from Sarwa include the Sarwa App (design to market in 3 months) and the upcoming Sarwa Trade platform.

Stay Tuned
We’ll be announcing the opening of the Middle East (UAE) Region in a forthcoming blog post, so be sure to stay tuned!

Jeff;

AWS Asia Pacific (Osaka) Region Now Open to All, with Three AZs and More Services

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-asia-pacific-osaka-region-now-open-to-all-with-three-azs-more-services/

AWS has had a presence in Japan for a long time! We opened the Asia Pacific (Tokyo) Region in March 2011, added a third Availability Zone (AZ) in 2012, and a fourth in 2018. Since that launch, customers in Japan and around the world have used the region to host an incredibly wide variety of applications!

We opened the Osaka Local Region in 2018 to give our customers in Japan a disaster recovery option for their workloads. Located 400 km from Tokyo, the Osaka Local Region used an isolated, fault-tolerant design contained within a single data center.

From Local to Standard
I am happy to announce that the Osaka Local Region has been expanded and is now a standard AWS Region, complete with three Availability Zones. As is always the case with AWS, the AZs are designed to provide physical redundancy, and are able to withstand power outages, internet downtime, floods, and other natural disasters.

The following services are available, with more in the works: Amazon Elastic Kubernetes Service (EKS), Amazon API Gateway, Auto Scaling, Application Auto Scaling, Amazon Aurora, AWS Config, AWS Personal Health Dashboard, AWS IQ, AWS Organizations, AWS Secrets Manager, AWS Shield Standard (regional), AWS Snowball Edge, AWS Step Functions, AWS Systems Manager, AWS Trusted Advisor, AWS Certificate Manager, CloudEndure Migration, CloudEndure Disaster Recovery, AWS CloudFormation, Amazon CloudFront, AWS CloudTrail, Amazon CloudWatch, CloudWatch Events, Amazon CloudWatch Logs, AWS CodeDeploy, AWS Database Migration Service, AWS Direct Connect, Amazon DynamoDB, Elastic Container Registry, Amazon Elastic Container Service (ECS), AWS Elastic Beanstalk, Amazon Elastic Block Store (EBS), Amazon Elastic Compute Cloud (EC2), EC2 Image Builder, Elastic Load Balancing, Amazon EMR, Amazon ElastiCache, Amazon EventBridge, AWS Fargate, Amazon Glacier, AWS Glue, AWS Identity and Access Management (IAM), AWS Snowball, AWS Key Management Service (KMS), Amazon Kinesis Data Firehose, Amazon Kinesis Data Streams, AWS Lambda, AWS Marketplace, AWS Mobile SDK, Network Load Balancer, Amazon Redshift, Amazon Relational Database Service (RDS), Amazon Route 53, Amazon Simple Notification Service (SNS), Amazon Simple Queue Service (SQS), Amazon Simple Storage Service (S3), Amazon Simple Workflow Service (SWF), AWS VPN, VM Import/Export, AWS X-Ray, AWS Artifact, AWS PrivateLink, and Amazon Virtual Private Cloud (VPC).

The Asia Pacific (Osaka) Region supports the C5, C5d, D2, I3, I3en, M5, M5d, R5d, and T3 instance types, in On-Demand, Spot, and Reserved Instance form. X1 and X1e instances are available in a single AZ.

In addition to the AWS regions in Tokyo and Osaka, customers in Japan also benefit from:

  • 16 CloudFront edge locations in Tokyo.
  • One CloudFront edge location in Osaka.
  • One CloudFront Regional Edge Cache in Tokyo.
  • Two AWS Direct Connect locations in Tokyo.
  • One Direct Connect location in Osaka.

Here are typical latency values from the Asia Pacific (Osaka) Region to other cities in the area:

City Latency
Nagoya 2-5 ms
Hiroshima 2-5 ms
Tokyo 5-8 ms
Fukuoka 11-13 ms
Sendai 12-15 ms
Sapporo 14-17 ms
Seoul 27 ms
Taipei 29 ms
Hong Kong 38 ms
Manila 49 ms

AWS Customers in Japan
As I mentioned earlier, our customers are using the AWS regions in Tokyo and Osaka to host an incredibly wide variety of applications. Here’s a sampling:

Mitsubishi UFJ Financial Group (MUFG) – This financial services company adopted a cloud-first strategy and did their first AWS deployment in 2017. They have built a data platform for their banking and group data that helps them to streamline administrative processes, and also migrated a market risk management system. MUFG has been using the Osaka Local Region and is planning to use the Asia Pacific (Osaka) Region to run more workloads and to support their ongoing digital transformation.

KDDI Corporation (KDDI) – This diversified (telecommunication, financial services, Internet, electricity distribution, consumer appliance, and more) company started using AWS in 2016 after AWS met KDDI’s stringent internal security standards. They currently build and run more than 60 services on AWS, including the backend of the au Denki application, used by consumers to monitor electricity usage and rates. They plan to use the Asia Pacific (Osaka) Region to initiate multi-region service to their customers in Japan.

OGIS-RI – Founded in 1983, this global IT consulting firm is a part of the Daigas Group of companies. OGIS-RI provides information strategy, systems integration, systems development, network construction, support, and security. They use AWS to provide their enterprise customers with ekul, a data measurement service that measures and visualizes gas and electricity usage in real time and sends it to corporate customers across Japan.

Sony Bank – Founded in 2001 as an asset management bank for individuals, Sony Bank provides services that include foreign currency deposits, home loans, investment trusts, and debit cards. Their gradual migration of internal banking systems to AWS began in 2013 and was 80% complete at the end of 2019. This migration reduced their infrastructure costs by 60% and more than halved the time it once took to procure and build out new infrastructure.

AWS Resources in Japan
As a quick reminder, enterprises, government and research organizations, small and medium businesses, educators, and startups in Japan have access to a wide variety of AWS and community resources. Here’s a sampling:

Available Now
The new region is open to all AWS customers and you can start to use it today!

Jeff;

 

Amazon Location – Add Maps and Location Awareness to Your Applications

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/amazon-location-add-maps-and-location-awareness-to-your-applications/

We want to make it easier and more cost-effective for you to add maps, location awareness, and other location-based features to your web and mobile applications. Until now, doing this has been somewhat complex and expensive, and also tied you to the business and programming models of a single provider.

Introducing Amazon Location Service
Today we are making Amazon Location Service available in preview form, and you can start using it right away. Priced at a fraction of common alternatives, Amazon Location Service gives you access to maps and location-based services from multiple providers on an economical, pay-as-you-go basis.

You can use Amazon Location Service to build applications that know where they are and respond accordingly. You can display maps, validate addresses, perform geocoding (turn an address into a location), track the movement of packages and devices, and much more. You can easily set up geofences and receive notifications when tracked items enter or leave a geofenced area. You can even overlay your own data on the map while retaining full control.

You can access Amazon Location Service from the AWS Management Console, AWS Command Line Interface (CLI), or via a set of APIs. You can also use existing map libraries such as Mapbox GL and Tangram.

All About Amazon Location
Let’s take a look at the types of resources that Amazon Location Service makes available to you, and then talk about how you can use them in your applications.

Maps – Amazon Location Service lets you create maps that make use of data from our partners. You can choose between maps and map styles provided by Esri and by HERE Technologies, with the potential for more maps & more styles from these and other partners in the future. After you create a map, you can retrieve a tile (at one of up to 16 zoom levels) using the GetMapTile function. You won’t do this directly, but will use Mapbox GL, Tangram, or another library instead.

Place Indexes – You can choose between indexes provided by Esri and HERE. The indexes support the SearchPlaceIndexForPosition function which returns places, such as residential addresses or points of interest (often known as POI) that are closest to the position that you supply, while also performing reverse geocoding to turn the position (a pair of coordinates) into a legible address. Indexes also support the SearchPlaceIndexForText function, which searches for addresses, businesses, and points of interest using free-form text such as an address, a name, a city, or a region.

Trackers – Trackers receive location updates from one or more devices via the BatchUpdateDevicePosition function, and can be queried for the current position (GetDevicePosition) or location history (GetDevicePositionHistory) of a device. Trackers can also be linked to Geofence Collections to implement monitoring of devices as they move in and out of geofences.

Geofence Collections – Each collection contains a list of geofences that define geographic boundaries. Here’s a geofence (created with geojson.io) that outlines a park near me:
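
The geofence itself is just GeoJSON: a polygon whose ring is closed (the first and last points match). Here’s a minimal sketch of such a file, with made-up placeholder coordinates rather than the actual park boundary:

$ cat > park-geofence.geojson <<'EOF'
{
  "type": "FeatureCollection",
  "features": [
    {
      "type": "Feature",
      "properties": {},
      "geometry": {
        "type": "Polygon",
        "coordinates": [[
          [-122.3400, 47.6270],
          [-122.3360, 47.6270],
          [-122.3360, 47.6290],
          [-122.3400, 47.6290],
          [-122.3400, 47.6270]
        ]]
      }
    }
  ]
}
EOF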

Amazon Location in Action
I can use the AWS Management Console to get started with Amazon Location and then move on to the AWS Command Line Interface (CLI) or the APIs if necessary. I open the Amazon Location Service Console, and I can either click Try it! to create a set of starter resources, or I can open up the navigation on the left and create them one-by-one. I’ll go for one-by-one, and click Maps:

Then I click Create map to proceed:

I enter a Name and a Description:

Then I choose the desired map and click Create map:

The map is created and ready to be added to my application right away:

Now I am ready to embed the map in my application, and I have several options including the Amplify JavaScript SDK, the Amplify Android SDK, the Amplify iOS SDK, Tangram, and Mapbox GL (read the Developer Guide to learn more about each option).

Next, I want to track the position of devices so that I can be notified when they enter or exit a given region. I use a GeoJSON editing tool such as geojson.io to create a geofence that is built from polygons, and save (download) the resulting file:

I click Create geofence collection in the left-side navigation, and in Step 1, I add my GeoJSON file, enter a Name and Description, and click Next:

Now I enter a Name and a Description for my tracker, and click Next. It will be linked to the geofence collection that I just created:

The next step is to arrange for the tracker to send events to Amazon EventBridge so that I can monitor them in CloudWatch Logs. I leave the settings as-is, and click Next to proceed:

I review all of my choices, and click Finalize to move ahead:

The resources are created, set up, and ready to go:

I can then write code or use the CLI to update the positions of my devices:

$ aws location batch-update-device-position \
   --tracker-name MyTracker1 \
   --updates "DeviceId=Jeff1,Position=-122.33805,47.62748,SampleTime=2020-11-05T02:59:07+0000"

After I do this a time or two, I can retrieve the position history for the device:

$ aws location get-device-position-history \
  --tracker-name MyTracker1 --device-id Jeff1
------------------------------------------------
|           GetDevicePositionHistory           |
+----------------------------------------------+
||               DevicePositions              ||
|+---------------+----------------------------+|
||  DeviceId     |  Jeff1                     ||
||  ReceivedTime |  2020-11-05T02:59:17.246Z  ||
||  SampleTime   |  2020-11-05T02:59:07Z      ||
|+---------------+----------------------------+|
|||                 Position                 |||
||+------------------------------------------+||
|||  -122.33805                              |||
|||  47.62748                                |||
||+------------------------------------------+||
||               DevicePositions              ||
|+---------------+----------------------------+|
||  DeviceId     |  Jeff1                     ||
||  ReceivedTime |  2020-11-05T03:02:08.002Z  ||
||  SampleTime   |  2020-11-05T03:01:29Z      ||
|+---------------+----------------------------+|
|||                 Position                 |||
||+------------------------------------------+||
|||  -122.43805                              |||
|||  47.52748                                |||
||+------------------------------------------+||

I can write Amazon EventBridge rules that watch for the events, and use them to perform any desired processing. Events are published when a device enters or leaves a geofenced area, and look like this:

{
  "version": "0",
  "id": "7cb6afa8-cbf0-e1d9-e585-fd5169025ee0",
  "detail-type": "Location Geofence Event",
  "source": "aws.geo",
  "account": "123456789012",
  "time": "2020-11-05T02:59:17.246Z",
  "region": "us-east-1",
  "resources": [
    "arn:aws:geo:us-east-1:123456789012:geofence-collection/MyGeoFences1",
    "arn:aws:geo:us-east-1:123456789012:tracker/MyTracker1"
  ],
  "detail": {
        "EventType": "ENTER",
        "GeofenceId": "LakeUnionPark",
        "DeviceId": "Jeff1",
        "SampleTime": "2020-11-05T02:59:07Z",
        "Position": [-122.33805, 47.52748]
  }
}
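
As a sketch, a matching rule can also be created from the CLI; the rule name and SNS topic below are placeholders that I made up, and the topic would also need a resource policy that allows EventBridge to publish to it:

$ aws events put-rule \
    --name MyGeofenceEvents \
    --event-pattern '{"source":["aws.geo"],"detail-type":["Location Geofence Event"]}'
$ aws events put-targets \
    --rule MyGeofenceEvents \
    --targets "Id=1,Arn=arn:aws:sns:us-east-1:123456789012:geofence-alerts"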

Finally, I can create and use place indexes so that I can work with geographical objects. I’ll use the CLI for a change of pace. I create the index:

$ aws location create-place-index \
  --index-name MyIndex1 --data-source Here

Then I query it to find the addresses and points of interest near the location:

$ aws location search-place-index-for-position --index-name MyIndex1 \
  --position "[-122.33805,47.62748]" --output json \
  |  jq .Results[].Place.Label
"Terry Ave N, Seattle, WA 98109, United States"
"900 Westlake Ave N, Seattle, WA 98109-3523, United States"
"851 Terry Ave N, Seattle, WA 98109-4348, United States"
"860 Terry Ave N, Seattle, WA 98109-4330, United States"
"Seattle Fireboat Duwamish, 860 Terry Ave N, Seattle, WA 98109-4330, United States"
"824 Terry Ave N, Seattle, WA 98109-4330, United States"
"9th Ave N, Seattle, WA 98109, United States"
...

I can also do a text-based search:

$ aws location search-place-index-for-text --index-name MyIndex1 \
  --text Coffee --bias-position "[-122.33805,47.62748]" \
  --output json | jq .Results[].Place.Label
"Mohai Cafe, 860 Terry Ave N, Seattle, WA 98109, United States"
"Starbucks, 1200 Westlake Ave N, Seattle, WA 98109, United States"
"Metropolitan Deli and Cafe, 903 Dexter Ave N, Seattle, WA 98109, United States"
"Top Pot Doughnuts, 590 Terry Ave N, Seattle, WA 98109, United States"
"Caffe Umbria, 1201 Westlake Ave N, Seattle, WA 98109, United States"
"Starbucks, 515 Westlake Ave N, Seattle, WA 98109, United States"
"Cafe 815 Mercer, 815 9th Ave N, Seattle, WA 98109, United States"
"Victrola Coffee Roasters, 500 Boren Ave N, Seattle, WA 98109, United States"
"Specialty's, 520 Terry Ave N, Seattle, WA 98109, United States"
...

Both of the searches have other options; read the Geocoding, Reverse Geocoding, and Search documentation to learn more.

Things to Know
Amazon Location is launching today as a preview, and you can get started with it right away. During the preview we plan to add an API for routing, and will also do our best to respond to customer feedback and feature requests as they arrive.

Pricing is based on usage, with an initial evaluation period that lasts for three months and lets you make numerous calls to the Amazon Location APIs at no charge. After the evaluation period you pay the prices listed on the Amazon Location Pricing page.

Amazon Location is available in the US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Ireland), and Asia Pacific (Tokyo) Regions.

Jeff;

 

Join the Preview – Amazon Managed Service for Prometheus (AMP)

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/join-the-preview-amazon-managed-service-for-prometheus-amp/

Observability is an essential aspect of running cloud infrastructure at scale. You need to know that your resources are healthy and performing as expected, and that your system is delivering the desired level of performance to your customers.

A lot of challenges arise when monitoring container-based applications. First, because container resources are transient and there are lots of metrics to watch, the monitoring data has strikingly high cardinality. In plain language this means that there are lots of unique values, which can make it harder to define a space-efficient storage model and to create queries that return meaningful results. Second, because a well-architected container-based system is composed using a large number of moving parts, ingesting, processing, and storing the monitoring data can become an infrastructure challenge of its own.

Prometheus is a leading open-source monitoring solution with an active developer and user community. It has a multi-dimensional data model that is a great fit for time series data collected from containers.

Introducing Amazon Managed Service for Prometheus (AMP)
Today we are launching a preview of Amazon Managed Service for Prometheus (AMP). This fully-managed service is 100% compatible with Prometheus. It supports the same metrics, the same PromQL queries, and can also make use of the 150+ Prometheus exporters. AMP runs across multiple Availability Zones for high availability, and is powered by CNCF Cortex for horizontal scalability. AMP will easily scale to ingest, store, and query millions of time series metrics.

The preview includes support for Amazon Elastic Kubernetes Service (EKS) and Amazon Elastic Container Service (ECS). It can also be used to monitor your self-managed Kubernetes clusters that are running in the cloud or on-premises.

Getting Started with Amazon Managed Service for Prometheus (AMP)
After joining the preview, I open the AMP Console, enter a name for my AMP workspace, and click Create to get started (API and CLI support is also available):
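
If you prefer the CLI, a rough equivalent looks like this; I’m assuming that the preview’s amp commands follow the usual naming pattern, so check the CLI reference before relying on them:

$ aws amp create-workspace --alias my-prometheus-demo
$ aws amp list-workspaces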

My workspace is active within a minute or so. The console provides me with the endpoints that I can use to write data to my workspace, and to issue queries:

It also provides guidance on how to configure an existing Prometheus server to send metrics to the AMP workspace:
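
As a rough sketch of what that configuration involves, here’s the kind of remote_write block to add; the region and workspace ID are placeholders (the console shows the exact endpoint for your workspace), and this assumes a Prometheus build with SigV4 signing support for remote_write — older releases need a signing proxy in front of the endpoint:

$ cat >> prometheus.yml <<'EOF'
remote_write:
  - url: https://aps-workspaces.us-east-1.amazonaws.com/workspaces/ws-EXAMPLE/api/v1/remote_write
    sigv4:
      region: us-east-1
EOF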

I can also use AWS Distro for OpenTelemetry to scrape Prometheus metrics and send them to my AMP workspace.

Once I have stored some metrics in my workspace, I can run PromQL queries and I can use Grafana to create dashboards and other visualizations. Here’s a sample Grafana dashboard:

Join the Preview
As noted earlier, we’re launching Amazon Managed Service for Prometheus (AMP) in preview form and you are welcome to try it out today.

We’ll have more info (and a more detailed blog post) at launch time.

Jeff;

AWS CloudShell – Command-Line Access to AWS Resources

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-cloudshell-command-line-access-to-aws-resources/

No matter how much automation you have built, no matter how great you are at practicing Infrastructure as Code (IAC), and no matter how successfully you have transitioned from pets to cattle, you sometimes need to interact with your AWS resources at the command line. You might need to check or adjust a configuration file, make a quick fix to a production environment, or even experiment with some new AWS services or features.

Some of our customers feel most at home when working from within a web browser and have yet to set up or customize their own command-line interface (CLI). They tell us that they don’t want to deal with client applications, public keys, AWS credentials, tooling, and so forth. While none of these steps are difficult or overly time-consuming, they do add complexity and friction, and we always like to help you to avoid both.

Introducing AWS CloudShell
Today we are launching AWS CloudShell, with the goal of making the process of getting to an AWS-enabled shell prompt simple and secure, with as little friction as possible. Every shell environment that you run with CloudShell has the AWS Command Line Interface (CLI) (v2) installed and configured so you can run aws commands fresh out of the box. The environments also include the Python and Node runtimes, with many more to come in the future.

To get started, I simply click the CloudShell icon in the AWS Management Console:

My shell sets itself up in a matter of seconds and I can issue my first aws command immediately:
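
For example, credentials are inherited from my console session, so commands like these work right away:

$ aws sts get-caller-identity
$ aws s3 ls
$ aws ec2 describe-regions --query "Regions[].RegionName" --output text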

The shell environment is based on Amazon Linux 2. I can store up to 1 GB of files per region in my home directory and they’ll be available each time I open a shell in the region. This includes shell configuration files such as .bashrc and shell history files.

I can access the shell via SSO or as any IAM principal that can log in to the AWS Management Console, including federated roles. In order to access CloudShell, the AWSCloudShellFullAccess policy must be in effect. The shell runs as a normal (non-privileged) user, but I can sudo and install packages if necessary.
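
For example, since the environment is based on Amazon Linux 2, installing an extra tool is a single yum command (assuming the package is available in the default repositories); keep in mind that anything installed outside of the home directory is gone once the environment is recycled:

$ sudo yum -y install tmux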

Here are a couple of features that you should know about:

Themes & Font Sizes – You can switch between light and dark color themes, and choose any one of five font sizes:

Tabs and Sessions – You can have multiple sessions open within the same region, and you can control the tabbing behavior, with options to split horizontally and vertically:

You can also download files from the shell environment to your desktop, and upload them from your desktop to the shell.

Things to Know
Here are a couple of important things to keep in mind when you are evaluating CloudShell:

Timeouts & Persistence – Each CloudShell session will timeout after 20 minutes or so of inactivity, and can be reestablished by refreshing the window:

Regions – CloudShell is available today in the US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Ireland), and Asia Pacific (Tokyo) Regions, with the remaining regions on the near-term roadmap.

Persistent Storage – Files stored within $HOME persist between invocations of CloudShell with a limit of 1 GB per region; all other storage is ephemeral. This means that any software that is installed outside of $HOME will not persist, and that no matter what you change (or break), you can always begin anew with a fresh CloudShell environment.
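
One practical consequence, as a sketch: user-level Python installs (via pip3, which comes with the Python runtime) land in ~/.local, which lives under $HOME and therefore persists across sessions in the same region:

$ pip3 install --user httpie    # any PyPI package; installed under ~/.local
$ ls ~/.local/bin               # still there the next time I open CloudShell in this region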

Network Access – Sessions can make outbound connections to the Internet, but do not allow any type of inbound connections. Sessions cannot currently connect to resources inside of private VPC subnets, but that’s also on the near-term roadmap.

Runtimes – In addition to the Python and Node runtimes, Bash, PowerShell, jq, git, the ECS CLI, the SAM CLI, npm, and pip are already installed and ready to use.

Pricing – You can use up to 10 concurrent shells in each region at no charge. You only pay for other AWS resources you use with CloudShell to create and run your applications.

Try it Out
AWS CloudShell is available now and you can start using it today. Launch one and give it a try, and let us know what you think!

Jeff;

PennyLane on Braket + Progress Toward Fault-Tolerant Quantum Computing + Tensor Network Simulator

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/pennylane-on-braket-progress-toward-fault-tolerant-quantum-computing-tensor-network-simulator/

I first wrote about Amazon Braket last year and invited you to Get Started with Quantum Computing! Since that launch we have continued to push forward, and have added several important & powerful new features to Amazon Braket:

August 2020 – General Availability of Amazon Braket with access to quantum computing hardware from D-Wave, IonQ, and Rigetti.

September 2020 – Access to D-Wave’s Advantage Quantum Processing Unit (QPU), which includes more than 5,000 qubits and 15-way connectivity.

November 2020 – Support for resource tagging, AWS PrivateLink, and manual qubit allocation. The first two features make it easy for you to connect your existing AWS applications to the new ones that you build with Amazon Braket, and should help you to envision what a production-class cloud-based quantum computing application will look like in the future. The last feature is particularly interesting to researchers; from what I understand, certain qubits within a given piece of quantum computing hardware can have individual physical and connectivity properties that might make them perform somewhat better when used as part of a quantum circuit. You can read about Allocating Qubits on QPU Devices to learn more (this is somewhat similar to the way that a compiler allocates CPU registers to frequently used variables).

In my initial blog post I also announced the formation of the AWS Center for Quantum Computing adjacent to Caltech.

As I write this, we are in the Noisy Intermediate Scale Quantum (NISQ) era. This description captures the state of the art in quantum computers: each gate in a quantum computing circuit introduces a certain amount of accuracy-destroying noise, and the cumulative effect of this noise imposes some practical limits on the scale of the problems that can be tackled.

Update Time
We are working to address this challenge, as are many others in the quantum computing field. Today I would like to give you an update on what we are doing at the practical and the theoretical level.

Similar to the way that CPUs and GPUs work hand-in-hand to address large scale classical computing problems, the emerging field of hybrid quantum algorithms joins CPUs and QPUs to speed up specific calculations within a classical algorithm. This allows for shorter quantum executions that are less susceptible to the cumulative effects of noise and that run well on today’s devices.

Variational quantum algorithms are an important type of hybrid quantum algorithm. The classical code (in the CPU) iteratively adjusts the parameters of a parameterized quantum circuit, in a manner reminiscent of the way that a neural network is built by repeatedly processing batches of training data and adjusting the parameters based on the results of an objective function. The output of the objective function provides the classical code with guidance that helps to steer the process of tuning the parameters in the desired direction. Mathematically (I’m way past the edge of my comfort zone here), this is called differentiable quantum computing.

So, with this rather lengthy introduction, what are we doing?

First, we are making the PennyLane library available so that you can build hybrid quantum-classical algorithms and run them on Amazon Braket. This library lets you “follow the gradient” and write code to address problems in computational chemistry (by way of the included Q-Chem library), machine learning, and optimization. My AWS colleagues have been working with the PennyLane team to create an integrated experience when PennyLane is used together with Amazon Braket.

PennyLane is pre-installed in Braket notebooks and you can also install the Braket-PennyLane plugin in your IDE. Once you do this, you can train quantum circuits as you would train neural networks, while also making use of familiar machine learning libraries such as PyTorch and TensorFlow. When you use PennyLane on the managed simulators that are included in Amazon Braket, you can train your circuits up to 10 times faster by using parallel circuit execution.
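
If you are working in your own environment rather than a Braket notebook, setup is just a couple of installs; the package names below are the ones I believe are published on PyPI, so double-check them before you rely on this sketch:

$ pip install pennylane amazon-braket-sdk amazon-braket-pennylane-plugin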

Second, the AWS Center for Quantum Computing is working to address the noise issue in two different ways: we are investigating ways to make the gates themselves more accurate, while also working on the development of more efficient ways to encode information redundantly across multiple qubits. Our new paper, Building a Fault-Tolerant Quantum Computer Using Concatenated Cat Codes speaks to both of these efforts. While not light reading, the 100+ page paper proposes the construction of a 2-D grid of micron-scale electro-acoustic qubits that are coupled via superconducting circuits:

Interestingly, this proposed qubit design was used to model a Toffoli gate, and then tested via simulations that ran for 170 hours on c5.18xlarge instances. In a very real sense, the classical computers are being used to design and then simulate their future quantum companions.

The proposed hybrid electro-acoustic qubits are far smaller than what is available today, and also offer a > 10x reduction in overhead (measured in the number of physical qubits required per error-corrected qubit and the associated control lines). In addition to working on the experimental development of this architecture based around hybrid electro-acoustic qubits, the AWS CQC team will also continue to explore other promising alternatives for fault-tolerant quantum computing to bring new, more powerful computing resources to the world.

And third, we are expanding the choice of managed simulators that are available on Amazon Braket. In addition to the state vector simulator (which can simulate up to 34 qubits), you can use the new tensor network simulator that can simulate up to 50 qubits for certain circuits. This simulator builds a graph representation of the quantum circuit and uses the graph to find an optimized way to process it.

Help Wanted
If you are ready to help us to push the state of the art in quantum computing, take a look at our open positions. We are looking for Quantum Research Scientists, Software Developers, Hardware Developers, and Solutions Architects.

Time to Learn
It is still Day One (as we often say at Amazon) when it comes to quantum computing, and now is the time to learn more and to get some hands-on experience. Be sure to check out the Braket Tutorials repository and let me know what you think.

Jeff;

PS – If you are ready to start exploring ways that you can put quantum computing to work in your organization, be sure to take a look at the Amazon Quantum Solutions Lab.

In the Works – AWS Region in Melbourne, Australia

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/in-the-works-aws-region-in-melbourne-australia/

We launched new AWS Regions in Italy and South Africa in 2020, and are working on regions in Indonesia, Japan, Spain, India, and Switzerland.

Melbourne, Australia in 2022
Today I am happy to announce that the Asia Pacific (Melbourne) region is in the works, and will open in the second half of 2022 with three Availability Zones. In addition to the Asia Pacific (Sydney) Region, there are already seven Amazon CloudFront Edge locations in Australia, backed by a Regional Edge cache in Sydney.

This will be our second region in Australia, and our ninth in Asia Pacific, joining the existing region in Australia along with those in China, India, Japan, Korea, and Singapore. There are 77 Availability Zones within 24 AWS Regions in operation today, with 18 more Availability Zones and six more Regions (including this one) underway.

As part of our commitment to the Climate Pledge, Amazon is on a path to powering our operations with 100% renewable energy by 2025 as part of our goal to reach net zero carbon by 2040. To this end, we have invested in two renewable energy projects in Australia with a combined 165 MW capacity and the ability to generate 392,000 MWh annually.

The new region will give you (and hundreds of thousands of other active AWS customers in Australia) additional architectural options including the ability to store backup data in geographically separated locations within Australia.

AWS in Australia
I have made several trips to Australia on behalf of AWS over the last 4 or 5 years and I always enjoy meeting our customers while I am there.

Our Australian customers use AWS to accelerate innovation, increase agility, and to drive cost savings. Here are a few examples:

Commonwealth Bank of Australia (CBA) – As Australia’s leading provider of personal, business, and institutional banking services, CBA counts on AWS to provide infrastructure that is safe, resilient, and secure. They are long-time advocates of cloud computing and have been using AWS since 2012.

Swinburne University – The university focuses on innovation, industry engagement, and social inclusion. They started using AWS in 2016 and have collaborated on innovations that support communities in Victoria. The Swinburne Data for Social Good Cloud Innovation Centre uses cloud technologies and intelligent data analytics to solve real-world problems.

XY Sense – Based in Melbourne, this startup is using smart sensors and ML-powered analytics to create technology-enabled workplaces. Their sensor platform takes advantage of multiple AWS services including IoT and serverless, and processes over 7 billion anonymous data points each month.

AWS Partner Network (APN) Partners in Australia are also doing some amazing work with AWS. Again, a few examples:

Versent – Also based in Melbourne, this partner comprises a group of specialist consultants and a product company by the name of Stax. Versent recently helped Land Services South Australia to modernize their full tech stack as part of a shift to AWS (read the case study to learn more).

Deloitte Australia – As an AWS Strategic Global Premier Partner since 2015, Deloitte Australia works with business and public sector agencies, with a focus on delivery of advanced products and services. As part of their work, over 4,000 employees across Deloitte have participated in the Deloitte Cloud Guild and have strengthened their cloud computing skills as a result.

Investing in Developers
Several AWS programs are designed to help to create and upskill the next generation of developers and students so that they are ready to become part of the next generation of IT leadership. AWS re/Start prepares unemployed, underemployed, and transitioning individuals for a career in cloud computing. AWS Academy provides higher education institutions with a free, ready-to-teach cloud computing curriculum. AWS Educate gives students access to AWS services and content that are designed to help them build knowledge and skills in cloud computing.

Stay Tuned
As I noted earlier, the Asia Pacific (Melbourne) Region is scheduled to open in the second half of 2022. As always, we’ll announce the opening in a post on this blog, so stay tuned!

Jeff;

Amazon S3 Update – Strong Read-After-Write Consistency

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/amazon-s3-update-strong-read-after-write-consistency/

When we launched S3 back in 2006, I discussed its virtually unlimited capacity (“…easily store any number of blocks…”), the fact that it was designed to provide 99.99% availability, and that it offered durable storage, with data transparently stored in multiple locations. Since that launch, our customers have used S3 in an amazingly diverse set of ways: backup and restore, data archiving, enterprise applications, web sites, big data, and (at last count) over 10,000 data lakes.

One of the more interesting (and sometimes a bit confusing) aspects of S3 and other large-scale distributed systems is commonly known as eventual consistency. In a nutshell, after a call to an S3 API function such as PUT that stores or modifies data, there’s a small time window where the data has been accepted and durably stored, but not yet visible to all GET or LIST requests. Here’s how I see it:

This aspect of S3 can become very challenging for big data workloads (many of which use Amazon EMR) and for data lakes, both of which require access to the most recent data immediately after a write. To help customers run big data workloads in the cloud, Amazon EMR built EMRFS Consistent View and open source Hadoop developers built S3Guard, which provided a layer of strong consistency for these applications.

S3 is Now Strongly Consistent
After that overly-long introduction, I am ready to share some good news!

Effective immediately, all S3 GET, PUT, and LIST operations, as well as operations that change object tags, ACLs, or metadata, are now strongly consistent. What you write is what you will read, and the results of a LIST will be an accurate reflection of what’s in the bucket. This applies to all existing and new S3 objects, works in all regions, and is available to you at no extra charge! There’s no impact on performance, you can update an object hundreds of times per second if you’d like, and there are no global dependencies.
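
In practical terms, a write followed immediately by a read or a list now reflects the new data. Here’s a trivial sketch (the bucket name is a placeholder):

$ echo "hello" > greeting.txt
$ aws s3 cp greeting.txt s3://my-example-bucket/greeting.txt
$ aws s3api head-object --bucket my-example-bucket --key greeting.txt
$ aws s3 ls s3://my-example-bucket/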

This improvement is great for data lakes, but other types of applications will also benefit. Because S3 now has strong consistency, migration of on-premises workloads and storage to AWS should now be easier than ever before.

We’ve been working with the Amazon EMR team and developers in the open-source community to ensure that customers can take advantage of this update with their big data workloads. As a result, you no longer need to use EMRFS Consistent View or S3Guard, further reducing the cost to run big data workloads in AWS.

A Word From Dropbox
Long-time AWS customer Dropbox recently migrated a 34 PB analytics data lake from on-premises Hadoop clusters to S3. Watch this video to learn more about strong consistency and how it has allowed Dropbox to simplify their data lake:

Jeff;

 

 

In the Works – 3 More AWS Local Zones in 2020, and 12 More in 2021

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/in-the-works-more-aws-local-zones/

We launched the first AWS Local Zone in Los Angeles last December, and added a second one (also in Los Angeles) in August of 2020. In my original post, I quoted Andy Jassy’s statement that we would be giving consideration to adding Local Zones in more geographic areas.

Our customers are using the EC2 instances and other compute services in these zones to host artist workstations, local rendering, sports broadcasting, online gaming, financial transaction processing, machine learning inferencing, virtual reality, and augmented reality applications, among others. These applications benefit from the extremely low latency made possible by geographic proximity.

More Local Zones
I’m happy to be able to announce that we are opening three more Local Zones today and plan to open twelve more in 2021.

Local Zones in Boston, Houston, and Miami are now available in preview form and you can request access now. In 2021, we plan to open Local Zones in other key cities and metropolitan areas including New York City, Chicago, and Atlanta.

We are choosing the target cities with the goal of allowing you to provide access with single-digit millisecond latency to the vast majority of users in the Continental United States. You can deploy the parts of your application that are the most sensitive to latency in Local Zones, and deliver amazing performance to your users. In addition to the use cases that I mentioned above, I expect to see many more that have yet to be imagined or built.

Using Local Zones
I stepped through the process of using a Local Zone in my original post, and all that I said there still applies. Here’s what you need to do (a CLI sketch follows the list):

  1. Request access to the preview and await a reply.
  2. Create a new VPC subnet for the Local Zone.
  3. Launch EC2 instances, create EBS volumes, and deploy your application.
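
Here’s a rough CLI sketch of steps 2 and 3; the Boston zone and group names follow the pattern used by the existing Los Angeles Local Zones, and the VPC, AMI, and subnet IDs are placeholders:

$ aws ec2 modify-availability-zone-group \
    --group-name us-east-1-bos-1 --opt-in-status opted-in
$ aws ec2 create-subnet --vpc-id vpc-0123456789abcdef0 \
    --cidr-block 10.0.8.0/24 --availability-zone us-east-1-bos-1a
$ aws ec2 run-instances --image-id ami-0123456789abcdef0 \
    --instance-type t3.medium --subnet-id subnet-0123456789abcdef0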

Things to Know
Here are a couple of things that you should know about the new and upcoming Local Zones:

Instance Types – The Local Zones will have a wide selection of EC2 instance types including C5, R5, T3, and G4 instances.

Purchasing Models – You can use compute capacity in Local Zones on an On-Demand basis and you can also purchase a Savings Plan in order to receive discounts. Some of the Local Zones also support the use of Spot Instances.

AWS Services – Local Zones support Amazon Elastic Compute Cloud (EC2), Amazon Elastic Block Store (EBS), Amazon Elastic Kubernetes Service (EKS), and Amazon Virtual Private Cloud, with the door open for other services in the future. You can use services such as Auto Scaling, AWS CloudFormation, and Amazon CloudWatch in the parent region to launch, control, and monitor the AWS resources in a Local Zone.

Direct Connect – As I mentioned earlier, some of our customers are using AWS Direct Connect to establish private connections between Local Zones and their existing on-premises or colo IT infrastructure. We are working with our Direct Connect Partners to make Direct Connect available for the new zones and the specifics will vary on a zone-by-zone basis.

The AWS Local Zones Features page contains additional zone-by-zone information on all of the items listed above.

Learn More
Here are some resources to help you to learn more about Local Zones:

Blog Post – Low-Latency Computing with AWS Local Zones.

Sites – AWS Local Zones home page, AWS Local Zones FAQ.

Jeff;

re:Invent 2020 – Preannouncements for Tuesday, December 1

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/reinvent-2020-preannouncements-for-tuesday-december-1/

Andy Jassy just gave you a hint about some upcoming AWS launches, and I’ll have more to say about them when they are ready. To tide you over until then, here’s a summary of what he pre-announced:

Smaller AWS Outpost Form Factors – We are introducing two new sizes of AWS Outposts, suitable for locations such as branch offices, factories, retail stores, health clinics, hospitals, and cell sites that are space-constrained and need access to low-latency compute capacity. The 1U (rack unit) Outposts server will be equipped with AWS Graviton 2 processors; the 2U Outposts server will be equipped with Intel® processors. Both sizes will be able to run EC2, ECS, and EKS workloads locally, all provisioned and managed by AWS (including automated patching and updates).

Amazon ECS Anywhere – You will soon be able to run Amazon Elastic Container Service (ECS) in your own data center, giving you the power to select and standardize on a single container orchestrator that runs both on-premises and in the cloud. You will have access to the same ECS APIs, and you will be able to manage all of your ECS resources with the same cluster management, workload scheduling, and monitoring tools and utilities. Amazon ECS Anywhere will also make it easy for you to containerize your existing on-premises workloads, run them locally, and then connect them to the AWS Cloud.

Amazon EKS Anywhere – You will also soon be able to run Amazon Elastic Kubernetes Service (EKS) in your own data center, making it easy for you to set up, upgrade, and operate Kubernetes clusters. The default configuration for each new cluster will include logging, monitoring, networking, and storage, all optimized for the environment that will host the cluster. You will be able to spin up clusters on demand, and you will be able to backup, recover, patch, and upgrade production clusters with minimal disruption.

Again, I’ll have more to say about these when they are ready, so stay tuned, and enjoy the rest of AWS re:Invent!

Jeff;