All posts by Jeff Barr

Amazon Simple Queue Service (SQS) – 15 Years and Still Queueing!

Post Syndicated from Jeff Barr original

Time sure does fly! I wrote about the production launch of Amazon Simple Queue Service (SQS) back in 2006, with no inkling that I would still be blogging fifteen years later, or that this service would still be growing rapidly while remaining fundamental to the architecture of so many different types of web-scale applications.

The first beta of SQS was announced with little fanfare in late 2004. Even though we have added many features since that beta, the original description (“a reliable, highly scalable hosted queue for buffering messages between distributed application components”) still applies. Our customers think of SQS as an infinite buffer in the cloud and use SQS queues to implement the connections between the functional parts of their application architecture.
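This "infinite buffer" pattern is easy to sketch with an in-memory queue from Python's standard library standing in for SQS. This is a toy model only — real SQS is durable, distributed, and accessed over an API — and the producer/consumer names and sentinel convention here are illustrative, not part of any AWS SDK:

```python
import queue
import threading

# An in-memory queue stands in for an SQS queue; SQS itself is
# effectively unbounded and durable, which this toy model is not.
buffer = queue.Queue()

def producer(n):
    # The front-end component emits work items at its own pace.
    for i in range(n):
        buffer.put(f"order-{i}")

def consumer(results):
    # The back-end component drains the queue at its own pace.
    while True:
        msg = buffer.get()
        if msg is None:  # sentinel: no more work
            break
        results.append(msg)

results = []
t = threading.Thread(target=consumer, args=(results,))
t.start()
producer(5)
buffer.put(None)
t.join()
print(results)  # ['order-0', 'order-1', 'order-2', 'order-3', 'order-4']
```

The point of the pattern is that neither side needs to know how fast the other is running: the queue absorbs the difference.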

Over the years, we have worked hard to keep the SQS interface simple and easy to use, even though there’s a lot of complexity inside. In order to make SQS scalable, durable, and reliable, messages are stored in a fleet that consists of thousands of servers in each AWS Region. Within a region, we save three copies of each message, taking care to distribute the messages across storage nodes and Availability Zones. In addition to this built-in redundant storage, SQS is self-healing and resilient to host failures & network interruptions. Even though SQS is now 15 years old, we continue to improve our scaling and load management applications so that you can always count on SQS to handle your mission-critical workloads.

SQS runs at an incredible scale. For example:

Amazon Product Fulfillment – During Prime Day 2021, traffic to SQS set a new record, peaking at 47.7 million messages per second.

Rapid7 – Amazon customer Rapid7 makes extensive use of SQS. According to Jeff Myers (Platform Software Architect):

Amazon SQS provides us with a simple to use, highly performant, and highly scalable messaging service without any management headaches. It is a crucial component of our architecture that has allowed us to scale to handle 10s of billions of messages per day.

You can visit the Amazon SQS home page to read about other high-scale use cases from NASA, BMW Group, Capital One, and other AWS customers.

Serverless Office Hours
Be sure to join us today (June 13th) for Serverless Office Hours at Noon PT. Rumor has it that there will be cake!

Back in Time
Here’s a quick recap of some of the major SQS milestones:

2006 – Production launch. An unlimited number of queues per account and items per queue, with each item up to 8 KB in length. Pay-as-you-go pricing based on messages sent and data transferred.

2007 – Additional functions, including control of the visibility timeout and access to the approximate number of queued messages.

2009 – SQS in Europe, and additional control over queue access via AddPermission and RemovePermission. This launch also marked the debut of what we then called the Access Policy Language, which has evolved into today’s IAM Policies.

2010 – A new free tier (100K requests/month at the time, since expanded to 1 million requests/month), configurable maximum message length (up to 64 KB), and configurable message retention time.

2011 – Additional CloudWatch metrics for each SQS queue. Later that year we added batch operations (SendMessageBatch and DeleteMessageBatch), delay queues, and message timers.

2012 – Support for long polling, along with SDK support for request batching and client-side buffering.

2013 – Support for even larger payloads (256 KB) and a 50% price reduction.

2014 – Support for dead letter queues, to accept messages that have become stuck due to a timeout or to a processing error within your code.

2015 – Support for extended payloads (up to 2 GB) using the Extended Client Library.

2016 – Support for FIFO queues with exactly-once processing and deduplication, along with another price reduction.

2017 – Support for server-side encryption of messages and cost allocation tags.

2018 – Support for Amazon VPC Endpoints using AWS PrivateLink and the ability to invoke Lambda functions.

2019 – Support for Tag-on-Create and X-Ray tracing.

2020 – Support for 1-minute metrics for more fine-grained queue monitoring, a new console experience, and result pagination for the ListQueues and ListDeadLetterSourceQueues functions.

2021 – Tiered pricing so that you save money as your usage grows, and a High Throughput mode for FIFO queues.
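Two of those milestones — the visibility timeout (2007) and dead-letter queues (2014) — work together: a received message becomes invisible for a while, reappears if it isn't deleted in time, and moves to the DLQ after too many failed receives. Here is a tiny in-memory model of that behavior; it is not the SQS API, and the class, fields, and parameters are invented for illustration:

```python
import time

class ToyQueue:
    """A toy model of the SQS visibility timeout and DLQ redrive.

    Not the real SQS API: names and fields are invented, and real SQS
    is distributed and durable, and does not promise strict ordering."""

    def __init__(self, visibility_timeout=30, max_receive_count=3):
        self.visibility_timeout = visibility_timeout
        self.max_receive_count = max_receive_count
        self.messages = []     # each entry: [body, receive_count, invisible_until]
        self.dead_letter = []  # bodies that exceeded max_receive_count

    def send(self, body):
        self.messages.append([body, 0, 0.0])

    def receive(self, now=None):
        now = time.monotonic() if now is None else now
        for entry in list(self.messages):
            if entry[2] <= now:            # currently visible?
                entry[1] += 1              # count this receive
                if entry[1] > self.max_receive_count:
                    # Too many failed attempts: redrive to the DLQ.
                    self.messages.remove(entry)
                    self.dead_letter.append(entry[0])
                    continue
                # Hide the message until the visibility timeout expires.
                entry[2] = now + self.visibility_timeout
                return entry[0]
        return None

    def delete(self, body):
        # A consumer that finishes in time deletes the message for good.
        self.messages = [e for e in self.messages if e[0] != body]

q = ToyQueue(visibility_timeout=30, max_receive_count=2)
q.send("job-1")
q.receive(now=0)       # first attempt; hidden until t=30
q.receive(now=31)      # timeout expired, so the message reappears
msg = q.receive(now=62)
print(msg, q.dead_letter)  # None ['job-1'] -- third receive redrives to the DLQ
```

In real SQS the same flow is configured with the queue's visibility timeout and a redrive policy (maxReceiveCount) pointing at a dead-letter queue.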

Today, SQS remains simple, scalable, cost-effective, and highly reliable.

From the AWS Heroes
We asked some of the AWS Heroes to reflect on the success of SQS and to share some of their success stories. Here’s what we learned:

Eric Hammond (Serverless Hero) uses AWS Lambda Dead Letter Queues. His team puts alarms on the queues and sends internal email alerts when problems need to be investigated.

Tom McLaughlin (Serverless Hero) has been using SQS since 2015. He said “My favorite use case is anytime someone wants a queue and I don’t want to manage a queuing platform. Which is always.”

Ken Collins (Serverless Hero) was not entirely sure how long he had been using SQS, and offered a range of 2 to 8 years! He uses it to power the Lambdakiq gem, which implements ActiveJob on SQS & Lambda.

Kyuhyun Byun (Serverless Hero) has been using SQS to power a push system that must sustain massive amounts of traffic, and tells us “Thanks to SQS, I don’t consider building a queuing system anymore.”

Prashanth HN (Serverless Hero) has been using SQS since 2017, and considers himself “late to the party.” He used SQS as part of his first serverless application, and finds that it is ideal for connecting services with differing throughput.

Ben Kehoe (Serverless Hero) told us that “I first saw the power of SQS when a colleague showed me how to retain state in a fleet of EC2 Spot instances by storing the state in SQS when an instance was getting shut down, and having new instances check the queue for state on startup.”

Jeremy Daly (Serverless Hero) started using SQS in 2010 as a lightweight queue that fed a facial recognition process on user-supplied images. Today, he often uses it as a buffer to throttle requests to downstream services that are not yet serverless.

Casey Lee (Container Hero) also started to use SQS in 2010 as a replacement for ActiveMQ between loosely coupled Java services. Casey implements auto scaling based on queue depth, and has found it to be an accurate way to handle the load.

Vlad Ionescu (Container Hero) began his AWS journey with SQS back in 2014. Vlad found that the API was very easy to understand, and used SQS to power his thesis project.

Sheen Brisals (Serverless Hero) started to use SQS in 2018 while building a proof-of-concept that also used Lambda and S3. Sheen appreciates the ability to adjust the characteristics of each queue to create a good match for the message processing functions, and also makes use of both high and low priority queues.

Gojko Adzic (Serverless Hero) began to use SQS in 2013 as a task dispatcher for exporters in MindMup. This online mind-mapping application allows large groups of users to collaborate, and requires strict ordering of updates to each document. Gojko used FIFO queues so that messages for different documents could be processed in parallel while preserving sequential order within each document.

Sebastian Müller (Serverless Hero) started to use SQS in 2016 by building a notification center for a website builder. The center ensures that customers are kept aware of events (orders, support messages, and contact requests) on a timely basis.

Luca Bianchi (Serverless Hero) first used SQS in 2012. He decoupled a pair of microservices running on AWS Elastic Beanstalk, and also created a fan-out processing system for a gamification platform. Today, his favorite SQS use case stores inference jobs and makes them available to a worker process running on Amazon SageMaker.

Peter Hanssens (Serverless Hero) uses SQS to offload tasks that do not need to be processed immediately. Several years ago, while assisting some data scientists, he created an event-driven batch-processing system that used a Lambda function to check a queue every 5 minutes and fire up EC2 instances to build models, while keeping strict control over the maximum number of running instances.

Serkan Ozal (Serverless Hero) has been using SQS since 2013 or so. He focuses on asynchronous message processing and counts on the ability of SQS to handle a huge number of events. He also makes use of the message visibility feature so that he can re-process messages if necessary.

Matthieu Napoli (Serverless Hero) has been using SQS for about five years, starting out with EC2, and as a replacement for other queues. As he says, “Paired with Lambda, it gives massive parallelism out of the box without having to think too much about it. Plus the built-in failure handling makes it very robust.”

As you can see, there are a multitude of use cases for SQS.
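Several of these stories, Casey's queue-depth auto scaling in particular, reduce to a simple control rule: derive the desired worker count from the current backlog. A minimal sketch, assuming an illustrative target of messages per worker (the function name and numbers are mine, not from any AWS API):

```python
import math

def desired_workers(backlog, msgs_per_worker, min_workers=1, max_workers=50):
    """Target-tracking-style scaling: run enough workers that each has
    at most `msgs_per_worker` messages of backlog. All parameters are
    illustrative; a real system would tune them for its workload."""
    needed = math.ceil(backlog / msgs_per_worker)
    return max(min_workers, min(max_workers, needed))

# e.g. 1,200 queued messages, each worker drains ~100 per scaling interval
print(desired_workers(1200, 100))   # 12
print(desired_workers(30, 100))     # 1  (floor: keep a minimum warm)
print(desired_workers(99999, 100))  # 50 (ceiling: cap the fleet size)
```

In practice the backlog would come from the queue's approximate-number-of-messages metric, polled on a fixed interval.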

SQS Resources
If you are not already using SQS, then it is time to get in the queue and get started. Here are some resources to help you find your way:

Happy queueing!





Prime Day 2021 – Two Chart-Topping Days


In what has now become an annual tradition (check out my 2016, 2017, 2019, and 2020 posts for a look back), I am happy to share some of the metrics from this year’s Prime Day and to tell you how AWS helped to make it happen.

This year I bought all sorts of useful goodies including a Toshiba 43 Inch Smart TV that I plan to use as a MagicMirror, some watering cans, and a Dremel Rotary Tool Kit for my workshop.

Powered by AWS
As in years past, AWS played a critical role in making Prime Day a success. A multitude of two-pizza teams worked together to make sure that every part of our infrastructure was scaled, tested, and ready to serve our customers. Here are a few examples:

Amazon EC2 – Our internal measure of compute power is an NI, or a normalized instance. We use this unit to make meaningful comparisons across different types and sizes of EC2 instances. For Prime Day 2021, we increased our number of NIs by 12.5%. Interestingly enough, due to increased efficiency (more on that in a moment), we actually used about 6,000 fewer physical servers than we did for Cyber Monday 2020.

Graviton2 Instances – Graviton2-powered EC2 instances supported 12 core retail services. This was our first peak event that was supported at scale by the AWS Graviton2 instances, and is a strong indicator that the Arm architecture is well-suited to the data center.

An internal service called Datapath is a key part of the Amazon site. It is highly optimized for our peculiar needs, and supports lookups, queries, and joins across structured blobs of data. After an in-depth evaluation and consideration of all of the alternatives, the team decided to port Datapath to Graviton and to run it on a three-Region cluster composed of over 53,200 C6g instances.

At this scale, the up to 40% price-performance advantage of Graviton2 over comparable fifth-generation x86-based instances, along with the 20% lower cost, turns into a big win for us and for our customers. As a bonus, the power efficiency of the Graviton2 helps us to achieve our goals for addressing climate change. If you are thinking about moving your workloads to Graviton2, be sure to study our very detailed Getting Started with AWS Graviton2 document, and also consider entering the Graviton Challenge! You can also use Graviton2 database instances on Amazon RDS and Amazon Aurora; read about Key Considerations in Moving to Graviton2 for Amazon RDS and Amazon Aurora Databases to learn more.

Amazon CloudFront – Fast, efficient content delivery is essential when millions of customers are shopping and making purchases. Amazon CloudFront handled a peak load of over 290 million HTTP requests per minute, for a total of over 600 billion HTTP requests during Prime Day.

Amazon Simple Queue Service – The fulfillment process for every order depends on Amazon Simple Queue Service (SQS). This year, traffic set a new record, processing 47.7 million messages per second at the peak.

Amazon Elastic Block Store – In preparation for Prime Day, the team added 159 petabytes of EBS storage. The resulting fleet handled 11.1 trillion requests per day and transferred 614 petabytes per day.

Amazon Aurora – Amazon Fulfillment Technologies (AFT) powers physical fulfillment for purchases made on Amazon. On Prime Day, 3,715 instances of AFT’s PostgreSQL-compatible edition of Amazon Aurora processed 233 billion transactions, stored 1,595 terabytes of data, and transferred 615 terabytes of data.

Amazon DynamoDB – DynamoDB powers multiple high-traffic Amazon properties and systems including Alexa, the Amazon.com sites, and all Amazon fulfillment centers. Over the course of the 66-hour Prime Day, these sources made trillions of API calls while maintaining high availability with single-digit millisecond performance, and peaking at 89.2 million requests per second.
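Those DynamoDB figures are self-consistent: even a modest fraction of the 89.2 million request/second peak, sustained over the 66-hour event, lands in the trillions. A quick back-of-the-envelope check (the sustained-rate fraction is my assumption, not a figure from the post):

```python
peak_rps = 89.2e6           # peak requests per second, from the post
event_seconds = 66 * 3600   # the 66-hour Prime Day window

# Assume traffic averaged a quarter of the peak rate -- an illustrative
# guess, since the post only says "trillions of API calls".
avg_fraction = 0.25
total_calls = peak_rps * avg_fraction * event_seconds
print(f"{total_calls / 1e12:.1f} trillion API calls")  # 5.3 trillion API calls
```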

Prepare to Scale
As I have detailed in my previous posts, rigorous preparation is key to the success of Prime Day and our other large-scale events. If you are preparing for a similar event of your own, I can heartily recommend AWS Infrastructure Event Management. As part of an IEM engagement, AWS experts will provide you with architectural and operational guidance that will help you to execute your event with confidence.


Planetary-Scale Computing – 9.95 PFLOPS & Position 41 on the TOP500 List


Weather forecasting, genome sequencing, geoanalytics, computational fluid dynamics (CFD), and other types of high-performance computing (HPC) workloads can take advantage of massive amounts of compute power. These workloads are often spiky and massively parallel, and are used in situations where time to results is critical.

Old Way
Governments, well-funded research organizations, and Fortune 500 companies invest tens of millions of dollars in supercomputers in an attempt to gain a competitive edge. Building a state-of-the-art supercomputer requires specialized expertise, years of planning, and a long-term commitment to the architecture and the implementation. Once built, the supercomputer must be kept busy in order to justify the investment, resulting in lengthy queues while jobs wait their turn. Adding capacity and taking advantage of new technology is costly and can also be disruptive.

New Way
It is now possible to build a virtual supercomputer in the cloud! Instead of committing tens of millions of dollars over the course of a decade or more, you simply acquire the resources you need, solve your problem, and release the resources. You can get as much power as you need, when you need it, and only when you need it. Instead of force-fitting your problem to the available resources, you figure out how many resources you need, get them, and solve the problem in the most natural and expeditious way possible. You do not need to make a decade-long commitment to a single processor architecture, and you can easily adopt new technology as it becomes available. You can perform experiments at any scale without long term commitment, and you can gain experience with emerging technologies such as GPUs and specialized hardware for machine learning training and inferencing.

Top500 Run

Descartes Labs optical and radar satellite imagery analysis of historical deforestation and estimated forest carbon loss for a region in Kalimantan, Borneo.

AWS customer Descartes Labs uses HPC to understand the world and to handle the flood of data that comes from sensors on the ground, in the water, and in space. The company has been cloud-based from the start, and focuses on geospatial applications that often involve petabytes of data.

CTO & Co-Founder Mike Warren told me that their intent is to never be limited by compute power. In the early days of his career, Mike worked on simulations of the universe and built multiple clusters and supercomputers including Loki, Avalon, and Space Simulator. Mike was one of the first to build clusters from commodity hardware, and has learned a lot along the way.

After retiring from Los Alamos National Lab, Mike co-founded Descartes Labs. In 2019, Descartes Labs used AWS to power a TOP500 run that delivered 1.93 PFLOPS, landing at position 136 on the TOP500 list for June 2019. That run made use of 41,472 cores on a cluster of C5 instances. Notably, Mike told me that they launched this run without any help from or coordination with the EC2 team (because Descartes Labs routinely runs production jobs of this magnitude for their customers, their account already had sufficiently high service quotas). To learn more about this run, read Thunder from the Cloud: 40,000 Cores Running in Concert on AWS. This is my favorite part of that story:

We were granted access to a group of nodes in the AWS US-East 1 region for approximately $5,000 charged to the company credit card. The potential for democratization of HPC was palpable since the cost to run custom hardware at that speed is probably closer to $20 to $30 million. Not to mention a 6–12 month wait time.

After the success of this run, Mike and his team decided to work on an even more substantial one for 2021, with a target of 7.5 PFLOPS. Working with the EC2 team, they obtained an EC2 On-Demand Capacity Reservation for a 48 hour period in early June. After some “small” runs that used just 1024 instances at a time, they were ready to take their shot. They launched 4,096 EC2 instances (C5, C5d, R5, R5d, M5, and M5d) with a total of 172,692 cores. Here are the results:

  • Rmax – 9.95 PFLOPS. This is the actual performance that was achieved: Almost 10 quadrillion floating point operations per second.
  • Rpeak – 15.11 PFLOPS. This is the theoretical peak performance.
  • HPL Efficiency – 65.87%. The ratio of Rmax to Rpeak, or a measure of how well the hardware is utilized.
  • N – 7,864,320. This is the size of the matrix that is solved to perform the TOP500 benchmark. N² is about 61.85 trillion.
  • P x Q – 64 x 128. This is a parameter for the run, and represents the processing grid.
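These figures can be cross-checked directly from the published numbers (a quick sanity check; the small difference from the listed 65.87% comes from Rmax and Rpeak being rounded to two decimal places):

```python
rmax = 9.95    # PFLOPS, achieved (Rmax)
rpeak = 15.11  # PFLOPS, theoretical peak (Rpeak)
n = 7_864_320  # HPL matrix dimension (N)

efficiency = rmax / rpeak * 100
print(f"HPL efficiency = {efficiency:.2f}%")  # 65.85% (65.87% before rounding)
print(f"N^2 = {n**2 / 1e12:.2f} trillion")    # 61.85 trillion matrix entries
```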

This run sits at position 41 on the June 2021 TOP500 list, and represents a 417% performance increase in just two years. When compared to the other CPU-based runs, this one sits at position 20. The GPU-based runs are certainly impressive, but ranking them separately makes for the best apples-to-apples comparison.

Mike and his team were very pleased with the results, and believe that it demonstrates the power and value of the cloud for HPC jobs of any scale. Mike noted that the Thinking Machines CM-5 that took the top spot in 1993 (and made a guest appearance in Jurassic Park) is actually slower than a single AWS core!

The run wrapped up at 11:56 AM PDT on June 4th. By 12:20 PM, just 24 minutes later, the cluster had been taken down and all of the instances had been stopped. This is the power of on-demand supercomputing!

Imagine a Beowulf Cluster
Back in the early days of Slashdot, every post that referenced some then-impressive piece of hardware would invariably include a comment to the effect of “Imagine a Beowulf cluster.” Today, you can easily imagine (and then launch) clusters of just about any size and use them to address your large-scale computational needs.

If you have planetary-scale problems that can benefit from the speed and flexibility of the AWS Cloud, it is time to put your imagination to work! Here are some resources to get you started:

I would like to offer my congratulations to Mike and to his team at Descartes Labs for this amazing achievement! Mike has worked for decades to prove to the world that mass-produced, commodity hardware and software can be used to build a supercomputer, and the results more than speak for themselves.

To learn more about this run and about Descartes Labs, read Descartes Labs Achieves #41 in TOP500 with Cloud-based Supercomputing Demonstration Powered by AWS, Signaling New Era for Geospatial Data Analysis at Scale.



Heads Up – AWS News Blog RSS Feed Change


TL;DR – If you are using the ancient FeedBurner feed for this blog, please switch to the feed generated directly by this blog ASAP.

Back in the early days of AWS, I paid a visit to the Chicago headquarters of a cool startup called FeedBurner. My friend Matt Shobe demo’ed their new product to me and showed me how “burning” a raw RSS feed could add value to it in various ways. Upon my return to Seattle I promptly burned the RSS feed of the original (TypePad-hosted) version of this blog, and shared that feed for many years.

FeedBurner has served us well over the years, but its time has passed. The company was acquired many years ago and the new owners are now putting the product into maintenance mode.

While the existing feed will continue to work for the foreseeable future, I would like to encourage you to use the one directly generated by this blog instead. Simply update your feed reader to point to it.


In the Works – AWS Region in Tel Aviv, Israel


We launched three AWS Regions (Italy, South Africa, and Japan) in the last 12 months, and are working on regions in Australia, Indonesia, Spain, India, Switzerland, and the United Arab Emirates.

Tel Aviv, Israel in the Works
Today I am happy to announce that the AWS Israel (Tel Aviv) Region is in the works and will open in the first half of 2023. This region will have three Availability Zones and will give AWS customers in Israel the ability to run workloads and store data that must remain in-country.

There are 81 Availability Zones within 25 AWS Regions in operation today, with 21 more Availability Zones and seven announced regions (including this one) underway.

As is always the case with an AWS Region, each of the Availability Zones will be a fully isolated part of the AWS infrastructure. The AZs in this region will be connected together via high-bandwidth, low-latency network connections over dedicated, fully redundant metro fiber. This connectivity supports applications that need synchronous replication between AZs for availability or redundancy. You can take a peek at the AWS Global Infrastructure page to learn more about how we design and build regions and AZs.

AWS in Israel
I first visited Israel in 2013 and have been back several (but definitely not enough) times since then. I have spoken at several AWS Summits and visited many of our early customers in the area. Today, AWS has the following resources on the ground in Israel:

Israel is also home to Annapurna Labs, an Amazon subsidiary that is responsible for developing much of the innovative hardware that powers AWS.

In addition, the government of Israel announced that it has selected AWS as the primary cloud provider for the Nimbus Project. As part of this project, government ministries and subsidiaries in Israel will use cloud computing to power a digital transformation and to provide new digital services for the citizens of Israel.

Stay Tuned
We’ll announce the opening of the AWS Israel (Tel Aviv) Region in a forthcoming blog post, so be sure to stay tuned!



Now Open – Third Availability Zone in the AWS China (Beijing) Region


I made my first trip to China in late 2008. I was able to speak to developers and entrepreneurs and to get a sense of the then-nascent market for cloud computing. With over 900 million Internet users as of 2020 (according to a recent report from China Internet Network Information Center), China now has the largest user base in the world.

A limited preview of the China (Beijing) Region was launched in 2013 and brought to general availability in 2016. A year later the AWS China (Ningxia) Region launched. In order to comply with China’s legal and regulatory requirements, we collaborated with local Chinese partners who hold the proper telecom licenses to provide cloud services in Mainland China. Today, developers can deploy cloud-based applications inside of Mainland China using the same APIs, protocols, services, and operational practices used by our customers in other parts of the world. This commonality has been particularly attractive to multinational companies that can take advantage of their existing AWS experience when they expand their cloud infrastructure into Mainland China.

Third Availability Zone in Beijing
Today I am happy to announce that we are adding a third Availability Zone (AZ) to the China (Beijing) Region operated by Sinnet in order to support the demands of our growing customer base in China. As is the case with all AWS Availability Zones, this one encompasses one or more discrete data centers in separate facilities, each with redundant power, networking, and connectivity. With this launch, both AWS Regions in China offer three AZs and allow customers to build applications that are scalable, fault-tolerant, and highly available.

AWS Customers in the Beijing Region
Many enterprise customers in China are using the China (Beijing) Region to support their digital transformations. For example:

Yantai Shinho was founded in 1992 and now manufactures 13 popular condiments, with a presence in over 100 countries and products that tens of millions of families enjoy. They are already using the region to support their front-end and big data efforts, and plan to make use of the additional architectural options made possible by the new AZ.

Kingdee International Software Group was founded in 1993 and now provides corporate management and cloud services for more than 6.8 million enterprises and government organizations. They have over 8,000 employees and are committed to changing the way that hundreds of millions of people work.

As I noted earlier, our multinational customers are using the AWS Regions in China to expand their global presence. Here are a few examples:

Australian independent software vendor Canva offers its design-on-demand application to 150 million active users in 190 countries. They launched their Chinese products in August 2018, and have since built it into a first-class design platform that includes tens of millions of high-resolution pictures, Chinese fonts, original templates, and more. Chinese users have already created over 50 million designs on the platform.

Swire is a 200-year-old business group that spans the aviation, beverage, food, industrial, marine services, and property industries. Their Swire Coca-Cola division has the exclusive right to manufacture, market, and distribute Coca-Cola products in eleven Chinese provinces, the Shanghai Municipality, Hong Kong, Taiwan, and part of the Western United States — a total customer base of 728 million people. Swire Coca-Cola’s systems primarily operate in the China (Beijing) Region and will soon make use of the third AZ.

Finally, startups are using the region to power their fast-growing businesses:

CraiditX applies machine learning technology originally developed for search engines to the financial services industry. Established in 2015, they use behavioral language processing, natural language processing, neural networks, and integrated modeling to build risk management systems.

Founded in 2016, Momenta is a Chinese startup that is building a “brain” for autonomous vehicles. Powered by deep learning and data-driven path planning, they are working on autonomous driving for passenger vehicles and full autonomy for mobile service vehicles, all deployed in the China (Beijing) Region.

81 and 25
This launch raises the global AWS footprint to a total of 81 Availability Zones across 25 geographic regions, with plans to launch 18 additional Availability Zones and six more regions in Australia, India, Indonesia, Spain, Switzerland, and United Arab Emirates (UAE).

–Jeff, with lots of help from Lillian Shao

PS – The operator and service provider for the AWS China (Beijing) Region is Beijing Sinnet Technology Co., Ltd. The operator and service provider for the AWS China (Ningxia) Region is Ningxia Western Cloud Data Technology Co., Ltd.

In the Works – AWS Region in the United Arab Emirates (UAE)


We are currently building AWS regions in Australia, Indonesia, Spain, India, and Switzerland.

UAE in the Works
I am happy to announce that the AWS Middle East (UAE) Region is in the works and will open in the first half of 2022. The new region is an extension of our existing investment, which already includes two AWS Direct Connect locations and two Amazon CloudFront edge locations, all of which have been in place since 2018. The new region will give AWS customers in the UAE the ability to run workloads and to store data that must remain in-country, in addition to the ability to serve local customers with even lower latency.

The new region will have three Availability Zones, and will be the second AWS Region in the Middle East, joining the existing AWS Region in Bahrain. There are 80 Availability Zones within 25 AWS Regions in operation today, with 15 more Availability Zones and five announced regions underway in the locations that I listed earlier.

As is always the case with an AWS Region, each of the Availability Zones will be a fully isolated part of the AWS infrastructure. The AZs in this region will be connected together via high-bandwidth, low-latency network connections to support applications that need synchronous replication between AZs for availability or redundancy.

AWS in the UAE
In addition to the upcoming AWS Region and the Direct Connect and CloudFront edge locations, we continue to build our team of account managers, partner managers, data center technicians, systems engineers, solutions architects, professional service providers, and more (check out our current positions).

We also plan to continue our ongoing investments in education initiatives, training, and start-up enablement to support the UAE’s plans for economic development and digital transformation.

Our customers in the UAE are already using AWS to drive innovation! For example:

Mohammed Bin Rashid Space Centre (MBRSC) – Founded in 2006, MBRSC is home to the UAE’s National Space Program. The Hope Probe was launched last year and reached Mars in February of this year. Data from the probe’s instruments is processed and analyzed on AWS, and made available to the global scientific community in less than 20 minutes.

Anghami is the leading music platform in the Middle East and North Africa, giving over 70 million users access to 57 million songs. They have been hosting their infrastructure on AWS since their days as a tiny startup, and have benefited from the ability to scale up by as much as 300% when new music is released.

Sarwa is an investment bank and personal finance platform that was born on the AWS Cloud in 2017. They grew by a factor of four in 2020 while processing hundreds of thousands of transactions. Recent AWS-powered innovations from Sarwa include the Sarwa App (design to market in three months) and the upcoming Sarwa Trade platform.

Stay Tuned
We’ll be announcing the opening of the Middle East (UAE) Region in a forthcoming blog post, so be sure to stay tuned!


AWS Asia Pacific (Osaka) Region Now Open to All, with Three AZs and More Services


AWS has had a presence in Japan for a long time! We opened the Asia Pacific (Tokyo) Region in March 2011, added a third Availability Zone (AZ) in 2012, and a fourth in 2018. Since that launch, customers in Japan and around the world have used the region to host an incredibly wide variety of applications!

We opened the Osaka Local Region in 2018 to give our customers in Japan a disaster recovery option for their workloads. Located 400 km from Tokyo, the Osaka Local Region used an isolated, fault-tolerant design contained within a single data center.

From Local to Standard
I am happy to announce that the Osaka Local Region has been expanded and is now a standard AWS Region, complete with three Availability Zones. As is always the case with AWS, the AZs are designed to provide physical redundancy, and are able to withstand power outages, internet downtime, floods, and other natural disasters.

The following services are available, with more in the works: Amazon Elastic Kubernetes Service (EKS), Amazon API Gateway, Auto Scaling, Application Auto Scaling, Amazon Aurora, AWS Config, AWS Personal Health Dashboard, AWS IQ, AWS Organizations, AWS Secrets Manager, AWS Shield Standard (regional), AWS Snowball Edge, AWS Step Functions, AWS Systems Manager, AWS Trusted Advisor, AWS Certificate Manager, CloudEndure Migration, CloudEndure Disaster Recovery, AWS CloudFormation, Amazon CloudFront, AWS CloudTrail, Amazon CloudWatch, CloudWatch Events, Amazon CloudWatch Logs, AWS CodeDeploy, AWS Database Migration Service, AWS Direct Connect, Amazon DynamoDB, Elastic Container Registry, Amazon Elastic Container Service (ECS), AWS Elastic Beanstalk, Amazon Elastic Block Store (EBS), Amazon Elastic Compute Cloud (EC2), EC2 Image Builder, Elastic Load Balancing, Amazon EMR, Amazon ElastiCache, Amazon EventBridge, AWS Fargate, Amazon Glacier, AWS Glue, AWS Identity and Access Management (IAM), AWS Snowball, AWS Key Management Service (KMS), Amazon Kinesis Data Firehose, Amazon Kinesis Data Streams, AWS Lambda, AWS Marketplace, AWS Mobile SDK, Network Load Balancer, Amazon Redshift, Amazon Relational Database Service (RDS), Amazon Route 53, Amazon Simple Notification Service (SNS), Amazon Simple Queue Service (SQS), Amazon Simple Storage Service (S3), Amazon Simple Workflow Service (SWF), AWS VPN, VM Import/Export, AWS X-Ray, AWS Artifact, AWS PrivateLink, and Amazon Virtual Private Cloud (VPC).

The Asia Pacific (Osaka) Region supports the C5, C5d, D2, I3, I3en, M5, M5d, R5d, and T3 instance types, in On-Demand, Spot, and Reserved Instance form. X1 and X1e instances are available in a single AZ.

In addition to the AWS regions in Tokyo and Osaka, customers in Japan also benefit from:

  • 16 CloudFront edge locations in Tokyo.
  • One CloudFront edge location in Osaka.
  • One CloudFront Regional Edge Cache in Tokyo.
  • Two AWS Direct Connect locations in Tokyo.
  • One Direct Connect location in Osaka.

Here are typical latency values from the Asia Pacific (Osaka) Region to other cities in the area:

  • Nagoya – 2-5 ms
  • Hiroshima – 2-5 ms
  • Tokyo – 5-8 ms
  • Fukuoka – 11-13 ms
  • Sendai – 12-15 ms
  • Sapporo – 14-17 ms
  • Seoul – 27 ms
  • Taipei – 29 ms
  • Hong Kong – 38 ms
  • Manila – 49 ms

AWS Customers in Japan
As I mentioned earlier, our customers are using the AWS regions in Tokyo and Osaka to host an incredibly wide variety of applications. Here’s a sampling:

Mitsubishi UFJ Financial Group (MUFG) – This financial services company adopted a cloud-first strategy and did their first AWS deployment in 2017. They have built a data platform for their banking and group data that helps them to streamline administrative processes, and also migrated a market risk management system. MUFG has been using the Osaka Local Region and is planning to use the Asia Pacific (Osaka) Region to run more workloads and to support their ongoing digital transformation.

KDDI Corporation (KDDI) – This diversified (telecommunication, financial services, Internet, electricity distribution, consumer appliance, and more) company started using AWS in 2016 after AWS met KDDI’s stringent internal security standards. They currently build and run more than 60 services on AWS, including the backend of the au Denki application, used by consumers to monitor electricity usage and rates. They plan to use the Asia Pacific (Osaka) Region to initiate multi-region service to their customers in Japan.

OGIS-RI – Founded in 1983, this global IT consulting firm is a part of the Daigas Group of companies. OGIS-RI provides information strategy, systems integration, systems development, network construction, support, and security. They use AWS to provide their enterprise customers with ekul, a data measurement service that measures and visualizes gas and electricity usage in real time and sends it to corporate customers across Japan.

Sony Bank – Founded in 2001 as an asset management bank for individuals, Sony Bank provides services that include foreign currency deposits, home loans, investment trusts, and debit cards. Their gradual migration of internal banking systems to AWS began in 2013 and was 80% complete at the end of 2019. This migration reduced their infrastructure costs by 60% and more than halved the time it once took to procure and build out new infrastructure.

AWS Resources in Japan
As a quick reminder, enterprises, government and research organizations, small and medium businesses, educators, and startups in Japan have access to a wide variety of AWS and community resources.

Available Now
The new region is open to all AWS customers and you can start to use it today!



Amazon Location – Add Maps and Location Awareness to Your Applications

Post Syndicated from Jeff Barr original

We want to make it easier and more cost-effective for you to add maps, location awareness, and other location-based features to your web and mobile applications. Until now, doing this has been somewhat complex and expensive, and also tied you to the business and programming models of a single provider.

Introducing Amazon Location Service
Today we are making Amazon Location available in preview form and you can start using it today. Priced at a fraction of common alternatives, Amazon Location Service gives you access to maps and location-based services from multiple providers on an economical, pay-as-you-go basis.

You can use Amazon Location Service to build applications that know where they are and respond accordingly. You can display maps, validate addresses, perform geocoding (turn an address into a location), track the movement of packages and devices, and much more. You can easily set up geofences and receive notifications when tracked items enter or leave a geofenced area. You can even overlay your own data on the map while retaining full control.

You can access Amazon Location Service from the AWS Management Console, AWS Command Line Interface (CLI), or via a set of APIs. You can also use existing map libraries such as Mapbox GL and Tangram.

All About Amazon Location
Let’s take a look at the types of resources that Amazon Location Service makes available to you, and then talk about how you can use them in your applications.

Maps – Amazon Location Service lets you create maps that make use of data from our partners. You can choose between maps and map styles provided by Esri and by HERE Technologies, with the potential for more maps & more styles from these and other partners in the future. After you create a map, you can retrieve a tile (at one of up to 16 zoom levels) using the GetMapTile function. You won’t do this directly, but will use Mapbox GL, Tangram, or another library instead.

Place Indexes – You can choose between indexes provided by Esri and HERE. The indexes support the SearchPlaceIndexForPosition function which returns places, such as residential addresses or points of interest (often known as POI) that are closest to the position that you supply, while also performing reverse geocoding to turn the position (a pair of coordinates) into a legible address. Indexes also support the SearchPlaceIndexForText function, which searches for addresses, businesses, and points of interest using free-form text such as an address, a name, a city, or a region.

Trackers – Trackers receive location updates from one or more devices via the BatchUpdateDevicePosition function, and can be queried for the current position (GetDevicePosition) or location history (GetDevicePositionHistory) of a device. Trackers can also be linked to Geofence Collections to implement monitoring of devices as they move in and out of geofences.

Geofence Collections – Each collection contains a list of geofences that define geographic boundaries. Here’s a geofence that outlines a park near me:
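Under the hood, deciding whether a tracked position falls inside a geofence polygon boils down to a point-in-polygon test. Here’s a minimal sketch of the classic ray-casting version in plain Python. This is not Amazon Location’s implementation, and the coordinates are illustrative, but it shows the kind of check the service performs for you:

```python
# Hypothetical sketch of a geofence containment check (NOT the actual
# Amazon Location implementation): the classic ray-casting point-in-polygon
# test applied to a GeoJSON-style ring of (longitude, latitude) pairs.

def point_in_polygon(point, ring):
    """Return True if the (lon, lat) point is inside the polygon ring."""
    x, y = point
    inside = False
    n = len(ring)
    for i in range(n):
        x1, y1 = ring[i]
        x2, y2 = ring[(i + 1) % n]
        # Count how many edges a ray cast to the right of the point crosses;
        # an odd number of crossings means the point is inside.
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# A rough rectangle around a Seattle park (illustrative coordinates only).
park = [(-122.340, 47.626), (-122.336, 47.626),
        (-122.336, 47.629), (-122.340, 47.629)]

print(point_in_polygon((-122.338, 47.627), park))  # True: inside the fence
print(point_in_polygon((-122.350, 47.627), park))  # False: outside the fence
```

In practice you never write this yourself; you upload the GeoJSON and let the linked tracker evaluate positions against it.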

Amazon Location in Action
I can use the AWS Management Console to get started with Amazon Location and then move on to the AWS Command Line Interface (CLI) or the APIs if necessary. I open the Amazon Location Service Console, and I can either click Try it! to create a set of starter resources, or I can open up the navigation on the left and create them one-by-one. I’ll go for one-by-one, and click Maps:

Then I click Create map to proceed:

I enter a Name and a Description:

Then I choose the desired map and click Create map:

The map is created and ready to be added to my application right away:

Now I am ready to embed the map in my application, and I have several options including the Amplify JavaScript SDK, the Amplify Android SDK, the Amplify iOS SDK, Tangram, and Mapbox GL (read the Developer Guide to learn more about each option).

Next, I want to track the position of devices so that I can be notified when they enter or exit a given region. I use a GeoJSON editing tool to create a geofence that is built from polygons, and save (download) the resulting file:

I click Create geofence collection in the left-side navigation, and in Step 1, I add my GeoJSON file, enter a Name and Description, and click Next:

Now I enter a Name and a Description for my tracker, and click Next. It will be linked to the geofence collection that I just created:

The next step is to arrange for the tracker to send events to Amazon EventBridge so that I can monitor them in CloudWatch Logs. I leave the settings as-is, and click Next to proceed:

I review all of my choices, and click Finalize to move ahead:

The resources are created, set up, and ready to go:

I can then write code or use the CLI to update the positions of my devices:

$ aws location batch-update-device-position \
   --tracker-name MyTracker1 \
   --updates "DeviceId=Jeff1,Position=-122.33805,47.62748,SampleTime=2020-11-05T02:59:07+0000"

After I do this a time or two, I can retrieve the position history for the device:

$ aws location get-device-position-history \
  --tracker-name MyTracker1 --device-id Jeff1
|           GetDevicePositionHistory           |
||               DevicePositions              ||
||  DeviceId     |  Jeff1                     ||
||  ReceivedTime |  2020-11-05T02:59:17.246Z  ||
||  SampleTime   |  2020-11-05T02:59:07Z      ||
|||                 Position                 |||
|||  -122.33805                              |||
|||  47.62748                                |||
||               DevicePositions              ||
||  DeviceId     |  Jeff1                     ||
||  ReceivedTime |  2020-11-05T03:02:08.002Z  ||
||  SampleTime   |  2020-11-05T03:01:29Z      ||
|||                 Position                 |||
|||  -122.43805                              |||
|||  47.52748                                |||

I can write Amazon EventBridge rules that watch for the events, and use them to perform any desired processing. Events are published when a device enters or leaves a geofenced area, and look like this:

  "version": "0",
  "id": "7cb6afa8-cbf0-e1d9-e585-fd5169025ee0",
  "detail-type": "Location Geofence Event",
  "source": "aws.geo",
  "account": "123456789012",
  "time": "2020-11-05T02:59:17.246Z",
  "region": "us-east-1",
  "resources": [
  "detail": {
        "EventType": "ENTER",
        "GeofenceId": "LakeUnionPark",
        "DeviceId": "Jeff1",
        "SampleTime": "2020-11-05T02:59:07Z",
        "Position": [-122.33805, 47.52748]

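A minimal sketch of consuming such an event, for example inside a Lambda function that an EventBridge rule targets. The field names come from the sample event above; the handler and its routing logic are hypothetical:

```python
import json

# Hypothetical handler for a Location geofence event delivered via an
# EventBridge rule. Field names match the sample event shown above; the
# message formatting is our own choice, not part of the service.

SAMPLE_EVENT = json.loads("""
{
  "detail-type": "Location Geofence Event",
  "source": "aws.geo",
  "detail": {
    "EventType": "ENTER",
    "GeofenceId": "LakeUnionPark",
    "DeviceId": "Jeff1",
    "SampleTime": "2020-11-05T02:59:07Z",
    "Position": [-122.33805, 47.52748]
  }
}
""")

def handle_geofence_event(event):
    detail = event["detail"]
    lon, lat = detail["Position"]
    action = "arrived at" if detail["EventType"] == "ENTER" else "left"
    return f"{detail['DeviceId']} {action} {detail['GeofenceId']} ({lon}, {lat})"

print(handle_geofence_event(SAMPLE_EVENT))
# -> Jeff1 arrived at LakeUnionPark (-122.33805, 47.52748)
```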
Finally, I can create and use place indexes so that I can work with geographical objects. I’ll use the CLI for a change of pace. I create the index:

$ aws location create-place-index \
  --index-name MyIndex1 --data-source Here

Then I query it to find the addresses and points of interest near the location:

$ aws location search-place-index-for-position --index-name MyIndex1 \
  --position "[-122.33805,47.62748]" --output json \
  |  jq .Results[].Place.Label
"Terry Ave N, Seattle, WA 98109, United States"
"900 Westlake Ave N, Seattle, WA 98109-3523, United States"
"851 Terry Ave N, Seattle, WA 98109-4348, United States"
"860 Terry Ave N, Seattle, WA 98109-4330, United States"
"Seattle Fireboat Duwamish, 860 Terry Ave N, Seattle, WA 98109-4330, United States"
"824 Terry Ave N, Seattle, WA 98109-4330, United States"
"9th Ave N, Seattle, WA 98109, United States"

I can also do a text-based search:

$ aws location search-place-index-for-text --index-name MyIndex1 \
  --text Coffee --bias-position "[-122.33805,47.62748]" \
  --output json | jq .Results[].Place.Label
"Mohai Cafe, 860 Terry Ave N, Seattle, WA 98109, United States"
"Starbucks, 1200 Westlake Ave N, Seattle, WA 98109, United States"
"Metropolitan Deli and Cafe, 903 Dexter Ave N, Seattle, WA 98109, United States"
"Top Pot Doughnuts, 590 Terry Ave N, Seattle, WA 98109, United States"
"Caffe Umbria, 1201 Westlake Ave N, Seattle, WA 98109, United States"
"Starbucks, 515 Westlake Ave N, Seattle, WA 98109, United States"
"Cafe 815 Mercer, 815 9th Ave N, Seattle, WA 98109, United States"
"Victrola Coffee Roasters, 500 Boren Ave N, Seattle, WA 98109, United States"
"Specialty's, 520 Terry Ave N, Seattle, WA 98109, United States"

Both of the searches have other options; read the Geocoding, Reverse Geocoding, and Search documentation to learn more.

Things to Know
Amazon Location is launching today as a preview, and you can get started with it right away. During the preview we plan to add an API for routing, and will also do our best to respond to customer feedback and feature requests as they arrive.

Pricing is based on usage, with an initial evaluation period that lasts for three months and lets you make numerous calls to the Amazon Location APIs at no charge. After the evaluation period you pay the prices listed on the Amazon Location Pricing page.

Amazon Location is available in the US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Ireland), and Asia Pacific (Tokyo) Regions.



Join the Preview – Amazon Managed Service for Prometheus (AMP)

Post Syndicated from Jeff Barr original

Observability is an essential aspect of running cloud infrastructure at scale. You need to know that your resources are healthy and performing as expected, and that your system is delivering the desired level of performance to your customers.

A lot of challenges arise when monitoring container-based applications. First, because container resources are transient and there are lots of metrics to watch, the monitoring data has strikingly high cardinality. In plain language this means that there are lots of unique values, which can make it harder to define a space-efficient storage model and to create queries that return meaningful results. Second, because a well-architected container-based system is composed using a large number of moving parts, ingesting, processing, and storing the monitoring data can become an infrastructure challenge of its own.
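To make the cardinality point concrete, here’s a small back-of-the-envelope sketch in plain Python. Every unique combination of label values is a distinct time series, so the counts multiply; the label names and numbers below are purely illustrative:

```python
import math

# Each unique combination of label values defines a separate time series.
# Container workloads churn through labels like pod name constantly, so
# this product keeps growing. (Label names and counts are illustrative.)

distinct_label_values = {
    "namespace": 10,   # distinct namespaces
    "pod": 500,        # distinct (and short-lived) pod names
    "container": 3,    # containers per pod
    "metric": 40,      # metric names scraped per container
}

series_count = math.prod(distinct_label_values.values())
print(series_count)  # 600000 distinct series from just four labels
```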

Prometheus is a leading open-source monitoring solution with an active developer and user community. It has a multi-dimensional data model that is a great fit for time series data collected from containers.

Introducing Amazon Managed Service for Prometheus (AMP)
Today we are launching a preview of Amazon Managed Service for Prometheus (AMP). This fully-managed service is 100% compatible with Prometheus. It supports the same metrics, the same PromQL queries, and can also make use of the 150+ Prometheus exporters. AMP runs across multiple Availability Zones for high availability, and is powered by CNCF Cortex for horizontal scalability. AMP will easily scale to ingest, store, and query millions of time series metrics.

The preview includes support for Amazon Elastic Kubernetes Service (EKS) and Amazon Elastic Container Service (ECS). It can also be used to monitor your self-managed Kubernetes clusters that are running in the cloud or on-premises.

Getting Started with Amazon Managed Service for Prometheus (AMP)
After joining the preview, I open the AMP Console, enter a name for my AMP workspace, and click Create to get started (API and CLI support is also available):

My workspace is active within a minute or so. The console provides me with the endpoints that I can use to write data to my workspace, and to issue queries:

It also provides guidance on how to configure an existing Prometheus server to send metrics to the AMP workspace:

I can also use AWS Distro for OpenTelemetry to scrape Prometheus metrics and send them to my AMP workspace.

Once I have stored some metrics in my workspace, I can run PromQL queries and I can use Grafana to create dashboards and other visualizations. Here’s a sample Grafana dashboard:

Join the Preview
As noted earlier, we’re launching Amazon Managed Service for Prometheus (AMP) in preview form and you are welcome to try it out today.

We’ll have more info (and a more detailed blog post) at launch time.


AWS CloudShell – Command-Line Access to AWS Resources

Post Syndicated from Jeff Barr original

No matter how much automation you have built, no matter how great you are at practicing Infrastructure as Code (IAC), and no matter how successfully you have transitioned from pets to cattle, you sometimes need to interact with your AWS resources at the command line. You might need to check or adjust a configuration file, make a quick fix to a production environment, or even experiment with some new AWS services or features.

Some of our customers feel most at home when working from within a web browser and have yet to set up or customize their own command-line interface (CLI). They tell us that they don’t want to deal with client applications, public keys, AWS credentials, tooling, and so forth. While none of these steps are difficult or overly time-consuming, they do add complexity and friction and we always like to help you to avoid both.

Introducing AWS CloudShell
Today we are launching AWS CloudShell, with the goal of making the process of getting to an AWS-enabled shell prompt simple and secure, with as little friction as possible. Every shell environment that you run with CloudShell has the AWS Command Line Interface (CLI) (v2) installed and configured so you can run aws commands fresh out of the box. The environments also include the Python and Node runtimes, with many more to come in the future.

To get started, I simply click the CloudShell icon in the AWS Management Console:

My shell sets itself up in a matter of seconds and I can issue my first aws command immediately:

The shell environment is based on Amazon Linux 2. I can store up to 1 GB of files per region in my home directory and they’ll be available each time I open a shell in the region. This includes shell configuration files such as .bashrc and shell history files.

I can access the shell via SSO or as any IAM principal that can log in to the AWS Management Console, including federated roles. In order to access CloudShell, the AWSCloudShellFullAccess policy must be in effect. The shell runs as a normal (non-privileged) user, but I can sudo and install packages if necessary.

Here are a couple of features that you should know about:

Themes & Font Sizes – You can switch between light and dark color themes, and choose any one of five font sizes:

Tabs and Sessions – You can have multiple sessions open within the same region, and you can control the tabbing behavior, with options to split horizontally and vertically:

You can also download files from the shell environment to your desktop, and upload them from your desktop to the shell.

Things to Know
Here are a couple of important things to keep in mind when you are evaluating CloudShell:

Timeouts & Persistence – Each CloudShell session will time out after 20 minutes or so of inactivity, and can be reestablished by refreshing the window:

Regions – CloudShell is available today in the US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Ireland), and Asia Pacific (Tokyo) Regions, with the remaining regions on the near-term roadmap.

Persistent Storage – Files stored within $HOME persist between invocations of CloudShell with a limit of 1 GB per region; all other storage is ephemeral. This means that any software that is installed outside of $HOME will not persist, and that no matter what you change (or break), you can always begin anew with a fresh CloudShell environment.

Network Access – Sessions can make outbound connections to the Internet, but do not allow any type of inbound connections. Sessions cannot currently connect to resources inside of private VPC subnets, but that’s also on the near-term roadmap.

Runtimes – In addition to the Python and Node runtimes, Bash, PowerShell, jq, git, the ECS CLI, the SAM CLI, npm, and pip are already installed and ready to use.

Pricing – You can use up to 10 concurrent shells in each region at no charge. You only pay for other AWS resources you use with CloudShell to create and run your applications.

Try it Out
AWS CloudShell is available now and you can start using it today. Launch one and give it a try, and let us know what you think!


PennyLane on Braket + Progress Toward Fault-Tolerant Quantum Computing + Tensor Network Simulator

Post Syndicated from Jeff Barr original

I first wrote about Amazon Braket last year and invited you to Get Started with Quantum Computing! Since that launch we have continued to push forward, and have added several important & powerful new features to Amazon Braket:

August 2020 – General Availability of Amazon Braket with access to quantum computing hardware from D-Wave, IonQ, and Rigetti.

September 2020 – Access to D-Wave’s Advantage Quantum Processing Unit (QPU), which includes more than 5,000 qubits and 15-way connectivity.

November 2020 – Support for resource tagging, AWS PrivateLink, and manual qubit allocation. The first two features make it easy for you to connect your existing AWS applications to the new ones that you build with Amazon Braket, and should help you to envision what a production-class cloud-based quantum computing application will look like in the future. The last feature is particularly interesting to researchers; from what I understand, certain qubits within a given piece of quantum computing hardware can have individual physical and connectivity properties that might make them perform somewhat better when used as part of a quantum circuit. You can read about Allocating Qubits on QPU Devices to learn more (this is somewhat similar to the way that a compiler allocates CPU registers to frequently used variables).

In my initial blog post I also announced the formation of the AWS Center for Quantum Computing adjacent to Caltech.

As I write this, we are in the Noisy Intermediate Scale Quantum (NISQ) era. This description captures the state of the art in quantum computers: each gate in a quantum computing circuit introduces a certain amount of accuracy-destroying noise, and the cumulative effect of this noise imposes some practical limits on the scale of the problems that can be addressed.

Update Time
We are working to address this challenge, as are many others in the quantum computing field. Today I would like to give you an update on what we are doing at the practical and the theoretical level.

Similar to the way that CPUs and GPUs work hand-in-hand to address large scale classical computing problems, the emerging field of hybrid quantum algorithms joins CPUs and QPUs to speed up specific calculations within a classical algorithm. This allows for shorter quantum executions that are less susceptible to the cumulative effects of noise and that run well on today’s devices.

Variational quantum algorithms are an important type of hybrid quantum algorithm. The classical code (in the CPU) iteratively adjusts the parameters of a parameterized quantum circuit, in a manner reminiscent of the way that a neural network is built by repeatedly processing batches of training data and adjusting the parameters based on the results of an objective function. The output of the objective function provides the classical code with guidance that helps to steer the process of tuning the parameters in the desired direction. Mathematically (I’m way past the edge of my comfort zone here), this is called differentiable quantum computing.
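To make “differentiable quantum computing” a bit more concrete, here’s a tiny classical sketch of the parameter-shift rule for a single rotation gate. The `expectation` function below is a plain-Python stand-in for running a real circuit on a QPU or simulator; for a single RY(θ) rotation on |0⟩, the measured expectation ⟨Z⟩ equals cos(θ):

```python
import math

def expectation(theta):
    # Stand-in for executing a parameterized circuit and measuring <Z>.
    # For a single RY(theta) rotation applied to |0>, <Z> = cos(theta).
    return math.cos(theta)

def parameter_shift_gradient(theta):
    # The parameter-shift rule: run the SAME circuit at theta +/- pi/2 and
    # combine the two results. Unlike finite differences, this gives the
    # exact analytic gradient for gates of this form.
    shift = math.pi / 2
    return (expectation(theta + shift) - expectation(theta - shift)) / 2

theta = 0.7
analytic = -math.sin(theta)                # d/dtheta of cos(theta)
estimated = parameter_shift_gradient(theta)
print(abs(estimated - analytic) < 1e-12)   # True: the shift rule is exact here
```

This is the kind of gradient that PennyLane computes for you behind the scenes, which is what lets familiar optimizers from PyTorch and TensorFlow tune quantum circuits.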

So, with this rather lengthy introduction, what are we doing?

First, we are making the PennyLane library available so that you can build hybrid quantum-classical algorithms and run them on Amazon Braket. This library lets you “follow the gradient” and write code to address problems in computational chemistry (by way of the included Q-Chem library), machine learning, and optimization. My AWS colleagues have been working with the PennyLane team to create an integrated experience when PennyLane is used together with Amazon Braket.

PennyLane is pre-installed in Braket notebooks and you can also install the Braket-PennyLane plugin in your IDE. Once you do this, you can train quantum circuits as you would train neural networks, while also making use of familiar machine learning libraries such as PyTorch and TensorFlow. When you use PennyLane on the managed simulators that are included in Amazon Braket, you can train your circuits up to 10 times faster by using parallel circuit execution.

Second, the AWS Center for Quantum Computing is working to address the noise issue in two different ways: we are investigating ways to make the gates themselves more accurate, while also working on the development of more efficient ways to encode information redundantly across multiple qubits. Our new paper, Building a Fault-Tolerant Quantum Computer Using Concatenated Cat Codes speaks to both of these efforts. While not light reading, the 100+ page paper proposes the construction of a 2-D grid of micron-scale electro-acoustic qubits that are coupled via superconducting circuits:

Interestingly, this proposed qubit design was used to model a Toffoli gate, and then tested via simulations that ran for 170 hours on c5.18xlarge instances. In a very real sense, the classical computers are being used to design and then simulate their future quantum companions.

The proposed hybrid electro-acoustic qubits are far smaller than what is available today, and also offer a > 10x reduction in overhead (measured in the number of physical qubits required per error-corrected qubit and the associated control lines). In addition to working on the experimental development of this architecture based around hybrid electro-acoustic qubits, the AWS CQC team will also continue to explore other promising alternatives for fault-tolerant quantum computing to bring new, more powerful computing resources to the world.

Third, we are expanding the choice of managed simulators that are available on Amazon Braket. In addition to the state vector simulator (which can simulate up to 34 qubits), you can use the new tensor network simulator that can simulate up to 50 qubits for certain circuits. This simulator builds a graph representation of the quantum circuit and uses the graph to find an optimized way to process it.
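The qubit limits follow directly from memory: a full state vector stores 2^n complex amplitudes. A quick back-of-the-envelope calculation (assuming 16 bytes per double-precision complex amplitude) shows why ~34 qubits is a practical ceiling for state-vector simulation, and why larger circuits call for tensor-network methods:

```python
BYTES_PER_AMPLITUDE = 16  # one double-precision complex number

def state_vector_bytes(qubits):
    # A full state-vector simulation stores one complex amplitude per
    # basis state, and an n-qubit system has 2**n basis states.
    return (2 ** qubits) * BYTES_PER_AMPLITUDE

print(state_vector_bytes(34) / 2**30)  # 256.0 GiB -- feasible on a large host
print(state_vector_bytes(50) / 2**30)  # 16777216.0 GiB (16 PiB) -- why 50-qubit
                                       # circuits need a different approach
```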

Help Wanted
If you are ready to help us to push the state of the art in quantum computing, take a look at our open positions. We are looking for Quantum Research Scientists, Software Developers, Hardware Developers, and Solutions Architects.

Time to Learn
It is still Day One (as we often say at Amazon) when it comes to quantum computing and now is the time to learn more and to get some experience with it. Be sure to check out the Braket Tutorials repository and let me know what you think.


PS – If you are ready to start exploring ways that you can put quantum computing to work in your organization, be sure to take a look at the Amazon Quantum Solutions Lab.

In the Works – AWS Region in Melbourne, Australia

Post Syndicated from Jeff Barr original

We launched new AWS Regions in Italy and South Africa in 2020, and are working on regions in Indonesia, Japan, Spain, India, and Switzerland.

Melbourne, Australia in 2022
Today I am happy to announce that the Asia Pacific (Melbourne) region is in the works, and will open in the second half of 2022 with three Availability Zones. In addition to the Asia Pacific (Sydney) Region, there are already seven Amazon CloudFront Edge locations in Australia, backed by a Regional Edge cache in Sydney.

This will be our second region in Australia, and our ninth in Asia Pacific, joining the existing region in Australia along with those in China, India, Japan, Korea, and Singapore. There are 77 Availability Zones within 24 AWS Regions in operation today, with 18 more Availability Zones and six more Regions (including this one) underway.

As part of our commitment to the Climate Pledge, Amazon is on a path to powering our operations with 100% renewable energy by 2025 as part of our goal to reach net zero carbon by 2040. To this end, we have invested in two renewable energy projects in Australia with a combined 165 MW capacity and the ability to generate 392,000 MWh annually.

The new region will give you (and hundreds of thousands of other active AWS customers in Australia) additional architectural options including the ability to store backup data in geographically separated locations within Australia.

AWS in Australia
I have made several trips to Australia on behalf of AWS over the last 4 or 5 years and I always enjoy meeting our customers while I am there.

Our Australian customers use AWS to accelerate innovation, increase agility, and to drive cost savings. Here are a few examples:

Commonwealth Bank of Australia (CBA) – As Australia’s leading provider of personal, business, and institutional banking services, CBA counts on AWS to provide infrastructure that is safe, resilient, and secure. They are long-time advocates of cloud computing and have been using AWS since 2012.

Swinburne University – The university focuses on innovation, industry engagement, and social inclusion. They started using AWS in 2016 and have collaborated on innovations that support communities in Victoria. The Swinburne Data for Social Good Cloud Innovation Centre uses cloud technologies and intelligent data analytics to solve real-world problems.

XY Sense – Based in Melbourne, this startup is using smart sensors and ML-powered analytics to create technology-enabled workplaces. Their sensor platform takes advantage of multiple AWS services including IoT and serverless, and processes over 7 billion anonymous data points each month.

AWS Partner Network (APN) Partners in Australia are also doing some amazing work with AWS. Again, a few examples:

Versent – Also based in Melbourne, this partner comprises a group of specialist consultants and a product company by the name of Stax. Versent recently helped Land Services South Australia to modernize their full tech stack as part of a shift to AWS (read the case study to learn more).

Deloitte Australia – As an AWS Strategic Global Premier Partner since 2015, Deloitte Australia works with business and public sector agencies, with a focus on delivery of advanced products and services. As part of their work, over 4,000 employees across Deloitte have participated in the Deloitte Cloud Guild and have strengthened their cloud computing skills as a result.

Investing in Developers
Several AWS programs are designed to help to create and upskill the next generation of developers and students so that they are ready to become part of the next generation of IT leadership. AWS re/Start prepares unemployed, underemployed, and transitioning individuals for a career in cloud computing. AWS Academy provides higher education institutions with a free, ready-to-teach cloud computing curriculum. AWS Educate gives students access to AWS services and content that are designed to help them build knowledge and skills in cloud computing.

Stay Tuned
As I noted earlier, the Asia Pacific (Melbourne) Region is scheduled to open in the second half of 2022. As always, we’ll announce the opening in a post on this blog, so stay tuned!


Amazon S3 Update – Strong Read-After-Write Consistency

Post Syndicated from Jeff Barr original

When we launched S3 back in 2006, I discussed its virtually unlimited capacity (“…easily store any number of blocks…”), the fact that it was designed to provide 99.99% availability, and that it offered durable storage, with data transparently stored in multiple locations. Since that launch, our customers have used S3 in an amazingly diverse set of ways: backup and restore, data archiving, enterprise applications, web sites, big data, and (at last count) over 10,000 data lakes.

One of the more interesting (and sometimes a bit confusing) aspects of S3 and other large-scale distributed systems is commonly known as eventual consistency. In a nutshell, after a call to an S3 API function such as PUT that stores or modifies data, there’s a small time window where the data has been accepted and durably stored, but not yet visible to all GET or LIST requests. Here’s how I see it:
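The hazard is easiest to see with a toy model (purely illustrative, not the S3 API): writes are durably accepted right away, but a read served from a replica may return nothing, or the previous value, until replication catches up.

```python
import time

class EventuallyConsistentStore:
    """Toy model: reads may be served from a stale replica for a short window."""
    def __init__(self, lag_seconds=0.05):
        self.primary = {}    # durably accepted writes
        self.replica = {}    # what GET may actually see
        self.pending = []    # (apply_at, key, value) not yet replicated
        self.lag = lag_seconds

    def put(self, key, value):
        self.primary[key] = value
        self.pending.append((time.monotonic() + self.lag, key, value))

    def get(self, key):
        now = time.monotonic()
        still_pending = []
        for apply_at, k, v in self.pending:
            if apply_at <= now:
                self.replica[k] = v      # replication has caught up
            else:
                still_pending.append((apply_at, k, v))
        self.pending = still_pending
        return self.replica.get(key)     # may be stale or missing

store = EventuallyConsistentStore()
store.put("report.csv", "v2")
stale = store.get("report.csv")   # write accepted but not yet visible
time.sleep(0.1)
fresh = store.get("report.csv")   # window has passed
print(stale, fresh)               # None v2
```

A pipeline that writes an object and immediately lists or reads it has to tolerate that `None` (or a stale value), which is exactly why the consistency layers described below existed.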

This aspect of S3 can become very challenging for big data workloads (many of which use Amazon EMR) and for data lakes, both of which require access to the most recent data immediately after a write. To help customers run big data workloads in the cloud, Amazon EMR built EMRFS Consistent View and open source Hadoop developers built S3Guard, which provided a layer of strong consistency for these applications.

S3 is Now Strongly Consistent
After that overly-long introduction, I am ready to share some good news!

Effective immediately, all S3 GET, PUT, and LIST operations, as well as operations that change object tags, ACLs, or metadata, are now strongly consistent. What you write is what you will read, and the results of a LIST will be an accurate reflection of what’s in the bucket. This applies to all existing and new S3 objects, works in all regions, and is available to you at no extra charge! There’s no impact on performance, you can update an object hundreds of times per second if you’d like, and there are no global dependencies.
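The new contract can be sketched with a second toy store (again hypothetical, not the S3 API): a successful put is immediately visible to both reads and listings, so no polling or reconciliation layer is needed.

```python
class StronglyConsistentStore:
    """Toy model of the strong read-after-write contract."""
    def __init__(self):
        self._objects = {}

    def put(self, key, value):
        self._objects[key] = value   # visible to all subsequent reads

    def get(self, key):
        return self._objects.get(key)

    def list(self, prefix=""):
        return sorted(k for k in self._objects if k.startswith(prefix))

store = StronglyConsistentStore()
store.put("logs/2020-12-01.csv", b"...")
assert store.get("logs/2020-12-01.csv") == b"..."       # read-after-write
assert store.list("logs/") == ["logs/2020-12-01.csv"]   # LIST reflects the write
print("reads immediately reflect the latest write")
```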

This improvement is great for data lakes, but other types of applications will also benefit. Because S3 now has strong consistency, migration of on-premises workloads and storage to AWS should now be easier than ever before.

We’ve been working with the Amazon EMR team and developers in the open-source community to ensure that customers can take advantage of this update with their big data workloads. As a result, you no longer need to use EMRFS Consistent View or S3Guard, further reducing the cost to run big data workloads in AWS.

A Word From Dropbox
Long-time AWS customer Dropbox recently migrated a 34 PB analytics data lake from on-premises Hadoop clusters to S3. Watch this video to learn more about strong consistency and how it has allowed Dropbox to simplify their data lake:




In the Works – 3 More AWS Local Zones in 2020, and 12 More in 2021

Post Syndicated from Jeff Barr original

We launched the first AWS Local Zone in Los Angeles last December, and added a second one (also in Los Angeles) in August of 2020. In my original post, I quoted Andy Jassy’s statement that we would be giving consideration to adding Local Zones in more geographic areas.

Our customers are using the EC2 instances and other compute services in these zones to host artist workstations, local rendering, sports broadcasting, online gaming, financial transaction processing, machine learning inferencing, virtual reality, and augmented reality applications, among others. These applications benefit from the extremely low latency made possible by geographic proximity.

More Local Zones
I’m happy to be able to announce that we are opening three more Local Zones today and plan to open twelve more in 2021.

Local Zones in Boston, Houston, and Miami are now available in preview form and you can request access now. In 2021, we plan to open Local Zones in other key cities and metropolitan areas including New York City, Chicago, and Atlanta.

We are choosing the target cities with the goal of allowing you to provide access with single-digit millisecond latency to the vast majority of users in the Continental United States. You can deploy the parts of your application that are the most sensitive to latency in Local Zones, and deliver amazing performance to your users. In addition to the use cases that I mentioned above, I expect to see many more that have yet to be imagined or built.

Using Local Zones
I stepped through the process of using a Local Zone in my original post, and all that I said there still applies. Here’s what you need to do:

  1. Request access to the preview and await a reply.
  2. Create a new VPC subnet for the Local Zone.
  3. Launch EC2 instances, create EBS volumes, and deploy your application.

Things to Know
Here are a couple of things that you should know about the new and upcoming Local Zones:

Instance Types – The Local Zones will have a wide selection of EC2 instance types including C5, R5, T3, and G4 instances.

Purchasing Models – You can use compute capacity in Local Zones on an On-Demand basis and you can also purchase a Savings Plan in order to receive discounts. Some of the Local Zones also support the use of Spot Instances.

AWS Services – Local Zones support Amazon Elastic Compute Cloud (EC2), Amazon Elastic Block Store (EBS), Amazon Elastic Kubernetes Service (EKS), and Amazon Virtual Private Cloud, with the door open for other services in the future. You can use services such as Auto Scaling, AWS CloudFormation, and Amazon CloudWatch in the parent region to launch, control, and monitor the AWS resources in a Local Zone.

Direct Connect – As I mentioned earlier, some of our customers are using AWS Direct Connect to establish private connections between Local Zones and their existing on-premises or colo IT infrastructure. We are working with our Direct Connect Partners to make Direct Connect available for the new zones and the specifics will vary on a zone-by-zone basis.

The AWS Local Zones Features page contains additional zone-by-zone information on all of the items listed above.

Learn More
Here are some resources to help you to learn more about Local Zones:

Blog Post – Low-Latency Computing with AWS Local Zones.

Sites – AWS Local Zones home page, AWS Local Zones FAQ.


re:Invent 2020 – Preannouncements for Tuesday, December 1

Post Syndicated from Jeff Barr original

Andy Jassy just gave you a hint about some upcoming AWS launches, and I’ll have more to say about them when they are ready. To tide you over until then, here’s a summary of what he pre-announced:

Smaller AWS Outpost Form Factors – We are introducing two new sizes of AWS Outposts, suitable for locations such as branch offices, factories, retail stores, health clinics, hospitals, and cell sites that are space-constrained and need access to low-latency compute capacity. The 1U (rack unit) Outposts server will be equipped with AWS Graviton 2 processors; the 2U Outposts server will be equipped with Intel® processors. Both sizes will be able to run EC2, ECS, and EKS workloads locally, all provisioned and managed by AWS (including automated patching and updates).

Amazon ECS Anywhere – You will soon be able to run Amazon Elastic Container Service (ECS) in your own data center, giving you the power to select and standardize on a single container orchestrator that runs both on-premises and in the cloud. You will have access to the same ECS APIs, and you will be able to manage all of your ECS resources with the same cluster management, workload scheduling, and monitoring tools and utilities. Amazon ECS Anywhere will also make it easy for you to containerize your existing on-premises workloads, run them locally, and then connect them to the AWS Cloud.

Amazon EKS Anywhere – You will also soon be able to run Amazon Elastic Kubernetes Service (EKS) in your own data center, making it easy for you to set up, upgrade, and operate Kubernetes clusters. The default configuration for each new cluster will include logging, monitoring, networking, and storage, all optimized for the environment that will host the cluster. You will be able to spin up clusters on demand, and you will be able to backup, recover, patch, and upgrade production clusters with minimal disruption.

Again, I’ll have more to say about these when they are ready, so stay tuned, and enjoy the rest of AWS re:Invent!


Now in Preview – Larger & Faster io2 Block Express EBS Volumes with Higher Throughput

Post Syndicated from Jeff Barr original

Amazon Elastic Block Store (EBS) volumes have been an essential EC2 component since they were launched in 2008. Today, you can choose between six types of HDD and SSD volumes, each designed to serve a particular use case and to deliver a specified amount of performance.

Earlier this year we launched io2 volumes with 100x higher durability and 10x more IOPS/GiB than the earlier io1 volumes. The io2 volumes are a great fit for your most I/O-hungry and latency-sensitive applications, including high-performance, business-critical workloads.

Even More
Today we are opening up a preview of io2 Block Express volumes that are designed to deliver even higher performance!

Built on our new EBS Block Express architecture, which takes advantage of some advanced communication protocols implemented as part of the AWS Nitro System, the volumes will give you up to 256K IOPS & 4000 MBps of throughput and a maximum volume size of 64 TiB, all with sub-millisecond, low-variance I/O latency. Throughput scales proportionally at 0.256 MB/second per provisioned IOPS, up to a maximum of 4000 MBps per volume. You can provision 1000 IOPS per GiB of storage, twice as many as before. The increased volume size & higher throughput mean that you will no longer need to stripe multiple EBS volumes together, reducing complexity and management overhead.
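The scaling rule above (0.256 MB/second per provisioned IOPS, capped at 4000 MBps) is easy to work out ahead of time. The function below is just illustrative arithmetic, not an AWS API, and the 100-IOPS lower bound is my assumption for a minimum provisioned value:

```python
def io2_block_express_throughput_mbps(provisioned_iops: int) -> float:
    """Throughput scales at 0.256 MB/s per provisioned IOPS, up to 4000 MBps."""
    if not 100 <= provisioned_iops <= 256_000:
        raise ValueError("expected between 100 and 256,000 provisioned IOPS")
    return min(provisioned_iops * 256 / 1000, 4000.0)

print(io2_block_express_throughput_mbps(10_000))   # 2560.0
print(io2_block_express_throughput_mbps(20_000))   # 4000.0
print(io2_block_express_throughput_mbps(256_000))  # 4000.0
```

Note that the 4000 MBps ceiling is reached at 15,625 provisioned IOPS; beyond that point, extra IOPS buy you more operations per second but no additional throughput.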

Block Express is a modular storage system that is designed to increase performance and scale. Scalable Reliable Datagrams (as described in A Cloud-Optimized Transport Protocol for Elastic and Scalable HPC) are implemented using custom-built, dedicated hardware, making communication between Block Express volumes and Nitro-powered EC2 instances fast and efficient. This is, in fact, the same technology that the Elastic Fabric Adapter (EFA) uses to support high-end HPC and Machine Learning workloads on AWS.

Putting it all together, these volumes are going to deliver amazing performance for your SAP HANA, Microsoft SQL Server, Oracle, and Apache Cassandra workloads, and for your mission-critical transaction processing applications such as airline reservation systems and banking that once mandated the use of an expensive and inflexible SAN (Storage Area Network).

Join the Preview
The preview is currently available in the US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Singapore), Asia Pacific (Tokyo), and Europe (Frankfurt) Regions. During the preview, we support the use of R5b instances, with support for other Nitro-powered instances in the works.

You can opt-in to the preview on a per-account, per-region basis, create new io2 Block Express volumes, and then attach them to R5b instances. All newly created io2 volumes in that account/region will then make use of Block Express, and will perform as described above.

This is still a work in progress. We’re still adding support for a couple of features (Multi-Attach, Elastic Volumes, and Fast Snapshot Restore) and we’re building a new I/O fencing feature so that you can attach the same volume to multiple instances while ensuring consistent access and protecting shared data.

The volumes support encryption, but you can’t create encrypted volumes from unencrypted AMIs or snapshots, or from encrypted AMIs or snapshots that were shared from another AWS account. We expect to take care of all of these items during the preview. To learn more, visit the io2 page and read the io2 documentation.

To get started, opt-in to the io2 Block Express Preview today!



New EC2 M5zn Instances – Fastest Intel Xeon Scalable CPU in the Cloud

Post Syndicated from Jeff Barr original

We launched the compute-intensive z1d instances in mid-2018 for customers who asked us for extremely high per-core performance and a high memory-to-core ratio to power their front-end Electronic Design Automation (EDA), actuarial, and CPU-bound relational database workloads.

In order to address a complementary set of use cases, customers have asked us for an EC2 instance that will give them high per-core performance like z1d, with no local NVMe storage, higher networking throughput, and a reduced memory-to-vCPU ratio. They have indicated that if we built an instance with this set of attributes, it would be an excellent fit for workloads such as gaming, financial applications, simulation modeling applications such as those used in the automobile, aerospace, energy, and telecommunication industries, and High Performance Computing (HPC).

Introducing M5zn
Building on the success of the z1d instances, we are launching M5zn instances in seven sizes today. These instances use 2nd generation custom Intel® Xeon® Scalable (Cascade Lake) processors with a sustained all-core turbo clock frequency of up to 4.5 GHz. M5zn instances feature high frequency processing, are a variant of the general-purpose M5 instances, and are built on the AWS Nitro System. These instances also feature low latency 100 Gbps networking and the Elastic Fabric Adapter (EFA), in order to improve performance for HPC and communication-intensive applications.

Here are the M5zn instances (all VPC-only, HVM-only, and EBS-Optimized, with support for Optimize CPUs). As you can see, the memory-to-vCPU ratio on these instances is half that of the existing z1d instances:

Instance Name   vCPUs  Memory   Network Bandwidth  EBS-Optimized Bandwidth
m5zn.large      2      8 GiB    Up to 25 Gbps      Up to 3.170 Gbps
m5zn.xlarge     4      16 GiB   Up to 25 Gbps      Up to 3.170 Gbps
m5zn.2xlarge    8      32 GiB   Up to 25 Gbps      3.170 Gbps
m5zn.3xlarge    12     48 GiB   Up to 25 Gbps      4.750 Gbps
m5zn.6xlarge    24     96 GiB   50 Gbps            9.500 Gbps
m5zn.12xlarge   48     192 GiB  100 Gbps           19 Gbps
m5zn.metal      48     192 GiB  100 Gbps           19 Gbps

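The half-of-z1d claim is easy to sanity-check against the table above; the figures below are copied from it, and the 8 GiB-per-vCPU z1d ratio is taken from the z1d specifications:

```python
# (vCPUs, memory in GiB) for each M5zn size, from the table above.
m5zn = {
    "m5zn.large": (2, 8),    "m5zn.xlarge": (4, 16),
    "m5zn.2xlarge": (8, 32), "m5zn.3xlarge": (12, 48),
    "m5zn.6xlarge": (24, 96), "m5zn.12xlarge": (48, 192),
    "m5zn.metal": (48, 192),
}
Z1D_GIB_PER_VCPU = 8  # z1d instances provide 8 GiB of memory per vCPU

for name, (vcpus, mem_gib) in m5zn.items():
    assert mem_gib / vcpus == Z1D_GIB_PER_VCPU / 2, name
print("every M5zn size offers 4 GiB per vCPU, half the z1d ratio")
```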
The Nitro Hypervisor allows M5zn instances to deliver performance that is just about indistinguishable from bare metal. Other AWS Nitro System components such as the Nitro Security Chip and hardware-based processing for EBS increase performance, while VPC encryption provides greater security.

Things To Know
Here are a couple of “fun facts” about the M5zn instances:

Placement Groups – M5zn instances can be used in Cluster (for low latency and high network throughput), Spread (to keep critical instances separate from each other), and Partition (to reduce correlated failures) placement groups.

Networking – M5zn instances support the Elastic Network Adapter (ENA) with dedicated 100 Gbps network connections and a dedicated 19 Gbps connection to EBS. If you are building distributed ML or HPC applications for use on a cluster of M5zn instances, be sure to take a look at the Elastic Fabric Adapter (EFA). Your HPC applications can use the Message Passing Interface (MPI) to communicate efficiently at high speed while scaling to thousands of nodes.

C-State Control – You can configure CPU Power Management on m5zn.6xlarge and m5zn.12xlarge instances. This is definitely an advanced feature, but one worth exploring in those situations where you need to squeeze every possible cycle of available performance from the instance.

NUMA – You can make use of Non-Uniform Memory Access on m5zn.12xlarge instances. This is also an advanced feature, but worth exploring in situations where you have an in-depth understanding of your application’s memory access patterns.

To learn more about these and other features, visit the EC2 M5 Instances page.

Available Now
As you can see, the M5zn instances are a great fit for gaming, HPC and simulation modeling workloads such as those used by the financial, automobile, aerospace, energy, and telecommunications industries.

You can launch M5zn instances today in the US East (N. Virginia), US East (Ohio), US West (N. California), US West (Oregon), Europe (Ireland), Europe (Frankfurt), and Asia Pacific (Tokyo) Regions in On-Demand, Reserved Instance, Savings Plan, and Spot form. Dedicated Instances and Dedicated Hosts are also available.

Support is available in the EC2 Forum or via your usual AWS Support contact. The EC2 team is interested in your feedback and you can contact them at [email protected].




EC2 Update – D3 / D3en Dense Storage Instances

Post Syndicated from Jeff Barr original

We have launched several generations of EC2 instances with dense storage including the HS1 in 2012 and the D2 in 2015. As you can guess from the name, our customers use these instances when they need massive amounts of very economical on-instance storage for their data warehouses, data lakes, network file systems, Hadoop clusters, and the like. These workloads demand plenty of I/O and network throughput, but work fine with a high ratio of storage to compute power.

New D3 and D3en Instances
Today we are launching the D3 and D3en instances. Like their predecessors, they give you access to massive amounts of low-cost on-instance HDD storage. The D3 instances are available in four sizes, with up to 32 vCPUs and 48 TB of storage. Here are the specs:

Instance Name  vCPUs  RAM      HDD Storage        Aggregate Disk Throughput (128 KiB Blocks)  Network Bandwidth  EBS-Optimized Bandwidth
d3.xlarge      4      32 GiB   6 TB (3 x 2 TB)    580 MiBps                                   Up to 15 Gbps      850 Mbps
d3.2xlarge     8      64 GiB   12 TB (6 x 2 TB)   1,100 MiBps                                 Up to 15 Gbps      1,700 Mbps
d3.4xlarge     16     128 GiB  24 TB (12 x 2 TB)  2,300 MiBps                                 Up to 15 Gbps      2,800 Mbps
d3.8xlarge     32     256 GiB  48 TB (24 x 2 TB)  4,600 MiBps                                 25 Gbps            5,000 Mbps

As you can see from the table above, the D3 instances are available in the same configurations as the D2 instances for easy migration. You’ll get 5% more memory per vCPU, a 30% boost in compute power, and 2.5x higher network performance if you migrate from D2 to D3. The instances provide low-cost dense storage that delivers high performance sequential access to large data sets. They are perfect for distributed file systems such as HDFS and MapR FS, big data analytical workloads, data warehouses, log processing, and data processing.
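Here is where the roughly 5% memory figure comes from, shown for the xlarge size; the D2 numbers are my assumption based on the published D2 specifications (d2.xlarge: 4 vCPUs, 30.5 GiB), not something stated in this post:

```python
# Memory per vCPU, D2 vs D3, for the xlarge size.
d2_xlarge = (4, 30.5)   # (vCPUs, memory in GiB) -- assumed from D2 specs
d3_xlarge = (4, 32.0)   # from the D3 table above

d2_ratio = d2_xlarge[1] / d2_xlarge[0]   # 7.625 GiB per vCPU
d3_ratio = d3_xlarge[1] / d3_xlarge[0]   # 8.0 GiB per vCPU
gain = (d3_ratio / d2_ratio - 1) * 100
print(f"{gain:.1f}% more memory per vCPU")   # 4.9% more memory per vCPU
```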

The D3en instances are available in six sizes, with up to 48 vCPUs and 336 TB of storage. Here are the specs:

Instance Name  vCPUs  RAM      HDD Storage          Aggregate Disk Throughput (128 KiB Blocks)  Network Bandwidth  EBS-Optimized Bandwidth
d3en.xlarge    4      16 GiB   28 TB (2 x 14 TB)    500 MiBps                                   Up to 25 Gbps      850 Mbps
d3en.2xlarge   8      32 GiB   56 TB (4 x 14 TB)    1,000 MiBps                                 Up to 25 Gbps      1,700 Mbps
d3en.4xlarge   16     64 GiB   112 TB (8 x 14 TB)   2,000 MiBps                                 25 Gbps            2,800 Mbps
d3en.6xlarge   24     96 GiB   168 TB (12 x 14 TB)  3,100 MiBps                                 40 Gbps            4,000 Mbps
d3en.8xlarge   32     128 GiB  224 TB (16 x 14 TB)  4,100 MiBps                                 50 Gbps            5,000 Mbps
d3en.12xlarge  48     192 GiB  336 TB (24 x 14 TB)  6,200 MiBps                                 75 Gbps            7,000 Mbps

The D3en instances have a high ratio of storage to vCPU, and are optimized for high throughput and high sequential I/O to very large data sets, with a cost-per-TB that is 80% lower than on D2 instances. D3en instances can host Lustre, BeeGFS, GPFS, and other distributed file systems, they can store your data lakes, and they can run your Amazon EMR, Spark, and Hadoop analytical workloads.

Both of the instance types are built on the AWS Nitro System and are powered by custom 2nd generation Intel® Xeon® Scalable (Cascade Lake) processors that can deliver all-core turbo performance of up to 3.1 GHz. The HDD storage is encrypted at rest using AES-256-XTS; traffic between D3 or D3en instances in the same VPC or within peered VPCs is encrypted using a 256-bit key.

Things to Know
Here are a couple of things that you should keep in mind regarding the D3 and D3en instances:

Regions – D3 instances are available in the US East (N. Virginia), US West (Oregon), and Europe (Ireland) Regions; D3en instances are available in all of those regions and also in the US East (Ohio) Region, with more regions coming soon.

Purchase Options – You can purchase D3 and D3en instances in On-Demand, Savings Plan, Reserved Instance, Spot, and Dedicated Instance form.

AMIs – You must use AMIs that include the Elastic Network Adapter (ENA) and NVMe drivers.

Now Available
D3 and D3en instances are available now and you can start using them today!


New – Use Amazon EC2 Mac Instances to Build & Test macOS, iOS, iPadOS, tvOS, and watchOS Apps

Post Syndicated from Jeff Barr original

Throughout the course of my career I have done my best to stay on top of new hardware and software. As a teenager I owned an Altair 8800 and an Apple II. In my first year of college someone gave me a phone number and said “call this with modem.” I did, it answered “PENTAGON TIP,” and I had access to ARPANET!

I followed the emerging PC industry with great interest, voraciously reading every new issue of Byte, InfoWorld, and several other long-gone publications. In early 1983, rumor had it that Apple Computer would soon introduce a new system that was affordable, compact, self-contained, and very easy to use. Steve Jobs unveiled the Macintosh in January 1984 and my employer ordered several right away, along with a pair of the Apple Lisa systems that were used as cross-development hosts. As a developer, I was attracted to the Mac’s rich collection of built-in APIs and services, and still treasure my phone book edition of the Inside Macintosh documentation!

New Mac Instance
Over the last couple of years, AWS users have told us that they want to be able to run macOS on Amazon Elastic Compute Cloud (EC2). We’ve asked a lot of questions to learn more about their needs, and today I am pleased to introduce you to the new Mac instance!

The original (128 KB) Mac

Powered by Mac mini hardware and the AWS Nitro System, you can use Amazon EC2 Mac instances to build, test, package, and sign Xcode applications for Apple platforms including macOS, iOS, iPadOS, tvOS, watchOS, and Safari. The instances feature an 8th generation, 6-core Intel Core i7 (Coffee Lake) processor running at 3.2 GHz, with Turbo Boost up to 4.6 GHz. There’s 32 GiB of memory and access to other AWS services including Amazon Elastic Block Store (EBS), Amazon Elastic File System (EFS), Amazon FSx for Windows File Server, Amazon Simple Storage Service (S3), AWS Systems Manager, and so forth.

On the networking side, the instances run in a Virtual Private Cloud (VPC) and include ENA networking with up to 10 Gbps of throughput. With EBS-Optimization, and the ability to deliver up to 55,000 IOPS (16KB block size) and 8 Gbps of throughput for data transfer, EBS volumes attached to the instances can deliver the performance needed to support I/O-intensive build operations.

Mac instances run macOS 10.14 (Mojave) and 10.15 (Catalina) and can be accessed via command line (SSH) or remote desktop (VNC). The AMIs (Amazon Machine Images) for EC2 Mac instances are EC2-optimized and include the AWS goodies that you would find on other AWS AMIs: An ENA driver, the AWS Command Line Interface (CLI), the CloudWatch Agent, CloudFormation Helper Scripts, support for AWS Systems Manager, and the ec2-user account. You can use these AMIs as-is, or you can install your own packages and create custom AMIs (the homebrew-aws repo contains the additional packages and documentation on how to do this).

You can use these instances to create build farms, render farms, and CI/CD farms that target all of the Apple environments that I mentioned earlier. You can provision new instances in minutes, giving you the ability to quickly & cost-effectively build code for multiple targets without having to own & operate your own hardware. You pay only for what you use, and you get to benefit from the elasticity, scalability, security, and reliability provided by EC2.

EC2 Mac Instances in Action
As always, I asked the EC2 team for access to an instance in order to put it through its paces. The instances are available in Dedicated Host form, so I started by allocating a host:

$ aws ec2 allocate-hosts --instance-type mac1.metal \
  --availability-zone us-east-1a --auto-placement on \
  --quantity 1 --region us-east-1

Then I launched my Mac instance from the command line (console, API, and CloudFormation can also be used):

$ aws ec2 run-instances --region us-east-1 \
  --instance-type mac1.metal \
  --image-id  ami-023f74f1accd0b25b \
  --key-name keys-jbarr-us-east  --associate-public-ip-address

I took Luna for a very quick walk, and returned to find that my instance was ready to go. I used the console to give it an appropriate name:

Then I connected to my instance:

From here I can install my development tools, clone my code onto the instance, and initiate my builds.

I can also start a VNC server on the instance and use a VNC client to connect to it:

Note that the VNC protocol is not considered secure, and this feature should be used with care. I used a security group that allowed access only from my desktop’s IP address:

I can also tunnel the VNC traffic over SSH; this is more secure and would not require me to open up port 5900.

Things to Know
Here are a couple of fast-facts about the Mac instances:

AMI Updates – We expect to make new AMIs available each time Apple releases major or minor versions of each supported OS. We also plan to produce AMIs with updated Amazon packages every quarter.

Dedicated Hosts – The instances are launched as EC2 Dedicated Hosts with a minimum tenancy of 24 hours. This is largely transparent to you, but it does mean that the instances cannot be used as part of an Auto Scaling Group.

Purchase Models – You can run Mac instances On-Demand and you can also purchase a Savings Plan.

Apple M1 Chip – EC2 Mac instances with the Apple M1 chip are already in the works, and planned for 2021.

Launch one Today
You can start using Mac instances in the US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Ireland), and Asia Pacific (Singapore) Regions today, and check out this video for more information!