All posts by Jeff Barr

AWS Asia Pacific (Osaka) Region Now Open to All, with Three AZs and More Services

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-asia-pacific-osaka-region-now-open-to-all-with-three-azs-more-services/

AWS has had a presence in Japan for a long time! We opened the Asia Pacific (Tokyo) Region in March 2011, added a third Availability Zone (AZ) in 2012, and a fourth in 2018. Since that launch, customers in Japan and around the world have used the region to host an incredibly wide variety of applications!

We opened the Osaka Local Region in 2018 to give our customers in Japan a disaster recovery option for their workloads. Located 400 km from Tokyo, the Osaka Local Region used an isolated, fault-tolerant design contained within a single data center.

From Local to Standard
I am happy to announce that the Osaka Local Region has been expanded and is now a standard AWS region, complete with three Availability Zones. As is always the case with AWS, the AZs are designed to provide physical redundancy, and are able to withstand power outages, internet downtime, floods, and other natural disasters.

The following services are available, with more in the works: Amazon Elastic Kubernetes Service (EKS), Amazon API Gateway, Auto Scaling, Application Auto Scaling, Amazon Aurora, AWS Config, AWS Personal Health Dashboard, AWS IQ, AWS Organizations, AWS Secrets Manager, AWS Shield Standard (regional), AWS Snowball Edge, AWS Step Functions, AWS Systems Manager, AWS Trusted Advisor, AWS Certificate Manager, CloudEndure Migration, CloudEndure Disaster Recovery, AWS CloudFormation, Amazon CloudFront, AWS CloudTrail, Amazon CloudWatch, CloudWatch Events, Amazon CloudWatch Logs, AWS CodeDeploy, AWS Database Migration Service, AWS Direct Connect, Amazon DynamoDB, Elastic Container Registry, Amazon Elastic Container Service (ECS), AWS Elastic Beanstalk, Amazon Elastic Block Store (EBS), Amazon Elastic Compute Cloud (EC2), EC2 Image Builder, Elastic Load Balancing, Amazon EMR, Amazon ElastiCache, Amazon EventBridge, AWS Fargate, Amazon Glacier, AWS Glue, AWS Identity and Access Management (IAM), AWS Snowball, AWS Key Management Service (KMS), Amazon Kinesis Data Firehose, Amazon Kinesis Data Streams, AWS Lambda, AWS Marketplace, AWS Mobile SDK, Network Load Balancer, Amazon Redshift, Amazon Relational Database Service (RDS), Amazon Route 53, Amazon Simple Notification Service (SNS), Amazon Simple Queue Service (SQS), Amazon Simple Storage Service (S3), Amazon Simple Workflow Service (SWF), AWS VPN, VM Import/Export, AWS X-Ray, AWS Artifact, AWS PrivateLink, and Amazon Virtual Private Cloud (VPC).

The Asia Pacific (Osaka) Region supports the C5, C5d, D2, I3, I3en, M5, M5d, R5d, and T3 instance types, in On-Demand, Spot, and Reserved Instance form. X1 and X1e instances are available in a single AZ.
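If you want to confirm availability for yourself, the EC2 DescribeInstanceTypeOfferings API lists what is offered in each AZ. Here’s a quick sketch (ap-northeast-3 is the Osaka region code; swap in any instance type you care about):

$ aws ec2 describe-instance-type-offerings \
  --region ap-northeast-3 --location-type availability-zone \
  --filters Name=instance-type,Values=x1e.xlarge \
  --query "InstanceTypeOfferings[].Location"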

In addition to the AWS regions in Tokyo and Osaka, customers in Japan also benefit from:

  • 16 CloudFront edge locations in Tokyo.
  • One CloudFront edge location in Osaka.
  • One CloudFront Regional Edge Cache in Tokyo.
  • Two AWS Direct Connect locations in Tokyo.
  • One Direct Connect location in Osaka.

Here are typical latency values from the Asia Pacific (Osaka) Region to other cities in the area:

City       Latency
Nagoya     2-5 ms
Hiroshima  2-5 ms
Tokyo      5-8 ms
Fukuoka    11-13 ms
Sendai     12-15 ms
Sapporo    14-17 ms
Seoul      27 ms
Taipei     29 ms
Hong Kong  38 ms
Manila     49 ms

AWS Customers in Japan
As I mentioned earlier, our customers are using the AWS regions in Tokyo and Osaka to host an incredibly wide variety of applications. Here’s a sampling:

Mitsubishi UFJ Financial Group (MUFG) – This financial services company adopted a cloud-first strategy and did their first AWS deployment in 2017. They have built a data platform for their banking and group data that helps them to streamline administrative processes, and also migrated a market risk management system. MUFG has been using the Osaka Local Region and is planning to use the Asia Pacific (Osaka) Region to run more workloads and to support their ongoing digital transformation.

KDDI Corporation (KDDI) – This diversified (telecommunication, financial services, Internet, electricity distribution, consumer appliance, and more) company started using AWS in 2016 after AWS met KDDI’s stringent internal security standards. They currently build and run more than 60 services on AWS, including the backend of the au Denki application, used by consumers to monitor electricity usage and rates. They plan to use the Asia Pacific (Osaka) Region to initiate multi-region service to their customers in Japan.

OGIS-RI – Founded in 1983, this global IT consulting firm is a part of the Daigas Group of companies. OGIS-RI provides information strategy, systems integration, systems development, network construction, support, and security. They use AWS to provide their enterprise customers with ekul, a data measurement service that measures and visualizes gas and electricity usage in real time and sends the data to corporate customers across Japan.

Sony Bank – Founded in 2001 as an asset management bank for individuals, Sony Bank provides services that include foreign currency deposits, home loans, investment trusts, and debit cards. Their gradual migration of internal banking systems to AWS began in 2013 and was 80% complete at the end of 2019. This migration reduced their infrastructure costs by 60% and more than halved the time it once took to procure and build out new infrastructure.

AWS Resources in Japan
As a quick reminder, enterprises, government and research organizations, small and medium businesses, educators, and startups in Japan have access to a wide variety of AWS and community resources.

Available Now
The new region is open to all AWS customers and you can start to use it today!

Jeff;

 

Amazon Location – Add Maps and Location Awareness to Your Applications

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/amazon-location-add-maps-and-location-awareness-to-your-applications/

We want to make it easier and more cost-effective for you to add maps, location awareness, and other location-based features to your web and mobile applications. Until now, doing this has been somewhat complex and expensive, and also tied you to the business and programming models of a single provider.

Introducing Amazon Location Service
Today we are making Amazon Location available in preview form and you can start using it today. Priced at a fraction of common alternatives, Amazon Location Service gives you access to maps and location-based services from multiple providers on an economical, pay-as-you-go basis.

You can use Amazon Location Service to build applications that know where they are and respond accordingly. You can display maps, validate addresses, perform geocoding (turn an address into a location), track the movement of packages and devices, and much more. You can easily set up geofences and receive notifications when tracked items enter or leave a geofenced area. You can even overlay your own data on the map while retaining full control.

You can access Amazon Location Service from the AWS Management Console, AWS Command Line Interface (CLI), or via a set of APIs. You can also use existing map libraries such as Mapbox GL and Tangram.

All About Amazon Location
Let’s take a look at the types of resources that Amazon Location Service makes available to you, and then talk about how you can use them in your applications.

Maps – Amazon Location Service lets you create maps that make use of data from our partners. You can choose between maps and map styles provided by Esri and by HERE Technologies, with the potential for more maps & more styles from these and other partners in the future. After you create a map, you can retrieve a tile (at one of up to 16 zoom levels) using the GetMapTile function. You won’t do this directly, but will use Mapbox GL, Tangram, or another library instead.
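For example, fetching a single tile from the CLI might look like this sketch (the map name is a placeholder; x, y, and z are standard web map tile coordinates, and these values land near Seattle at zoom level 12):

$ aws location get-map-tile --map-name MyMap1 \
  --x 656 --y 1430 --z 12 tile.png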

Place Indexes – You can choose between indexes provided by Esri and HERE. The indexes support the SearchPlaceIndexForPosition function which returns places, such as residential addresses or points of interest (often known as POI) that are closest to the position that you supply, while also performing reverse geocoding to turn the position (a pair of coordinates) into a legible address. Indexes also support the SearchPlaceIndexForText function, which searches for addresses, businesses, and points of interest using free-form text such as an address, a name, a city, or a region.

Trackers – Trackers receive location updates from one or more devices via the BatchUpdateDevicePosition function, and can be queried for the current position (GetDevicePosition) or location history (GetDevicePositionHistory) of a device. Trackers can also be linked to Geofence Collections to implement monitoring of devices as they move in and out of geofences.

Geofence Collections – Each collection contains a list of geofences that define geographic boundaries. Here’s a geofence (created with geojson.io) that outlines a park near me:
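As a sketch of how this looks from the CLI, you can create a collection and then add a geofence to it by supplying a GeoJSON-style polygon (the names match the examples later in this post; the coordinates are illustrative, and the exterior ring must be closed and, per the service documentation, wound counterclockwise):

$ aws location create-geofence-collection --collection-name MyGeoFences1

$ aws location put-geofence --collection-name MyGeoFences1 \
  --geofence-id LakeUnionPark \
  --geometry '{"Polygon": [[[-122.3381,47.6259],[-122.3365,47.6259],[-122.3365,47.6270],[-122.3381,47.6270],[-122.3381,47.6259]]]}'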

Amazon Location in Action
I can use the AWS Management Console to get started with Amazon Location and then move on to the AWS Command Line Interface (CLI) or the APIs if necessary. I open the Amazon Location Service Console, and I can either click Try it! to create a set of starter resources, or I can open up the navigation on the left and create them one-by-one. I’ll go for one-by-one, and click Maps:

Then I click Create map to proceed:

I enter a Name and a Description:

Then I choose the desired map and click Create map:

The map is created and ready to be added to my application right away:

Now I am ready to embed the map in my application, and I have several options including the Amplify JavaScript SDK, the Amplify Android SDK, the Amplify iOS SDK, Tangram, and Mapbox GL (read the Developer Guide to learn more about each option).

Next, I want to track the position of devices so that I can be notified when they enter or exit a given region. I use a GeoJSON editing tool such as geojson.io to create a geofence that is built from polygons, and save (download) the resulting file:

I click Create geofence collection in the left-side navigation, and in Step 1, I add my GeoJSON file, enter a Name and Description, and click Next:

Now I enter a Name and a Description for my tracker, and click Next. It will be linked to the geofence collection that I just created:

The next step is to arrange for the tracker to send events to Amazon EventBridge so that I can monitor them in CloudWatch Logs. I leave the settings as-is, and click Next to proceed:

I review all of my choices, and click Finalize to move ahead:

The resources are created, set up, and ready to go:

I can then write code or use the CLI to update the positions of my devices:

$ aws location batch-update-device-position \
   --tracker-name MyTracker1 \
   --updates "DeviceId=Jeff1,Position=-122.33805,47.62748,SampleTime=2020-11-05T02:59:07+0000"

After I do this a time or two, I can retrieve the position history for the device:

$ aws location get-device-position-history \
  --tracker-name MyTracker1 --device-id Jeff1
------------------------------------------------
|           GetDevicePositionHistory           |
+----------------------------------------------+
||               DevicePositions              ||
|+---------------+----------------------------+|
||  DeviceId     |  Jeff1                     ||
||  ReceivedTime |  2020-11-05T02:59:17.246Z  ||
||  SampleTime   |  2020-11-05T02:59:07Z      ||
|+---------------+----------------------------+|
|||                 Position                 |||
||+------------------------------------------+||
|||  -122.33805                              |||
|||  47.62748                                |||
||+------------------------------------------+||
||               DevicePositions              ||
|+---------------+----------------------------+|
||  DeviceId     |  Jeff1                     ||
||  ReceivedTime |  2020-11-05T03:02:08.002Z  ||
||  SampleTime   |  2020-11-05T03:01:29Z      ||
|+---------------+----------------------------+|
|||                 Position                 |||
||+------------------------------------------+||
|||  -122.43805                              |||
|||  47.52748                                |||
||+------------------------------------------+||

I can write Amazon EventBridge rules that watch for the events, and use them to perform any desired processing. Events are published when a device enters or leaves a geofenced area, and look like this:

{
  "version": "0",
  "id": "7cb6afa8-cbf0-e1d9-e585-fd5169025ee0",
  "detail-type": "Location Geofence Event",
  "source": "aws.geo",
  "account": "123456789012",
  "time": "2020-11-05T02:59:17.246Z",
  "region": "us-east-1",
  "resources": [
    "arn:aws:geo:us-east-1:123456789012:geofence-collection/MyGeoFences1",
    "arn:aws:geo:us-east-1:123456789012:tracker/MyTracker1"
  ],
  "detail": {
        "EventType": "ENTER",
        "GeofenceId": "LakeUnionPark",
        "DeviceId": "Jeff1",
        "SampleTime": "2020-11-05T02:59:07Z",
        "Position": [-122.33805, 47.52748]
  }
}
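For instance, a rule that matches these events and forwards them to CloudWatch Logs might look like this sketch (the rule name and account ID are placeholders, and it assumes a log group named /aws/events/geofences already exists):

$ aws events put-rule --name GeofenceEvents \
  --event-pattern '{"source": ["aws.geo"], "detail-type": ["Location Geofence Event"]}'

$ aws events put-targets --rule GeofenceEvents \
  --targets "Id=1,Arn=arn:aws:logs:us-east-1:123456789012:log-group:/aws/events/geofences"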

Finally, I can create and use place indexes so that I can work with geographical objects. I’ll use the CLI for a change of pace. I create the index:

$ aws location create-place-index \
  --index-name MyIndex1 --data-source Here

Then I query it to find the addresses and points of interest near the location:

$ aws location search-place-index-for-position --index-name MyIndex1 \
  --position "[-122.33805,47.62748]" --output json \
  |  jq .Results[].Place.Label
"Terry Ave N, Seattle, WA 98109, United States"
"900 Westlake Ave N, Seattle, WA 98109-3523, United States"
"851 Terry Ave N, Seattle, WA 98109-4348, United States"
"860 Terry Ave N, Seattle, WA 98109-4330, United States"
"Seattle Fireboat Duwamish, 860 Terry Ave N, Seattle, WA 98109-4330, United States"
"824 Terry Ave N, Seattle, WA 98109-4330, United States"
"9th Ave N, Seattle, WA 98109, United States"
...

I can also do a text-based search:

$ aws location search-place-index-for-text --index-name MyIndex1 \
  --text Coffee --bias-position "[-122.33805,47.62748]" \
  --output json | jq .Results[].Place.Label
"Mohai Cafe, 860 Terry Ave N, Seattle, WA 98109, United States"
"Starbucks, 1200 Westlake Ave N, Seattle, WA 98109, United States"
"Metropolitan Deli and Cafe, 903 Dexter Ave N, Seattle, WA 98109, United States"
"Top Pot Doughnuts, 590 Terry Ave N, Seattle, WA 98109, United States"
"Caffe Umbria, 1201 Westlake Ave N, Seattle, WA 98109, United States"
"Starbucks, 515 Westlake Ave N, Seattle, WA 98109, United States"
"Cafe 815 Mercer, 815 9th Ave N, Seattle, WA 98109, United States"
"Victrola Coffee Roasters, 500 Boren Ave N, Seattle, WA 98109, United States"
"Specialty's, 520 Terry Ave N, Seattle, WA 98109, United States"
...

Both of the searches have other options; read the Geocoding, Reverse Geocoding, and Search documentation to learn more.

Things to Know
Amazon Location is launching today as a preview, and you can get started with it right away. During the preview we plan to add an API for routing, and will also do our best to respond to customer feedback and feature requests as they arrive.

Pricing is based on usage, with an initial evaluation period that lasts for three months and lets you make numerous calls to the Amazon Location APIs at no charge. After the evaluation period you pay the prices listed on the Amazon Location Pricing page.

Amazon Location is available in the US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Ireland), and Asia Pacific (Tokyo) Regions.

Jeff;

 

Join the Preview – Amazon Managed Service for Prometheus (AMP)

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/join-the-preview-amazon-managed-service-for-prometheus-amp/

Observability is an essential aspect of running cloud infrastructure at scale. You need to know that your resources are healthy and performing as expected, and that your system is delivering the desired level of performance to your customers.

A lot of challenges arise when monitoring container-based applications. First, because container resources are transient and there are lots of metrics to watch, the monitoring data has strikingly high cardinality. In plain language this means that there are lots of unique values, which can make it harder to define a space-efficient storage model and to create queries that return meaningful results. Second, because a well-architected container-based system is composed using a large number of moving parts, ingesting, processing, and storing the monitoring data can become an infrastructure challenge of its own.

Prometheus is a leading open-source monitoring solution with an active developer and user community. It has a multi-dimensional data model that is a great fit for time series data collected from containers.

Introducing Amazon Managed Service for Prometheus (AMP)
Today we are launching a preview of Amazon Managed Service for Prometheus (AMP). This fully-managed service is 100% compatible with Prometheus. It supports the same metrics, the same PromQL queries, and can also make use of the 150+ Prometheus exporters. AMP runs across multiple Availability Zones for high availability, and is powered by CNCF Cortex for horizontal scalability. AMP will easily scale to ingest, store, and query millions of time series metrics.

The preview includes support for Amazon Elastic Kubernetes Service (EKS) and Amazon Elastic Container Service (ECS). It can also be used to monitor your self-managed Kubernetes clusters that are running in the cloud or on-premises.

Getting Started with Amazon Managed Service for Prometheus (AMP)
After joining the preview, I open the AMP Console, enter a name for my AMP workspace, and click Create to get started (API and CLI support is also available):
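If you prefer the command line, creating and listing workspaces looks roughly like this (the alias is arbitrary):

$ aws amp create-workspace --alias my-metrics
$ aws amp list-workspaces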

My workspace is active within a minute or so. The console provides me with the endpoints that I can use to write data to my workspace, and to issue queries:

It also provides guidance on how to configure an existing Prometheus server to send metrics to the AMP workspace:

I can also use AWS Distro for OpenTelemetry to scrape Prometheus metrics and send them to my AMP workspace.

Once I have stored some metrics in my workspace, I can run PromQL queries and I can use Grafana to create dashboards and other visualizations. Here’s a sample Grafana dashboard:
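The query endpoint is Prometheus-compatible, so any tool that can SigV4-sign HTTP requests can issue PromQL directly. Here’s a sketch using the open source awscurl utility (the workspace ID is a placeholder; substitute the query endpoint shown in your console):

$ pip install awscurl
$ awscurl --service aps --region us-west-2 \
  "https://aps-workspaces.us-west-2.amazonaws.com/workspaces/ws-EXAMPLE/api/v1/query?query=up"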

Join the Preview
As noted earlier, we’re launching Amazon Managed Service for Prometheus (AMP) in preview form and you are welcome to try it out today.

We’ll have more info (and a more detailed blog post) at launch time.

Jeff;

AWS CloudShell – Command-Line Access to AWS Resources

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-cloudshell-command-line-access-to-aws-resources/

No matter how much automation you have built, no matter how great you are at practicing Infrastructure as Code (IaC), and no matter how successfully you have transitioned from pets to cattle, you sometimes need to interact with your AWS resources at the command line. You might need to check or adjust a configuration file, make a quick fix to a production environment, or even experiment with some new AWS services or features.

Some of our customers feel most at home when working from within a web browser and have yet to set up or customize their own command-line interface (CLI). They tell us that they don’t want to deal with client applications, public keys, AWS credentials, tooling, and so forth. While none of these steps are difficult or overly time-consuming, they do add complexity and friction and we always like to help you to avoid both.

Introducing AWS CloudShell
Today we are launching AWS CloudShell, with the goal of making the process of getting to an AWS-enabled shell prompt simple and secure, with as little friction as possible. Every shell environment that you run with CloudShell has the AWS Command Line Interface (CLI) (v2) installed and configured so you can run aws commands fresh out of the box. The environments also include the Python and Node runtimes, with many more to come in the future.

To get started, I simply click the CloudShell icon in the AWS Management Console:

My shell sets itself up in a matter of seconds and I can issue my first aws command immediately:
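Credentials are inherited from my console session, so simple commands like these work with zero setup:

$ aws sts get-caller-identity
$ aws s3 ls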

The shell environment is based on Amazon Linux 2. I can store up to 1 GB of files per region in my home directory and they’ll be available each time I open a shell in the region. This includes shell configuration files such as .bashrc and shell history files.

I can access the shell via SSO or as any IAM principal that can log in to the AWS Management Console, including federated roles. In order to access CloudShell, the AWSCloudShellFullAccess policy must be in effect. The shell runs as a normal (non-privileged) user, but I can sudo and install packages if necessary.

Here are a couple of features that you should know about:

Themes & Font Sizes – You can switch between light and dark color themes, and choose any one of five font sizes:

Tabs and Sessions – You can have multiple sessions open within the same region, and you can control the tabbing behavior, with options to split horizontally and vertically:

You can also download files from the shell environment to your desktop, and upload them from your desktop to the shell.

Things to Know
Here are a couple of important things to keep in mind when you are evaluating CloudShell:

Timeouts & Persistence – Each CloudShell session will time out after 20 minutes or so of inactivity, and can be reestablished by refreshing the window:

Regions – CloudShell is available today in the US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Ireland), and Asia Pacific (Tokyo) Regions, with the remaining regions on the near-term roadmap.

Persistent Storage – Files stored within $HOME persist between invocations of CloudShell with a limit of 1 GB per region; all other storage is ephemeral. This means that any software that is installed outside of $HOME will not persist, and that no matter what you change (or break), you can always begin anew with a fresh CloudShell environment.
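In practice, this means you should keep anything you care about under $HOME. A quick sketch of the distinction:

# Lives under $HOME, so it survives across sessions:
$ pip3 install --user boto3
$ echo 'alias ll="ls -l"' >> ~/.bashrc

# Lives outside $HOME, so it is gone once the environment recycles:
$ sudo yum install -y tmux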

Network Access – Sessions can make outbound connections to the Internet, but do not allow any type of inbound connections. Sessions cannot currently connect to resources inside of private VPC subnets, but that’s also on the near-term roadmap.

Runtimes – In addition to the Python and Node runtimes, Bash, PowerShell, jq, git, the ECS CLI, the SAM CLI, npm, and pip are already installed and ready to use.

Pricing – You can use up to 10 concurrent shells in each region at no charge. You only pay for other AWS resources you use with CloudShell to create and run your applications.

Try it Out
AWS CloudShell is available now and you can start using it today. Launch one and give it a try, and let us know what you think!

Jeff;

PennyLane on Braket + Progress Toward Fault-Tolerant Quantum Computing + Tensor Network Simulator

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/pennylane-on-braket-progress-toward-fault-tolerant-quantum-computing-tensor-network-simulator/

I first wrote about Amazon Braket last year and invited you to Get Started with Quantum Computing! Since that launch we have continued to push forward, and have added several important & powerful new features to Amazon Braket:

August 2020 – General Availability of Amazon Braket with access to quantum computing hardware from D-Wave, IonQ, and Rigetti.

September 2020 – Access to D-Wave’s Advantage Quantum Processing Unit (QPU), which includes more than 5,000 qubits and 15-way connectivity.

November 2020 – Support for resource tagging, AWS PrivateLink, and manual qubit allocation. The first two features make it easy for you to connect your existing AWS applications to the new ones that you build with Amazon Braket, and should help you to envision what a production-class cloud-based quantum computing application will look like in the future. The last feature is particularly interesting to researchers; from what I understand, certain qubits within a given piece of quantum computing hardware can have individual physical and connectivity properties that might make them perform somewhat better when used as part of a quantum circuit. You can read about Allocating Qubits on QPU Devices to learn more (this is somewhat similar to the way that a compiler allocates CPU registers to frequently used variables).

In my initial blog post I also announced the formation of the AWS Center for Quantum Computing adjacent to Caltech.

As I write this, we are in the Noisy Intermediate Scale Quantum (NISQ) era. This description captures the state of the art in quantum computers: each gate in a quantum computing circuit introduces a certain amount of accuracy-destroying noise, and the cumulative effect of this noise imposes practical limits on the scale of the problems that can be addressed.

Update Time
We are working to address this challenge, as are many others in the quantum computing field. Today I would like to give you an update on what we are doing at the practical and the theoretical level.

Similar to the way that CPUs and GPUs work hand-in-hand to address large scale classical computing problems, the emerging field of hybrid quantum algorithms joins CPUs and QPUs to speed up specific calculations within a classical algorithm. This allows for shorter quantum executions that are less susceptible to the cumulative effects of noise and that run well on today’s devices.

Variational quantum algorithms are an important type of hybrid quantum algorithm. The classical code (in the CPU) iteratively adjusts the parameters of a parameterized quantum circuit, in a manner reminiscent of the way that a neural network is built by repeatedly processing batches of training data and adjusting the parameters based on the results of an objective function. The output of the objective function provides the classical code with guidance that helps to steer the process of tuning the parameters in the desired direction. Mathematically (I’m way past the edge of my comfort zone here), this is called differentiable quantum computing.

So, with this rather lengthy introduction, what are we doing?

First, we are making the PennyLane library available so that you can build hybrid quantum-classical algorithms and run them on Amazon Braket. This library lets you “follow the gradient” and write code to address problems in computational chemistry (by way of the included Q-Chem library), machine learning, and optimization. My AWS colleagues have been working with the PennyLane team to create an integrated experience when PennyLane is used together with Amazon Braket.

PennyLane is pre-installed in Braket notebooks and you can also install the Braket-PennyLane plugin in your IDE. Once you do this, you can train quantum circuits as you would train neural networks, while also making use of familiar machine learning libraries such as PyTorch and TensorFlow. When you use PennyLane on the managed simulators that are included in Amazon Braket, you can train your circuits up to 10 times faster by using parallel circuit execution.

Second, the AWS Center for Quantum Computing is working to address the noise issue in two different ways: we are investigating ways to make the gates themselves more accurate, while also working on the development of more efficient ways to encode information redundantly across multiple qubits. Our new paper, Building a Fault-Tolerant Quantum Computer Using Concatenated Cat Codes speaks to both of these efforts. While not light reading, the 100+ page paper proposes the construction of a 2-D grid of micron-scale electro-acoustic qubits that are coupled via superconducting circuits:

Interestingly, this proposed qubit design was used to model a Toffoli gate, and then tested via simulations that ran for 170 hours on c5.18xlarge instances. In a very real sense, the classical computers are being used to design and then simulate their future quantum companions.

The proposed hybrid electro-acoustic qubits are far smaller than what is available today, and also offer a > 10x reduction in overhead (measured in the number of physical qubits required per error-corrected qubit and the associated control lines). In addition to working on the experimental development of this architecture based around hybrid electro-acoustic qubits, the AWS CQC team will also continue to explore other promising alternatives for fault-tolerant quantum computing to bring new, more powerful computing resources to the world.

And Third, we are expanding the choice of managed simulators that are available on Amazon Braket. In addition to the state vector simulator (which can simulate up to 34 qubits), you can use the new tensor network simulator that can simulate up to 50 qubits for certain circuits. This simulator builds a graph representation of the quantum circuit and uses the graph to find an optimized way to process it.

Help Wanted
If you are ready to help us to push the state of the art in quantum computing, take a look at our open positions. We are looking for Quantum Research Scientists, Software Developers, Hardware Developers, and Solutions Architects.

Time to Learn
It is still Day One (as we often say at Amazon) when it comes to quantum computing and now is the time to learn more and to get some experience with it. Be sure to check out the Braket Tutorials repository and let me know what you think.

Jeff;

PS – If you are ready to start exploring ways that you can put quantum computing to work in your organization, be sure to take a look at the Amazon Quantum Solutions Lab.

In the Works – AWS Region in Melbourne, Australia

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/in-the-works-aws-region-in-melbourne-australia/

We launched new AWS Regions in Italy and South Africa in 2020, and are working on regions in Indonesia, Japan, Spain, India, and Switzerland.

Melbourne, Australia in 2022
Today I am happy to announce that the Asia Pacific (Melbourne) region is in the works, and will open in the second half of 2022 with three Availability Zones. In addition to the Asia Pacific (Sydney) Region, there are already seven Amazon CloudFront Edge locations in Australia, backed by a Regional Edge cache in Sydney.

This will be our second region in Australia, and our ninth in Asia Pacific, joining the existing region in Australia along with those in China, India, Japan, Korea, and Singapore. There are 77 Availability Zones within 24 AWS Regions in operation today, with 18 more Availability Zones and six more Regions (including this one) underway.

As part of our commitment to the Climate Pledge, Amazon is on a path to powering our operations with 100% renewable energy by 2025 as part of our goal to reach net zero carbon by 2040. To this end, we have invested in two renewable energy projects in Australia with a combined 165 MW capacity and the ability to generate 392,000 MWh annually.

The new region will give you (and hundreds of thousands of other active AWS customers in Australia) additional architectural options including the ability to store backup data in geographically separated locations within Australia.

AWS in Australia
I have made several trips to Australia on behalf of AWS over the last 4 or 5 years and I always enjoy meeting our customers while I am there.

Our Australian customers use AWS to accelerate innovation, increase agility, and to drive cost savings. Here are a few examples:

Commonwealth Bank of Australia (CBA) – As Australia’s leading provider of personal, business, and institutional banking services, CBA counts on AWS to provide infrastructure that is safe, resilient, and secure. They are long-time advocates of cloud computing and have been using AWS since 2012.

Swinburne University – The university focuses on innovation, industry engagement, and social inclusion. They started using AWS in 2016 and have collaborated on innovations that support communities in Victoria. The Swinburne Data for Social Good Cloud Innovation Centre uses cloud technologies and intelligent data analytics to solve real-world problems.

XY Sense – Based in Melbourne, this startup is using smart sensors and ML-powered analytics to create technology-enabled workplaces. Their sensor platform takes advantage of multiple AWS services including IoT and serverless, and processes over 7 billion anonymous data points each month.

AWS Partner Network (APN) Partners in Australia are also doing some amazing work with AWS. Again, a few examples:

Versent – Also based in Melbourne, this partner comprises a group of specialist consultants and a product company by the name of Stax. Versent recently helped Land Services South Australia to modernize their full tech stack as part of a shift to AWS (read the case study to learn more).

Deloitte Australia – As an AWS Strategic Global Premier Partner since 2015, Deloitte Australia works with business and public sector agencies, with a focus on delivery of advanced products and services. As part of their work, over 4,000 employees across Deloitte have participated in the Deloitte Cloud Guild and have strengthened their cloud computing skills as a result.

Investing in Developers
Several AWS programs are designed to help to create and upskill the next generation of developers and students so that they are ready to become part of the next generation of IT leadership. AWS re/Start prepares unemployed, underemployed, and transitioning individuals for a career in cloud computing. AWS Academy provides higher education institutions with a free, ready-to-teach cloud computing curriculum. AWS Educate gives students access to AWS services and content that are designed to help them build knowledge and skills in cloud computing.

Stay Tuned
As I noted earlier, the Asia Pacific (Melbourne) Region is scheduled to open in the second half of 2022. As always, we’ll announce the opening in a post on this blog, so stay tuned!

Jeff;

Amazon S3 Update – Strong Read-After-Write Consistency

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/amazon-s3-update-strong-read-after-write-consistency/

When we launched S3 back in 2006, I discussed its virtually unlimited capacity (“…easily store any number of blocks…”), the fact that it was designed to provide 99.99% availability, and that it offered durable storage, with data transparently stored in multiple locations. Since that launch, our customers have used S3 in an amazingly diverse set of ways: backup and restore, data archiving, enterprise applications, web sites, big data, and (at last count) over 10,000 data lakes.

One of the more interesting (and sometimes a bit confusing) aspects of S3 and other large-scale distributed systems is commonly known as eventual consistency. In a nutshell, after a call to an S3 API function such as PUT that stores or modifies data, there’s a small time window where the data has been accepted and durably stored, but not yet visible to all GET or LIST requests. Here’s how I see it:

This aspect of S3 can become very challenging for big data workloads (many of which use Amazon EMR) and for data lakes, both of which require access to the most recent data immediately after a write. To help customers run big data workloads in the cloud, Amazon EMR built EMRFS Consistent View and open source Hadoop developers built S3Guard, which provided a layer of strong consistency for these applications.

S3 is Now Strongly Consistent
After that overly-long introduction, I am ready to share some good news!

Effective immediately, all S3 GET, PUT, and LIST operations, as well as operations that change object tags, ACLs, or metadata, are now strongly consistent. What you write is what you will read, and the results of a LIST will be an accurate reflection of what’s in the bucket. This applies to all existing and new S3 objects, works in all regions, and is available to you at no extra charge! There’s no impact on performance, you can update an object hundreds of times per second if you’d like, and there are no global dependencies.
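In concrete terms, a write is immediately visible to reads and listings. This sketch (with a placeholder bucket name) now behaves deterministically:

$ aws s3api put-object --bucket my-bucket --key report.csv --body report.csv
$ aws s3api head-object --bucket my-bucket --key report.csv     # reflects the PUT immediately
$ aws s3api list-objects-v2 --bucket my-bucket --prefix report  # the listing is accurate too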

This improvement is great for data lakes, but other types of applications will also benefit. Because S3 now has strong consistency, migration of on-premises workloads and storage to AWS should now be easier than ever before.

We’ve been working with the Amazon EMR team and developers in the open-source community to ensure that customers can take advantage of this update with their big data workloads. As a result, you no longer need to use EMRFS Consistent View or S3Guard, further reducing the cost to run big data workloads in AWS.

A Word From Dropbox
Long-time AWS customer Dropbox recently migrated a 34 PB analytics data lake from on-premises Hadoop clusters to S3. Watch this video to learn more about strong consistency and how it has allowed Dropbox to simplify their data lake:

Jeff;

 

 

In the Works – 3 More AWS Local Zones in 2020, and 12 More in 2021

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/in-the-works-more-aws-local-zones/

We launched the first AWS Local Zone in Los Angeles last December, and added a second one (also in Los Angeles) in August of 2020. In my original post, I quoted Andy Jassy’s statement that we would be giving consideration to adding Local Zones in more geographic areas.

Our customers are using the EC2 instances and other compute services in these zones to host artist workstations, local rendering, sports broadcasting, online gaming, financial transaction processing, machine learning inferencing, virtual reality, and augmented reality applications, among others. These applications benefit from the extremely low latency made possible by geographic proximity.

More Local Zones
I’m happy to be able to announce that we are opening three more Local Zones today and plan to open twelve more in 2021.

Local Zones in Boston, Houston, and Miami are now available in preview form and you can request access now. In 2021, we plan to open Local Zones in other key cities and metropolitan areas including New York City, Chicago, and Atlanta.

We are choosing the target cities with the goal of allowing you to provide access with single-digit millisecond latency to the vast majority of users in the Continental United States. You can deploy the parts of your application that are the most sensitive to latency in Local Zones, and deliver amazing performance to your users. In addition to the use cases that I mentioned above, I expect to see many more that have yet to be imagined or built.

Using Local Zones
I stepped through the process of using a Local Zone in my original post, and all that I said there still applies. Here’s what you need to do:

  1. Request access to the preview and await a reply.
  2. Create a new VPC subnet for the Local Zone (see the sketch after this list).
  3. Launch EC2 instances, create EBS volumes, and deploy your application.
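
Once your account has access, steps 1 and 2 boil down to opting in to the zone’s group and then creating a subnet in it. Here’s a sketch, assuming the Boston zone (us-east-1-bos-1) and a placeholder VPC ID:

$ aws ec2 modify-availability-zone-group \
  --group-name us-east-1-bos-1 --opt-in-status opted-in

$ aws ec2 create-subnet --vpc-id vpc-0123456789abcdef0 \
  --cidr-block 10.0.8.0/24 --availability-zone us-east-1-bos-1a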

Things to Know
Here are a couple of things that you should know about the new and upcoming Local Zones:

Instance Types – The Local Zones will have a wide selection of EC2 instance types including C5, R5, T3, and G4 instances.

Purchasing Models – You can use compute capacity in Local Zones on an On-Demand basis and you can also purchase a Savings Plan in order to receive discounts. Some of the Local Zones also support the use of Spot Instances.

AWS Services – Local Zones support Amazon Elastic Compute Cloud (EC2), Amazon Elastic Block Store (EBS), Amazon Elastic Kubernetes Service (EKS), and Amazon Virtual Private Cloud, with the door open for other services in the future. You can use services such as Auto Scaling, AWS CloudFormation, and Amazon CloudWatch in the parent region to launch, control, and monitor the AWS resources in a Local Zone.

Direct Connect – As I mentioned earlier, some of our customers are using AWS Direct Connect to establish private connections between Local Zones and their existing on-premises or colo IT infrastructure. We are working with our Direct Connect Partners to make Direct Connect available for the new zones and the specifics will vary on a zone-by-zone basis.

The AWS Local Zones Features page contains additional zone-by-zone information on all of the items listed above.

Learn More
Here are some resources to help you to learn more about Local Zones:

Blog Post – Low-Latency Computing with AWS Local Zones.

Sites – AWS Local Zones home page, AWS Local Zones FAQ.

Jeff;

re:Invent 2020 – Preannouncements for Tuesday, December 1

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/reinvent-2020-preannouncements-for-tuesday-december-1/

Andy Jassy just gave you a hint about some upcoming AWS launches, and I’ll have more to say about them when they are ready. To tide you over until then, here’s a summary of what he pre-announced:

Smaller AWS Outpost Form Factors – We are introducing two new sizes of AWS Outposts, suitable for locations such as branch offices, factories, retail stores, health clinics, hospitals, and cell sites that are space-constrained and need access to low-latency compute capacity. The 1U (rack unit) Outposts server will be equipped with AWS Graviton 2 processors; the 2U Outposts server will be equipped with Intel® processors. Both sizes will be able to run EC2, ECS, and EKS workloads locally, all provisioned and managed by AWS (including automated patching and updates).

Amazon ECS Anywhere – You will soon be able to run Amazon Elastic Container Service (ECS) in your own data center, giving you the power to select and standardize on a single container orchestrator that runs both on-premises and in the cloud. You will have access to the same ECS APIs, and you will be able to manage all of your ECS resources with the same cluster management, workload scheduling, and monitoring tools and utilities. Amazon ECS Anywhere will also make it easy for you to containerize your existing on-premises workloads, run them locally, and then connect them to the AWS Cloud.

Amazon EKS Anywhere – You will also soon be able to run Amazon Elastic Kubernetes Service (EKS) in your own data center, making it easy for you to set up, upgrade, and operate Kubernetes clusters. The default configuration for each new cluster will include logging, monitoring, networking, and storage, all optimized for the environment that will host the cluster. You will be able to spin up clusters on demand, and you will be able to backup, recover, patch, and upgrade production clusters with minimal disruption.

Again, I’ll have more to say about these when they are ready, so stay tuned, and enjoy the rest of AWS re:Invent!

Jeff;

Now in Preview – Larger & Faster io2 Block Express EBS Volumes with Higher Throughput

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/now-in-preview-larger-faster-io2-ebs-volumes-with-higher-throughput/

Amazon Elastic Block Store (EBS) volumes have been an essential EC2 component since they were launched in 2008. Today, you can choose between six types of HDD and SSD volumes, each designed to serve a particular use case and to deliver a specified amount of performance.

Earlier this year we launched io2 volumes with 100x higher durability and 10x more IOPS/GiB than the earlier io1 volumes. The io2 volumes are a great fit for your most I/O-hungry and latency-sensitive applications, including high-performance, business-critical workloads.

Even More
Today we are opening up a preview of io2 Block Express volumes that are designed to deliver even higher performance!

Built on our new EBS Block Express architecture that takes advantage of some advanced communication protocols implemented as part of the AWS Nitro System, the volumes will give you up to 256K IOPS & 4000 MBps of throughput and a maximum volume size of 64 TiB, all with sub-millisecond, low-variance I/O latency. Throughput scales proportionally at 0.256 MB/second per provisioned IOPS, up to a maximum of 4000 MBps per volume. You can provision 1000 IOPS per GiB of storage, twice as many as before. The increased volume size & higher throughput means that you will no longer need to stripe multiple EBS volumes together, reducing complexity and management overhead.
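At 10,000 provisioned IOPS, for example, a volume can sustain 10,000 × 0.256 = 2,560 MBps; provisioning 15,625 IOPS or more reaches the 4000 MBps ceiling.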

Block Express is a modular storage system that is designed to increase performance and scale. Scalable Reliable Datagrams (as described in A Cloud-Optimized Transport Protocol for Elastic and Scalable HPC) are implemented using custom-built, dedicated hardware, making communication between Block Express volumes and Nitro-powered EC2 instances fast and efficient. This is, in fact, the same technology that the Elastic Fabric Adapter (EFA) uses to support high-end HPC and Machine Learning workloads on AWS.

Putting it all together, these volumes are going to deliver amazing performance for your SAP HANA, Microsoft SQL Server, Oracle, and Apache Cassandra workloads, and for your mission-critical transaction processing applications such as airline reservation systems and banking that once mandated the use of an expensive and inflexible SAN (Storage Area Network).

Join the Preview
The preview is currently available in the US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Singapore), Asia Pacific (Tokyo), and Europe (Frankfurt) Regions. During the preview, we support the use of R5b instances, with support for other Nitro-powered instances in the works.

You can opt-in to the preview on a per-account, per-region basis, create new io2 Block Express volumes, and then attach them to R5b instances. All newly created io2 volumes in that account/region will then make use of Block Express, and will perform as described above.
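Creating and attaching one of these volumes uses the standard EBS commands; if I understand the preview mechanics correctly, there is no new volume type to specify. A sketch with placeholder IDs (16 TiB at 100,000 IOPS stays within the 1000 IOPS/GiB limit):

$ aws ec2 create-volume --volume-type io2 \
  --size 16384 --iops 100000 --availability-zone us-east-1a

$ aws ec2 attach-volume --volume-id vol-0123456789abcdef0 \
  --instance-id i-0123456789abcdef0 --device /dev/sdf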

This is still a work in progress. We’re still adding support for a couple of features (Multi-Attach, Elastic Volumes, and Fast Snapshot Restore) and we’re building a new I/O fencing feature so that you can attach the same volume to multiple instances while ensuring consistent access and protecting shared data.

The volumes support encryption, but you can’t create encrypted volumes from unencrypted AMIs or snapshots, or from encrypted AMIs or snapshots that were shared from another AWS account. We expect to take care of all of these items during the preview. To learn more, visit the io2 page and read the io2 documentation.

To get started, opt-in to the io2 Block Express Preview today!

Jeff;

 

New EC2 M5zn Instances – Fastest Intel Xeon Scalable CPU in the Cloud

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-ec2-m5zn-instances-fastest-intel-xeon-scalable-cpu-in-the-cloud/

We launched the compute-intensive z1d instances in mid-2018 for customers who asked us for extremely high per-core performance and a high memory-to-core ratio to power their front-end Electronic Design Automation (EDA), actuarial, and CPU-bound relational database workloads.

In order to address a complementary set of use cases, customers have asked us for an EC2 instance that will give them high per-core performance like z1d, with no local NVMe storage, higher networking throughput, and a reduced memory-to-vCPU ratio. They have indicated that if we built an instance with this set of attributes, it would be an excellent fit for workloads such as gaming, financial applications, simulation modeling applications such as those used in the automobile, aerospace, energy and telecommunication industries, and High Performance Computing (HPC).

Introducing M5zn
Building on the success of the z1d instances, we are launching M5zn instances in seven sizes today. These instances use 2nd generation custom Intel® Xeon® Scalable (Cascade Lake) processors with a sustained all-core turbo clock frequency of up to 4.5 GHz. M5zn instances feature high frequency processing, are a variant of the general-purpose M5 instances, and are built on the AWS Nitro System. These instances also feature low latency 100 Gbps networking and the Elastic Fabric Adapter (EFA), in order to improve performance for HPC and communication-intensive applications.

Here are the M5zn instances (all VPC-only, HVM-only, and EBS-Optimized, with support for Optimize CPUs). As you can see, the memory-to-vCPU ratio on these instances is half that of the existing z1d instances:

Instance Name  vCPUs  RAM      Network Bandwidth  EBS-Optimized Bandwidth
m5zn.large     2      8 GiB    Up to 25 Gbps      Up to 3.170 Gbps
m5zn.xlarge    4      16 GiB   Up to 25 Gbps      Up to 3.170 Gbps
m5zn.2xlarge   8      32 GiB   Up to 25 Gbps      3.170 Gbps
m5zn.3xlarge   12     48 GiB   Up to 25 Gbps      4.750 Gbps
m5zn.6xlarge   24     96 GiB   50 Gbps            9.500 Gbps
m5zn.12xlarge  48     192 GiB  100 Gbps           19 Gbps
m5zn.metal     48     192 GiB  100 Gbps           19 Gbps

The Nitro Hypervisor allows M5zn instances to deliver performance that is just about indistinguishable from bare metal. Other AWS Nitro System components such as the Nitro Security Chip and hardware-based processing for EBS increase performance, while VPC encryption provides greater security.

Things To Know
Here are a couple of “fun facts” about the M5zn instances:

Placement Groups – M5zn instances can be used in Cluster (for low latency and high network throughput), Spread (to keep critical instances separate from each other), and Partition (to reduce correlated failures) placement groups.
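As a sketch, creating a cluster placement group and launching a tightly packed set of instances into it looks like this (the AMI ID is a placeholder):

$ aws ec2 create-placement-group --group-name my-hpc-pg --strategy cluster

$ aws ec2 run-instances --instance-type m5zn.12xlarge --count 4 \
  --image-id ami-0123456789abcdef0 \
  --placement GroupName=my-hpc-pg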

Networking – M5zn instances support the Elastic Network Adapter (ENA) with dedicated 100 Gbps network connections and a dedicated 19 Gbps connection to EBS. If you are building distributed ML or HPC applications for use on a cluster of M5zn instances, be sure to take a look at the Elastic Fabric Adapter (EFA). Your HPC applications can use the Message Passing Interface (MPI) to communicate efficiently at high speed while scaling to thousands of nodes.

C-State Control – You can configure CPU Power Management on m5zn.6xlarge and m5zn.12xlarge instances. This is definitely an advanced feature, but one worth exploring in those situations where you need to squeeze every possible cycle of available performance from the instance.

NUMA – You can make use of Non-Uniform Memory Access on m5zn.12xlarge instances. This is also an advanced feature, but worth exploring in situations where you have an in-depth understanding of your application’s memory access patterns.

To learn more about these and other features, visit the EC2 M5 Instances page.

Available Now
As you can see, the M5zn instances are a great fit for gaming, HPC and simulation modeling workloads such as those used by the financial, automobile, aerospace, energy, and telecommunications industries.

You can launch M5zn instances today in the US East (N. Virginia), US East (Ohio), US West (N. California), US West (Oregon), Europe (Ireland), Europe (Frankfurt), and Asia Pacific (Tokyo) Regions in On-Demand, Reserved Instance, Savings Plan, and Spot form. Dedicated Instances and Dedicated Hosts are also available.

Support is available in the EC2 Forum or via your usual AWS Support contact. The EC2 team is interested in your feedback and you can contact them at [email protected].

Jeff;

 

 

EC2 Update – D3 / D3en Dense Storage Instances

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/ec2-update-d3-d3en-dense-storage-instances/

We have launched several generations of EC2 instances with dense storage including the HS1 in 2012 and the D2 in 2015. As you can guess from the name, our customers use these instances when they need massive amounts of very economical on-instance storage for their data warehouses, data lakes, network file systems, Hadoop clusters, and the like. These workloads demand plenty of I/O and network throughput, but work fine with a high ratio of storage to compute power.

New D3 and D3en Instances
Today we are launching the D3 and D3en instances. Like their predecessors, they give you access to massive amounts of low-cost on-instance HDD storage. The D3 instances are available in four sizes, with up to 32 vCPUs and 48 TB of storage. Here are the specs:

Instance Name  vCPUs  RAM      HDD Storage        Aggregate Disk Throughput (128 KiB Blocks)  Network Bandwidth  EBS-Optimized Bandwidth
d3.xlarge      4      32 GiB   6 TB (3 x 2 TB)    580 MiBps                                   Up to 15 Gbps      850 Mbps
d3.2xlarge     8      64 GiB   12 TB (6 x 2 TB)   1,100 MiBps                                 Up to 15 Gbps      1,700 Mbps
d3.4xlarge     16     128 GiB  24 TB (12 x 2 TB)  2,300 MiBps                                 Up to 15 Gbps      2,800 Mbps
d3.8xlarge     32     256 GiB  48 TB (24 x 2 TB)  4,600 MiBps                                 25 Gbps            5,000 Mbps

As you can see from the table above, the D3 instances are available in the same configurations as the D2 instances for easy migration. You’ll get 5% more memory per vCPU, a 30% boost in compute power, and 2.5x higher network performance if you migrate from D2 to D3. The instances provide low-cost dense storage that delivers high performance sequential access to large data sets. They are perfect for distributed file systems such as HDFS and MapR FS, big data analytical workloads, data warehouses, log processing, and data processing.

The D3en instances are available in six sizes, with up to 48 vCPUs and 336 TB of storage. Here are the specs:

Instance Name  vCPUs  RAM      HDD Storage          Aggregate Disk Throughput (128 KiB Blocks)  Network Bandwidth  EBS-Optimized Bandwidth
d3en.xlarge    4      16 GiB   28 TB (2 x 14 TB)    500 MiBps                                   Up to 25 Gbps      850 Mbps
d3en.2xlarge   8      32 GiB   56 TB (4 x 14 TB)    1,000 MiBps                                 Up to 25 Gbps      1,700 Mbps
d3en.4xlarge   16     64 GiB   112 TB (8 x 14 TB)   2,000 MiBps                                 25 Gbps            2,800 Mbps
d3en.6xlarge   24     96 GiB   168 TB (12 x 14 TB)  3,100 MiBps                                 40 Gbps            4,000 Mbps
d3en.8xlarge   32     128 GiB  224 TB (16 x 14 TB)  4,100 MiBps                                 50 Gbps            5,000 Mbps
d3en.12xlarge  48     192 GiB  336 TB (24 x 14 TB)  6,200 MiBps                                 75 Gbps            7,000 Mbps

The D3en instances have a high ratio of storage to vCPU, and are optimized for high throughput and high sequential I/O to very large data sets, with a cost-per-TB that is 80% lower than on D2 instances. D3en instances can host Lustre, BeeGFS, GPFS, and other distributed file systems, they can store your data lakes, and they can run your Amazon EMR, Spark, and Hadoop analytical workloads.

Both of the instance types are built on the AWS Nitro System and are powered by custom 2nd generation Intel® Xeon® Scalable (Cascade Lake) processors that can deliver all-core turbo performance of up to 3.1 GHz. The HDD storage is encrypted at rest using AES-256-XTS; traffic between D3 or D3en instances in the same VPC or within peered VPCs is encrypted using a 256-bit key.

Things to Know
Here are a couple of things that you should keep in mind regarding the D3 and D3en instances:

Regions – D3en instances are available in the US East (N. Virginia), US West (Oregon), and Europe (Ireland) Regions; D3 instances are available in all of those regions and also in the US East (Ohio) Region, with more regions coming soon.

Purchase Options – You can purchase D3 and D3en instances in On-Demand, Savings Plan, Reserved Instance, Spot, and Dedicated Instance form.

AMIs – You must use AMIs that include the Elastic Network Adapter (ENA) and NVMe drivers.
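You can check an AMI’s ENA flag before launching; here’s a quick sketch with a placeholder AMI ID:

$ aws ec2 describe-images --image-ids ami-0123456789abcdef0 \
  --query "Images[].EnaSupport"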

Now Available
D3 and D3en instances are available now and you can start using them today!

Jeff;

New – Use Amazon EC2 Mac Instances to Build & Test macOS, iOS, ipadOS, tvOS, and watchOS Apps

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-use-mac-instances-to-build-test-macos-ios-ipados-tvos-and-watchos-apps/

Throughout the course of my career I have done my best to stay on top of new hardware and software. As a teenager I owned an Altair 8800 and an Apple II. In my first year of college someone gave me a phone number and said “call this with modem.” I did, it answered “PENTAGON TIP,” and I had access to ARPANET!

I followed the emerging PC industry with great interest, voraciously reading every new issue of Byte, InfoWorld, and several other long-gone publications. In early 1983, rumor had it that Apple Computer would soon introduce a new system that was affordable, compact, self-contained, and very easy to use. Steve Jobs unveiled the Macintosh in January 1984 and my employer ordered several right away, along with a pair of the Apple Lisa systems that were used as cross-development hosts. As a developer, I was attracted to the Mac’s rich collection of built-in APIs and services, and still treasure my phone book edition of the Inside Macintosh documentation!

New Mac Instance
Over the last couple of years, AWS users have told us that they want to be able to run macOS on Amazon Elastic Compute Cloud (EC2). We’ve asked a lot of questions to learn more about their needs, and today I am pleased to introduce you to the new Mac instance!


The original (128 KB) Mac

Powered by Mac mini hardware and the AWS Nitro System, you can use Amazon EC2 Mac instances to build, test, package, and sign Xcode applications for the Apple platform including macOS, iOS, iPadOS, tvOS, watchOS, and Safari. The instances feature an 8th generation, 6-core Intel Core i7 (Coffee Lake) processor running at 3.2 GHz, with Turbo Boost up to 4.6 GHz. There’s 32 GiB of memory and access to other AWS services including Amazon Elastic Block Store (EBS), Amazon Elastic File System (EFS), Amazon FSx for Windows File Server, Amazon Simple Storage Service (S3), AWS Systems Manager, and so forth.

On the networking side, the instances run in a Virtual Private Cloud (VPC) and include ENA networking with up to 10 Gbps of throughput. With EBS optimization and the ability to deliver up to 55,000 IOPS (16 KB block size) and 8 Gbps of throughput for data transfer, EBS volumes attached to the instances can deliver the performance needed to support I/O-intensive build operations.

Mac instances run macOS 10.14 (Mojave) and 10.15 (Catalina) and can be accessed via command line (SSH) or remote desktop (VNC). The AMIs (Amazon Machine Images) for EC2 Mac instances are EC2-optimized and include the AWS goodies that you would find on other AWS AMIs: An ENA driver, the AWS Command Line Interface (CLI), the CloudWatch Agent, CloudFormation Helper Scripts, support for AWS Systems Manager, and the ec2-user account. You can use these AMIs as-is, or you can install your own packages and create custom AMIs (the homebrew-aws repo contains the additional packages and documentation on how to do this).

You can use these instances to create build farms, render farms, and CI/CD farms that target all of the Apple environments that I mentioned earlier. You can provision new instances in minutes, giving you the ability to quickly & cost-effectively build code for multiple targets without having to own & operate your own hardware. You pay only for what you use, and you get to benefit from the elasticity, scalability, security, and reliability provided by EC2.

EC2 Mac Instances in Action
As always, I asked the EC2 team for access to an instance in order to put it through its paces. The instances are available in Dedicated Host form, so I started by allocating a host:

$ aws ec2 allocate-hosts --instance-type mac1.metal \
  --availability-zone us-east-1a --auto-placement on \
  --quantity 1 --region us-east-1
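
The host takes a few moments to become available; here's one way (a sketch) to check on it with the AWS CLI:

# List hosts that are ready to accept instances
$ aws ec2 describe-hosts --region us-east-1 \
  --filter Name=state,Values=available \
  --query 'Hosts[].[HostId,State]'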

Then I launched my Mac instance from the command line (console, API, and CloudFormation can also be used):

$ aws ec2 run-instances --region us-east-1 \
  --instance-type mac1.metal \
  --image-id  ami-023f74f1accd0b25b \
  --key-name keys-jbarr-us-east  --associate-public-ip-address
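
The instance takes a little while to come up; here's a sketch of waiting for it and retrieving its public IP address (the instance ID is a placeholder for the one returned by run-instances):

$ aws ec2 wait instance-running --instance-ids i-0123456789abcdef0
$ aws ec2 describe-instances --instance-ids i-0123456789abcdef0 \
  --query 'Reservations[0].Instances[0].PublicIpAddress' --output text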

I took Luna for a very quick walk, and returned to find that my instance was ready to go. I used the console to give it an appropriate name.

Then I connected to my instance:
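
Here's the general shape of that connection (a sketch; the key pair matches the launch command above, and the IP address is a placeholder):

# ec2-user is the default account on the EC2 Mac AMIs
$ ssh -i keys-jbarr-us-east.pem ec2-user@203.0.113.25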

From here I can install my development tools, clone my code onto the instance, and initiate my builds.

I can also start a VNC server on the instance and use a VNC client to connect to it:
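
On macOS, the built-in Screen Sharing service speaks the VNC protocol. Here's a hedged sketch of one common way to enable it from an SSH session; these are standard macOS commands, and the password is a placeholder:

# Give ec2-user a password so the VNC client can authenticate
$ sudo /usr/bin/dscl . -passwd /Users/ec2-user 'Str0ng-Passw0rd'
# Enable the built-in Screen Sharing (VNC) service
$ sudo launchctl load -w /System/Library/LaunchDaemons/com.apple.screensharing.plist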

Note that the VNC protocol is not considered secure, and this feature should be used with care. I used a security group that allowed access only from my desktop’s IP address:
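
Here's a sketch of that rule (the security group ID and desktop IP address are placeholders):

# Allow VNC (port 5900) only from a single IP address
$ aws ec2 authorize-security-group-ingress --region us-east-1 \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 5900 --cidr 203.0.113.25/32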

I can also tunnel the VNC traffic over SSH; this is more secure and would not require me to open up port 5900.
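
A sketch of that tunnel (the IP address is a placeholder; 5900 is the standard VNC port):

# Forward local port 5900 to the instance's VNC server over SSH
$ ssh -i keys-jbarr-us-east.pem -L 5900:localhost:5900 ec2-user@203.0.113.25
# Then point the VNC client at localhost:5900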

Things to Know
Here are a couple of fast-facts about the Mac instances:

AMI Updates – We expect to make new AMIs available each time Apple releases major or minor versions of each supported OS. We also plan to produce AMIs with updated Amazon packages every quarter.

Dedicated Hosts – The instances are launched on EC2 Dedicated Hosts with a minimum tenancy of 24 hours. This is largely transparent to you, but it does mean that the instances cannot be used as part of an Auto Scaling Group; there's a sketch of releasing a host after this list.

Purchase Models – You can run Mac instances On-Demand and you can also purchase a Savings Plan.

Apple M1 Chip – EC2 Mac instances with the Apple M1 chip are already in the works, and planned for 2021.
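
Since the underlying Dedicated Host has a 24-hour minimum tenancy, remember to release it once that period has elapsed and you have terminated the instance. Here's a minimal sketch (the host ID is a placeholder):

$ aws ec2 release-hosts --region us-east-1 \
  --host-ids h-0123456789abcdef0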

Launch one Today
You can start using Mac instances in the US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Ireland), and Asia Pacific (Singapore) Regions today, and check out this video for more information!

Jeff;

re:Invent 2020 Liveblog: Andy Jassy Keynote

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/reinvent-2020-liveblog-andy-jassy-keynote/

I’m always ready to try something new! This year, I am going to liveblog Andy Jassy‘s AWS re:Invent keynote address, which takes place from 8 a.m. to 11 a.m. on Tuesday, December 1 (PST). I’ll be updating this post every couple of minutes as I watch Andy’s address from the comfort of my home office. Stay tuned!

Jeff;

In the Works – AWS Region in Hyderabad, India

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/in-the-works-aws-region-in-hyderabad-india/

We opened the AWS Regions in South Africa and Italy earlier this year and are currently working on regions in Indonesia, Japan, Spain, and Switzerland.

Second AWS Region in India
We launched the Asia Pacific (Mumbai) Region in June 2016, giving enterprises, public sector organizations, startups, and SMBs access to state-of-the-art public cloud infrastructure. In […]

New – GPU-Equipped EC2 P4 Instances for Machine Learning & HPC

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-gpu-equipped-ec2-p4-instances-for-machine-learning-hpc/

The Amazon EC2 team has been providing our customers with GPU-equipped instances for nearly a decade. The first-generation Cluster GPU instances were launched in late 2010, followed by the G2 (2013), P2 (2016), P3 (2017), G3 (2017), P3dn (2018), and G4 (2019) instances. Each successive generation incorporates increasingly-capable GPUs, along with enough CPU power, memory, […]

AWS Nitro Enclaves – Isolated EC2 Environments to Process Confidential Data

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-nitro-enclaves-isolated-ec2-environments-to-process-confidential-data/

When I first told you about the AWS Nitro System, I said: The Nitro system is a rich collection of building blocks that can be assembled in many different ways, giving us the flexibility to design and rapidly deliver EC2 instance types with an ever-broadening selection of compute, storage, memory, and networking options. To date, […]

Amazon Prime Day 2020 – Powered by AWS

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/amazon-prime-day-2020-powered-by-aws/

Tipped off by a colleague in Denmark, I bought the LEGO Star Wars Stormtrooper Helmet, which turned out to be a Prime Day best-seller!

As I like to do every year, I would like to share a few of the many ways that AWS helped to make Prime Day a reality for our customers. Back in 2016 I wrote How AWS Powered Amazon’s Biggest Day Ever to describe how we plan for Prime Day and that post is still informative and relevant.

This time around I would like to focus on four ways that AWS helped to support Prime Day: Amazon Live and IVS, Infrastructure Event Management, Storage, and Content Delivery.

Amazon Live and IVS on Prime Day
Throughout Prime Day 2020, Amazon customers were able to shop from livestreams through Amazon Live. Shoppers were also able to use live chat to interact with influencers and hosts in real time. They were able to ask questions, share their experiences, and get a better feel for products of interest to them.

Amazon Live helped customers learn more about products and take advantage of top deals by counting down to Deal Reveals and sharing live product demonstrations. Anitta, Russell Wilson, and Ciara curated Prime Day deals as did author Elizabeth Gilbert. In addition, influencers including @SheaWhitney, @ShopDandy, and @TheDealGuy shared their top product picks with customers. In total, there were over 1,200 live streams and tens of thousands of chat messages on Amazon Live during Prime Day.

To deliver these enhanced shopping experiences for customers and for creators, low latency video is essential. It enables Amazon Live to synchronize the products featured in the live video with the products displayed in the carousel at the bottom of the video player. Low latency also allows the livestream hosts to answer customer questions in real-time. And, of course, on Prime Day in particular, all of this needed to happen at scale.

In order to do this, the Amazon Live team made use of the newly launched Amazon Interactive Video Service (IVS). As Martin explains in his recent post (Amazon Interactive Video Service – Add Live Video to Your Apps and Websites), this is a managed live streaming service that supports the creation of interactive, low-latency video experiences. It uses the same technology that powers Twitch, and allows you to deliver live content with very low latency, often under three seconds, compared with the 20 to 30 seconds that is typical of more traditional streaming approaches.

Infrastructure Event Management
AWS Infrastructure Event Management (IEM) helps our customers to plan and run large-scale business-critical events. This program is included in the Enterprise Support plan and is available to Business Support customers for an additional fee. IEM includes an assessment of operational readiness, identification and mitigation of risks, and the confidence to run an event with AWS experts standing by, ready to help.

This year, the TAMs (Technical Account Managers) that support the IEM program created a Control Room that was 100% virtual. A combination of Slack channels and Amazon Chime bridges empowered AWS service teams, AWS support, IT support, and Amazon Customer Reliability Engineering (thousands of people in all) to communicate and collaborate in real time.

Storage for Prime Day
Amazon DynamoDB powers multiple high-traffic Amazon properties and systems including Alexa, the Amazon.com sites, and all Amazon fulfillment centers. Over the course of the 66-hour Prime Day, these sources made 16.4 trillion calls to the DynamoDB API, peaking at 80.1 million requests per second.
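
For a sense of scale, 16.4 trillion calls spread over 66 hours (237,600 seconds) works out to an average of roughly 69 million requests per second, which means that the 80.1 million peak was only modestly above the sustained rate.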

On the block storage side, Amazon Elastic Block Store (EBS) added 241 petabytes of storage in preparation for Prime Day; the resulting fleet handled 6.2 trillion requests per day and transferred 563 petabytes per day.

Content Delivery for Prime Day
Amazon CloudFront played an important role as always, serving up web and streamed content to a world-wide audience. CloudFront handled over 280 million HTTP requests per minute, a total of 450 billion requests across all of the Amazon.com sites.

Jeff;

Public Preview – AWS Distro for OpenTelemetry

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/public-preview-aws-distro-open-telemetry/

It took me a while to figure out what observability was all about. A year or two ago I asked around, and my colleagues told me that I needed to follow Charity Majors and to read her blog (done, and done). Just this week, Charity tweeted:

Kislay’s tweet led to his blog post, Observing is not Debugging, which I found very helpful. As Charity noted, Kislay tells us that Observability is a study of the system in motion.

Today’s large-scale distributed applications and systems are effectively always in motion. Whether serving web requests, processing streams of data or handling events, something is always happening. At world-scale, looking at individual requests or events is not always feasible. Instead, it is necessary to take a statistical approach and to watch how well a system is working, instead of simply waiting for a total failure.

New AWS Distro for OpenTelemetry
Today we are launching a preview of AWS Distro for OpenTelemetry. We are part of the Cloud Native Computing Foundation (CNCF)’s OpenTelemetry community, working to define an open standard for the collection of distributed traces and metrics. AWS Distro for OpenTelemetry is a secure and supported distribution of the APIs, libraries, agents, and collectors defined in the OpenTelemetry Specification.

One of the coolest features of the toolkit is auto instrumentation. Starting with Java and in the works for other languages and environments (.NET and JavaScript are next), the auto-instrumentation agent identifies the frameworks and languages used by your application and automatically instruments them to collect and forward metrics and traces.
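
To make this concrete, here's a sketch of attaching the Java agent at launch time, with no application code changes; the agent jar name follows the project's published artifact, and myapp.jar is a placeholder:

# Download the agent jar from the project's GitHub releases, then launch with it attached
$ java -javaagent:./aws-opentelemetry-agent.jar -jar myapp.jar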

Here’s how all of the pieces fit together:

The AWS Observability Collector runs within your environment. It can be launched as a sidecar or daemonset for EKS, a sidecar for ECS, or an agent on EC2. You configure the metrics and traces that you want to collect, and also which AWS services to forward them to. You can set up a central account for monitoring complex multi-account applications, and you can also control the sampling rate (what percentage of the raw data is forwarded and ultimately stored).

Partners in Action
You can make use of AWS and partner tools and applications to observe, analyze, and act on what you see. We’re working with Cisco AppDynamics, Datadog, New Relic, Splunk, and other partners and will have more information to share during the preview.

Things to Know
The preview of the AWS Distro for OpenTelemetry is available now and you can start using it today. In addition to the .NET and JavaScript support that I mentioned earlier, we plan to support Python, Ruby, Go, C++, Erlang, and Rust as well.

This is an open source project and we welcome your pull requests! We will be tracking the upstream repository and plan to release a fresh version of the toolkit quarterly.

Jeff;

PS – Be sure to sign up for our upcoming webinar, Observability at AWS and AWS Distro for OpenTelemetry Deep Dive.