All posts by Jeff Barr

Amazon Prime Day 2020 – Powered by AWS

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/amazon-prime-day-2020-powered-by-aws/

Tipped off by a colleague in Denmark, I bought the LEGO Star Wars Stormtrooper Helmet, which turned out to be a Prime Day best-seller!

As I do every year, I would like to share a few of the many ways that AWS helped to make Prime Day a reality for our customers. Back in 2016 I wrote How AWS Powered Amazon’s Biggest Day Ever to describe how we plan for Prime Day, and that post is still informative and relevant.

This time around I would like to focus on four ways that AWS helped to support Prime Day: Amazon Live and IVS, Infrastructure Event Management, Storage, and Content Delivery.

Amazon Live and IVS on Prime Day
Throughout Prime Day 2020, Amazon customers were able to shop from livestreams through Amazon Live. Shoppers were also able to use live chat to interact with influencers and hosts in real time. They were able to ask questions, share their experiences, and get a better feel for products of interest to them.

Amazon Live helped customers learn more about products and take advantage of top deals by counting down to Deal Reveals and sharing live product demonstrations. Anitta, Russell Wilson, and Ciara curated Prime Day deals as did author Elizabeth Gilbert. In addition, influencers including @SheaWhitney, @ShopDandy, and @TheDealGuy shared their top product picks with customers. In total, there were over 1,200 live streams and tens of thousands of chat messages on Amazon Live during Prime Day.

To deliver these enhanced shopping experiences for customers and for creators, low latency video is essential. It enables Amazon Live to synchronize the products featured in the live video with the products displayed in the carousel at the bottom of the video player. Low latency also allows the livestream hosts to answer customer questions in real-time. And, of course, on Prime Day in particular, all of this needed to happen at scale.

In order to do this, the Amazon Live team made use of the newly launched Amazon Interactive Video Service (IVS). As Martin explains in his recent post (Amazon Interactive Video Service – Add Live Video to Your Apps and Websites), this is a managed live streaming service that supports the creation of interactive, low-latency video experiences. It uses the same technology that powers Twitch, and allows you to deliver live content with very low latency, often three seconds or less, compared to the 20 to 30 seconds that is typical of traditional live streaming.
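
IVS itself is exposed through a simple API. As a hedged illustration (the channel name is a placeholder, and this is a sketch of the general developer experience rather than Amazon Live’s own setup), creating a low-latency channel from the CLI looks something like this:

$ aws ivs create-channel \
  --name prime-day-demo \
  --latency-mode LOW \
  --type STANDARD

The response includes an ingest endpoint and a playback URL, plus a stream key to use with your broadcast software.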

Infrastructure Event Management
AWS Infrastructure Event Management (IEM) helps our customers to plan and run large-scale business-critical events. This program is included in the Enterprise Support plan and is available to Business Support customers for an additional fee. IEM includes an assessment of operational readiness, identification and mitigation of risks, and the confidence to run an event with AWS experts standing by and ready to help.

This year, the TAMs (Technical Account Managers) that support the IEM program created a Control Room that was 100% virtual. A combination of Slack channels and Amazon Chime bridges empowered AWS service teams, AWS support, IT support, and Amazon Customer Reliability Engineering (thousands of people in all) to communicate and collaborate in real time.

Storage for Prime Day
Amazon DynamoDB powers multiple high-traffic Amazon properties and systems including Alexa, the Amazon.com sites, and all Amazon fulfillment centers. Over the course of the 66-hour Prime Day, these sources made 16.4 trillion calls to the DynamoDB API, peaking at 80.1 million requests per second.

On the block storage side, Amazon Elastic Block Store (EBS) added 241 petabytes of storage in preparation for Prime Day; the resulting fleet handled 6.2 trillion requests per day and transferred 563 petabytes per day.

Content Delivery for Prime Day
Amazon CloudFront played an important role as always, serving up web and streamed content to a world-wide audience. CloudFront handled over 280 million HTTP requests per minute, a total of 450 billion requests across all of the Amazon.com sites.

Jeff;

Public Preview – AWS Distro for OpenTelemetry

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/public-preview-aws-distro-open-telemetry/

It took me a while to figure out what observability was all about. A year or two ago I asked around, and my colleagues told me that I needed to follow Charity Majors and to read her blog (done, and done). Just this week, Charity tweeted:

Kislay’s tweet led to his blog post, Observing is not Debugging, which I found very helpful. As Charity noted, Kislay tells us that Observability is a study of the system in motion.

Today’s large-scale distributed applications and systems are effectively always in motion. Whether serving web requests, processing streams of data or handling events, something is always happening. At world-scale, looking at individual requests or events is not always feasible. Instead, it is necessary to take a statistical approach and to watch how well a system is working, instead of simply waiting for a total failure.

New AWS Distro for OpenTelemetry
Today we are launching a preview of AWS Distro for OpenTelemetry. We are part of the Cloud Native Computing Foundation (CNCF)’s OpenTelemetry community, working to define an open standard for the collection of distributed traces and metrics. AWS Distro for OpenTelemetry is a secure and supported distribution of the APIs, libraries, agents, and collectors defined in the OpenTelemetry Specification.

One of the coolest features of the toolkit is auto-instrumentation. Starting with Java, and with other languages and environments in the works (.NET and JavaScript are next), the auto-instrumentation agent identifies the frameworks and languages used by your application and automatically instruments them to collect and forward metrics and traces.
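
To give a sense of what this looks like in practice, here is a hedged sketch of attaching the Java agent at startup and pointing it at a local collector (the JAR name, system property, and port are assumptions based on OpenTelemetry Java agent conventions; check the AWS Distro for OpenTelemetry documentation for the exact values):

$ java -javaagent:./aws-opentelemetry-agent.jar \
    -Dotel.exporter.otlp.endpoint=localhost:55680 \
    -jar my-service.jar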

Here’s how all of the pieces fit together:

The AWS Observability Collector runs within your environment. It can be launched as a sidecar or daemonset for EKS, a sidecar for ECS, or an agent on EC2. You configure the metrics and traces that you want to collect, and also which AWS services to forward them to. You can set up a central account for monitoring complex multi-account applications, and you can also control the sampling rate (what percentage of the raw data is forwarded and ultimately stored).

Partners in Action
You can make use of AWS and partner tools and applications to observe, analyze, and act on what you see. We’re working with Cisco AppDynamics, Datadog, New Relic, Splunk, and other partners and will have more information to share during the preview.

Things to Know
The preview of the AWS Distro for OpenTelemetry is available now and you can start using it today. In addition to the .NET and JavaScript support that I mentioned earlier, we plan to support Python, Ruby, Go, C++, Erlang, and Rust as well.

This is an open source project and we welcome your pull requests! We will be tracking the upstream repository and plan to release a fresh version of the toolkit quarterly.

Jeff;

PS – Be sure to sign up for our upcoming webinar, Observability at AWS and AWS Distro for OpenTelemetry Deep Dive.

 

Amazon S3 Update – Three New Security & Access Control Features

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/amazon-s3-update-three-new-security-access-control-features/

A year or so after we launched Amazon S3, I was in an elevator at a tech conference and heard a couple of developers use “just throw it into S3” as the answer to their data storage challenge. I remember that moment well because the comment was made so casually, and it was one of the first times that I fully grasped just how quickly S3 had caught on.

Since that launch, we have added hundreds of features and multiple storage classes to S3, while also reducing the cost to store a gigabyte of data for a month by almost 85% (from $0.15 to $0.023 for S3 Standard, and as low as $0.00099 for S3 Glacier Deep Archive). Today, our customers use S3 to support many different use cases including data lakes, backup and restore, disaster recovery, archiving, and cloud-native applications.

Security & Access Control
As the set of use cases for S3 has expanded, our customers have asked us for new ways to regulate access to their mission-critical buckets and objects. We added IAM policies many years ago, and Block Public Access in 2018. Last year we added S3 Access Points (Easily Manage Shared Data Sets with Amazon S3 Access Points) to help you manage access in large-scale environments that might encompass hundreds of applications and petabytes of storage.

Today we are launching S3 Object Ownership as a follow-on to two other S3 security & access control features that we launched earlier this month. All three features are designed to give you even more control and flexibility:

Object Ownership – You can now ensure that newly created objects within a bucket have the same owner as the bucket.

Bucket Owner Condition – You can now confirm the ownership of a bucket when you create a new object or perform other S3 operations.

Copy API via Access Points – You can now access S3’s Copy API through an Access Point.

You can use all of these new features in all AWS regions at no additional charge. Let’s take a look at each one!

Object Ownership
With the proper permissions in place, S3 already allows multiple AWS accounts to upload objects to the same bucket, with each account retaining ownership and control over the objects. This many-to-one upload model can be handy when using a bucket as a data lake or another type of data repository. Internal teams or external partners can all contribute to the creation of large-scale centralized resources. With this model, the bucket owner does not have full control over the objects in the bucket and cannot use bucket policies to share objects, which can lead to confusion.

You can now use a new per-bucket setting to enforce uniform object ownership within a bucket. This will simplify many applications, and will obviate the need for the Lambda-powered self-COPY that has become a popular way to do this up until now. Because this setting changes the behavior seen by the account that is uploading, the PUT request must include the bucket-owner-full-control ACL. You can also choose to use a bucket policy that requires the inclusion of this ACL.
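
For example, here is a hedged sketch of an upload that includes the required ACL (the bucket and key names are placeholders):

$ aws s3api put-object \
  --bucket shared-data-lake \
  --key reports/2020-10.csv \
  --body 2020-10.csv \
  --acl bucket-owner-full-control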

To get started, open the S3 Console, locate the bucket and view its Permissions, click Object Ownership, and Edit:

Then select Bucket owner preferred and click Save:

As I mentioned earlier, you can use a bucket policy to enforce object ownership (read About Object Ownership and this Knowledge Center Article to learn more).

Many AWS services deliver data to the bucket of your choice, and are now equipped to take advantage of this feature. S3 Server Access Logging, S3 Inventory, S3 Storage Class Analysis, AWS CloudTrail, and AWS Config now deliver data that you own. You can also configure Amazon EMR to use this feature by setting fs.s3.canned.acl to BucketOwnerFullControl in the cluster configuration (learn more).

Keep in mind that this feature does not change the ownership of existing objects. Also, note that you will now own more S3 objects than before, which may cause changes to the numbers you see in your reports and other metrics.

AWS CloudFormation support for Object Ownership is under development and is expected to be ready before AWS re:Invent.

Bucket Owner Condition
This feature lets you confirm that you are writing to a bucket that you own.

You simply pass a numeric AWS Account ID to any of the S3 Bucket or Object APIs using the expectedBucketOwner parameter or the x-amz-expected-bucket-owner HTTP header. The ID indicates the AWS Account that you believe owns the subject bucket. If there’s a match, then the request will proceed as normal. If not, it will fail with a 403 status code.
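
For example, here is a hedged sketch using the AWS CLI (the bucket name and account ID are placeholders):

$ aws s3api get-object \
  --bucket my-shared-bucket \
  --key config/settings.json \
  --expected-bucket-owner 111122223333 \
  settings.json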

To learn more, read Bucket Owner Condition.

Copy API via Access Points
S3 Access Points give you fine-grained control over access to your shared data sets. Instead of managing a single and possibly complex policy on a bucket, you can create an access point for each application, and then use an IAM policy to regulate the S3 operations that are made via the access point (read Easily Manage Shared Data Sets with Amazon S3 Access Points to see how they work).

You can now use S3 Access Points in conjunction with the S3 CopyObject API by using the ARN of the access point instead of the bucket name (read Using Access Points to learn more).
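
As a hedged sketch (the ARNs are placeholders; see Using Access Points for the exact copy-source format), a copy between two access points from the CLI looks something like this:

$ aws s3api copy-object \
  --copy-source arn:aws:s3:us-east-1:123456789012:accesspoint/source-ap/object/report.csv \
  --bucket arn:aws:s3:us-east-1:123456789012:accesspoint/dest-ap \
  --key report.csv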

Use Them Today
As I mentioned earlier, you can use all of these new features in all AWS regions at no additional charge.

Jeff;

 

New EBS Volume Type (io2) – 100x Higher Durability and 10x More IOPS/GiB

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-ebs-volume-type-io2-more-iops-gib-higher-durability/

We launched EBS Volumes with Provisioned IOPS way back in 2012. These volumes are a great fit for your most I/O-hungry and latency-sensitive applications because you can dial in the level of performance that you need, and then (with the launch of Elastic Volumes in 2017) change it later.

Over the years, we have increased the ratio of IOPS per gibibyte (GiB) of SSD-backed storage several times, most recently in August 2016. This ratio started out at 10 IOPS per GiB, and has grown steadily to 50 IOPS per GiB. In other words, the bigger the EBS volume, the more IOPS it can be provisioned to deliver, with a per-volume upper bound of 64,000 IOPS. This change in ratios has reduced storage costs by a factor of 5 for throughput-centric workloads.

Also, based on your requests and your insatiable desire for more performance, we have raised the maximum number of IOPS per EBS volume multiple times:

The August 2014 change in the I/O request size made EBS 16x more cost-effective for throughput-centric workloads.

Bringing the various numbers together, you can think of Provisioned IOPS volumes as being defined by capacity, IOPS, and the ratio of IOPS per GiB. You should also think about durability, which is expressed in percentage terms. For example, io1 volumes are designed to deliver 99.9% durability, which is 20x more reliable than typical commodity disk drives.

Higher Durability & More IOPS
Today we are launching the io2 volume type, with two important benefits, at the same price as the existing io1 volumes:

Higher Durability – The io2 volumes are designed to deliver 99.999% durability, making them 2000x more reliable than a commodity disk drive, further reducing the possibility of a storage volume failure and helping to improve the availability of your application. By the way, in the past we expressed durability in terms of an Annual Failure Rate, or AFR. The new, percentage-based model is consistent with our other storage offerings, and also communicates expectations for success, rather than for failure.

More IOPS – We are increasing the IOPS per GiB ratio yet again, this time to 500 IOPS per GiB. You can get higher performance from your EBS volumes, and you can reduce or outright eliminate any over-provisioning that you might have done in the past to achieve the desired level of performance.

Taken together, these benefits make io2 volumes a perfect fit for your high-performance, business-critical databases and workloads. This includes SAP HANA, Microsoft SQL Server, and IBM DB2.

You can create new io2 volumes and you can easily change the type of an existing volume to io2:

Or:

$ aws ec2 modify-volume --volume-id vol-0b3c663aeca5aabb7 --volume-type io2
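
Creating a brand new io2 volume from the CLI is just as simple; for example (the values are illustrative, staying within the 500 IOPS/GiB ratio):

$ aws ec2 create-volume \
  --availability-zone us-east-1a \
  --volume-type io2 \
  --size 100 \
  --iops 50000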

io2 volumes support all features of io1 volumes with the exception of Multi-Attach, which is on the roadmap.

Available Now
You can make use of io2 volumes in the US East (Ohio), US East (N. Virginia), US West (N. California), Canada (Central), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Stockholm), Asia Pacific (Hong Kong), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), and Middle East (Bahrain) Regions today.

Jeff;

 

Amazon Braket – Go Hands-On with Quantum Computing

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/amazon-braket-go-hands-on-with-quantum-computing/

Last year I told you about Amazon Braket and explained the basics of quantum computing, starting from qubits and progressing to quantum circuits. During the preview, AWS customers such as Enel, Fidelity, and Volkswagen have been using Amazon Braket to explore and gain experience with quantum computing.

I am happy to announce that Amazon Braket is now generally available and that you can now make use of both the classically-powered circuit simulator and quantum computers from D-Wave, IonQ, and Rigetti. Today I am going to show you both elements, creating and simulating a simple circuit and then running it on real hardware (also known as a QPU, or Quantum Processing Unit).

Creating and Simulating a Simple Circuit
As I mentioned in my earlier post, you can access Amazon Braket through a notebook-style interface. I start by opening the Amazon Braket Console, choosing the desired region (more on that later), and clicking Create notebook instance:

I give my notebook a name (amazon-braket-jeff-2), select an instance type, and choose an IAM role. I also opt out of root access and forego the use of an encryption key for this example. I can choose to run the notebook in a VPC, and I can (in the Additional settings) change the size of the notebook’s EBS volume. I make all of my choices and click Create notebook instance to proceed:

My notebook is ready in a few minutes and I click to access it:

The notebook model is based on Jupyter, and I start by browsing the examples:

I click on the Superdense Coding example to open it, and then read the introduction and explanation (the math and the logic behind this communication protocol are explained here if you are interested in learning more):

The notebook can run code on the simulator that is part of Braket, or on any of the available quantum computers:

# Select device arn for simulator
device = AwsDevice("arn:aws:braket:::device/quantum-simulator/amazon/sv1")

This code chooses the SV1 managed simulator, which shows its strength on larger circuits (25 or more qubits) and on those that require lots of compute power to simulate. For small circuits, I can also use the local simulator that is part of the Braket SDK, and which runs on the notebook instance:

device = LocalSimulator()

I step through the cells of the notebook, running each one in turn by clicking the Run arrow. The code in the notebook uses the Braket API to build a quantum circuit from scratch, and then displays it in ASCII form (q0 and q1 are qubits, and the T axis indicates time, expressed in moments):

The next cell creates a task that runs the circuit on the chosen device and displays the results:

The get_result function is defined within the notebook. It submits a task to the device, monitors the status of the task, and waits for it to complete. Then it captures the results (a set of probabilities), plots them on a bar graph, and returns the probabilities. As you can learn by looking at the code in the function, the circuit is run 1000 times; each run is known as a “shot.” You can see from the screen shot above that the counts returned by the task (504 and 496) add up to 1000. Amazon Braket allows you to specify between 10 and 100,000 shots per task (depending on the device); more shots lead to greater accuracy.

The remaining cells in the notebook run the same circuit with the other possible messages and verify that the results are as expected. You can run this (and many other examples) yourself to learn more!

Running on Real Hardware
Amazon Braket provides access to QPUs from three manufacturers. I click Devices in the Console to learn more:

Each QPU is associated with a particular AWS region, and also has a unique ARN. I can click a device card to learn more about the technology that powers the device (this reads like really good sci-fi, but I can assure you that it is real), and I can also see the ARN:

I create a new cell in the notebook and copy/paste some code to run the circuit on the Rigetti Aspen-8:

device = AwsDevice("arn:aws:braket:::device/qpu/rigetti/Aspen-8")
counts = get_result(device, circ, s3_folder)
print(counts)

This creates a task and queues it up for the QPU. I can switch to the console in the region associated with the QPU and see the tasks:

The D-Wave QPU processes Braket tasks 24/7. The other QPUs currently process Amazon Braket tasks during specific time windows, and tasks are queued if created while the window is closed. When my task has finished, its status changes to COMPLETED and a CloudWatch Event is generated:

The Amazon Braket API
I used the console to create my notebook and manage my quantum computing tasks, but API and CLI support is also available. Here are the most important API functions:

CreateQuantumTask – Create a task that runs on the simulator or on a QPU.

GetQuantumTask – Get information about a task.

SearchDevices – Use a property-based search to locate suitable QPUs.

GetDevice – Get detailed information about a particular QPU.
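
For example, here is a hedged sketch of calling GetDevice from the CLI, using the Rigetti device ARN shown earlier:

$ aws braket get-device \
  --device-arn arn:aws:braket:::device/qpu/rigetti/Aspen-8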

As you can see from the code in the notebooks, you can write code that uses the Amazon Braket SDK, including the Circuit, Gates, Moments, and AsciiCircuitDiagram modules.

Things to Know
Here are a couple of important things to keep in mind when you evaluate Amazon Braket:

Emerging Technology – Quantum computing is an emerging field. Although some of you are already experts, it will take some time for the rest of us to understand the concepts and the technology, and to figure out how to put them to use.

Computing Paradigms – The QPUs that you can access through Amazon Braket support two different paradigms. The IonQ and Rigetti QPUs and the simulator support circuit-based quantum computing, and the D-Wave QPU supports quantum annealing. You cannot run a problem designed for one paradigm on a QPU that supports the other one, so you will need to choose the appropriate QPU early in your exploratory journey.

Pricing – Each task that you run will incur a per-task charge and an additional per-shot charge that is specific to the type of QPU that you use. Use of the simulator incurs an hourly charge, billed by the second, with a 15 second minimum. Notebook pricing is the same as for SageMaker. For more information, check out the Amazon Braket Pricing page.

Give it a Shot!
As I noted earlier, this is an emerging and exciting field and I am looking forward to hearing back after you have had a chance to put Amazon Braket to use.

Jeff;

 

AWS Wavelength Zones Are Now Open in Boston & San Francisco

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-wavelength-zones-are-now-open-in-boston-san-francisco/

We announced AWS Wavelength at AWS re:Invent 2019. As a quick recap, we have partnered with multiple 5G telecommunication providers to embed AWS hardware and software in their datacenters. The goal is to allow developers to build and deliver applications that can benefit from single-digit millisecond latency.

In the time since the announcement we have been working with our partners and a wide range of pilot customers: enterprises, startups, and app developers. Partners and customers alike are excited by the possibilities that Wavelength presents, and they are also thrilled to find that most of what they know about AWS and EC2 still applies!

Wavelength Zones Now Open
I am happy to be able to announce that the first two Wavelength Zones are now open, one in Boston and the other in San Francisco. These zones are now accessible upon request for developers who want to build apps to service Verizon Wireless customers in those metropolitan areas.

Initially, we expect developers to focus on gaming, media processing, ecommerce, social media, medical image analysis, and machine learning inferencing apps. I suspect that some of the most compelling and relevant use cases are still to be discovered, and that the field is wide open for innovative thinking!

Using a Wavelength Zone
As I mentioned earlier, just about everything that you know about AWS and EC2 still applies. Once you have access to a Wavelength Zone, you can launch your first EC2 instance in minutes. You can initiate the onboarding process by filling out this short [sign-up form] and we’ll do our best to get you set up.

Each Wavelength Zone is associated with a specific AWS region known as the parent region. This is US East (N. Virginia) for the Wavelength Zone in Boston, and US West (N. California) for the Wavelength Zone in San Francisco. I’m going to use the Wavelength Zone in Boston (us-east-1-wl1-bos-wlz-1), and will show you how to launch an EC2 instance using the AWS Command Line Interface (CLI) (console, API, and CloudFormation support is also available).
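
Access to a Wavelength Zone is granted as an opt-in at the zone group level; here is a hedged sketch of that step (the group name matches the output shown below):

$ aws ec2 modify-availability-zone-group \
  --group-name us-east-1-wl1 \
  --opt-in-status opted-in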

I can inspect the output of describe-availability-zones to confirm that I have access to the desired Wavelength Zone:

$ aws ec2 describe-availability-zones
...
||  ZoneName             |  us-east-1f             ||
|+-----------------------+-------------------------+|
||                AvailabilityZones                ||
|+---------------------+---------------------------+|
||  GroupName          |  us-east-1-wl1            ||
||  NetworkBorderGroup |  us-east-1-wl1-bos-wlz-1  ||
||  OptInStatus        |  opted-in                 ||
||  RegionName         |  us-east-1                ||
||  State              |  available                ||
||  ZoneId             |  use1-wl1-bos-wlz1        ||
||  ZoneName           |  us-east-1-wl1-bos-wlz-1  ||
|+---------------------+---------------------------+|

I can create a new Virtual Private Cloud (VPC) or use an existing one:

$ aws ec2 --region us-east-1 create-vpc \
  --cidr-block 10.0.0.0/16

I capture the VPC Id (vpc-01d94be2191cb2dfa) because I will need it again. I’ll also need the Id of the desired security group. For simplicity I’ll use the VPC’s default group:

$ aws ec2 --region us-east-1 describe-security-groups \
  --filters Name=vpc-id,Values=vpc-01d94be2191cb2dfa \
  | grep GroupId

Next, I create a subnet to represent the target Wavelength Zone:

$ aws ec2 --region us-east-1 create-subnet \
  --cidr-block 10.0.0.0/24  \
  --availability-zone us-east-1-wl1-bos-wlz-1 \
  --vpc-id vpc-01d94be2191cb2dfa

Moving right along, I create a route table and associate it with the subnet:

$ aws ec2 --region us-east-1 create-route-table \
  --vpc-id vpc-01d94be2191cb2dfa

$ aws ec2 --region us-east-1 associate-route-table \
  --route-table-id rtb-0c3dc2a16c70d40b5 \
  --subnet-id subnet-0bc3ad0d67e79469c

Next, I create a new type of VPC resource called a Carrier Gateway. This resource is used to communicate (in this case) with Verizon wireless devices in the Boston area. I also create a route from the gateway:

$ aws ec2 --region us-east-1 create-carrier-gateway \
  --vpc-id vpc-01d94be2191cb2dfa
$ 
$ aws ec2 --region us-east-1 create-route \
  --route-table-id rtb-01af227e9ea18c5ab --destination-cidr-block 0.0.0.0/0 \
  --carrier-gateway-id cagw-020c231b6e33ad1ef

The next step is to allocate a carrier IP address for the instance that I am about to launch, create an Elastic Network Interface (ENI), and associate the two (the network border group represents the set of IP addresses within the Wavelength Zone):

$ aws ec2 --region us-east-1 allocate-address \
  --domain vpc --network-border-group us-east-1-wl1-bos-wlz-1
$
$ aws ec2 --region us-east-1 create-network-interface \
  --subnet-id subnet-0bc3ad0d67e79469c
$
$ aws ec2 --region us-east-1 associate-address \
  --allocation-id eipalloc-00c2c378c065887f1 --network-interface-id eni-0af68d5ce897ed2b8

And now I can launch my EC2 instance:

 $ aws ec2 --region us-east-1 run-instances \
  --instance-type r5d.2xlarge \
  --network-interface '[{"DeviceIndex":0,"NetworkInterfaceId":"eni-0af68d5ce897ed2b8"}]' \
  --image-id ami-09d95fab7fff3776c \
  --key-name keys-jbarr-us-east

The instance is accessible from devices on the Verizon network in the Boston area, as defined by their coverage map; the Carrier IP addresses do not include Internet ingress. If I need to SSH to it for development or debugging, I can use a bastion host or assign a second IP address.

I can see my instance in the EC2 Console, and manage it as I would any other instance (I edited the name using the console):

I can create an EBS volume in the Wavelength Zone:

And attach it to the instance:

I can create snapshots of the volume, and they will be stored in the parent region.

The next step is to build an application that runs in a Wavelength Zone. You can read Deploying Your First 5G-Enabled Application with AWS Wavelength to learn how to do this!

Things to Know
Here are some things to keep in mind as you think about how to put Wavelength to use:

Pricing – You will be billed for your EC2 instances on an On-Demand basis, and you can also purchase an Instance Savings Plan.

Instance Types – We are launching with support for t3 (medium and xlarge), r5 (2xlarge), and g4 (2xlarge) instances.

Other AWS Services – In addition to launching EC2 instances directly, you can create ECS clusters, EKS clusters (using Kubernetes 1.17), and you can make use of Auto Scaling. Many other services, including AWS Identity and Access Management (IAM), AWS CloudFormation, and Amazon CloudWatch will work as expected with no additional effort on your part.

More Wavelength Zones – We plan to launch more Wavelength Zones with Verizon in the US by the end of 2020. Our work with other carrier partners is proceeding at full speed, and I’ll let you know when those Wavelength Zones become available.

Jeff;

 

 

AWS Well-Architected Framework – Updated White Papers, Tools, and Best Practices

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-well-architected-framework-updated-white-papers-tools-and-best-practices/

We want to make sure that you are designing and building AWS-powered applications in the best possible way. Back in 2015 we launched AWS Well-Architected to make sure that you have all of the information that you need to do this right. The framework is built on five pillars:

Operational Excellence – The ability to run and monitor systems to deliver business value and to continually improve supporting processes and procedures.

Security – The ability to protect information, systems, and assets while delivering business value through risk assessments and mitigation strategies.

Reliability – The ability of a system to recover from infrastructure or service disruptions, dynamically acquire computing resources to meet demand, and mitigate disruptions such as misconfigurations or transient network issues.

Performance Efficiency – The ability to use computing resources efficiently to meet system requirements, and to maintain that efficiency as demand changes and technologies evolve.

Cost Optimization – The ability to run systems to deliver business value at the lowest price point.

Whether you are a startup, a unicorn, or an enterprise, the AWS Well-Architected Framework will point you in the right direction and then guide you along the way as you build your cloud applications.

Lots of Updates
Today we are making a host of updates to the Well-Architected Framework! Here’s an overview:

Well-Architected Framework – This update includes new and updated questions, best practices, and improvement plans, plus additional examples and architectural considerations. We have added new best practices in operational excellence (organization), reliability (workload architecture), and cost optimization (practice Cloud Financial Management). We are also making the framework available in eight additional languages (Spanish, French, German, Japanese, Korean, Brazilian Portuguese, Simplified Chinese, and Traditional Chinese). Read the Well-Architected Framework (PDF, Kindle) to learn more.

Pillar White Papers & Labs – We have updated the white papers that define each of the five pillars with additional content, including new & updated questions, real-world examples, additional cross-references, and a focus on actionable best practices. We also updated the labs that accompany each pillar:

Well-Architected Tool – We have updated the AWS Well-Architected Tool to reflect the updates that we made to the Framework and to the White Papers.

Learning More
In addition to the documents that I linked above, you should also watch these videos.

In this video, AWS customer Cox Automotive talks about how they are using AWS Well-Architected to deliver results across over 200 platforms:

In this video, my colleague Rodney Lester tells you how to build better workloads with the Well-Architected Framework and Tool:

Get Started Today
If you are like me, you have a pile of interesting services and ideas stashed away that you hope to get to “someday.” Given the importance of the five pillars that I mentioned above, I’d suggest that Well-Architected does not belong in that pile, and that you should do all that you can to learn more and to become well-architected as soon as possible!

Jeff;

New – Create Amazon RDS DB Instances on AWS Outposts

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-create-amazon-rds-db-instances-on-aws-outposts/

Late last year I told you about AWS Outposts and invited you to Order Yours Today. As I told you at the time, this is a comprehensive, single-vendor compute and storage offering that is designed to meet the needs of customers who need local processing and very low latency in their data centers and on factory floors. Outposts uses the hardware that we use in AWS public regions.

I first told you about Amazon RDS back in 2009. This fully managed service makes it easy for you to launch, operate, and scale a relational database. Over the years we have added support for multiple open source and commercial databases, along with tons of features, all driven by customer requests.

DB Instances on AWS Outposts
Today I am happy to announce that you can now create RDS DB Instances on AWS Outposts. We are launching with support for MySQL and PostgreSQL, with plans to add other database engines in the future (as always, let us know what you need so that we can prioritize it).

You can make use of important RDS features including scheduled backups to Amazon Simple Storage Service (S3), built-in encryption at rest and in transit, and more.

Creating a DB Instance
I can create a DB Instance using the RDS Console, API (CreateDBInstance), CLI (create-db-instance), or CloudFormation (AWS::RDS::DBInstance).
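
For reference, here is a hedged CLI sketch of the same operation (the subnet group, which must contain a subnet on my Outpost, and the credentials are placeholders):

$ aws rds create-db-instance \
  --db-instance-identifier jb-database-2 \
  --db-instance-class db.m5.large \
  --engine mysql \
  --engine-version 8.0.17 \
  --allocated-storage 100 \
  --db-subnet-group-name my-outpost-subnet-group \
  --master-username admin \
  --master-user-password ChangeMe123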

I’ll use the Console, taking care to select the AWS Region that serves as “home base” for my Outpost. I open the Console and click Create database to get started:

I select On-premises for the Database location, and RDS on Outposts for the On-premises database option:

Next, I choose the Virtual Private Cloud (VPC). The VPC must already exist, and it must have a subnet for my Outpost. I also choose the Security Group and the Subnet:

Moving forward, I select the database engine, and version. We’re launching with support for MySQL 8.0.17 and PostgreSQL 12.2-R1, with plans to add more engines and versions based on your feedback:

I give my DB Instance a name (jb-database-2), and enter the credentials for the master user:

Then I choose the size of the instance. I can select between Standard classes (db.m5):

and Memory Optimized classes (db.r5):

Next, I configure the desired amount of SSD storage:

One thing to keep in mind is that each Outpost has a large, but finite amount of compute power and storage. If there’s not enough of either one free when I attempt to create the database, the request will fail.

Within the Additional configuration section I can set up several database options, customize my backups, and set up the maintenance window. Once everything is ready to go, I click Create database:

As usual when I use RDS, the state of my instance starts out as Creating and transitions to Available when my DB Instance is ready:

After the DB instance is ready, I simply configure my code (running in my VPC or in my Outpost) to use the new endpoint:

Things to Know
Here are a couple of things to keep in mind about this new way to use Amazon RDS:

Operations & Functions – Much of what you already know about RDS works as expected and is applicable. You can rename, reboot, stop, start, tag DB instances, and you can make use of point-in-time recovery; you can scale the instance up and down, and automatic minor version upgrades work as expected. You cannot make use of read replicas or create highly available clusters.

Backup & Recover – Automated backups work as expected, and are stored in S3. You can use them to create a fresh DB Instance in the cloud or in any of your Outposts. Manual snapshots also work, and are stored on the Outpost. They can be used to create a fresh DB Instance on the same Outpost.

Encryption – The storage associated with your DB instance is encrypted, as are your DB snapshots, both with KMS keys.

Pricing – RDS on Outposts pricing is based on a management fee that is charged on an hourly basis for each database that is managed. For more information, check out the RDS on Outposts pricing page.

Available Now
You can start creating RDS DB Instances on your Outposts today.

Jeff;

 

Introducing Amazon Honeycode – Build Web & Mobile Apps Without Writing Code

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/introducing-amazon-honeycode-build-web-mobile-apps-without-writing-code/

VisiCalc was launched in 1979, and I purchased a copy (shown at right) for my Apple II. The spreadsheet model was clean, easy to use, and most of all, easy to teach. I was working in a retail computer store at that time, and knew that this product was a big deal when customers started asking to purchase the software, and for whatever hardware that was needed to run it.

Today’s spreadsheets fill an important gap between mass-produced packaged applications and custom-built code created by teams of dedicated developers. Every tool has its limits, however. Sharing data across multiple users and multiple spreadsheets is difficult, as is dealing with large amounts of data. Integration & automation are also challenging, and require specialized skills. In many cases, those custom-built apps would be a better solution than a spreadsheet, but a lack of developers or other IT resources means that these apps rarely get built.

Introducing Amazon Honeycode
Today we are launching Amazon Honeycode in beta form. This new fully managed AWS service lets you build powerful mobile & web applications without writing any code. It uses the familiar spreadsheet model and lets you get started in minutes. If you or your teammates are already familiar with spreadsheets and formulas, you’ll be happy to hear that just about everything you know about sheets, tables, values, and formulas still applies.

Amazon Honeycode includes templates for some common applications that you and other members of your team can use right away:

You can customize these apps at any time and the changes will be deployed immediately. You can also start with an empty table, or by importing some existing data in CSV form. The applications that you build with Honeycode can make use of a rich palette of user interface objects including lists, buttons, and input fields:

You can also take advantage of a repertoire of built-in, trigger-driven actions that can generate email notifications and modify tables:

Honeycode also includes a lengthy list of built-in functions. The list includes many functions that will be familiar to users of existing spreadsheets, along with others that are new to Honeycode. For example, FindRow is a more powerful version of the popular Vlookup function.

Getting Started with Honeycode
It is easy to get started. I visit the Honeycode Builder, and create my account:

After logging in I see My Drive, with my workbooks & apps, along with multiple search, filter, & view options:

I can open & explore my existing items, or I can click Create workbook to make something new. I do that, and then select the Simple To-do template:

The workbook, tables, and the apps are created and ready to use right away. I can simply clear the sample data from the tables and share the app with the users, or I can inspect and customize it. Let’s inspect it first, and then share it!

After I create the new workbook, the Tasks table is displayed and I can see the sample data:

Although this looks like a traditional spreadsheet, there’s a lot going on beneath the surface. Let’s go through, column-by-column:

A (Task) – Plain text.

B (Assignee) – Text, formatted as a Contact.

C (First Name) – Text, computed by a formula:

In the formula, Assignee refers to column B, and First Name refers to the first name of the contact.

D (Due) – A date, with multiple formatting options:

E (Done) – A picklist that pulls values from another table, and that is formatted as a Honeycode rowlink. Together, this restricts the values in this column to those found in the other table (Done, in this case, with the values Yes and No), and also makes the values from that table visible within the context of this one:

F (Remind On) – Another picklist, this one taking values from the ReminderOptions table:

G (Notification) – Another date.

This particular table uses just a few of the features and options that are available to you.

I can use the icons on the left to explore my workbook:

I can see the tables:

I can also see the apps. A single Honeycode workbook can contain multiple apps that make use of the same tables:

I’ll return to the apps and the App Builder in a minute, but first I’ll take a look at the automations:

Again, all of the tables and apps in the workbook can use any of the automations in the workbook.

The Honeycode App Builder
Let’s take a closer look at the app builder. As was the case with the tables, I will show you some highlights and let you explore the rest yourself. Here’s what I see when I open my Simple To-do app in the builder:

This app contains four screens (My Tasks, All Tasks, Edit, and Add Task). All screens have both web and mobile layouts. Newly created screens, and also those in this app, have the layouts linked, so that changes to one are reflected in the other. I can unlink the layouts if I want more control over the individual objects and their presentation, or to otherwise differentiate the two:

Objects within a screen can reference data in tables. For example, the List object on the My Task screen filters rows of the Tasks table, selecting the undone tasks and ordering them by the due date:

Here’s the source expression:

=Filter(Tasks,"Tasks[Done]<>% ORDER BY Tasks[Due]","Yes")

The “%”  in the filter condition is replaced by the second parameter (“Yes”) when the filter is evaluated. This substitution system makes it easy for you to create interesting & powerful filters using the FILTER() function.

When the app runs, the objects within the List are replicated, one per task:

Objects on screens can run automations and initiate actions. For example, the ADD TASK button navigates to the Add Task screen:

The Add Task screen prompts for the values that specify the new task, and the ADD button uses an automation that writes the values to the Tasks table:

Automations can be triggered in four different ways. Here’s the automation that generates reminders for tasks that have not been marked as done. The automation runs once for each row in the Tasks table:

The notification runs only if the task has not been marked as done, and could also use the FILTER() function:

While I don’t have the space to show you how to build an app from scratch, here’s a quick overview.

Click Create workbook and Import CSV file or Start from scratch:

Click the Tables icon and create reference and data tables:

Click the Apps icon and build the app. You can select a wizard that uses your tables as a starting point, or you can start from an empty canvas.

Click the Automations icon and add time-driven or data-driven automations:

Share the app, as detailed in the next section.

Sharing Apps
After my app is ready to go, I can share it with other members of my team. Each Honeycode user can be a member of one or more teams:

To share my app, I click Share app:

Then I search for the desired team members and share the app:

They will receive an email that contains a link, and can start using the app immediately. Users with mobile devices can install the Honeycode Player (iOS, Android) and make use of any apps that have been shared with them. Here’s the Simple To-do app:

Amazon Honeycode APIs
External applications can also use the Honeycode APIs to interact with the applications you build with Honeycode. The functions include:

GetScreenData – Retrieve data from any screen of a Honeycode application.

InvokeScreenAutomation – Invoke an automation or action defined in the screen of a Honeycode application.
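
For example, here is a hedged sketch of calling GetScreenData from the CLI (the IDs are placeholders):

$ aws honeycode get-screen-data \
  --workbook-id 8e1a2b3c-1111-2222-3333-444455556666 \
  --app-id 1a2b3c4d-aaaa-bbbb-cccc-ddddeeeeffff \
  --screen-id 5e6f7a8b-9999-8888-7777-666655554444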

Check it Out
As you can see, Amazon Honeycode is easy to use, with plenty of power to let you build apps that help you and your team to be more productive. Check it out, build something cool, and let me know what you think! You can find out more in the announcement video from Larry Augustin here:

Jeff;

PS – The Amazon Honeycode Forum is the place for you to ask questions, learn from other users, and to find tutorials and other resources that will help you to get started.

Introducing AWS Snowcone – A Small, Lightweight, Rugged, Secure Edge Computing, Edge Storage, and Data Transfer Device

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/introducing-aws-snowcone-small-lightweight-edge-storage-and-processing/

Last month I published my AWS Snowball Edge Update and told you about the latest updates to Snowball Edge, including faster storage-optimized devices with more memory & vCPUs, the AWS OpsHub for Snow Family GUI-based management tool, IAM for Snowball Edge, and Snowball Edge Support for AWS Systems Manager.

AWS Snowcone
Today I would like to introduce you to the newest and smallest member of the AWS Snow Family of physical edge computing, edge storage, and data transfer devices for rugged or disconnected environments, AWS Snowcone:

AWS Snowcone weighs 4.5 pounds and includes 8 terabytes of usable storage. It is small (9″ long, 6″ wide, and 3″ tall) and rugged, and can be used in a variety of environments including desktops, data centers, messenger bags, vehicles, and in conjunction with drones. Snowcone runs on either AC power or an optional battery, making it great for many different types of use cases where self-sufficiency is vital.

The device enclosure is both tamper-evident and tamper-resistant, and also uses a Trusted Platform Module (TPM) designed to ensure both security and full chain-of-custody for your data. The device encrypts data at rest and in transit using keys that are managed by AWS Key Management Service (KMS) and are never stored on the device.

Like other Snow Family devices, Snowcone includes an E Ink shipping label designed to ensure the device is automatically sent to the correct AWS facility and to aid in tracking. It also includes 2 CPUs, 4 GB of memory, wired or wireless access, and USB-C power using a cord or the optional battery. There’s enough compute power for you to launch EC2 instances and to use AWS IoT Greengrass.

You can use Snowcone for data migration, content distribution, tactical edge computing, healthcare IoT, industrial IoT, transportation, logistics, and autonomous vehicle use cases. You can ship data-laden devices to AWS for offline data transfer, or you can use AWS DataSync for online data transfer.

Ordering a Snowcone
The ordering process for Snowcone is similar to that for Snowball Edge. I open the Snow Family Console, and click Create Job:

I select the Import into Amazon S3 job type and click Next:

I choose my address (or enter a new one), and a shipping speed:

Next, I give my job a name (Snowcone2) and indicate that I want a Snowcone. I also acknowledge that I will provide my own power supply:

Deeper into the page, I choose an S3 bucket for my data, opt-in to WiFi connectivity, and choose an EC2 AMI that will be loaded on the device before it is shipped to me:

As you can see from the image, I can choose multiple buckets and/or multiple AMIs. The AMIs must be made from an instance launched from a CentOS or Ubuntu product in AWS Marketplace, and each must contain an SSH key.

On successive pages (not shown), I specify permissions (an IAM role), choose an AWS Key Management Service (KMS) key to encrypt my data, and set up a SNS topic for job notifications. Then I confirm my choices and click Create job:

Then I await delivery of my device! I can check the status at any time:

As noted in one of the earlier screens, I will also need a suitable power supply or battery (you can find several on the Snowcone Accessories page).

Time passes, the doorbell rings, Luna barks, and my device is delivered…

Luna and a Snowcone

The console also updates to show that my device has been delivered:

On that page, I click Get credentials, copy the client unlock code, and download the manifest file:

Setting up My Snowcone
I connect the Snowcone to the power supply and to my network, and power up! After a few seconds of initialization, the device shows its IP address and invites me to connect:

The IP address was supplied by the DHCP server on my home network, and should be fine. If not, I can touch Network and configure a static IP address or log in to my WiFi network.

Next, I download AWS OpsHub for Snow Family, install it, and then configure it to access the device. I select Snowcone and click Next:

I enter the IP address as shown on the display:

Then I enter the unlock code, upload the manifest file, and click Unlock device:

After a minute or two, the device is unlocked and ready. I enter a name (Snowcone1) that I’ll use within AWS OpsHub and click Save profile name:

I’m all set:

AWS OpsHub for Snow Family
Now that I have ordered & received my device, installed AWS OpsHub for Snow Family, and unlocked my device, I am ready to start managing some file storage and doing some edge computing!

I click on Get started within Manage file storage, and Start NFS. I have several network options, and I’ll use the defaults:

The NFS server is ready within a minute or so, and it has its own IP address:

Once it is ready I can mount the NFS volume and copy files to the Snowcone:

I can store and process these files locally, or I can use AWS DataSync to transfer them to the cloud.

As I showed you earlier in this post, I selected an EC2 AMI when I created my job. I can launch instances on the Snowcone using this AMI. I click on Compute, and Launch instance:

I have three instance types to choose from:

Instance Name    CPUs    RAM
snc1.micro       1       1 GiB
snc1.small       1       2 GiB
snc1.medium      2       4 GiB

I select my AMI & instance type, confirm the networking options, and click Launch:

I can also create storage volumes and attach them to the instance.

The ability to build AMIs and run them on Snowcones gives you the power to build applications that do all sorts of interesting filtering, pre-processing, and analysis at the edge.

I can use AWS DataSync to transfer data from the device to a wide variety of AWS storage services including Amazon Simple Storage Service (S3), Amazon Elastic File System (EFS), or Amazon FSx for Windows File Server. I click on Get started, then Start DataSync Agent, confirm my network settings, and click Start agent:

Once the agent is up and running, I copy the IP address:

Then I follow the link and create a DataSync agent (the deploy step is not necessary because the agent is already running). I choose an endpoint and paste the IP address of the agent, then click Get key:

I give my agent a name (SnowAgent), tag it, and click Create agent:

Then I configure the NFS server in the Snowcone as a DataSync location, and use it to transfer data in or out using a DataSync Task.
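
For reference, here is a hedged CLI sketch of registering the Snowcone’s NFS share as a DataSync location (the server address and agent ARN are placeholders):

$ aws datasync create-location-nfs \
  --server-hostname 192.168.7.155 \
  --subdirectory / \
  --on-prem-config AgentArns=arn:aws:datasync:us-west-2:123456789012:agent/agent-0123456789abcdef0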

API / CLI
While AWS OpsHub is going to be the primary access method for most users, the device can also be accessed programmatically. I can use the Snow Family tools to retrieve the AWS Access Key and Secret Key from the device, create a CLI profile (region is snow), and run commands (or issue API calls) as usual:

C:\>aws ec2 \
   --endpoint http://192.168.7.154:8008 describe-images \
   --profile snowcone1
{
    "Images": [
        {
            "ImageId": "s.ami-0581034c71faf08d9",
            "Public": false,
            "State": "AVAILABLE",
            "BlockDeviceMappings": [
                {
                    "DeviceName": "/dev/sda1",
                    "Ebs": {
                        "DeleteOnTermination": false,
                        "Iops": 0,
                        "SnapshotId": "s.snap-01f2a33baebb50f0e",
                        "VolumeSize": 8
                    }
                }
            ],
            "Description": "Image for Snowcone delivery #1",
            "EnaSupport": false,
            "Name": "Snowcone v1",
            "RootDeviceName": "/dev/sda1"
        },
        {
            "ImageId": "s.ami-0bb6828757f6a23cf",
            "Public": false,
            "State": "AVAILABLE",
            "BlockDeviceMappings": [
                {
                    "DeviceName": "/dev/sda",
                    "Ebs": {
                        "DeleteOnTermination": true,
                        "Iops": 0,
                        "SnapshotId": "s.snap-003d9042a046f11f9",
                        "VolumeSize": 20
                    }
                }
            ],
            "Description": "AWS DataSync AMI for online data transfer",
            "EnaSupport": false,
            "Name": "scn-datasync-ami",
            "RootDeviceName": "/dev/sda"
        }
    ]
}

Get One Today
You can order a Snowcone today for use in US locations.

Jeff;

 

New – SaaS Contract Upgrades and Renewals for AWS Marketplace

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-saas-contract-upgrades-and-renewals-for-aws-marketplace/

AWS Marketplace currently contains over 7,500 listings from 1,500 independent software vendors (ISVs). You can browse the digital catalog to find, test, buy, and deploy software that runs on AWS:

Each ISV sets the pricing model and prices for their software. There are a variety of options available, including free trials, hourly or usage-based pricing, monthly and annual AMI pricing, and up-front pricing for 1-, 2-, and 3-year contracts. These options give each ISV the flexibility to define the models that work best for their customers. If their offering is delivered via a Software as a Service (SaaS) contract model, the seller can define the usage categories, dimensions, and contract length.

Upgrades & Renewals
AWS customers that make use of the SaaS and usage-based products that they find in AWS Marketplace generally start with a small commitment and then want to upgrade or renew them early as their workloads expand.

Today we are making the process of upgrading and renewing these contracts easier than ever before. While the initial contract is still in effect, buyers can communicate with sellers to negotiate a new Private Offer that best meets their needs. The offer can include additional entitlements to use the product, pricing discounts, a payment schedule, a revised contract end-date, and changes to the end-user license agreement (EULA), all in accord with the needs of a specific buyer.

Once the buyer accepts the offer, the new terms go into effect immediately. This new, streamlined process means that sellers no longer need to track parallel (paper and digital) contracts, and also ensures that buyers receive continuous service.

Let’s say I am already using a product from AWS Marketplace and negotiate an extended contract end-date with the seller. The seller creates a Private Offer for me and sends me a link that I follow in order to find & review it:

I select the Upgrade offer, and I can see I have a new contract end date, the number of dimensions on my upgrade contract, and the payment schedule. I click Upgrade current contract to proceed:

I confirm my intent:

And I am good to go:

This feature is available to all buyers & SaaS sellers, and applies to SaaS contracts and contracts with consumption pricing.

Jeff;

MSP360 – Evolving Cloud Backup with AWS for Over a Decade

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/msp360-evolving-cloud-backup-with-aws-for-over-a-decade/

Back in 2009 I received an email from an AWS developer named Andy. He told me that he and his team of five engineers had built a product called CloudBerry Explorer for Amazon S3. I mentioned his product in my CloudFront Management Tool Roundup and in several subsequent blog posts. During re:Invent 2019, I learned that CloudBerry has grown to over 130 employees and is now known as MSP360. Andy and his core team are still in place, and continue to provide file management and cloud-based backup services.

MSP360 focuses on providing backup and remote management services to Managed Service Providers (MSPs). These providers, in turn, market to IT professionals and small businesses. MSP360, in effect, provides an “MSP in a box” that gives the MSPs the ability to provide a robust, AWS-powered cloud backup solution. Each MSP can add their own branding and market the resulting product to the target audience of their choice: construction, financial services, legal services, healthcare, and manufacturing to name a few.

We launched the AWS Partner Network (APN) in 2012. MSP360 was one of the first to join. Today, as an APN Advanced Technology Partner with Storage Competency for the Backup & Restore use case and one of our top storage partners, MSP360 gives its customers access to multiple Amazon Simple Storage Service (S3) storage options and classes, and also supports Snowball Edge. They are planning to support AWS Outposts and are also working on a billing model that will simplify the billing experience for MSP360 customers that use Amazon S3.

Here I am with the MSP360 team and some of my AWS colleagues at re:Invent 2019:

 

Inside MSP360 (CloudBerry) Managed Backup Service
CloudBerry Explorer started out as a file transfer scheduler that ran only on Windows. It is now known as MSP360 (CloudBerry) Managed Backup Service (MBS) and provides centralized job management, monitoring, reporting, and licensing control. MBS supports file-based and image-level backup, and also includes specialized support for applications like SQL Server and Microsoft Exchange. Agentless, host-level backup support is available for VMware and Hyper-V. Customers can also back up Microsoft Office 365 and Google G Suite documents, data, and configurations.

By the Numbers
The product suite is available via a monthly subscription model that is a great fit for the MSPs and for their customers. As reported in a recent story, this model has allowed them to grow their revenue by 60% in 2019, driven by a 40% increase in product activations. Their customer base now includes over 9,000 MSPs and over 100,000 end-user customers. Working together with their MSP, customers can choose to store their data in any commercial AWS region, including the two regions in China.

Special Offer
The MSP360 team has created a special offer that is designed to help new customers to get started at no charge. The offer includes $200 in MBS licenses and customers can make use of up to 2 terabytes of S3 storage. Customers also get access to the MSP360 Remote Desktop product and other features. To take advantage of this offer, visit the MSP360 Special Offer page.

Jeff;

 

 

Adventures in Scaling in Changing Times

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/adventures-in-scaling-in-changing-times/

I don’t know about you, but the last two months have been kind of crazy for me due to the spread of COVID-19.

In the middle of a trans-Nordics trip in early March that took me to Denmark, Finland, and Sweden in the course of a week, Amazon asked me and my coworkers to work from home if possible. I finished my trip, returned to Seattle, and did my best to adapt to these changing times.

In the ensuing weeks, several of my scheduled trips were cancelled, all of my in-person meetings with colleagues and customers were replaced with Amazon Chime video calls, and we decided to start taping What’s New with AWS from my home office.

On the personal side, I watched as many of the entertainment, education, and sporting events that I enjoy were either canceled or moved online. Just as you probably did, I quickly found new ways to connect with family and friends that did not require face-to-face interaction.

I thought that it would be interesting to see how these sudden, large-scale changes are affecting our customers. My colleague Monica Benjamin checked in with AWS customers across several different fields and industries and provided me with the source material for this post. Here’s what we learned…

Edmodo – Education
Education technology company Edmodo provides tools for K-12 schools and teachers. More than 125 million members count on Edmodo to provide a secure space for teachers, students, and parents to communicate and collaborate. As the pandemic began spreading across Europe, Edmodo’s traffic began to grow at an exponential rate. AWS has allowed them to rapidly scale in order to meet this new demand so that education continues across the world. Per Thomsen (Vice President, Engineering) told us:

In early March, our traffic grew significantly with the total number of global learners engaging on the network spiking within a matter of weeks. This required us to increase site capacity by 15 times. With AWS and Amazon EC2 instances, Edmodo has been able to quickly scale and meet this new demand so we could continue to provide teachers and students with our uninterrupted services for their distance learning needs. Having AWS always at our fingertips gives us elastic and robust compute capacity to scale rapidly.

BlueJeans – Cloud-Based Video Conferencing
Global video provider BlueJeans supports employees working from home, health care providers shifting to telehealth, and educators moving to distance learning. Customers like BlueJeans because it provides high video and voice quality, strong security, and interoperability. Swaroop Kulkarni (Technical Director, Office of the CTO) told us:

With so many people working from home, we have seen explosive growth in traffic since the start of the Coronavirus pandemic. In just two weeks our usage skyrocketed 300% over the pre-COVID-19 average. We have always run a hybrid infrastructure between our datacenters and public cloud and fortunately had already shifted critical workloads to Amazon EC2 services before the Coronavirus outbreak. The traffic surge in March 2020 led us to scale up on AWS. We took advantage of the global presence of AWS and nearly doubled the number of regions and added US East (Ohio), APAC (Mumbai) and APAC (Singapore). We also experimented with various instance types (C,M,R families) and time-of-day scaling and this served us well for managing costs. Overall, we were able to stay ahead of traffic increases smoothly and seamlessly. We appreciate the partnership with AWS.

Netflix – Media & Entertainment
Home entertainment provider Netflix started to see their usage spike in March, with an increase in stream starts in many different parts of the world. Nils Pommerien (Director, Cloud Infrastructure Engineering) told us:

Like other home entertainment services, Netflix has seen temporarily higher viewing and increased member growth during this unprecedented time. In order to meet this demand our control plane services needed to scale very quickly. This is where the value of AWS’ cloud and our strong partnership became apparent, both in being able to meet capacity needs in compute, storage, as well as providing the necessary infrastructure, such as AWS Auto Scaling, which is deeply ingrained in Netflix’s operations model.

Pinterest – Billions of Pins
Visual discovery engine Pinterest has been scaling to meet the needs of an ever-growing audience. Coburn Watson (Head of Infrastructure and SRE) told us:

Pinterest has been able to provide inspiration for an expanded global customer audience during this challenging period, whether looking for public health information, new foods to prepare, or projects and crafts to do with friends and family. Working closely with AWS, Pinterest has been able to ensure additional capacity was available during this period to keep Pinterest up and serving our customers.

FINRA – Financial Services
FINRA regulates a critical part of the securities industry – brokerage firms doing business with the public in the United States. FINRA takes in as many as 400 billion market events per day, which are tracked, aggregated, and analyzed to protect investors. Steve Randich (Executive Vice President and Chief Information Officer) told us:

The COVID-19 pandemic has caused extreme volatility in the U.S. securities markets, and since March we have seen market volumes increase by 2-3x. Our compute resources with AWS are automatically provisioned and can process a record peak and then shut down to nothing, without any human intervention. We automatically turn on and off up to 100,000 compute nodes in a single day. We would have been unable to handle this surge in volume within our on premises data center.

As you can see from what Steve said, scaling down is just as important as scaling up.

Snap – Reinventing the Camera
The Snapchat application lets people express themselves and helps them to maintain connections with family and close friends. Saral Jain (Director of Engineering) told us:

As the global coronavirus pandemic affected the lives of millions around the world, Snapchat has played an important role in people’s lives, especially for helping close friends and family stay together emotionally while they are separated physically. In recent months, we have seen increased engagement across our platform resulting in higher workloads and the need to rapidly scale up our cloud infrastructure. For example, communication with friends increased by over 30 percent in the last week of March compared to the last week of January, with more than a 50 percent increase in some of our larger markets. AWS cloud has been valuable in helping us deal with this significant increase in demand, with services like EC2 and DynamoDB delivering high performance and reliability we need to provide the best experience for our customers.

I hope that you are staying safe, and that you have enjoyed this look at what our customers are doing in these unique and rapidly changing times. If you have a story of your own to share, please let me know.

Jeff;

 

 

AWS Inter-Region Data Transfer (DTIR) Price Reduction

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-inter-region-data-transfer-dtir-price-reduction/

If you build AWS applications that span two or more AWS regions, this post is for you. We are reducing the cost to transfer data from the South America (São Paulo), Middle East (Bahrain), Africa (Cape Town), and Asia Pacific (Sydney) Regions to other AWS regions as follows, effective May 1, 2020:

Region                     | Old Rate ($/GB) | New Rate ($/GB)
South America (São Paulo)  | 0.1600          | 0.1380
Middle East (Bahrain)      | 0.1600          | 0.1105
Africa (Cape Town)         | 0.1800          | 0.1470
Asia Pacific (Sydney)      | 0.1400          | 0.0980

Consult the price list to see inter-region data transfer prices for all AWS regions.

Jeff;

 

New – AWS Elemental Link – Deliver Live Video to the Cloud for Events & Streams

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-aws-elemental-link-deliver-live-video-to-the-cloud-for-events-streams/

Video is central to so many online experiences. Regardless of the origin or creator, today’s viewers expect a high-resolution, broadcast-quality experience.

In sophisticated environments, dedicated hardware and an associated A/V team can capture, encode, and stream or store video that meets these expectations. However, cost and operational complexity have prevented others from delivering a similar experience. Classrooms, local sporting events, enterprise events, and small performance spaces do not have the budget or the specialized expertise needed to install, configure, and run the hardware and software needed to reliably deliver video to the cloud for processing, storage, and on-demand delivery or live streaming.

Introducing AWS Elemental Link
Today I would like to tell you about AWS Elemental Link. This new device connects live video sources to AWS Elemental MediaLive. The device is small (about 32 cubic inches) and weighs less than a pound. It draws very little power, is absolutely silent, and is available for purchase today at $995.

You can order these devices from the AWS Management Console and have them shipped to the intended point of use. They arrive preconfigured, and need only be connected to power, video, and the Internet. You can monitor and manage any number of Link devices from the console, without the need for specialized expertise at the point of use.

When connected to a video source, the Link device sends all video, audio, and metadata streams that arrive on the built-in 3G-SDI or HDMI connectors to AWS Elemental MediaLive, with automatic, hands-free tuning that adapts to available bandwidth. Once your video is in the cloud, you can use the full lineup of AWS Elemental Media Services to process, store, distribute, and monetize it.

Ordering an AWS Elemental Link
To get started, I visit the AWS Elemental Link Console and click Start order:

I indicate that I understand the Terms of Service, and click Continue to place order to proceed:

I enter my order, starting with contact information and an optional order name:

Then I enter individual order lines, and click Add new order line after each one. Each line represents one or more devices destined for one physical address. All of the devices in an order line are provisioned for the same AWS region:

I can see my Order summary at the bottom. Once I have created all of the desired order lines I click Next to proceed:

I choose a payment option, verify my billing address, and click Next:

Then I review my order and click Submit to place it:

After I pay my invoice, I wait for my devices to arrive.

Connection & Setup
When my device arrives, I connect it to my network and my camera, and plug in the power supply. I wait a minute or so while the device powers up and connects to the network, AWS, and to my camera. When it is all set, the front panel looks like this:

Next, I open the AWS Elemental MediaLive Console and click Devices:

Now that everything is connected, I can create a MediaLive input (Studio1), selecting Elemental Link as the source and choosing one of the listed input devices:
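
If you prefer to script this step, the input can also be created through the MediaLive API. Here's a minimal boto3 sketch, offered as an illustration rather than a recipe: the input name matches the Studio1 example above, but the device ID is a placeholder that you would replace with one returned by list_input_devices.

import boto3

medialive = boto3.client('medialive')

# Create a MediaLive input that is backed by an Elemental Link device.
# The device ID below is hypothetical; call medialive.list_input_devices()
# and substitute the ID of your own device.
response = medialive.create_input(
    Name='Studio1',
    Type='INPUT_DEVICE',
    InputDevices=[
        {'Id': 'hd-0123456789abcdef'}  # placeholder device ID
    ],
)

print(response['Input']['Id'], response['Input']['State'])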

And that’s the setup and connection process. From here I would create a channel that references the input and then set up an output group to stream, archive, broadcast, or package the video stream. We’re building a CloudFormation-powered solution that will take care of all of this for you; stay tuned for details.

You can order your AWS Elemental Link today and start delivering video to the cloud in minutes!

Jeff;

 

Join the FORMULA 1 DeepRacer ProAm Special Event

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/join-the-formula-1-deepracer-proam-special-event/

The AWS DeepRacer League gives you the opportunity to race for prizes and glory, while also having fun & learning about reinforcement learning. You can use the AWS DeepRacer 3D racing simulator to build, train, and evaluate your models. You can review the results and improve your models in order to ensure that they are race-ready.

Winning a FORMULA 1 (F1) race requires a technologically sophisticated car, a top-notch driver, an outstanding crew, and (believe it or not) a healthy dose of machine learning. For the past couple of seasons AWS has been working with the Formula 1 team to find ways to use machine learning to make cars that are faster and more fuel-efficient than ever before (read The Fastest Cars Deserve the Fastest Cloud and Formula 1 Works with AWS to Develop Next Generation Race Car to learn more).

Special Event
Each month the AWS DeepRacer League runs a new Virtual Race in the AWS DeepRacer console and this month is a special one: the Formula 1 DeepRacer ProAm Special Event. During the month of May you can compete for the opportunity to race against models built and tuned by Formula 1 pros and their crews. Here's the lineup:

Rob Smedley – Director of Data Systems for F1 and AWS Technical Ambassador.

Daniel Ricciardo – F1 driver for Renault, with 7 Grand Prix wins and 29 podium appearances.

Tatiana Calderon – Test driver for the Alfa Romeo F1 team and 2019 F2 driver.

Each pro will be partnered with a member of the AWS Pit Crew tasked with teaching them new skills and taking them on a learning journey. Here’s the week-by-week plan for the pros:

Week 1 – Learn the basics of reinforcement learning and submit models using a standard, single-camera vehicle configuration.

Week 2 – Add stereo cameras to vehicles and learn how to configure reward functions to dodge debris on the track.

Week 3 – Add LIDAR to vehicles and use the rest of the month to prepare for the head-to-head qualifier.

At the end of the month the top AWS DeepRacer amateurs will face off against the professionals in an exciting head-to-head elimination race, scheduled for the week of June 1.

The teams will be documenting their learning journey and you’ll be able to follow along as they apply real-life racing strategies and data science to the world of autonomous racing.

Bottom line: You have the opportunity to build & train a model, and then race it against one from Rob, Daniel, or Tatiana. How cool is that?

Start Your Engines
And now it is your turn. Read Get Started with AWS DeepRacer, build your model, join the Formula 1 DeepRacer ProAm Special Event, train it on the Circuit de Barcelona-Catalunya track, and don’t give up until you are at the top of the chart.
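
If you have never written a reward function, here's a hedged sketch of the general shape, based on my reading of the DeepRacer documentation: the console invokes a Python function named reward_function with a params dictionary (keys such as track_width, distance_from_center, and all_wheels_on_track) and expects a float in return. This toy example simply rewards the car for staying near the center line:

def reward_function(params):
    """Toy reward function: favor staying close to the center line."""
    track_width = params['track_width']
    distance_from_center = params['distance_from_center']
    all_wheels_on_track = params['all_wheels_on_track']

    # Heavily penalize leaving the track.
    if not all_wheels_on_track:
        return 1e-3

    # The reward shrinks as the car drifts away from the center line.
    half_width = 0.5 * track_width
    reward = max(1e-3, 1.0 - (distance_from_center / half_width))

    return float(reward)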

Training and evaluation using the DeepRacer Console are available at no charge for the duration of the event (Terms and Conditions apply), making this a great opportunity for you to have fun while learning a useful new skill.

Good luck, and see you at the finish line!

Jeff;

 

New – Use CloudWatch Synthetics to Monitor Sites, API Endpoints, Web Workflows, and More

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-use-cloudwatch-synthetics-to-monitor-sites-api-endpoints-web-workflows-and-more/

Today’s applications encompass hundreds or thousands of moving parts including containers, microservices, legacy internal services, and third-party services. In addition to monitoring the health and performance of each part, you need to make sure that the parts come together to deliver an acceptable customer experience.

CloudWatch Synthetics (announced at AWS re:Invent 2019) allows you to monitor your sites, API endpoints, web workflows, and more. You get an outside-in view with better visibility into performance and availability, so that you can detect and fix issues more quickly than ever before. You can increase customer satisfaction and be more confident that your application is meeting your performance goals.

You can start using CloudWatch Synthetics in minutes. You simply create canaries that monitor individual web pages, multi-page web workflows such as wizards and checkouts, and API endpoints, with metrics stored in Amazon CloudWatch and other data (screenshots and HTML pages) stored in an S3 bucket. As you create your canaries, you can set CloudWatch alarms so that you are notified when thresholds based on performance, behavior, or site integrity are crossed. You can view screenshots, HAR (HTTP archive) files, and logs to learn more about a failure, with the goal of fixing it as quickly as possible.

CloudWatch Synthetics in Action
Canaries were once used to provide an early warning that deadly gases were present in a coal mine. The canaries provided by CloudWatch Synthetics provide a similar early warning, and are considerably more humane. I open the CloudWatch Console and click Canaries to get started:

I can see the overall status of my canaries at a glance:

I created several canaries last month in preparation for this blog post. I chose a couple of sites including the CNN home page, my personal blog, the Amazon Movers and Shakers page, and the Amazon Best Sellers page. I did not know which sites would return the most interesting results, and I certainly don’t mean to pick on any one of them. I do think that it is important to show you how this (and every) feature performs with real data, so here we go!

I can turn my attention to the Canary runs section, and look at individual data points. Each data point is an aggregation of runs for a single canary:

I can click on the amzn_movers_shakers canary to learn more:

I can see that there was a single TimeoutError issue in the last 24 hours. I can see the screenshots that were captured as part of each run, along with the HAR files and logs. Each HAR file contains a detailed log of the HTTP requests that were made when the canary was run, along with the responses and the amount of time that it took for the request to complete:

Each canary run is implemented using a Lambda function. I can access the function’s execution metrics in the Metrics tab:

And I can see the canary script and other details in the Configuration tab:

Hatching a Canary
Now that you have seen a canary in action, let me show you how to create one. I return to the list of canaries and click Create canary. I can use one of four blueprints to create my canary, or I can upload or import an existing one:

All of these methods ultimately result in a script that is run either once or periodically. The canaries that I showed above were all built from the Heartbeat monitoring blueprint, like this:

I can also create canaries for API endpoints, using either GET or PUT methods, any desired HTTP headers, and some request data:

Another blueprint lets me create a canary that checks a web page for broken links (I’ll use this post):

Finally, the GUI workflow builder lets me create a sophisticated canary that can include simulated clicks, content verification via CSS selector or text, text entry, and navigation to other URLs:

As you can see from these examples, the canary scripts use the syn-1.0 runtime. This runtime supports Node.js scripts that can use the Puppeteer and Chromium packages. Scripts can make use of a set of library functions and can (with the right IAM permissions) access other AWS services and resources. Here's an example script that calls AWS Secrets Manager:

// The Synthetics and Logger modules are provided by the canary runtime.
var synthetics = require('Synthetics');
const log = require('SyntheticsLogger');

const AWS = require('aws-sdk');
const secretsManager = new AWS.SecretsManager();

// Retrieve a secret value from AWS Secrets Manager (requires IAM permission).
const getSecrets = async (secretName) => {
    var params = {
        SecretId: secretName
    };
    return await secretsManager.getSecretValue(params).promise();
}

const secretsExample = async function () {
    // Fetch secrets
    var secrets = await getSecrets("secretname");

    // Use secrets
    log.info("SECRETS: " + JSON.stringify(secrets));
};

exports.handler = async () => {
    return await secretsExample();
};

Scripts signal success by running to completion, and errors by raising an exception.

After I create my script, I establish a schedule and a pair of data retention periods. I also choose an S3 bucket that will store the artifacts created each time a canary is run:

I can also control the IAM role, set CloudWatch Alarms, and configure access to endpoints that are in a VPC:

Watch the demo video to see CloudWatch Synthetics in action:

Things to Know
Here are a couple of things to know about CloudWatch Synthetics:

Observability – You can use CloudWatch Synthetics in conjunction with ServiceLens and AWS X-Ray to map issues back to the proper part of your application. To learn more about how to do this, read Debugging with Amazon CloudWatch Synthetics and AWS X-Ray and Using ServiceLens to Monitor the Health of Your Applications.

Automation – You can create canaries using the Console, CLI, APIs, and from CloudFormation templates; there is a sketch of an API-based example after this list.

Pricing – As part of the AWS Free Tier you get 100 canary runs per month at no charge. After that, you pay per run, with prices starting at $0.0012 per run, plus the usual charges for S3 storage and Lambda invocations.

Limits – You can create up to 100 canaries per account in the US East (N. Virginia), Europe (Ireland), US West (Oregon), US East (Ohio), and Asia Pacific (Tokyo) Regions, and up to 20 per account in other regions where CloudWatch Synthetics are available.
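
Here's the API-based sketch that I promised in the Automation item above. It is a hedged, minimal boto3 example; the bucket names, role ARN, and handler are placeholders, and it assumes that the canary script has already been packaged and uploaded to S3 in the layout that Synthetics expects.

import boto3

synthetics = boto3.client('synthetics')

# Create a heartbeat canary from a script that is already packaged in S3.
# The bucket, key, role, and handler values below are placeholders.
synthetics.create_canary(
    Name='my-heartbeat-canary',
    Code={
        'S3Bucket': 'my-canary-code-bucket',
        'S3Key': 'canaries/heartbeat.zip',
        'Handler': 'heartbeat.handler',
    },
    ArtifactS3Location='s3://my-canary-artifacts-bucket/',
    ExecutionRoleArn='arn:aws:iam::123456789012:role/my-canary-role',
    Schedule={'Expression': 'rate(5 minutes)'},
    RuntimeVersion='syn-1.0',
    StartCanaryAfterCreation=True,
)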

Available Now
CloudWatch Synthetics are available now and you can start using them today!

Jeff;

AWS Chatbot – ChatOps for Slack and Chime

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-chatbot-chatops-for-slack-and-chime/

Last year, my colleague Ilya Bezdelev wrote Introducing AWS Chatbot: ChatOps for AWS to launch the public beta of AWS Chatbot. He also participated in the re:Invent 2019 Launchpad and did an in-depth AWS Chatbot demo:

In his initial post, Ilya showed you how you can practice ChatOps within Amazon Chime or Slack, receiving AWS notifications and executing commands in an environment that is intrinsically collaborative. In a later post, Running AWS commands from Slack using AWS Chatbot, Ilya showed how to configure AWS Chatbot in a Slack channel, display CloudWatch alarms, describe AWS resources, invoke a Lambda function and retrieve the logs, and create an AWS Support case. My colleagues Erin Carlson and Matt Cowsert wrote about AWS Budgets Integration with Chatbot and walked through the process of setting up AWS Budget alerts and arranging for notifications from within AWS Chatbot. Finally, Anushri Anwekar showed how to Receive AWS Developer Tools Notifications over Slack using AWS Chatbot.

As you can see from the posts that I referred to above, AWS Chatbot is a unique and powerful communication tool that has the potential to change the way that you monitor and maintain your cloud environments.

Now Generally Available
I am happy to announce that AWS Chatbot has graduated from beta to general availability, and that you can use it to practice ChatOps across multiple AWS regions. We are launching with support for Amazon CloudWatch, the AWS Code* services, AWS Health, AWS Budgets, Amazon GuardDuty, and AWS CloudFormation.

You can connect it to your Amazon Chime chatrooms and your Slack channels in minutes. Simply open the AWS Chatbot Console, choose your Chat client, and click Configure client to get started:

As part of the configuration process you will have the opportunity to choose an existing IAM role or to create a new one from one or more templates. The role provides AWS Chatbot with access to CloudWatch metrics, and the power to run commands, invoke Lambda functions, respond to notification actions, and generate support cases:

AWS Chatbot listens on Amazon Simple Notification Service (SNS) topics to learn about events and alarm notifications in each region of interest:

You can set up CloudWatch Alarms in any region where you select a topic and use them to send notifications to AWS Chatbot.
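
To make that concrete, here's a hedged boto3 sketch that creates an SNS topic and points a CloudWatch alarm at it; once you select the topic in your AWS Chatbot configuration, the alarm notifications will show up in your channel. The topic name, instance ID, metric, and threshold are placeholders.

import boto3

sns = boto3.client('sns')
cloudwatch = boto3.client('cloudwatch')

# Create (or look up) the SNS topic that AWS Chatbot will listen on.
topic_arn = sns.create_topic(Name='chatbot-alarms')['TopicArn']

# Send a CPU utilization alarm for a hypothetical instance to that topic.
cloudwatch.put_metric_alarm(
    AlarmName='high-cpu-demo',
    Namespace='AWS/EC2',
    MetricName='CPUUtilization',
    Dimensions=[{'Name': 'InstanceId', 'Value': 'i-0123456789abcdef0'}],
    Statistic='Average',
    Period=300,
    EvaluationPeriods=2,
    Threshold=80.0,
    ComparisonOperator='GreaterThanThreshold',
    AlarmActions=[topic_arn],
)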

Special Offer from Slack
Our friends at Slack have put together a special offer to help you and your team connect and stay productive through new and shifting circumstances:

If you upgrade from the Free Plan to a Standard or Plus Plan you will receive a 25% discount for the first 12 months from your upgrade date.

Available Now
You can start using AWS Chatbot today at no additional charge. You pay for the underlying services (CloudWatch, SNS, and so forth) as if you were using them without AWS Chatbot, and you also pay any charges associated with the use of your chat client.

Jeff;

 

Capacity-Optimized Spot Instance Allocation in Action at Mobileye and Skyscanner

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/capacity-optimized-spot-instance-allocation-in-action-at-mobileye-and-skyscanner/

Amazon EC2 Spot Instances were launched way back in 2009. The instances are spare EC2 compute capacity that is available at savings of up to 90% when compared to the On-Demand prices. Spot Instances can be interrupted by EC2 (with a two minute heads-up), but are otherwise the same as On-Demand instances. You can use Amazon EC2 Auto Scaling to seamlessly scale Spot Instances, On-Demand instances, and instances that are part of a Savings Plan, all within a single EC2 Auto Scaling Group.

Over the years we have added many powerful Spot features including a Simplified Pricing Model, the Capacity-Optimized scaling strategy (more on that in a sec), integration with the EC2 RunInstances API, and much more.

EC2 Auto Scaling lets you use two different allocation strategies for Spot Instances:

lowest-price – Allocates instances from the Spot Instance pools that have the lowest price at the time of fulfillment. Spot pricing changes slowly over time based on long-term trends in supply and demand, but capacity fluctuates in real time. Because the lowest-price strategy does not account for pool capacity depth when it deploys Spot Instances, it is a good fit for fault-tolerant workloads with a low cost of interruption.

capacity-optimized – Allocates instances from the Spot Instance pools with the optimal capacity for the number of instances that are launching, making use of real-time capacity data. This allocation strategy is appropriate for workloads that have a higher cost of interruption. It thrives on flexibility, empowered by the instance families, sizes, and generations that you choose.

Today I want to show you how you can use the capacity-optimized allocation strategy and to share a pair of customer stories with you.

Using Capacity-Optimized Allocation
First, I switch to the new Auto Scaling console by clicking Go to the new console:

The new console includes a nice diagram to explain how Auto Scaling works. I click Create Auto Scaling group to proceed:

I name my Auto Scaling group and choose a launch template as usual, then click Next:

If you are not familiar with launch templates, read Recent EC2 Goodies – Launch Templates and Spread Placement, to learn all about them.

Because my launch template does not specify an instance type, Combine purchase options and instance types is pre-selected and cannot be changed. I ensure that the Capacity-optimized allocation strategy is also selected, and set the desired balance of On-Demand and Spot Instances:

Then I choose a primary instance type, and the console recommends others. I can choose Family and generation (m3, m4, m5 for example) flexibility or just size flexibility (large, xlarge, 12xlarge, and so forth) within the generation of the primary instance type. As I noted earlier, this strategy thrives on flexibility, so choosing as many relevant instances as possible is to my benefit.

I can also specify a weight for each of the instance types that I decide to use (this is especially useful when I am making use of size flexibility):

I also (not shown) select my VPC and the desired subnets within it, click Next, and proceed as usual. Flexibility with regard to subnets/Availability Zones is also to my benefit; for more information, read Auto Scaling Groups with Multiple Instance Types and Purchase Options.
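
The same configuration can also be expressed through the Auto Scaling API. Here's a hedged boto3 sketch of a group that uses the capacity-optimized strategy; the launch template name, instance types, weights, and subnet IDs are placeholders that you would replace with your own values.

import boto3

autoscaling = boto3.client('autoscaling')

autoscaling.create_auto_scaling_group(
    AutoScalingGroupName='capacity-optimized-demo',
    MinSize=0,
    MaxSize=100,
    DesiredCapacity=10,
    VPCZoneIdentifier='subnet-11111111,subnet-22222222',  # placeholder subnets
    MixedInstancesPolicy={
        'LaunchTemplate': {
            'LaunchTemplateSpecification': {
                'LaunchTemplateName': 'my-launch-template',  # placeholder
                'Version': '$Latest',
            },
            # Be as flexible as possible: several sizes and generations,
            # weighted by their relative capacity.
            'Overrides': [
                {'InstanceType': 'm5.xlarge', 'WeightedCapacity': '4'},
                {'InstanceType': 'm5.2xlarge', 'WeightedCapacity': '8'},
                {'InstanceType': 'm4.xlarge', 'WeightedCapacity': '4'},
            ],
        },
        'InstancesDistribution': {
            'OnDemandBaseCapacity': 0,
            'OnDemandPercentageAboveBaseCapacity': 50,
            'SpotAllocationStrategy': 'capacity-optimized',
        },
    },
)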

And with that, let’s take a look at how AWS customers Skyscanner and Mobileye are making use of this feature!

Capacity-Optimized Allocation at Skyscanner
Skyscanner is an online travel booking site. They run the front-end processing for their site on Spot Instances, making good use of up to 40,000 cores per day. Skyscanner’s online platform runs on Kubernetes clusters powered entirely by Spot Instances (watch this video to learn more). Capacity-optimized allocation has delivered many benefits including:

Faster Time to Market – The ability to access more compute power at a lower cost has allowed them to reduce the time to launch a new service from 6-7 weeks using traditional infrastructure to just 50 minutes on the AWS Cloud.

Cost Savings – Diversifying Spot Instances across Availability Zones and instance types has resulted in an overall savings of 70% per core.

Reduced Interruptions – A test that Skyscanner ran in preparation for Black Friday showed that their old configuration (lowest-price) had between 200 and 300 Spot interruptions and the new one (capacity-optimized) had between 10 and 15.

Capacity-Optimized Allocation at Mobileye
Mobileye (an Intel company) develops vision-based technology for self-driving vehicles and advanced driver assistant systems. Spot Instances are used to run their analytics, machine learning, simulation, and AWS Batch workloads packaged in Docker containers. They typically use between 200K and 300K concurrent cores, with peak daily usage of around 500K, all on Spot. Here's an instance count graph over the course of a day:

After switching to capacity-optimized allocation and making some changes in accord with our Spot Instance best practices, they reduced the overall interruption rate by about 75%. These changes allowed them to save money on their compute costs while increasing application uptime and reducing their time-to-insight.

To learn more about how Mobileye uses Spot Instances, watch their re:Invent presentation, Navigating the Winding Road Toward Driverless Mobility.

Jeff;

 

AWS Snowball Edge Update – Faster Hardware, OpsHub GUI, IAM, and AWS Systems Manager

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-snowball-edge-update/

Over the last couple of years I’ve told you about several members of the “Snow” family of edge computing and data transfer devices – The original Snowball, the more-powerful Snowball Edge, and the exabyte-scale Snowmobile.

Today I would like to tell you about the latest updates to Snowball Edge. Here’s what I have for you today:

Snowball Edge Update – New storage optimized devices that are 25% faster, with more memory, more vCPUs, and support for 100 Gigabit networking.

AWS OpsHub for Snow Family – A new GUI-based tool to simplify the management of Snowball Edge devices.

IAM for Snowball Edge – AWS Identity and Access Management (IAM) can now be used to manage access to services and resources on Snowball Edge devices.

Snowball Edge Support for AWS Systems Manager – Support for task automation to simplify common maintenance and deployment tasks on instances and other resources on Snowball Edge devices.

Let’s take a closer look at each one…

Snowball Edge Storage Optimized Update
We’ve refreshed the hardware, more than doubling the processing power and boosting data transfer speed by up to 25%, all at the same price as the older devices.

The newest Snowball Edge Storage Optimized devices feature 40 vCPUs and 80 GB of memory, up from 24 and 48, respectively. The processor now runs at 3.2 GHz, allowing you to launch more powerful EC2 instances that can handle your preprocessing and analytics workloads even better than before. In addition to the 80 TB of storage for data processing and data transfer workloads, there's now 1 TB of SATA SSD storage that is accessible to the EC2 instances that you launch on the device. The improved data transfer speed that I mentioned earlier is made possible by a new 100 Gigabit QSFP+ network adapter.

Here are the instances that are available on the new hardware (you will need to rebuild any existing AMIs in order to use them):

Instance Name | Memory (GB) | vCPUs
sbe-c.small   | 2           | 1
sbe-c.medium  | 4           | 1
sbe-c.large   | 8           | 2
sbe-c.xlarge  | 16          | 4
sbe-c.2xlarge | 32          | 8
sbe-c.4xlarge | 64          | 16

You can cluster up to twelve Storage Optimized devices together in order to create a single S3-compatible bucket that can store nearly 1 petabyte of data. You can also run Lambda functions on this and on other Snowball Edge devices.

To learn more and to order a Snowball Edge (or an entire cluster), visit the AWS Snowball Console.

AWS OpsHub for Snow Family
This is a new graphical user interface that you can use to manage Snowball Edge devices. You can unlock and configure devices, use drag-and-drop operations to copy data, launch applications (EC2 AMIs), monitor device metrics, and automate routine operations.

Once you download and install AWS OpsHub on your Windows or Mac computer, you can use it even if you don't have a connection to the Internet. This makes it ideal for use in some of the mobile and disconnected modes that I mentioned earlier, and also makes it a great fit for high-security environments.

AWS OpsHub is available at no charge wherever Snowball Edge is available.

To learn more and to get started with AWS OpsHub, visit the Snowball Resources Page.

IAM for Snowball Edge
You can now use user-based IAM policies to control access to services and resources running on Snowball Edge devices. If you have multiple users with access to the same device, you can use IAM policies to ensure that each user has the appropriate permissions.

If you have applications that make calls to IAM, S3, EC2, or STS (newly available on Snowball Edge) API functions on a device, you should make sure that you specify the “snow” region in your calls. This is optional now, but will become mandatory for devices ordered after November 2, 2020.
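
For example, a boto3 client that talks to a service endpoint on the device might look like this hedged sketch; the endpoint address, port, and credentials are placeholders for the values that your own device and unlock process provide.

import boto3

# Placeholder endpoint and credentials; substitute the values for your device.
s3_on_device = boto3.client(
    's3',
    region_name='snow',                      # the "snow" region
    endpoint_url='https://192.0.2.10:8443',  # hypothetical device address
    aws_access_key_id='DEVICE_ACCESS_KEY',
    aws_secret_access_key='DEVICE_SECRET_KEY',
)

print(s3_on_device.list_buckets())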

IAM support is available for devices ordered on or after April 16, 2020.

To learn more, read Using Local IAM.

Snowball Edge Support for AWS Systems Manager
AWS Systems Manager gives you the power to automate common maintenance and deployment tasks in order to make you and your teams more efficient.

You can now write scripts in Python or PowerShell and execute them in AWS OpsHub. The scripts can include any of the operations supported on the device. For example, here’s a simple script that restarts an EC2 instance:
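
As a hedged illustration, a script along those lines might look like the following Python sketch. It stops and then starts an instance through the EC2-compatible endpoint on the device; the endpoint address, credentials, and instance ID are placeholders, and the exact operations supported on your device may differ.

import boto3

# Placeholder endpoint and credentials for the device's EC2-compatible API.
ec2 = boto3.client(
    'ec2',
    region_name='snow',
    endpoint_url='https://192.0.2.10:8243',  # hypothetical device address
    aws_access_key_id='DEVICE_ACCESS_KEY',
    aws_secret_access_key='DEVICE_SECRET_KEY',
)

instance_id = 'i-0123456789abcdef0'  # hypothetical instance ID

# Restart the instance by stopping it and then starting it again.
ec2.stop_instances(InstanceIds=[instance_id])
ec2.get_waiter('instance_stopped').wait(InstanceIds=[instance_id])

ec2.start_instances(InstanceIds=[instance_id])
ec2.get_waiter('instance_running').wait(InstanceIds=[instance_id])

print(instance_id + ' restarted')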

To learn more, read about Automating Tasks.

Jeff;