All posts by Jeff Barr

AWS Well-Architected Framework – Updated White Papers, Tools, and Best Practices

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-well-architected-framework-updated-white-papers-tools-and-best-practices/

We want to make sure that you are designing and building AWS-powered applications in the best possible way. Back in 2015 we launched AWS Well-Architected to make sure that you have all of the information that you need to do this right. The framework is built on five pillars:

Operational Excellence – The ability to run and monitor systems to deliver business value and to continually improve supporting processes and procedures.

Security – The ability to protect information, systems, and assets while delivering business value through risk assessments and mitigation strategies.

Reliability – The ability of a system to recover from infrastructure or service disruptions, dynamically acquire computing resources to meet demand, and mitigate disruptions such as misconfigurations or transient network issues.

Performance Efficiency – The ability to use computing resources efficiently to meet system requirements, and to maintain that efficiency as demand changes and technologies evolve.

Cost Optimization – The ability to run systems to deliver business value at
the lowest price point.

Whether you are a startup, a unicorn, or an enterprise, the AWS Well-Architected Framework will point you in the right direction and then guide you along the way as you build your cloud applications.

Lots of Updates
Today we are making a host of updates to the Well-Architected Framework! Here’s an overview:

Well-Architected Framework – This update includes new and updated questions, best practices, and improvement plans, plus additional examples and architectural considerations. We have added new best practices in operational excellence (organization), reliability (workload architecture), and cost optimization (practice Cloud Financial Management). We are also making the framework available in eight additional languages (Spanish, French, German, Japanese, Korean, Brazilian Portuguese, Simplified Chinese, and Traditional Chinese). Read the Well-Architected Framework (PDF, Kindle) to learn more.

Pillar White Papers & Labs – We have updated the white papers that define each of the five pillars with additional content, including new & updated questions, real-world examples, additional cross-references, and a focus on actionable best practices. We also updated the labs that accompany each pillar:

Well-Architected Tool – We have updated the AWS Well-Architected Tool to reflect the updates that we made to the Framework and to the White Papers.

Learning More
In addition to the documents that I linked above, you should also watch these videos.

In this video, AWS customer Cox Automotive talks about how they are using AWS Well-Architected to deliver results across over 200 platforms:

In this video, my colleague Rodney Lester tells you how to build better workloads with the Well-Architected Framework and Tool:

Get Started Today
If you are like me, a lot of interesting services and ideas are stashed away in a pile of things that I hope to get to “someday.” Given the importance of the five pillars that I mentioned above, I’d suggest that Well-Architected does not belong in that pile, and that you should do all that you can to learn more and to become well-architected as soon as possible!

Jeff;

New – Create Amazon RDS DB Instances on AWS Outposts

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-create-amazon-rds-db-instances-on-aws-outposts/

Late last year I told you about AWS Outposts and invited you to Order Yours Today. As I told you at the time, this is a comprehensive, single-vendor compute and storage offering that is designed to meet the needs of customers who need local processing and very low latency in their data centers and on factory floors. Outposts uses the hardware that we use in AWS public regions.

I first told you about Amazon RDS back in 2009. This fully managed service makes it easy for you to launch, operate, and scale a relational database. Over the years we have added support for multiple open source and commercial databases, along with tons of features, all driven by customer requests.

DB Instances on AWS Outposts
Today I am happy to announce that you can now create RDS DB Instances on AWS Outposts. We are launching with support for MySQL and PostgreSQL, with plans to add other database engines in the future (as always, let us know what you need so that we can prioritize it).

You can make use of important RDS features including scheduled backups to Amazon Simple Storage Service (S3), built-in encryption at rest and in transit, and more.

Creating a DB Instance
I can create a DB Instance using the RDS Console, API (CreateDBInstance), CLI (create-db-instance), or CloudFormation (AWS::RDS::DBInstance).
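For reference, here's a minimal boto3 sketch of the CreateDBInstance call; the identifiers, subnet group, and security group are placeholders, and the DB subnet group is assumed to contain the subnet that lives on my Outpost:

import boto3

rds = boto3.client('rds', region_name='us-west-2')    # the Region that is "home base" for the Outpost

# Placeholder names; the DB subnet group must reference the Outpost subnet
response = rds.create_db_instance(
    DBInstanceIdentifier='jb-database-2',
    DBInstanceClass='db.m5.large',
    Engine='mysql',
    EngineVersion='8.0.17',
    MasterUsername='admin',
    MasterUserPassword='REPLACE_ME',
    AllocatedStorage=100,                              # GiB of SSD storage
    DBSubnetGroupName='my-outpost-subnet-group',       # placeholder
    VpcSecurityGroupIds=['sg-0123456789abcdef0'],      # placeholder
    StorageEncrypted=True,
    BackupRetentionPeriod=7)                           # scheduled backups to S3

print(response['DBInstance']['DBInstanceStatus'])      # starts out as "creating"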

I’ll use the Console, taking care to select the AWS Region that serves as “home base” for my Outpost. I open the Console and click Create database to get started:

I select On-premises for the Database location, and RDS on Outposts for the On-premises database option:

Next, I choose the Virtual Private Cloud (VPC). The VPC must already exist, and it must have a subnet for my Outpost. I also choose the Security Group and the Subnet:

Moving forward, I select the database engine and version. We’re launching with support for MySQL 8.0.17 and PostgreSQL 12.2-R1, with plans to add more engines and versions based on your feedback:

I give my DB Instance a name (jb-database-2), and enter the credentials for the master user:

Then I choose the size of the instance. I can select between Standard classes (db.m5):

and Memory Optimized classes (db.r5):

Next, I configure the desired amount of SSD storage:

One thing to keep in mind is that each Outpost has a large but finite amount of compute power and storage. If there’s not enough of either one free when I attempt to create the database, the request will fail.

Within the Additional configuration section I can set up several database options, customize my backups, and set up the maintenance window. Once everything is ready to go, I click Create database:

As usual when I use RDS, the state of my instance starts out as Creating and transitions to Available when my DB Instance is ready:

After the DB instance is ready, I simply configure my code (running in my VPC or in my Outpost) to use the new endpoint:

Things to Know
Here are a couple of things to keep in mind about this new way to use Amazon RDS:

Operations & Functions – Much of what you already know about RDS works as expected and is applicable. You can rename, reboot, stop, start, tag DB instances, and you can make use of point-in-time recovery; you can scale the instance up and down, and automatic minor version upgrades work as expected. You cannot make use of read replicas or create highly available clusters.

Backup & Recover – Automated backups work as expected, and are stored in S3. You can use them to create a fresh DB Instance in the cloud or in any of your Outposts. Manual snapshots also work, and are stored on the Outpost. They can be used to create a fresh DB Instance on the same Outpost.

Encryption – The storage associated with your DB instance is encrypted, as are your DB snapshots, both with KMS keys.

Pricing – RDS on Outposts pricing is based on a management fee that is charged on an hourly basis for each database that is managed. For more information, check out the RDS on Outposts pricing page.
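Most of the operations mentioned under Operations & Functions above map directly to existing RDS API calls. Here's a quick boto3 sketch, using a placeholder instance identifier:

import boto3

rds = boto3.client('rds')

# Scale the instance class up or down
rds.modify_db_instance(DBInstanceIdentifier='jb-database-2',
                       DBInstanceClass='db.r5.xlarge',
                       ApplyImmediately=True)

# Stop, start, and reboot work just as they do for in-Region instances
rds.stop_db_instance(DBInstanceIdentifier='jb-database-2')
rds.start_db_instance(DBInstanceIdentifier='jb-database-2')
rds.reboot_db_instance(DBInstanceIdentifier='jb-database-2')

# Point-in-time recovery creates a fresh DB Instance from the automated backups
rds.restore_db_instance_to_point_in_time(
    SourceDBInstanceIdentifier='jb-database-2',
    TargetDBInstanceIdentifier='jb-database-2-restored',
    UseLatestRestorableTime=True)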

Available Now
You can start creating RDS DB Instances on your Outposts today.

Jeff;

 

Introducing Amazon Honeycode – Build Web & Mobile Apps Without Writing Code

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/introducing-amazon-honeycode-build-web-mobile-apps-without-writing-code/

VisiCalc was launched in 1979, and I purchased a copy (shown at right) for my Apple II. The spreadsheet model was clean, easy to use, and most of all, easy to teach. I was working in a retail computer store at that time, and knew that this product was a big deal when customers started asking to purchase the software, and for whatever hardware that was needed to run it.

Today’s spreadsheets fill an important gap between mass-produced packaged applications and custom-built code created by teams of dedicated developers. Every tool has its limits, however. Sharing data across multiple users and multiple spreadsheets is difficult, as is dealing with large amounts of data. Integration & automation are also challenging, and require specialized skills. In many cases, those custom-built apps would be a better solution than a spreadsheet, but a lack of developers or other IT resources means that these apps rarely get built.

Introducing Amazon Honeycode
Today we are launching Amazon Honeycode in beta form. This new, fully managed AWS service lets you build powerful mobile & web applications without writing any code. It uses the familiar spreadsheet model and lets you get started in minutes. If you or your teammates are already familiar with spreadsheets and formulas, you’ll be happy to hear that just about everything you know about sheets, tables, values, and formulas still applies.

Amazon Honeycode includes templates for some common applications that you and other members of your team can use right away:

You can customize these apps at any time and the changes will be deployed immediately. You can also start with an empty table, or by importing some existing data in CSV form. The applications that you build with Honeycode can make use of a rich palette of user interface objects including lists, buttons, and input fields:

You can also take advantage of a repertoire of built-in, trigger-driven actions that can generate email notifications and modify tables:

Honeycode also includes a lengthy list of built-in functions. The list includes many functions that will be familiar to users of existing spreadsheets, along with others that are new to Honeycode. For example, FindRow is a more powerful version of the popular Vlookup function.

Getting Started with Honeycode
It is easy to get started. I visit the Honeycode Builder, and create my account:

After logging in I see My Drive, with my workbooks & apps, along with multiple search, filter, & view options:

I can open & explore my existing items, or I can click Create workbook to make something new. I do that, and then select the Simple To-do template:

The workbook, tables, and the apps are created and ready to use right away. I can simply clear the sample data from the tables and share the app with the users, or I can inspect and customize it. Let’s inspect it first, and then share it!

After I create the new workbook, the Tasks table is displayed and I can see the sample data:

Although this looks like a traditional spreadsheet, there’s a lot going on beneath the surface. Let’s go through, column-by-column:

A (Task) – Plain text.

B (Assignee) – Text, formatted as a Contact.

C (First Name) – Text, computed by a formula:

In the formula, Assignee refers to column B, and First Name refers to the first name of the contact.

D (Due) – A date, with multiple formatting options:

E (Done) – A picklist that pulls values from another table, and that is formatted as a Honeycode rowlink. Together, this restricts the values in this column to those found in the other table (Done, in this case, with the values Yes and No), and also makes the values from that table visible within the context of this one:

F (Remind On) – Another picklist, this one taking values from the ReminderOptions table:

G (Notification) – Another date.

This particular table uses just a few of the features and options that are available to you.

I can use the icons on the left to explore my workbook:

I can see the tables:

I can also see the apps. A single Honeycode workbook can contain multiple apps that make use of the same tables:

I’ll return to the apps and the App Builder in a minute, but first I’ll take a look at the automations:

Again, all of the tables and apps in the workbook can use any of the automations in the workbook.

The Honeycode App Builder
Let’s take a closer look at the app builder. As was the case with the tables, I will show you some highlights and let you explore the rest yourself. Here’s what I see when I open my Simple To-do app in the builder:

This app contains four screens (My Tasks, All Tasks, Edit, and Add Task). All screens have both web and mobile layouts. Newly created screens, and also those in this app, have the layouts linked, so that changes to one are reflected in the other. I can unlink the layouts if I want more control over the individual controls and their presentation, or to otherwise differentiate the two:

Objects within a screen can reference data in tables. For example, the List object on the My Tasks screen filters rows of the Tasks table, selecting the undone tasks and ordering them by the due date:

Here’s the source expression:

=Filter(Tasks,"Tasks[Done]<>% ORDER BY Tasks[Due]","Yes")

The “%”  in the filter condition is replaced by the second parameter (“Yes”) when the filter is evaluated. This substitution system makes it easy for you to create interesting & powerful filters using the FILTER() function.

When the app runs, the objects within the List are replicated, one per task:

Objects on screens can run automations and initiate actions. For example, the ADD TASK button navigates to the Add Task screen:

The Add Task screen prompts for the values that specify the new task, and the ADD button uses an automation that writes the values to the Tasks table:

Automations can be triggered in four different ways. Here’s the automation that generates reminders for tasks that have not been marked as done. The automation runs once for each row in the Tasks table:

The notification runs only if the task has not been marked as done, and could also use the FILTER() function:

While I don’t have the space to show you how to build an app from scratch, here’s a quick overview.

Click Create workbook and Import CSV file or Start from scratch:

Click the Tables icon and create reference and data tables:

Click the Apps icon and build the app. You can select a wizard that uses your tables as a starting point, or you can start from an empty canvas.

Click the Automations icon and add time-driven or data-driven automations:

Share the app, as detailed in the next section.

Sharing Apps
After my app is ready to go, I can share it with other members of my team. Each Honeycode user can be a member of one or more teams:

To share my app, I click Share app:

Then I search for the desired team members and share the app:

They will receive an email that contains a link, and can start using the app immediately. Users with mobile devices can install the Honeycode Player (iOS, Android) and make use of any apps that have been shared with them. Here’s the Simple To-do app:

Amazon Honeycode APIs
External applications can use the Honeycode APIs to interact with the apps that you build with Honeycode; there’s a short example after the list below. The functions include:

GetScreenData – Retrieve data from any screen of a Honeycode application.

InvokeScreenAutomation – Invoke an automation or action defined in the screen of a Honeycode application.
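Here's a rough boto3 sketch of calling these functions; the workbook, app, screen, and automation IDs are placeholders that you would copy from the Honeycode Builder:

import boto3

honeycode = boto3.client('honeycode', region_name='us-west-2')

# Retrieve the data shown on a screen of a Honeycode app (all IDs are placeholders)
data = honeycode.get_screen_data(
    workbookId='workbook-id',
    appId='app-id',
    screenId='screen-id')
print(data['results'])

# Invoke an automation that is defined on a screen (for example, one that adds a row)
honeycode.invoke_screen_automation(
    workbookId='workbook-id',
    appId='app-id',
    screenId='screen-id',
    screenAutomationId='automation-id')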

Check it Out
As you can see, Amazon Honeycode is easy to use, with plenty of power to let you build apps that help you and your team to be more productive. Check it out, build something cool, and let me know what you think! You can find out more in the announcement video from Larry Augustin here:

Jeff;

PS – The Amazon Honeycode Forum is the place for you to ask questions, learn from other users, and to find tutorials and other resources that will help you to get started.

Introducing AWS Snowcone – A Small, Lightweight, Rugged, Secure Edge Computing, Edge Storage, and Data Transfer Device

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/introducing-aws-snowcone-small-lightweight-edge-storage-and-processing/

Last month I published my AWS Snowball Edge Update and told you about the latest updates to Snowball Edge, including faster storage-optimized devices with more memory & vCPUs, the AWS OpsHub for Snow Family GUI-based management tool, IAM for Snowball Edge, and Snowball Edge Support for AWS Systems Manager.

AWS Snowcone
Today I would like to introduce you to the newest and smallest member of the AWS Snow Family of physical edge computing, edge storage, and data transfer devices for rugged or disconnected environments, AWS Snowcone:

AWS Snowcone weighs 4.5 pounds and includes 8 terabytes of usable storage. It is small (9″ long, 6″ wide, and 3″ tall) and rugged, and can be used in a variety of environments including desktops, data centers, messenger bags, vehicles, and in conjunction with drones. Snowcone runs on either AC power or an optional battery, making it great for many different types of use cases where self-sufficiency is vital.

The device enclosure is both tamper-evident and tamper-resistant, and also uses a Trusted Platform Module (TPM) designed to ensure both security and full chain-of-custody for your data. The device encrypts data at rest and in transit using keys that are managed by AWS Key Management Service (KMS) and are never stored on the device.

Like other Snow Family devices, Snowcone includes an E Ink shipping label designed to ensure the device is automatically sent to the correct AWS facility and to aid in tracking. It also includes 2 CPUs, 4 GB of memory, wired or wireless access, and USB-C power using a cord or the optional battery. There’s enough compute power for you to launch EC2 instances and to use AWS IoT Greengrass.

You can use Snowcone for data migration, content distribution, tactical edge computing, healthcare IoT, industrial IoT, transportation, logistics, and autonomous vehicle use cases. You can ship data-laden devices to AWS for offline data transfer, or you can use AWS DataSync for online data transfer.

Ordering a Snowcone
The ordering process for Snowcone is similar to that for Snowball Edge. I open the Snow Family Console, and click Create Job:

I select the Import into Amazon S3 job type and click Next:

I choose my address (or enter a new one), and a shipping speed:

Next, I give my job a name (Snowcone2) and indicate that I want a Snowcone. I also acknowledge that I will provide my own power supply:

Deeper into the page, I choose an S3 bucket for my data, opt-in to WiFi connectivity, and choose an EC2 AMI that will be loaded on the device before it is shipped to me:

As you can see from the image, I can choose multiple buckets and/or multiple AMIs. The AMIs must be made from an instance launched from a CentOS or Ubuntu product in AWS Marketplace, and each one must contain an SSH key.

On successive pages (not shown), I specify permissions (an IAM role), choose an AWS Key Management Service (KMS) key to encrypt my data, and set up a SNS topic for job notifications. Then I confirm my choices and click Create job:

Then I await delivery of my device! I can check the status at any time:

As noted in one of the earlier screens, I will also need a suitable power supply or battery (you can find several on the Snowcone Accessories page).

Time passes, the doorbell rings, Luna barks, and my device is delivered…

Luna and a Snowcone

The console also updates to show that my device has been delivered:

On that page, I click Get credentials, copy the client unlock code, and download the manifest file:

Setting up My Snowcone
I connect the Snowcone to the power supply and to my network, and power up! After a few seconds of initialization, the device shows its IP address and invites me to connect:

The IP address was supplied by the DHCP server on my home network, and should be fine. If not, I can touch Network and configure a static IP address or log in to my WiFi network.

Next, I download AWS OpsHub for Snow Family, install it, and then configure it to access the device. I select Snowcone and click Next:

I enter the IP address as shown on the display:

Then I enter the unlock code, upload the manifest file, and click Unlock device:

After a minute or two, the device is unlocked and ready. I enter a name (Snowcone1) that I’ll use within AWS OpsHub and click Save profile name:

I’m all set:

AWS OpsHub for Snow Family
Now that I have ordered & received my device, installed AWS OpsHub for Snow Family, and unlocked my device, I am ready to start managing some file storage and doing some edge computing!

I click on Get started within Manage file storage, and Start NFS. I have several network options, and I’ll use the defaults:

The NFS server is ready within a minute or so, and it has its own IP address:

Once it is ready I can mount the NFS volume and copy files to the Snowcone:

I can store and process these files locally, or I can use AWS DataSync to transfer them to the cloud.

As I showed you earlier in this post, I selected an EC2 AMI when I created my job. I can launch instances on the Snowcone using this AMI. I click on Compute, and Launch instance:

I have three instance types to choose from:

Instance Name    CPUs    RAM
snc1.micro       1       1 GiB
snc1.small       1       2 GiB
snc1.medium      2       4 GiB

I select my AMI & instance type, confirm the networking options, and click Launch:

I can also create storage volumes and attach them to the instance.

The ability to build AMIs and run them on Snowcones gives you the power to build applications that do all sorts of interesting filtering, pre-processing, and analysis at the edge.

I can use AWS DataSync to transfer data from the device to a wide variety of AWS storage services including Amazon Simple Storage Service (S3), Amazon Elastic File System (EFS), or Amazon FSx for Windows File Server. I click on Get started, then Start DataSync Agent, confirm my network settings, and click Start agent:

Once the agent is up and running, I copy the IP address:

Then I follow the link and create a DataSync agent (the deploy step is not necessary because the agent is already running). I choose an endpoint and paste the IP address of the agent, then click Get key:

I give my agent a name (SnowAgent), tag it, and click Create agent:

Then I configure the NFS server in the Snowcone as a DataSync location, and use it to transfer data in or out using a DataSync Task.
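If I want to script this instead of clicking through the console, the same agent/location/task setup looks roughly like this in boto3; the activation key, addresses, bucket, and role ARN are all placeholders:

import boto3

datasync = boto3.client('datasync', region_name='us-west-2')

# Register the agent running on the Snowcone (placeholder activation key)
agent = datasync.create_agent(ActivationKey='PLACEHOLDER-KEY', AgentName='SnowAgent')

# The NFS share on the Snowcone is the source location (placeholder address and path)
src = datasync.create_location_nfs(
    ServerHostname='192.168.7.155',
    Subdirectory='/buckets/jbarr-data',
    OnPremConfig={'AgentArns': [agent['AgentArn']]})

# An S3 bucket is the destination (placeholder bucket and role)
dst = datasync.create_location_s3(
    S3BucketArn='arn:aws:s3:::jbarr-snowcone-data',
    S3Config={'BucketAccessRoleArn': 'arn:aws:iam::123456789012:role/DataSyncS3Role'})

# Create the task and kick off a transfer
task = datasync.create_task(SourceLocationArn=src['LocationArn'],
                            DestinationLocationArn=dst['LocationArn'])
datasync.start_task_execution(TaskArn=task['TaskArn'])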

API / CLI
While AWS OpsHub is going to be the primary access method for most users, the device can also be accessed programmatically. I can use the Snow Family tools to retrieve the AWS Access Key and Secret Key from the device, create a CLI profile (region is snow), and run commands (or issue API calls) as usual:

C:\>aws ec2 \
   --endpoint http://192.168.7.154:8008 describe-images \
   --profile snowcone1
{
    "Images": [
        {
            "ImageId": "s.ami-0581034c71faf08d9",
            "Public": false,
            "State": "AVAILABLE",
            "BlockDeviceMappings": [
                {
                    "DeviceName": "/dev/sda1",
                    "Ebs": {
                        "DeleteOnTermination": false,
                        "Iops": 0,
                        "SnapshotId": "s.snap-01f2a33baebb50f0e",
                        "VolumeSize": 8
                    }
                }
            ],
            "Description": "Image for Snowcone delivery #1",
            "EnaSupport": false,
            "Name": "Snowcone v1",
            "RootDeviceName": "/dev/sda1"
        },
        {
            "ImageId": "s.ami-0bb6828757f6a23cf",
            "Public": false,
            "State": "AVAILABLE",
            "BlockDeviceMappings": [
                {
                    "DeviceName": "/dev/sda",
                    "Ebs": {
                        "DeleteOnTermination": true,
                        "Iops": 0,
                        "SnapshotId": "s.snap-003d9042a046f11f9",
                        "VolumeSize": 20
                    }
                }
            ],
            "Description": "AWS DataSync AMI for online data transfer",
            "EnaSupport": false,
            "Name": "scn-datasync-ami",
            "RootDeviceName": "/dev/sda"
        }
    ]
}
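I could also launch an instance from one of those AMIs programmatically; here's a minimal boto3 sketch that reuses the profile and endpoint shown above:

import boto3

# Reuse the CLI profile created from the device credentials
session = boto3.Session(profile_name='snowcone1')
ec2 = session.client('ec2',
                     endpoint_url='http://192.168.7.154:8008',
                     region_name='snow')

# Launch an instance from the AMI listed in the describe-images output above
result = ec2.run_instances(ImageId='s.ami-0581034c71faf08d9',
                           InstanceType='snc1.medium',
                           MinCount=1, MaxCount=1)
print(result['Instances'][0]['InstanceId'])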

Get One Today
You can order a Snowcone today for use in US locations.

Jeff;

 

New – SaaS Contract Upgrades and Renewals for AWS Marketplace

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-saas-contract-upgrades-and-renewals-for-aws-marketplace/

AWS Marketplace currently contains over 7,500 listings from 1,500 independent software vendors (ISVs). You can browse the digital catalog to find, test, buy, and deploy software that runs on AWS:

Each ISV sets the pricing model and prices for their software. There are a variety of options available, including free trials, hourly or usage-based pricing, monthly and annual AMI pricing, and up-front pricing for 1-, 2-, and 3-year contracts. These options give each ISV the flexibility to define the models that work best for their customers. If their offering is delivered via a Software as a Service (SaaS) contract model, the seller can define the usage categories, dimensions, and contract length.

Upgrades & Renewals
AWS customers that make use of the SaaS and usage-based products that they find in AWS Marketplace generally start with a small commitment and then want to upgrade or renew them early as their workloads expand.

Today we are making the process of upgrading and renewing these contracts easier than ever before. While the initial contract is still in effect, buyers can communicate with sellers to negotiate a new Private Offer that best meets their needs. The offer can include additional entitlements to use the product, pricing discounts, a payment schedule, a revised contract end-date, and changes to the end-user license agreement (EULA), all in accord with the needs of a specific buyer.

Once the buyer accepts the offer, the new terms go into effect immediately. This new, streamlined process means that sellers no longer need to track parallel (paper and digital) contracts, and also ensures that buyers receive continuous service.

Let’s say I am already using a product from AWS Marketplace and negotiate an extended contract end-date with the seller. The seller creates a Private Offer for me and sends me a link that I follow in order to find & review it:

I select the Upgrade offer, and I can see the new contract end date, the number of dimensions on my upgraded contract, and the payment schedule. I click Upgrade current contract to proceed:

I confirm my intent:

And I am good to go:

This feature is available to all buyers & SaaS sellers, and applies to SaaS contracts and contracts with consumption pricing.

Jeff;

MSP360 – Evolving Cloud Backup with AWS for Over a Decade

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/msp360-evolving-cloud-backup-with-aws-for-over-a-decade/

Back in 2009 I received an email from an AWS developer named Andy. He told me that he and his team of five engineers had built a product called CloudBerry Explorer for Amazon S3. I mentioned his product in my CloudFront Management Tool Roundup and in several subsequent blog posts. During re:Invent 2019, I learned that CloudBerry has grown to over 130 employees and is now known as MSP360. Andy and his core team are still in place, and continue to provide file management and cloud-based backup services.

MSP360 focuses on providing backup and remote management services to Managed Service Providers (MSPs). These providers, in turn, market to IT professionals and small businesses. MSP360, in effect, provides an “MSP in a box” that gives the MSPs the ability to provide a robust, AWS-powered cloud backup solution. Each MSP can add their own branding and market the resulting product to the target audience of their choice: construction, financial services, legal services, healthcare, and manufacturing to name a few.

We launched the AWS Partner Network (APN) in 2012. MSP360 was one of the first to join. Today, as an APN Advanced Technology Partner with Storage Competency for the Backup & Restore use case and one of our top storage partners, MSP360 gives its customers access to multiple Amazon Simple Storage Service (S3) storage options and classes, and also supports Snowball Edge. They are planning to support AWS Outposts and are also working on a billing model that will simplify the billing experience for MSP360 customers that use Amazon S3.

Here I am with the MSP360 team and some of my AWS colleagues at re:Invent 2019:

 

Inside MSP360 (CloudBerry) Managed Backup Service
CloudBerry Explorer started out as a file transfer scheduler that ran only on Windows. It is now known as MSP360 (CloudBerry) Managed Backup Service (MBS) and provides centralized job management, monitoring, reporting, and licensing control. MBS supports file-based and image-level backup, and also includes specialized support for applications like SQL Server and Microsoft Exchange. Agentless, host-level backup support is available for VMware and Hyper-V. Customers can also back up Microsoft Office 365 and Google G Suite documents, data, and configurations.

By the Numbers
The product suite is available via a monthly subscription model that is a great fit for the MSPs and for their customers. As reported in a recent story, this model has allowed them to grow their revenue by 60% in 2019, driven by a 40% increase in product activations. Their customer base now includes over 9,000 MSPs and over 100,000 end-user customers. Working together with their MSP, customers can choose to store their data in any commercial AWS region, including the two regions in China.

Special Offer
The MSP360 team has created a special offer that is designed to help new customers to get started at no charge. The offer includes $200 in MBS licenses and customers can make use of up to 2 terabytes of S3 storage. Customers also get access to the MSP360 Remote Desktop product and other features. To take advantage of this offer, visit the MSP360 Special Offer page.

Jeff;

 

 

Adventures in Scaling in Changing Times

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/adventures-in-scaling-in-changing-times/

I don’t know about you, but the last two months have been kind of crazy for me due to the spread of COVID-19.

In the middle of a trans-Nordics trip in early March that took me to Denmark, Finland, and Sweden in the course of a week, Amazon asked me and my coworkers to work from home if possible. I finished my trip, returned to Seattle, and did my best to adapt to these changing times.

In the ensuing weeks, several of my scheduled trips were cancelled, all of my in-person meetings with colleagues and customers were replaced with Amazon Chime video calls, and we decided to start taping What’s New with AWS from my home office.

On the personal side, I watched as many of the entertainment, education, and sporting events that I enjoy were either canceled or moved online. Just as you probably did, I quickly found new ways to connect with family and friends that did not require face-to-face interaction.

I thought that it would be interesting to see how these sudden, large-scale changes are affecting our customers. My colleague Monica Benjamin checked in with AWS customers across several different fields and industries and provided me with the source material for this post. Here’s what we learned…

Edmodo – Education
Education technology company Edmodo provides tools for K-12 schools and teachers. More than 125 million members count on Edmodo to provide a secure space for teachers, students, and parents to communicate and collaborate. As the pandemic began spreading across Europe, Edmodo’s traffic began to grow at an exponential rate. AWS has allowed them to rapidly scale in order to meet this new demand so that education continues across the world. Per Thomsen (Vice President, Engineering) told us:

In early March, our traffic grew significantly with the total number of global learners engaging on the network spiking within a matter of weeks. This required us to increase site capacity by 15 times. With AWS and Amazon EC2 instances, Edmodo has been able to quickly scale and meet this new demand so we could continue to provide teachers and students with our uninterrupted services for their distance learning needs. Having AWS always at our fingertips gives us elastic and robust compute capacity to scale rapidly.

BlueJeans – Cloud-Based Video Conferencing
Global video provider BlueJeans supports employees working from home, health care providers shifting to telehealth, and educators moving to distance learning. Customers like BlueJeans because it provides high video and voice quality, strong security, and interoperability. Swaroop Kulkarni (Technical Director, Office of the CTO) told us:

With so many people working from home, we have seen explosive growth in traffic since the start of the Coronavirus pandemic. In just two weeks our usage skyrocketed 300% over the pre-COVID-19 average. We have always run a hybrid infrastructure between our datacenters and public cloud and fortunately had already shifted critical workloads to Amazon EC2 services before the Coronavirus outbreak. The traffic surge in March 2020 led us to scale up on AWS. We took advantage of the global presence of AWS and nearly doubled the number of regions and added US East (Ohio), APAC (Mumbai) and APAC (Singapore). We also experimented with various instance types (C,M,R families) and time-of-day scaling and this served us well for managing costs. Overall, we were able to stay ahead of traffic increases smoothly and seamlessly. We appreciate the partnership with AWS.

Netflix – Media & Entertainment
Home entertainment provider Netflix started to see their usage spike in March, with an increase in stream starts in many different parts of the world. Nils Pommerien (Director, Cloud Infrastructure Engineering) told us:

Like other home entertainment services, Netflix has seen temporarily higher viewing and increased member growth during this unprecedented time. In order to meet this demand our control plane services needed to scale very quickly. This is where the value of AWS’ cloud and our strong partnership became apparent, both in being able to meet capacity needs in compute, storage, as well as providing the necessary infrastructure, such as AWS Auto Scaling, which is deeply ingrained in Netflix’s operations model.

Pinterest – Billions of Pins
Visual discovery engine Pinterest has been scaling to meet the needs of an ever-growing audience. Coburn Watson (Head of Infrastructure and SRE) told us:

Pinterest has been able to provide inspiration for an expanded global customer audience during this challenging period, whether looking for public health information, new foods to prepare, or projects and crafts to do with friends and family. Working closely with AWS, Pinterest has been able to ensure additional capacity was available during this period to keep Pinterest up and serving our customers.

FINRA – Financial Services
FINRA regulates a critical part of the securities industry – brokerage firms doing business with the public in the United States. FINRA takes in as many as 400 billion market events per day, which are tracked, aggregated, and analyzed for the purpose of protecting investors. Steve Randich (Executive Vice President and Chief Information Officer) told us:

The COVID-19 pandemic has caused extreme volatility in the U.S. securities markets, and since March we have seen market volumes increase by 2-3x. Our compute resources with AWS are automatically provisioned and can process a record peak and then shut down to nothing, without any human intervention. We automatically turn on and off up to 100,000 compute nodes in a single day. We would have been unable to handle this surge in volume within our on premises data center.

As you can see from what Steve said, scaling down is just as important as scaling up.

Snap – Reinventing the Camera
The Snapchat application lets people express themselves and helps them to maintain connections with family and close friends. Saral Jain (Director of Engineering) told us:

As the global coronavirus pandemic affected the lives of millions around the world, Snapchat has played an important role in people’s lives, especially for helping close friends and family stay together emotionally while they are separated physically. In recent months, we have seen increased engagement across our platform resulting in higher workloads and the need to rapidly scale up our cloud infrastructure. For example, communication with friends increased by over 30 percent in the last week of March compared to the last week of January, with more than a 50 percent increase in some of our larger markets. AWS cloud has been valuable in helping us deal with this significant increase in demand, with services like EC2 and DynamoDB delivering high performance and reliability we need to provide the best experience for our customers.

I hope that you are staying safe, and that you have enjoyed this look at what our customers are doing in these unique and rapidly changing times. If you have a story of your own to share, please let me know.

Jeff;

 

 

AWS Inter-Region Data Transfer (DTIR) Price Reduction

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-inter-region-data-transfer-dtir-price-reduction/

If you build AWS applications that span two or more AWS regions, this post is for you. We are reducing the cost to transfer data from the South America (São Paulo), Middle East (Bahrain), Africa (Cape Town), and Asia Pacific (Sydney) Regions to other AWS regions as follows, effective May 1, 2020:

Region                       Old Rate ($/GB)    New Rate ($/GB)
South America (São Paulo)    0.1600             0.1380
Middle East (Bahrain)        0.1600             0.1105
Africa (Cape Town)           0.1800             0.1470
Asia Pacific (Sydney)        0.1400             0.0980
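To put the new rates in perspective, an application that transfers 10 TB (10,240 GB) per month from Asia Pacific (Sydney) to another AWS region previously paid about 10,240 × $0.1400 ≈ $1,434 for that traffic; at the new rate it pays about 10,240 × $0.0980 ≈ $1,004, a savings of 30%.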

Consult the price list to see inter-region data transfer prices for all AWS regions.

Jeff;

 

New – AWS Elemental Link – Deliver Live Video to the Cloud for Events & Streams

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-aws-elemental-link-deliver-live-video-to-the-cloud-for-events-streams/

Video is central to so many online experiences. Regardless of the origin or creator, today’s viewers expect a high-resolution, broadcast-quality experience.

In sophisticated environments, dedicated hardware and an associated A/V team can capture, encode, and stream or store video that meets these expectations. However, cost and operational complexity have prevented others from delivering a similar experience. Classrooms, local sporting events, enterprise events, and small performance spaces do not have the budget or the specialized expertise needed to install, configure, and run the hardware and software needed to reliably deliver video to the cloud for processing, storage, and on-demand delivery or live streaming.

Introducing AWS Elemental Link
Today I would like to tell you about AWS Elemental Link. This new device connects live video sources to AWS Elemental MediaLive. The device is small (about 32 cubic inches) and weighs less than a pound. It draws very little power, is absolutely silent, and is available for purchase today at $995.

You can order these devices from the AWS Management Console and have them shipped to the intended point of use. They arrive preconfigured, and need only be connected to power, video, and the Internet. You can monitor and manage any number of Link devices from the console, without the need for specialized expertise at the point of use.

When connected to a video source, the Link device sends all video, audio, and metadata streams that arrive on the built-in 3G-SDI or HDMI connectors to AWS Elemental MediaLive, with automatic, hands-free tuning that adapts to available bandwidth. Once your video is in the cloud, you can use the full lineup of AWS Elemental Media Services to process, store, distribute, and monetize it.

Ordering an AWS Elemental Link
To get started, I visit the AWS Elemental Link Console and click Start order:

I indicate that I understand the Terms of Service, and click Continue to place order to proceed:

I enter my order, starting with contact information and an optional order name:

Then I enter individual order lines, and click Add new order line after each one. Each line represents one or more devices destined for one physical address. All of the devices in an order line are provisioned for the same AWS region:

I can see my Order summary at the bottom. Once I have created all of the desired order lines I click Next to proceed:

I choose a payment option, verify my billing address, and click Next:

Then I review my order and click Submit to place it:

After I pay my invoice, I wait for my devices to arrive.

Connection & Setup
When my device arrives, I connect it to my network and my camera, and plug in the power supply. I wait a minute or so while the device powers up and connects to the network, AWS, and to my camera. When it is all set, the front panel looks like this:

Next, I open the AWS Elemental MediaLive Console and click Devices:

Now that everything is connected, I can create a MediaLive input (Studio1), selecting Elemental Link as the source and choosing one of the listed input devices:

And that’s the setup and connection process. From here I would create a channel that references the input and then set up an output group to stream, archive, broadcast, or package the video stream. We’re building a CloudFormation-powered solution that will take care of all of this for you; stay tuned for details.
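If you’d rather script the input creation step, it looks roughly like this with boto3; the input device ID is a placeholder copied from the Devices list:

import boto3

medialive = boto3.client('medialive', region_name='us-west-2')

# Create a MediaLive input backed by the Elemental Link device (placeholder device ID)
response = medialive.create_input(
    Name='Studio1',
    Type='INPUT_DEVICE',
    InputDevices=[{'Id': 'hd-0123456789abcdef'}])
print(response['Input']['Id'])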

You can order your AWS Elemental Link today and start delivering video to the cloud in minutes!

Jeff;

 

Join the FORMULA 1 DeepRacer ProAm Special Event

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/join-the-formula-1-deepracer-proam-special-event/

The AWS DeepRacer League gives you the opportunity to race for prizes and glory, while also having fun & learning about reinforcement learning. You can use the AWS DeepRacer 3D racing simulator to build, train, and evaluate your models. You can review the results and improve your models in order to ensure that they are race-ready.

Winning a FORMULA 1 (F1) race requires a technologically sophisticated car, a top-notch driver, an outstanding crew, and (believe it or not) a healthy dose of machine learning. For the past couple of seasons AWS has been working with the Formula 1 team to find ways to use machine learning to make cars that are faster and more fuel-efficient than ever before (read The Fastest Cars Deserve the Fastest Cloud and Formula 1 Works with AWS to Develop Next Generation Race Car to learn more).

Special Event
Each month the AWS DeepRacer League runs a new Virtual Race in the AWS DeepRacer console and this month is a special one: the Formula 1 DeepRacer ProAm Special Event. During the month of May you can compete for the opportunity to race against models built and tuned by Formula 1 drivers and their crews. Here’s the lineup:

Rob Smedley – Director of Data Systems for F1 and AWS Technical Ambassador.

Daniel Ricciardo – F1 driver for Renault, with 7 Grand Prix wins and 29 podium appearances.

Tatiana Calderon – Test driver for the Alfa Romeo F1 team and 2019 F2 driver.

Each pro will be partnered with a member of the AWS Pit Crew tasked with teaching them new skills and taking them on a learning journey. Here’s the week-by-week plan for the pros:

Week 1 – Learn the basics of reinforcement learning and submit models using a standard, single-camera vehicle configuration.

Week 2 – Add stereo cameras to vehicles and learn how to configure reward functions to dodge debris on the track.

Week 3 – Add LIDAR to vehicles and use the rest of the month to prepare for the head-to-head qualifier.

At the end of the month the top AWS DeepRacer amateurs will face off against the professionals in an exciting head-to-head elimination race, scheduled for the week of June 1.

The teams will be documenting their learning journey and you’ll be able to follow along as they apply real-life racing strategies and data science to the world of autonomous racing.

Bottom line: You have the opportunity to build & train a model, and then race it against one from Rob, Daniel, or Tatiana. How cool is that?

Start Your Engines
And now it is your turn. Read Get Started with AWS DeepRacer, build your model, join the Formula 1 DeepRacer ProAm Special Event, train it on the Circuit de Barcelona-Catalunya track, and don’t give up until you are at the top of the chart.

Training and evaluation using the DeepRacer Console are available at no charge for the duration of the event (Terms and Conditions apply), making this a great opportunity for you to have fun while learning a useful new skill.

Good luck, and see you at the finish line!

Jeff;

 

New – Use CloudWatch Synthetics to Monitor Sites, API Endpoints, Web Workflows, and More

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-use-cloudwatch-synthetics-to-monitor-sites-api-endpoints-web-workflows-and-more/

Today’s applications encompass hundreds or thousands of moving parts including containers, microservices, legacy internal services, and third-party services. In addition to monitoring the health and performance of each part, you need to make sure that the parts come together to deliver an acceptable customer experience.

CloudWatch Synthetics (announced at AWS re:Invent 2019) allows you to monitor your sites, API endpoints, web workflows, and more. You get an outside-in view with better visibility into performance and availability so that you can become aware of and then fix any issues quicker than ever before. You can increase customer satisfaction and be more confident that your application is meeting your performance goals.

You can start using CloudWatch Synthetics in minutes. You simply create canaries that monitor individual web pages, multi-page web workflows such as wizards and checkouts, and API endpoints, with metrics stored in Amazon CloudWatch and other data (screenshots and HTML pages) stored in an S3 bucket. As you create your canaries, you can set CloudWatch alarms so that you are notified when thresholds based on performance, behavior, or site integrity are crossed. You can view screenshots, HAR (HTTP archive) files, and logs to learn more about the failure, with the goal of fixing it as quickly as possible.

CloudWatch Synthetics in Action
Canaries were once used to provide an early warning that deadly gases were present in a coal mine. The canaries provided by CloudWatch Synthetics provide a similar early warning, and are considerably more humane. I open the CloudWatch Console and click Canaries to get started:

I can see the overall status of my canaries at a glance:

I created several canaries last month in preparation for this blog post. I chose a couple of sites including the CNN home page, my personal blog, the Amazon Movers and Shakers page, and the Amazon Best Sellers page. I did not know which sites would return the most interesting results, and I certainly don’t mean to pick on any one of them. I do think that it is important to show you how this (and every) feature performs with real data, so here we go!

I can turn my attention to the Canary runs section, and look at individual data points. Each data point is an aggregation of runs for a single canary:

I can click on the amzn_movers_shakers canary to learn more:

I can see that there was a single TimeoutError issue in the last 24 hours. I can see the screenshots that were captured as part of each run, along with the HAR files and logs. Each HAR file contains a detailed log of the HTTP requests that were made when the canary was run, along with the responses and the amount of time that it took for the request to complete:

Each canary run is implemented using a Lambda function. I can access the function’s execution metrics in the Metrics tab:

And I can see the canary script and other details in the Configuration tab:

Hatching a Canary
Now that you have seen a canary in action, let me show you how to create one. I return to the list of canaries and click Create canary. I can use one of four blueprints to create my canary, or I can upload or import an existing one:

All of these methods ultimately result in a script that is run either once or periodically. The canaries that I showed above were all built from the Heartbeat monitoring blueprint, like this:

I can also create canaries for API endpoints, using either GET or PUT methods, any desired HTTP headers, and some request data:

Another blueprint lets me create a canary that checks a web page for broken links (I’ll use this post):

Finally, the GUI workflow builder lets me create a sophisticated canary that can include simulated clicks, content verification via CSS selector or text, text entry, and navigation to other URLs:

As you can see from these examples, the canary scripts use the syn-1.0 runtime. This runtime supports Node.js scripts that can use the Puppeteer and Chromium packages. Scripts can make use of a set of library functions and can (with the right IAM permissions) access other AWS services and resources. Here’s an example script that calls AWS Secrets Manager:

var synthetics = require('Synthetics');
const log = require('SyntheticsLogger');

const AWS = require('aws-sdk');
const secretsManager = new AWS.SecretsManager();

const getSecrets = async (secretName) => {
    var params = {
        SecretId: secretName
    };
    return await secretsManager.getSecretValue(params).promise();
}

const secretsExample = async function () {
    // Fetch secrets
    var secrets = await getSecrets("secretname")
    
    // Use secrets
    log.info("SECRETS: " + JSON.stringify(secrets));
};

exports.handler = async () => {
    return await secretsExample();
};

Scripts signal success by running to completion, and errors by raising an exception.

After I create my script, I establish a schedule and a pair of data retention periods. I also choose an S3 bucket that will store the artifacts created each time a canary is run:

I can also control the IAM role, set CloudWatch Alarms, and configure access to endpoints that are in a VPC:

Watch the demo video to see CloudWatch Synthetics in action:

Things to Know
Here are a couple of things to know about CloudWatch Synthetics:

Observability – You can use CloudWatch Synthetics in conjunction with ServiceLens and AWS X-Ray to map issues back to the proper part of your application. To learn more about how to do this, read Debugging with Amazon CloudWatch Synthetics and AWS X-Ray and Using ServiceLens to Monitor the Health of Your Applications.

Automation – You can create canaries using the Console, CLI, APIs, and from CloudFormation templates; there’s a quick API sketch after this list.

Pricing – As part of the AWS Free Tier you get 100 canary runs per month at no charge. After that, you pay per run, with prices starting at $0.0012 per run, plus the usual charges for S3 storage and Lambda invocations.
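For example, a single canary on a rate(5 minutes) schedule makes roughly 8,640 runs per month; after the 100 free runs, that works out to about 8,540 × $0.0012 ≈ $10.25, plus the S3 and Lambda charges.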

Limits – You can create up to 100 canaries per account in the US East (N. Virginia), Europe (Ireland), US West (Oregon), US East (Ohio), and Asia Pacific (Tokyo) Regions, and up to 20 per account in other regions where CloudWatch Synthetics are available.
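Here's a minimal boto3 sketch of creating a canary through the API, assuming the script has already been zipped and uploaded to S3; the bucket, key, handler, and role names are placeholders:

import boto3

synthetics = boto3.client('synthetics', region_name='us-east-1')

# The zip file contains a Node.js script that uses the syn-1.0 runtime
synthetics.create_canary(
    Name='jb-heartbeat',
    RuntimeVersion='syn-1.0',
    Code={'S3Bucket': 'my-canary-code',                # placeholder bucket
          'S3Key': 'heartbeat.zip',                    # placeholder key
          'Handler': 'heartbeat.handler'},             # placeholder handler
    ArtifactS3Location='s3://my-canary-artifacts/',    # screenshots, HAR files, and logs land here
    ExecutionRoleArn='arn:aws:iam::123456789012:role/CanaryRole',
    Schedule={'Expression': 'rate(5 minutes)'},
    SuccessRetentionPeriodInDays=31,
    FailureRetentionPeriodInDays=31)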

Available Now
CloudWatch Synthetics are available now and you can start using them today!

Jeff;

AWS Chatbot – ChatOps for Slack and Chime

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-chatbot-chatops-for-slack-and-chime/

Last year, my colleague Ilya Bezdelev wrote Introducing AWS Chatbot: ChatOps for AWS to launch the public beta of AWS Chatbot. He also participated in the re:Invent 2019 Launchpad and did an in-depth AWS Chatbot demo:

In his initial post, Ilya showed you how you can practice ChatOps within Amazon Chime or Slack, receiving AWS notifications and executing commands in an environment that is intrinsically collaborative. In a later post, Running AWS commands from Slack using AWS Chatbot, Ilya showed how to configure AWS Chatbot in a Slack channel, display CloudWatch alarms, describe AWS resources, invoke a Lambda function and retrieve the logs, and create an AWS Support case. My colleagues Erin Carlson and Matt Cowsert wrote about AWS Budgets Integration with Chatbot and walked through the process of setting up AWS Budget alerts and arranging for notifications from within AWS Chatbot. Finally, Anushri Anwekar showed how to Receive AWS Developer Tools Notifications over Slack using AWS Chatbot.

As you can see from the posts that I referred to above, AWS Chatbot is a unique and powerful communication tool that has the potential to change the way that you monitor and maintain your cloud environments.

Now Generally Available
I am happy to announce that AWS Chatbot has graduated from beta to general availability, and that you can use it to practice ChatOps across multiple AWS regions. We are launching with support for Amazon CloudWatch, the AWS Code* services, AWS Health, AWS Budgets, Amazon GuardDuty, and AWS CloudFormation.

You can connect it to your Amazon Chime chatrooms and your Slack channels in minutes. Simply open the AWS Chatbot Console, choose your Chat client, and click Configure client to get started:

As part of the configuration process you will have the opportunity to choose an existing IAM role or to create a new one from one or more templates. The role provides AWS Chatbot with access to CloudWatch metrics, and the power to run commands, invoke Lambda functions, respond to notification actions, and generate support cases:

AWS Chatbot listens on Amazon Simple Notification Service (SNS) topics to learn about events and alarm notifications in each region of interest:

You can set up CloudWatch Alarms in any region where you select a topic and use them to send notifications to AWS Chatbot.
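For example, any CloudWatch alarm whose action targets one of those SNS topics will surface in chat. Here's a hedged boto3 sketch, with a placeholder instance ID and topic ARN:

import boto3

cloudwatch = boto3.client('cloudwatch', region_name='us-east-1')

# The alarm action points at the SNS topic that AWS Chatbot is subscribed to
cloudwatch.put_metric_alarm(
    AlarmName='HighCPU',
    Namespace='AWS/EC2',
    MetricName='CPUUtilization',
    Dimensions=[{'Name': 'InstanceId', 'Value': 'i-0123456789abcdef0'}],
    Statistic='Average',
    Period=300,
    EvaluationPeriods=2,
    Threshold=80.0,
    ComparisonOperator='GreaterThanThreshold',
    AlarmActions=['arn:aws:sns:us-east-1:123456789012:chatbot-alerts'])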

Special Offer from Slack
Our friends at Slack have put together a special offer to help you and your team connect and stay productive through new and shifting circumstances:

If you upgrade from the Free Plan to a Standard or Plus Plan you will receive a 25% discount for the first 12 months from your upgrade date.

Available Now
You can start using AWS Chatbot today at no additional charge. You pay for the underlying services (CloudWatch, SNS, and so forth) as if you were using them without AWS Chatbot, and you also pay any charges associated with the use of your chat client.

Jeff;

 

Capacity-Optimized Spot Instance Allocation in Action at Mobileye and Skyscanner

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/capacity-optimized-spot-instance-allocation-in-action-at-mobileye-and-skyscanner/

Amazon EC2 Spot Instances were launched way back in 2009. The instances are spare EC2 compute capacity that is available at savings of up to 90% when compared to On-Demand prices. Spot Instances can be interrupted by EC2 (with a two-minute heads-up), but are otherwise the same as On-Demand instances. You can use Amazon EC2 Auto Scaling to seamlessly scale Spot Instances, On-Demand instances, and instances that are part of a Savings Plan, all within a single EC2 Auto Scaling Group.

Over the years we have added many powerful Spot features including a Simplified Pricing Model, the Capacity-Optimized scaling strategy (more on that in a sec), integration with the EC2 RunInstances API, and much more.

EC2 Auto Scaling lets you use two different allocation strategies for Spot Instances:

lowest-price – Allocates instances from the Spot Instance pools that have the lowest price at the time of fulfillment. Spot pricing changes slowly over time based on long-term trends in supply and demand, but capacity fluctuates in real time. As the lowest-price strategy does not account for pool capacity depth as it deploys Spot Instances, this allocation strategy is a good fit for fault-tolerant workloads with a low cost of interruption.

capacity-optimized – Allocates instances from the Spot Instance pools with the optimal capacity for the number of instances that are launching, making use of real-time capacity data. This allocation strategy is appropriate for workloads that have a higher cost of interruption. It thrives on flexibility, empowered by the instance families, sizes, and generations that you choose.

Today I want to show you how you can use the capacity-optimized allocation strategy and to share a pair of customer stories with you.

Using Capacity-Optimized Allocation
First, I switch to the new Auto Scaling console by clicking Go to the new console:

The new console includes a nice diagram to explain how Auto Scaling works. I click Create Auto Scaling group to proceed:

I name my Auto Scaling group and choose a launch template as usual, then click Next:

If you are not familiar with launch templates, read Recent EC2 Goodies – Launch Templates and Spread Placement, to learn all about them.

Because my launch template does not specify an instance type, Combine purchase options and instance types is pre-selected and cannot be changed. I ensure that the Capacity-optimized allocation strategy is also selected, and set the desired balance of On-Demand and Spot Instances:

Then I choose a primary instance type, and the console recommends others. I can choose Family and generation (m3, m4, m5 for example) flexibility or just size flexibility (large, xlarge, 12xlarge, and so forth) within the generation of the primary instance type. As I noted earlier, this strategy thrives on flexibility, so choosing as many relevant instances as possible is to my benefit.

I can also specify a weight for each of the instance types that I decide to use (this is especially useful when I am making use of size flexibility):

I also (not shown) select my VPC and the desired subnets within it, click Next, and proceed as usual. Flexibility with regard to subnets/Availability Zones is also to my benefit; for more information, read Auto Scaling Groups with Multiple Instance Types and Purchase Options.
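If you would rather do all of this from code, here's a minimal boto3 sketch of an equivalent Auto Scaling group; the launch template name, instance types, subnet IDs, and sizing are placeholders rather than recommendations:

import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="spot-capacity-optimized",
    MinSize=2,
    MaxSize=20,
    VPCZoneIdentifier="subnet-aaaa1111,subnet-bbbb2222,subnet-cccc3333",
    MixedInstancesPolicy={
        "LaunchTemplate": {
            "LaunchTemplateSpecification": {
                "LaunchTemplateName": "my-launch-template",
                "Version": "$Latest",
            },
            # The more relevant instance types you list here, the more pools
            # the capacity-optimized strategy has to choose from.
            "Overrides": [
                {"InstanceType": "m5.xlarge"},
                {"InstanceType": "m5.2xlarge"},
                {"InstanceType": "m4.xlarge"},
                {"InstanceType": "m4.2xlarge"},
            ],
        },
        "InstancesDistribution": {
            "OnDemandPercentageAboveBaseCapacity": 20,  # 20% On-Demand, 80% Spot
            "SpotAllocationStrategy": "capacity-optimized",
        },
    },
)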

And with that, let’s take a look at how AWS customers Skyscanner and Mobileye are making use of this feature!

Capacity-Optimized Allocation at Skyscanner
Skyscanner is an online travel booking site. They run the front-end processing for their site on Spot Instances, making good use of up to 40,000 cores per day. Skyscanner’s online platform runs on Kubernetes clusters powered entirely by Spot Instances (watch this video to learn more). Capacity-optimized allocation has delivered many benefits including:

Faster Time to Market – The ability to access more compute power at a lower cost has allowed them to reduce the time to launch a new service from 6-7 weeks using traditional infrastructure to just 50 minutes on the AWS Cloud.

Cost Savings – Diversifying Spot Instances across Availability Zones and instance types has resulted in an overall savings of 70% per core.

Reduced Interruptions – A test that Skyscanner ran in preparation for Black Friday showed that their old configuration (lowest-price) had between 200 and 300 Spot interruptions and the new one (capacity-optimized) had between 10 and 15.

Capacity-Optimized Allocation at Mobileye
Mobileye (an Intel company) develops vision-based technology for self-driving vehicles and advanced driver assistance systems. Spot Instances are used to run their analytics, machine learning, simulation, and AWS Batch workloads packaged in Docker containers. They typically use between 200K and 300K concurrent cores, with peak daily usage of around 500K, all on Spot. Here’s an instance count graph over the course of a day:

After switching to capacity-optimized allocation and making some changes in accordance with our Spot Instance best practices, they reduced the overall interruption rate by about 75%. These changes allowed them to save money on their compute costs while increasing application uptime and reducing their time-to-insight.

To learn more about how Mobileye uses Spot Instances, watch their re:Invent presentation, Navigating the Winding Road Toward Driverless Mobility.

Jeff;

 

AWS Snowball Edge Update – Faster Hardware, OpsHub GUI, IAM, and AWS Systems Manager

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-snowball-edge-update/

Over the last couple of years I’ve told you about several members of the “Snow” family of edge computing and data transfer devices – the original Snowball, the more powerful Snowball Edge, and the exabyte-scale Snowmobile.

Today I would like to tell you about the latest updates to Snowball Edge. Here’s what I have for you today:

Snowball Edge Update – New storage optimized devices that are 25% faster, with more memory, more vCPUs, and support for 100 Gigabit networking.

AWS OpsHub for Snow Family – A new GUI-based tool to simplify the management of Snowball Edge devices.

IAM for Snowball Edge – AWS Identity and Access Management (IAM) can now be used to manage access to services and resources on Snowball Edge devices.

Snowball Edge Support for AWS Systems Manager – Support for task automation to simplify common maintenance and deployment tasks on instances and other resources on Snowball Edge devices.

Let’s take a closer look at each one…

Snowball Edge Storage Optimized Update
We’ve refreshed the hardware, more than doubling the processing power and boosting data transfer speed by up to 25%, all at the same price as the older devices.

The newest Snowball Edge Storage Optimized devices feature 40 vCPUs and 80 GB of memory, up from 24 and 48, respectively. The processor now runs at 3.2 GHz, allowing you to launch more powerful EC2 instances that can handle your preprocessing and analytics workloads even better than before. In addition to the 80 TB of storage for data processing and data transfer workloads, there’s now 1 TB of SATA SSD storage that is accessible to the EC2 instances that you launch on the device. The improved data transfer speed that I mentioned earlier is made possible by a new 100 Gigabit QSFP+ network adapter.

Here are the instances that are available on the new hardware (you will need to rebuild any existing AMIs in order to use them):

Instance Name    Memory (GB)    vCPUs
sbe-c.small      2              1
sbe-c.medium     4              1
sbe-c.large      8              2
sbe-c.xlarge     16             4
sbe-c.2xlarge    32             8
sbe-c.4xlarge    64             16

You can cluster up to twelve Storage Optimized devices together in order to create a single S3-compatible bucket that can store nearly 1 petabyte of data. You can also run Lambda functions on this and on other Snowball Edge devices.

To learn more and to order a Snowball Edge (or an entire cluster), visit the AWS Snowball Console.

AWS OpsHub for Snow Family
This is a new graphical user interface that you can use to manage Snowball Edge devices. You can unlock and configure devices, use drag-and-drop operations to copy data, launch applications (EC2 AMIs), monitor device metrics, and automate routine operations.

Once you have downloaded and installed it on your Windows or Mac computer, you can use AWS OpsHub even if you don’t have a connection to the Internet. This makes it ideal for use in some of the mobile and disconnected modes that I mentioned earlier, and also makes it a great fit for high-security environments.

AWS OpsHub is available at no charge wherever Snowball Edge is available.

To learn more and to get started with AWS OpsHub, visit the Snowball Resources Page.

IAM for Snowball Edge
You can now use user-based IAM policies to control access to services and resources running on Snowball Edge devices. If you have multiple users with access to the same device, you can use IAM policies to ensure that each user has the appropriate permissions.

If you have applications that make calls to IAM, S3, EC2, or STS (newly available on Snowball Edge) API functions on a device, you should make sure that you specify the “snow” region in your calls. This is optional now, but will become mandatory for devices ordered after November 2, 2020.
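For example, a boto3 client that talks to the S3-compatible endpoint on a device might be created like this (a sketch only; the endpoint URL, port, and credentials are placeholders for the values that you get when you unlock the device):

import boto3

s3 = boto3.client(
    "s3",
    region_name="snow",                       # the "snow" region mentioned above
    endpoint_url="https://192.0.2.10:8443",   # placeholder device endpoint
    aws_access_key_id="LOCAL_ACCESS_KEY",     # placeholder local credentials
    aws_secret_access_key="LOCAL_SECRET_KEY",
)
print(s3.list_buckets())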

IAM support is available for devices ordered on or after April 16, 2020.

To learn more, read Using Local IAM.

Snowball Edge Support for AWS Systems Manager
AWS Systems Manager gives you the power to automate common maintenance and deployment tasks in order to make you and your teams more efficient.

You can now write scripts in Python or PowerShell and execute them in AWS OpsHub. The scripts can include any of the operations supported on the device. For example, here’s a simple script that restarts an EC2 instance:
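A minimal Python sketch of such a script might look like the following; the endpoint URL, credentials, and instance ID are placeholders for the values from your own device:

import boto3

ec2 = boto3.client(
    "ec2",
    region_name="snow",
    endpoint_url="https://192.0.2.10:8008",   # placeholder device endpoint
    aws_access_key_id="LOCAL_ACCESS_KEY",     # placeholder local credentials
    aws_secret_access_key="LOCAL_SECRET_KEY",
)

instance_id = "s.i-0123456789abcdef0"         # placeholder instance ID
ec2.reboot_instances(InstanceIds=[instance_id])
print(f"Requested a reboot of {instance_id}")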

To learn more, read about Automating Tasks.

Jeff;

AWS Data Transfer Out (DTO) 40% Price Reduction in South America (São Paulo) Region

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-data-transfer-out-dto-40-price-reduction-in-south-america-sao-paulo-region/

I have good news for AWS customers using our South America (São Paulo) Region. Effective April 1, 2020, we are reducing prices for Data Transfer Out to the Internet (DTO) from the South America (São Paulo) Region by 40%. Data Transfer In remains free.

Here are the new prices for DTO from EC2, S3, and many other AWS services to the Internet:

Monthly Usage Tier    Previous AWS Rate ($/GB)    Price Adjustment    New AWS Rate ($/GB)
Less than 10 TB       0.250                       -40%                0.150
Less than 50 TB       0.230                       -40%                0.138
Less than 150 TB      0.210                       -40%                0.126
More than 150 TB      0.190                       -40%                0.114
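To put the new rates in perspective, here is a rough back-of-the-envelope sketch (assuming the usual graduated tiers, where each rate applies only to the usage that falls within that tier) for a workload that transfers 100 TB out per month:

GB_PER_TB = 1024

tiers = [                  # (tier size in TB, new $/GB rate)
    (10, 0.150),           # first 10 TB
    (40, 0.138),           # next 40 TB (up to 50 TB)
    (100, 0.126),          # next 100 TB (up to 150 TB)
    (float("inf"), 0.114), # everything above 150 TB
]

usage_tb = 100
remaining, cost = usage_tb, 0.0
for size_tb, rate in tiers:
    in_tier = min(remaining, size_tb)
    cost += in_tier * GB_PER_TB * rate
    remaining -= in_tier
    if remaining <= 0:
        break

print(f"~${cost:,.2f} per month for {usage_tb} TB of DTO")  # roughly $13,640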

At AWS, we focus on driving down our costs over time. As we do this, we pass the savings along to our customers. This is our 81st price reduction since 2006.

If you want to get started with AWS, the AWS Free Tier includes 15 GB/month of global data transfer out and lets you explore more than 60 AWS services.

Jeff;

 

New – Low-Cost HDD Storage Option for Amazon FSx for Windows File Server

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-low-cost-hdd-storage-option-for-amazon-fsx-for-windows-file-server/

You can use Amazon FSx for Windows File Server to create file systems that can be accessed from a wide variety of sources and that use your existing Active Directory environment to authenticate users. Last year we added a ton of features including Self-Managed Directories, Native Multi-AZ File Systems, Support for SQL Server, Fine-Grained File Restoration, On-Premises Access, a Remote Management CLI, Data Deduplication, Programmatic File Share Configuration, Enforcement of In-Transit Encryption, and Storage Quotas.

New HDD Option
Today we are adding a new HDD (Hard Disk Drive) storage option to Amazon FSx for Windows File Server. While the existing SSD (Solid State Drive) storage option is designed for the highest-performance, latency-sensitive workloads like databases, media processing, and analytics, HDD storage is designed for a broad spectrum of workloads including home directories, departmental shares, and content management systems.

Single-AZ HDD storage is priced at $0.013 per GB-month and Multi-AZ HDD storage is priced at $0.025 per GB-month (this makes Amazon FSx for Windows File Server the lowest cost file storage for Windows applications and workloads in the cloud). Even better, if you use this option in conjunction with Data Deduplication and use 50% space savings as a reasonable reference point, you can achieve an effective cost of $0.0065 per GB-month for a single-AZ file system and $0.0125 per GB-month for a multi-AZ file system.

You can choose the HDD option when you create a new file system:

If you have existing SSD-based file systems, you can create new HDD-based file systems and then use AWS DataSync or robocopy to move the files. Backups taken from newly created SSD or HDD file systems can be restored to either type of storage, and with any desired level of throughput capacity.

Performance and Caching
The HDD storage option is designed to deliver 12 MB/second of throughput per TiB of storage, with the ability to handle bursts of up to 80 MB/second per TiB of storage. When you create your file system, you also specify the throughput capacity:

The amount of throughput that you provision also controls the size of a fast, in-memory cache for your file share; higher levels of throughput come with larger amounts of cache. As a result, Amazon FSx for Windows File Server file systems can be provisioned so as to be able to provide over 3 GB/s of network throughput and hundreds of thousands of network IOPS, even with HDD storage. This will allow you to create cost-effective file systems that are able to handle many different use cases, including those where a modest subset of a large amount of data is accessed frequently. To learn more, read Amazon FSx for Windows File Server Performance.
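As a quick rule of thumb, you can estimate the baseline and burst disk throughput of an HDD file system from its size using the figures above; the sizes in this sketch are just examples:

def hdd_throughput(storage_tib):
    """Estimated HDD disk throughput, per the 12 MB/s (baseline) and
    80 MB/s (burst) per-TiB figures mentioned above."""
    return 12 * storage_tib, 80 * storage_tib   # (baseline MB/s, burst MB/s)

for size_tib in (2, 10, 32):
    baseline, burst = hdd_throughput(size_tib)
    print(f"{size_tib} TiB: ~{baseline} MB/s baseline, bursts up to ~{burst} MB/s")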

Now Available
HDD file systems are available in all regions where Amazon FSx for Windows File Server is available and you can start creating them today.

Jeff;

BuildforCOVID19 Global Online Hackathon

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/buildforcovid19-global-online-hackathon/

The COVID-19 Global Hackathon is an opportunity for builders to create software solutions that drive social impact with the aim of tackling some of the challenges related to the current coronavirus (COVID-19) pandemic.

We’re encouraging YOU – builders around the world – to #BuildforCOVID19 using technologies of your choice across a range of suggested themes and challenge areas, some of which have been sourced through health partners like the World Health Organization. The hackathon welcomes locally and globally focused solutions and is open to all developers.

AWS is partnering with technology companies like Facebook, Giphy, Microsoft, Pinterest, Slack, TikTok, Twitter, and WeChat to support this hackathon. We will be providing technical mentorship and credits for all participants.

Join BuildforCOVID19 and chat with fellow participants and AWS mentors in the COVID19 Global Hackathon Slack channel.

Jeff;

Working From Home? Here’s How AWS Can Help

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/working-from-home-heres-how-aws-can-help/

Just a few weeks and so much has changed. Old ways of living, working, meeting, greeting, and communicating are gone for a while. Friendly handshakes and warm hugs are not healthy or socially acceptable at the moment.

My colleagues and I are aware that many people are dealing with changes in their work, school, and community environments. We’re taking measures to support our customers, communities, and employees to help them to adjust and deal with the situation, and will continue to do more.

Working from Home
With people in many cities and countries now being asked to work or learn from home, we believe that some of our services can help to make the transition from the office or the classroom to the home just a bit easier. Here’s an overview of our solutions:

Amazon WorkSpaces lets you launch virtual Windows and Linux desktops that can be accessed anywhere and from any device. These desktops can be used for remote work, remote training, and more.

Amazon WorkDocs makes it easy for you to collaborate with others, also from anywhere and on any device. You can create, edit, share, and review content, all stored centrally on AWS.

Amazon Chime supports online meetings with up to 100 participants (growing to 250 later this month), including chats and video calls, all from a single application.

Amazon Connect lets you set up a call or contact center in the cloud, with the ability to route incoming calls and messages to tens of thousands of agents. You can use this to provide emergency information or personalized customer service, while the agents are working from home.

Amazon AppStream lets you deliver desktop applications to any computer. You can deliver enterprise, educational, or telemedicine apps at scale, including those that make use of GPUs for computation or 3D rendering.

AWS Client VPN lets you set up secure connections to your AWS and on-premises networks from anywhere. You can give your employees, students, or researchers the ability to “dial in” (as we used to say) to your existing network.

Some of these services have special offers designed to make it easier for you to get started at no charge; others are already available to you under the AWS Free Tier. You can learn more on the home page for each service, and on our new Remote Working & Learning page.

You can sign up for and start using these services without talking to us, but we are here to help if you need more information or need some help in choosing the right service(s) for your needs. Here are some points of contact:

If you are already an AWS customer, your Technical Account Manager (TAM) and Solutions Architect (SA) will be happy to help.

Some Useful Content
I am starting a collection of other AWS-related content that will help you use these services and work from home as efficiently as possible. Here’s what I have so far:

If you create something similar, share it with me and I’ll add it to my list.

Please Stay Tuned
This is, needless to say, a dynamic and unprecedented situation and we are all learning as we go.

I do want you to know that we’re doing our best to help. If there’s something else that you need, please do not hesitate to reach out. Go through your normal AWS channels first, but contact me if you are in a special situation and I’ll do my best!

Jeff;

 

Bottlerocket – Open Source OS for Container Hosting

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/bottlerocket-open-source-os-for-container-hosting/

It is safe to say that our industry has decided that containers are now the chosen way to package and scale applications. Our customers are making great use of Amazon ECS and Amazon Elastic Kubernetes Service, with over 80% of all cloud-based containers running on AWS.

Container-based environments lend themselves to easy scale-out, and customers can run host environments that encompass hundreds or thousands of instances. At this scale, several challenges arise with the host operating system. For example:

Security – Installing extra packages simply to satisfy dependencies can increase the attack surface.

Updates – Traditional package-based update systems and mechanisms are complex and error prone, and can have issues with dependencies.

Overhead – Extra, unnecessary packages consume disk space and compute cycles, and also increase startup time.

Drift – Inconsistent packages and configurations can damage the integrity of a cluster over time.

Introducing Bottlerocket
Today I would like to tell you about Bottlerocket, a new Linux-based open source operating system that we designed and optimized specifically for use as a container host.

Bottlerocket reflects much of what we have learned over the years. It includes only the packages that are needed to make it a great container host, and integrates with existing container orchestrators. It supports Docker images and images that conform to the Open Container Initiative (OCI) image format.

Instead of a package update system, Bottlerocket uses a simple, image-based model that allows for a rapid & complete rollback if necessary. This removes opportunities for conflicts and breakage, and makes it easier for you to apply fleet-wide updates with confidence using orchestrators such as EKS.

In addition to the minimal package set, Bottlerocket uses a file system that is primarily read-only, and that is integrity-checked at boot time via dm-verity. SSH access is discouraged, and is available only as part of a separate admin container that you can enable on an as-needed basis and then use for troubleshooting purposes.

Try it Out
We’re launching a public preview of Bottlerocket today. You can follow the steps in QUICKSTART to set up an EKS cluster, and you can take a look at the GitHub repo. Try it out, report bugs, send pull requests, and let us know what you think!

Jeff;

 

AWS Named as a Leader in Gartner’s Magic Quadrant for Cloud AI Developer Services

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-named-as-a-leader-in-gartners-magic-quadrant-for-cloud-ai-developer-services/

Last week I spoke to executives from a large AWS customer and had an opportunity to share aspects of the Amazon culture with them. I was able to talk to them about our Leadership Principles and our Working Backwards model. They asked, as customers often do, about where we see the industry in the next 5 or 10 years. This is a hard question to answer, because about 90% of our product roadmap is driven by requests from our customers. I honestly don’t know where the future will take us, but I do know that it will help our customers to meet their goals and to deliver on their vision.

Magic Quadrant for Cloud AI Developer Services
It is always good to see that our hard work continues to delight our customers, and it is also good to be recognized by Gartner and other leading analysts. Today I am happy to share that AWS has secured the top-right corner of Gartner’s Magic Quadrant for Cloud AI Developer Services, earning the highest placement for Ability to Execute and the position furthest to the right for Completeness of Vision:

You can read the full report to learn more (registration is required).

Keep the Cat Out
As a simple yet powerful example of the power of the AWS AI & ML services, check out Ben Hamm’s DeepLens-powered cat door:

AWS AI & ML Services
Building on top of the AWS compute, storage, networking, security, database, and analytics services, our lineup of AI and ML offerings is designed to serve newcomers, experts, and everyone in between. Let’s take a look at a few of them:

Amazon SageMaker – Gives developers and data scientists the power to build, train, test, tune, deploy, and manage machine learning models. SageMaker provides a complete set of machine learning components designed to reduce effort, lower costs, and get models into production as quickly as possible:

Amazon Kendra – An accurate and easy-to-use enterprise search service that is powered by machine learning. Kendra makes content from multiple, disparate sources searchable with powerful natural language queries:

Amazon CodeGuru – This service provides automated code reviews and makes recommendations that can improve application performance by identifying the most expensive lines of code. It has been trained on hundreds of thousands of internal Amazon projects and on over 10,000 open source projects on GitHub.

Amazon Textract – This service extracts text and data from scanned documents, going beyond traditional OCR by identifying the contents of fields in forms and information stored in tables. Powered by machine learning, Textract can handle virtually any type of document without the need for manual effort or custom code:

Amazon Personalize – Based on the same technology that is used at Amazon.com, this service provides real-time personalization and recommendations. To learn more, read Amazon Personalize – Real-Time Personalization and Recommendation for Everyone.
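As a quick taste of how these services are used from code, here is a minimal boto3 sketch that sends a scanned document stored in S3 to Amazon Textract; the bucket and object names are placeholders:

import boto3

textract = boto3.client("textract", region_name="us-east-1")

# Detect the lines and words of text in a scanned document stored in S3.
response = textract.detect_document_text(
    Document={"S3Object": {"Bucket": "my-scans-bucket", "Name": "invoice-001.png"}}
)

for block in response["Blocks"]:
    if block["BlockType"] == "LINE":
        print(block["Text"])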

Time to Learn
If you are ready to learn more about AI and ML, check out the AWS Ramp-Up Guide for Machine Learning:

You should also take a look at our Classroom Training in Machine Learning and our library of Digital Training in Machine Learning.

Jeff;

Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.