All posts by Jeff Barr

New – AWS Private 5G – Build Your Own Private Mobile Network

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-aws-private-5g-build-your-own-private-mobile-network/

Back in the mid-1990s, I had a young family and 5 or 6 PCs in the basement. One day my son Stephen and I bought a single box that contained a bunch of 3Com network cards, a hub, some drivers, and some cables, and spent a pleasant weekend setting up our first home LAN.

Introducing AWS Private 5G
Today I would like to introduce you to AWS Private 5G, the modern, corporate version of that very powerful box of hardware and software. This cool new service lets you design and deploy your own private mobile network in a matter of days. It is easy to install, operate, and scale, and does not require any specialized expertise. You can use the network to communicate with the sensors & actuators in your smart factory, or to provide better connectivity for handheld devices, scanners, and tablets for process automation.

The private mobile network makes use of CBRS spectrum. It supports 4G LTE (Long Term Evolution) today, and will support 5G in the future, both of which give you a consistent, predictable level of throughput with ultra low latency. You get long range coverage, indoors and out, and fine-grained access control.

AWS Private 5G runs on AWS-managed infrastructure. It is self-service and API-driven, and can scale with respect to geographic coverage, device count, and overall throughput. It also works nicely with other parts of AWS, and lets you use AWS Identity and Access Management (IAM) to control access to both devices and applications.

Getting Started with AWS Private 5G
To get started, I visit the AWS Private 5G Console and click Create network:

I assign a name to my network (JeffCell) and to my site (JeffSite) and click Create network:

The network and the site are created right away. Now I click Create order:

I fill in the shipping address, agree to the pricing (more on that later), and click Create order:

Then I await delivery, and click Acknowledge order to proceed:

The package includes a radio unit and ten SIM cards. The radio unit requires AC power and wired access to the public Internet, along with basic networking (IPv4 and DHCP).

When the order arrives, I click Acknowledge order and confirm that I have received the desired radio unit and SIMs. Then I engage a Certified Professional Installer (CPI) to set it up. As part of the installation process, the installer will enter the latitude, longitude, and elevation of my site.

Things to Know
Here are some important things to know about AWS Private 5G:

Partners – Planning and deploying a private wireless network can be complex and not every enterprise will have the tools to do this work on their own. In addition, CBRS spectrum in the United States requires Certified Professional Installation (CPI) of radios. To address these needs, we are building an ecosystem of partners that can provide customers with radio planning, installation, CPI certification, and implementation of customer use cases. You can access these partners from the AWS Private 5G Console and work with them through the AWS Marketplace.

Deployment Options – In the demo above, I showed you the cloud-based deployment option, which is designed for testing and evaluation purposes, for time-limited deployments, and for deployments that do not use the network in latency-sensitive ways. With this option, the AWS Private 5G Mobile Core runs within a specific AWS Region. We are also working to enable on-premises hosting of the Mobile Core on a Private 5G compute appliance.

CLI and API Access – I can also use the create-network, create-network-site, and acknowledge-order-receipt commands to set up my AWS Private 5G network from the command line. I still need to use the console to place my equipment order.
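
Here is a minimal sketch of that flow using the AWS SDK for Python (boto3). The client name (privatenetworks), the camelCase parameter names, and the response shape are my assumptions rather than something spelled out in this post, so treat this as illustrative and check the SDK reference:

```python
import boto3

# Assumption: the Private 5G API is exposed as the "privatenetworks" client.
client = boto3.client("privatenetworks", region_name="us-east-1")

# Create the network and the site (names match the console demo above).
network = client.create_network(networkName="JeffCell")
site = client.create_network_site(
    networkArn=network["network"]["networkArn"],  # assumed response shape
    networkSiteName="JeffSite",
)

# The equipment order itself is still placed in the console. Once the
# hardware arrives, acknowledge receipt (placeholder order ARN):
# client.acknowledge_order_receipt(orderArn="arn:aws:...:order/...")
```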

Scaling and Expansion – Each network supports one radio unit that can provide up to 150 Mbps of throughput spread across up to 100 SIMs. We are working to add support for multiple radio units and a greater number of SIM cards per network.

Regions and Locations – We are launching AWS Private 5G in the US East (Ohio), US East (N. Virginia), and US West (Oregon) Regions, and are working to make the service available outside of the United States in the near future.

Pricing – Each radio unit is billed at $10 per hour, with a 60-day minimum.
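
To put the minimum commitment in concrete terms, here's the arithmetic for a single radio unit:

```python
hourly_rate = 10   # USD per radio unit per hour
minimum_days = 60

minimum_charge = hourly_rate * 24 * minimum_days
print(f"Minimum charge per radio unit: ${minimum_charge:,}")  # $14,400
```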

To learn more, read about AWS Private 5G.

Jeff;

AWS Week in Review – August 1, 2022

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-week-in-review-august-1-2022/

AWS re:Inforce returned to Boston last week, kicking off with a keynote from Amazon Chief Security Officer Steve Schmidt and AWS Chief Information Security Officer C.J. Moses:

Be sure to take some time to watch this video and the other leadership sessions, and to use what you learn to take some proactive steps to improve your security posture.

Last Week’s Launches
Here are some launches that caught my eye last week:

AWS Wickr uses 256-bit end-to-end encryption to deliver secure messaging, voice, and video calling, including file sharing and screen sharing, across desktop and mobile devices. Each call, message, and file is encrypted with a new random key and can be decrypted only by the intended recipient. AWS Wickr supports logging to a secure, customer-controlled data store for compliance and auditing, and offers full administrative control over data: permissions, ephemeral messaging options, and security groups. You can now sign up for the preview.

AWS Marketplace Vendor Insights helps AWS Marketplace sellers to make security and compliance data available through AWS Marketplace in the form of a unified, web-based dashboard. Designed to support governance, risk, and compliance teams, the dashboard also provides evidence that is backed by AWS Config and AWS Audit Manager assessments, external audit reports, and self-assessments from software vendors. To learn more, read the What’s New post.

GuardDuty Malware Protection protects Amazon Elastic Block Store (EBS) volumes from malware. As Danilo describes in his blog post, a malware scan is initiated when Amazon GuardDuty detects that a workload running on an EC2 instance or in a container appears to be doing something suspicious. The new malware protection feature creates snapshots of the attached EBS volumes, restores them within a service account, and performs an in-depth scan for malware. The scanner supports many types of file systems and file formats and generates actionable security findings when malware is detected.

Amazon Neptune Global Database lets you build graph applications that run across multiple AWS Regions using a single graph database. You can deploy a primary Neptune cluster in one region and replicate its data to up to five secondary read-only database clusters, with up to 16 read replicas each. Clusters can recover in minutes in the event of an (unlikely) regional outage, with a Recovery Point Objective (RPO) of 1 second and a Recovery Time Objective (RTO) of 1 minute. To learn a lot more and see this new feature in action, read Introducing Amazon Neptune Global Database.

Amazon Detective now Supports Kubernetes Workloads, with the ability to scale to thousands of container deployments and millions of configuration changes per second. It ingests EKS audit logs to capture API activity from users, applications, and the EKS control plane, and correlates user activity with information gleaned from Amazon VPC flow logs. As Channy notes in his blog post, you can enable Amazon Detective and take advantage of a free 30 day trial of the EKS capabilities.

AWS SSO is Now AWS IAM Identity Center in order to better represent the full set of workforce and account management capabilities that are part of IAM. You can create user identities directly in IAM Identity Center, or you can connect your existing Active Directory or standards-based identity provider. To learn more, read this post from the AWS Security Blog.

AWS Config Conformance Packs now provide you with percentage-based scores that will help you track resource compliance within the scope of the resources addressed by the pack. Scores are computed based on the product of the number of resources and the number of rules, and are reported to Amazon CloudWatch so that you can track compliance trends over time. To learn more about how scores are computed, read the What’s New post.
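
Based on that description, here's a rough back-of-the-envelope model of the score. The real computation is documented in the What's New post; this Python snippet is just an illustration of the "resources × rules" denominator:

```python
def conformance_pack_score(compliant_rule_resource_pairs: int,
                           num_rules: int, num_resources: int) -> float:
    """Illustrative only: percent of compliant rule/resource combinations."""
    total_pairs = num_rules * num_resources
    return 100.0 * compliant_rule_resource_pairs / total_pairs

# 4 rules evaluating 25 resources, with 90 compliant combinations -> 90.0%
print(conformance_pack_score(90, 4, 25))
```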

Amazon Macie now lets you perform one-click temporary retrieval of sensitive data that Macie has discovered in an S3 bucket. You can retrieve up to ten examples at a time, and use these findings to accelerate your security investigations. All of the data that is retrieved and displayed in the Macie console is encrypted using customer-managed AWS Key Management Service (AWS KMS) keys. To learn more, read the What’s New post.

AWS Control Tower was updated multiple times last week. CloudTrail Organization Logging creates an org-wide trail in your management account to automatically log the actions of all member accounts in your organization. Control Tower now reduces redundant AWS Config items by limiting recording of global resources to home regions. To take advantage of this change you need to update to the latest landing zone version and then re-register each Organizational Unit, as detailed in the What’s New post. Lastly, Control Tower’s region deny guardrail now includes AWS API endpoints for AWS Chatbot, Amazon S3 Storage Lens, and Amazon S3 Multi Region Access Points. This allows you to limit access to AWS services and operations for accounts enrolled in your AWS Control Tower environment.

For a full list of AWS announcements, be sure to keep an eye on the What’s New at AWS page.

Other AWS News
Here are some other news items and customer stories that you may find interesting:

AWS Open Source News and Updates – My colleague Ricardo Sueiras writes a weekly open source newsletter and highlights new open source projects, tools, and demos from the AWS community. Read installment #122 here.

Growy Case Study – This Netherlands-based company is building fully-automated robot-based vertical farms that grow plants to order. Read the case study to learn how they use AWS IoT and other services to monitor and control light, temperature, CO2, and humidity to maximize yield and quality.

Journey of a Snap on Snapchat – This video shows you how a snap flows end-to-end from your camera, to AWS, to your friends. With over 300 million daily active users, Snap takes advantage of Amazon Elastic Kubernetes Service (EKS), Amazon DynamoDB, Amazon Simple Storage Service (Amazon S3), Amazon CloudFront, and many other AWS services, storing over 400 terabytes of data in DynamoDB and managing over 900 EKS clusters.

Cutting Cardboard Waste – Bin packing is almost certainly a part of every computer science curriculum! In the linked article from the Amazon Science site, you can learn how an Amazon Principal Research Scientist developed PackOpt to figure out the optimal set of boxes to use for shipments from Amazon’s global network of fulfillment centers. This is an NP-hard problem, and the article describes how they built a parallelized solution that explores a multitude of alternative solutions, all running on AWS.

Upcoming Events
Check your calendar and sign up for these online and in-person AWS events:

AWS Global Summits – AWS Global Summits are free events that bring the cloud computing community together to connect, collaborate, and learn about AWS. Registrations are open for the following AWS Summits in August:

IMAGINE 2022 – The IMAGINE 2022 conference will take place on August 3 at the Seattle Convention Center, Washington, USA. It’s a no-cost event that brings together education, state, and local leaders to learn about the latest innovations and best practices in the cloud. You can register here.

That’s all for this week. Check back next Monday for another Week in Review!

Jeff;

This post is part of our Week in Review series. Check back each week for a quick roundup of interesting news and announcements from AWS!

Fortinet FortiCNP – Now Available in AWS Marketplace

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/fortinet-forticnp-now-available-in-aws-marketplace/

When I first started to talk about AWS in front of IT professionals, they would always listen intently and ask great questions. Invariably, a seasoned pro would raise their hand and ask “This all sounds great, but have you thought about security?” Of course we had, and for a while I would describe our principal security features ahead of time instead of waiting for the question.

Today, the field of cloud security is well-developed, as is the practice of SecOps (Security Operations). There are plenty of tools, plenty of best practices, and a heightened level of awareness regarding the importance of both. However, as on-premises workloads continue to migrate to the cloud, SecOps practitioners report that they are concerned about alert fatigue, while having to choose tools that ensure the desired level of workload coverage. According to a recent survey conducted by Fortinet, 78% of the respondents were looking for a single cloud security platform that offers sufficient workload coverage to address all of their needs.

Fortinet FortiCNP
In response to this clear need for a single tool that addresses cloud workloads and cloud storage, Fortinet has launched FortiCNP (Cloud Native Protection). As the name implies, this security product is designed to offer simple & effective protection of cloud resources. It monitors and tracks multiple sources of security issues including configurations, user activity, and VPC Flow Logs. FortiCNP scans cloud storage for content that is sensitive or malicious, and also inspects containers for vulnerabilities and misconfigurations. The findings and alerts generated by all of this monitoring, tracking, and scanning are mapped into actionable insights and compliance reports, all available through a single dashboard.

Now in AWS Marketplace
I am happy to report that FortiCNP is now available in AWS Marketplace and that you can start your subscription today! It connects to multiple AWS security tools including Amazon Inspector, AWS Security Hub, and Amazon GuardDuty, with plans to add support for Amazon Macie, and other Fortinet products such as FortiEDR (Endpoint Detection and Response) and FortiGate-VM (next-generation firewall) later this year.

FortiCNP provides you with features that are designed to address your top risk management, threat management, compliance, and SecOps challenges. Drawing on all of the data sources and tools that I mentioned earlier, it runs hundreds of configuration assessments to identify risks, and then presents the findings in a scored, prioritized fashion.

Getting Started with FortiCNP
After subscribing to FortiCNP in AWS Marketplace, I set up my accounts and enable some services. In the screenshots that follow I will show you the highlights of each step, and link you to the docs for more information:

Enable Security Hub and EventBridge – Following the instructions in AWS Security Hub and EventBridge Configuration, I choose an AWS region to hold my aggregated findings, enable Amazon GuardDuty and Amazon Inspector, and route the findings to AWS Security Hub.
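
If you prefer to script that step, here's a rough boto3 sketch. The region and settings are placeholders, the GuardDuty call assumes no detector exists yet in that region, and the FortiCNP documentation remains the source of truth:

```python
import boto3

region = "us-east-1"  # the region chosen to hold aggregated findings

# Enable GuardDuty and Amazon Inspector; their findings flow to Security Hub
# through the built-in integrations once Security Hub is enabled.
boto3.client("guardduty", region_name=region).create_detector(Enable=True)
boto3.client("inspector2", region_name=region).enable(resourceTypes=["EC2", "ECR"])
boto3.client("securityhub", region_name=region).enable_security_hub()
```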

Add VPC Flow Logs – Again following the instructions (AWS Traffic Configuration), I enable VPC Flow Logs. This allows FortiCNP to access cloud traffic data and present it in the Traffic view.
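
Enabling flow logs can also be scripted. Here's a hedged boto3 sketch that publishes them to an S3 bucket; the VPC ID and bucket are placeholders, and FortiCNP may expect a specific destination, so follow its documentation:

```python
import boto3

ec2 = boto3.client("ec2")
ec2.create_flow_logs(
    ResourceIds=["vpc-0123456789abcdef0"],               # placeholder VPC ID
    ResourceType="VPC",
    TrafficType="ALL",
    LogDestinationType="s3",
    LogDestination="arn:aws:s3:::example-flow-log-bucket",  # bucket must exist
)
```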

Add AWS Accounts – FortiCNP can protect a single AWS account or all of the accounts in an entire Organization, or anywhere in-between. Accounts and Organizations can be added manually, or by using a CloudFormation template that sets up an IAM Role, enables CloudTrail, and takes care of other housekeeping. To learn more, read Amazon Web Services Account OnBoarding. Using the ADMIN page of FortiCNP, I choose to add a single account using a template:

Following the prompts, I run a CloudFormation template and review the resources that it creates:

After a few more clicks, FortiCNP verifies my license and then I am ready to go.

Enable Storage Guardian – I can enable data protection for individual S3 buckets, and initiate a scan (more info at Activate Data Protection on Bucket / Container).

With all of the setup steps complete, I can review and act on the findings. I start by reviewing the dashboard:

Because I just started using the product, the overall risk trend section at the top has just a few days’ worth of history. The Resource Overview shows that my resources are at low risk, with only informational messages. I have no exposed storage with sensitive data, and none with malware (always good to know).

I can click on a resource type to learn more about the findings. Each resource has an associated risk score:

From here I can click on a resource to see which of the findings contribute to the risk score:

I can switch to the Changes tab to see all relevant configuration changes for the resource:

I can also add notes to the resource, and I can send notifications to several messaging and ticketing systems:

Compliance reports are generated automatically on a monthly, quarterly, and yearly basis. I can also generate a one-time compliance report to cover any desired time frame:

Reports are available immediately, and can be downloaded for review:

The policies that are used to generate findings are open and accessible, and can be enabled, disabled, and fine-tuned. For example, here’s the Alert on activity from suspicious locations policy (sorry, all of you who are connecting from Antarctica):

There’s a lot more, but I am just about out of space. Check out the online documentation to learn more.

Available Today
You can subscribe to FortiCNP now and start enjoying the benefits today!

Jeff;

Amazon Prime Day 2022 – AWS for the Win!

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/amazon-prime-day-2022-aws-for-the-win/

As part of my annual tradition to tell you about how AWS makes Prime Day possible, I am happy to be able to share some chart-topping metrics (check out my 2016, 2017, 2019, 2020, and 2021 posts for a look back).

My purchases this year included a first aid kit, some wood brown filament for my 3D printer, and a non-stick frying pan! According to our official news release, Prime members worldwide purchased more than 100,000 items per minute during Prime Day, with best-selling categories including Amazon Devices, Consumer Electronics, and Home.

Powered by AWS
As always, AWS played a critical role in making Prime Day a success. A multitude of two-pizza teams worked together to make sure that every part of our infrastructure was scaled, tested, and ready to serve our customers. Here are a few examples:

Amazon Aurora – On Prime Day, 5,326 database instances running the PostgreSQL-compatible and MySQL-compatible editions of Amazon Aurora processed 288 billion transactions, stored 1,849 terabytes of data, and transferred 749 terabytes of data.

Amazon EC2 – For Prime Day 2022, Amazon increased the total number of normalized instances (an internal measure of compute power) on Amazon Elastic Compute Cloud (Amazon EC2) by 12%. This resulted in an overall server equivalent footprint that was only 7% larger than that of Cyber Monday 2021 due to the increased adoption of AWS Graviton2 processors.

Amazon EBS – For Prime Day, the Amazon team added 152 petabytes of EBS storage. The resulting fleet handled 11.4 trillion requests per day and transferred 532 petabytes of data per day. Interestingly enough, due to increased efficiency of some of the internal Amazon services used to run Prime Day, Amazon actually used about 4% less EBS storage and transferred 13% less data than it did during Prime Day last year. Here’s a graph that shows the increase in data transfer during Prime Day:

Amazon SES – In order to keep Prime Day shoppers aware of the deals and to deliver order confirmations, Amazon Simple Email Service (SES) peaked at 33,000 Prime Day email messages per second.

Amazon SQS – During Prime Day, Amazon Simple Queue Service (SQS) set a new traffic record by processing 70.5 million messages per second at peak:

Amazon DynamoDB – DynamoDB powers multiple high-traffic Amazon properties and systems including Alexa, the Amazon.com sites, and all Amazon fulfillment centers. Over the course of Prime Day, these sources made trillions of calls to the DynamoDB API. DynamoDB maintained high availability while delivering single-digit millisecond responses and peaking at 105.2 million requests per second.

Amazon SageMaker – The Amazon Robotics Pick Time Estimator, which uses Amazon SageMaker to train a machine learning model to predict the amount of time future pick operations will take, processed more than 100 million transactions during Prime Day 2022.

Package Planning – In North America, and on the highest traffic Prime 2022 day, package-planning systems performed 60 million AWS Lambda invocations, processed 17 terabytes of compressed data in Amazon Simple Storage Service (Amazon S3), stored 64 million items across Amazon DynamoDB and Amazon ElastiCache, served 200 million events over Amazon Kinesis, and handled 50 million Amazon Simple Queue Service events.

Prepare to Scale
Every year I reiterate the same message: rigorous preparation is key to the success of Prime Day and our other large-scale events. If you are preparing for a similar chart-topping event of your own, I strongly recommend that you take advantage of AWS Infrastructure Event Management (IEM). As part of an IEM engagement, my colleagues will provide you with architectural and operational guidance that will help you to execute your event with confidence!

Jeff;

How We Sent an AWS Snowcone into Orbit

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/how-we-sent-an-aws-snowcone-into-orbit/

I have been a fan of space travel and the US space program since I was 4 or 5 years old. I remember reading about the Mercury and Gemini programs, and watched with excitement as Lunar Module Eagle landed on the Moon.

Today, with the cost to reach Low Earth Orbit (LEO) seemingly declining with each launch, there are more opportunities than ever before to push the boundaries of what we know, conducting ever-more bold experiments and producing vast amounts of raw data. Making the situation even more interesting, today’s experiments can use more types of sensors, each collecting data with higher resolution and at a greater sampling frequency. Dealing with this vast amount of data is a huge challenge. Bandwidth on NASA’s constellation of Tracking and Data Relay Satellites (TDRS) is limited, and must be shared equitably across a growing number of missions. Latency, while almost negligible from LEO, becomes a consideration when sending data from the Moon, Mars, or from beyond the bounds of the Solar System.

When we start to think about sending hardware into space, another set of challenges comes into play. The hardware must be as light as possible in order to minimize the cost to launch it. It must, however, be durable enough to withstand extreme vibration and G-forces during launch, and able to function in a microgravity environment. Once in orbit, the hardware must be able to safely connect to the host spacecraft’s power, cooling, and network systems.

AWS Snowcone on the International Space Station
As you may have already read, we recently sent an AWS Snowcone SSD to the International Space Station. Since Amazon Prime does not currently make deliveries to the ISS, the Snowcone traveled aboard a Falcon 9 rocket as part of the first Axiom Space Mission (Ax-1). As part of this mission, four private astronauts ran experiments and conducted technology demos that spanned 17 days and 240 trips around Earth.

The Snowcone is designed to run edge computing workloads, all protected by multiple layers of encryption. After data has been collected and processed locally, the device is typically shipped back to AWS so that the processed data can be stored in the cloud and processed further. Alternatively, AWS DataSync can be used to copy data from a Snowcone device back to AWS.

On the hardware side, the Snowcone is small, exceptionally rugged, and lightweight. With 2 CPUs, 4 GB of memory, and 14 TB of SSD storage, it can do a lot of local processing and storage, making it ideal for the Ax-1 mission.

Preparing for the Journey
In preparation for the trip to space, teams from AWS, NASA, and Axiom Space worked together for seven months to test and validate the Snowcone. The validation process included a rigorous safety review, a detailed thermal analysis, and testing to help ensure that the device would survive vibration at launch and in flight to the ISS. The Snowcone was not modified, but it did end up wrapped in Kapton tape for additional electrical and thermal protection.

On the software side, the AWS Machine Learning Solutions Lab worked closely with Axiom to develop a sophisticated machine learning model that would examine photos taken aboard the ISS to help improve the crew experience in future Axiom missions. The photos were taken by on-board Nikon cameras, stored in Network Attached Storage (NAS) on the ISS, and then transferred from the NAS to the Snowcone over the Joint Station LAN. Once the photos were on the Snowcone, the model was able to return results within three seconds.

On Board the ISS
After the Snowcone was up and running, the collective team did some initial testing and encountered a few issues. For example, the photos were stored on the NAS with upper-case extensions but the code on the Snowcone was looking for lower-case. Fortunately, the Earth-based team was able to connect to the device via SSH in order to do some remote diagnosis and reconfiguration.
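
That kind of mismatch is easy to guard against. Here's a tiny, purely hypothetical Python illustration of matching image extensions case-insensitively, not the actual code that ran on the Snowcone:

```python
from pathlib import Path

IMAGE_EXTENSIONS = {".jpg", ".jpeg", ".nef", ".png"}

def find_photos(directory: str) -> list[Path]:
    # Compare lower-cased suffixes so IMG_0001.JPG and img_0001.jpg both match.
    return [p for p in Path(directory).iterdir()
            if p.suffix.lower() in IMAGE_EXTENSIONS]
```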

An updated ML model became available during the mission. The team was able to upload a largish (30 GB) AMI to the Snowcone and put the model into production with ease.

The Snowcone remains aboard the ISS, and is available for other experiments through the end of 2022. If you are a researcher, student, or part of an organization that is interested in performing experiments that involve processing data remotely on the ISS, you can express your interest at [email protected]. One use case should be of particular interest to medical researchers. Many of the medical measurements and experiments conducted on the ISS generate data that must currently be downloaded to Earth for processing and analysis. Taking care of these steps while in orbit could reduce the end-to-end time for downloading and analysis from 20 hours to just 20 minutes, creating the potential for a 60-fold increase in the number of possible experiments.

Success
Net-net, the experiment showed that it is possible to extend cloud computing to the final frontier.

The Earth-based team was able to communicate remotely with the orbiting Snowcone in order to launch, test, and update the model. They then demonstrated that they were able to repeat this process as needed, processing photos from onboard research experiments and making optimal use of the limited bandwidth that is available between the space station and Earth.

All-in-all, this experiment was deemed a success, and we learned a lot as we identified and addressed the challenges that arise when sending AWS hardware into space to serve the needs of our customers.

Check out AWS for Aerospace and Satellite to learn more about how we support our customers and our partners as they explore the final frontier. If you are ready to deploy a Snowcone on a space-based or Earth-bound mission of your own, we are ready, willing, and able to work with you!

Jeff;

Now in Preview – Amazon CodeWhisperer- ML-Powered Coding Companion

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/now-in-preview-amazon-codewhisperer-ml-powered-coding-companion/

As I was getting ready to write this post I spent some time thinking about some of the coding tools that I have used over the course of my career. This includes the line-oriented editor that was an intrinsic part of the BASIC interpreter that I used in junior high school, the IBM keypunch that I used when I started college, various flavors of Emacs, and Visual Studio. The earliest editors were quite utilitarian, and grew in sophistication as CPU power became more plentiful. At first this increasing sophistication took the form of lexical assistance, such as dynamic completion of partially-entered variable and function names. Later editors were able to parse source code, and to offer assistance based on syntax and data types — Visual Studio’s IntelliSense, for example. Each of these features broke new ground at the time, and each one had the same basic goal: to help developers to write better code while reducing routine and repetitive work.

Announcing CodeWhisperer
Today I would like to tell you about Amazon CodeWhisperer. Trained on billions of lines of code and powered by machine learning, CodeWhisperer has the same goal. Whether you are a student, a new developer, or an experienced professional, CodeWhisperer will help you to be more productive.

We are launching in preview form with support for multiple IDEs and languages. To get started, you simply install the proper AWS IDE Toolkit, enable the CodeWhisperer feature, enter your preview access code, and start typing:

CodeWhisperer will continually examine your code and your comments, and present you with syntactically correct recommendations. The recommendations are synthesized based on your coding style and variable names, and are not simply snippets.

CodeWhisperer uses multiple contextual clues to drive recommendations including the cursor location in the source code, code that precedes the cursor, comments, and code in other files in the same project. You can use the recommendations as-is, or you can enhance and customize them as needed. As I mentioned earlier, we trained (and continue to train) CodeWhisperer on billions of lines of code drawn from open source repositories, internal Amazon repositories, API documentation, and forums.

CodeWhisperer in Action
I installed the CodeWhisperer preview in PyCharm and put it through its paces. Here are a few examples to show you what it can do. I want to build a list of prime numbers. I type # See if a number is pr. CodeWhisperer offers to complete this, and I press TAB (the actual key is specific to each IDE) to accept the recommendation:

On the next line, I press Alt-C (again, IDE-specific), and I can choose between a pair of function definitions. I accept the first one, CodeWhisperer recommends the function body, and here’s what I have:

I write a for statement, and CodeWhisperer recommends the entire body of the loop:
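
The screenshots don't carry over to this text version of the post, so here's a hand-written sketch of the kind of code such completions typically produce; it is not the literal CodeWhisperer output:

```python
def is_prime(n: int) -> bool:
    # Trial division up to the square root of n.
    if n < 2:
        return False
    for i in range(2, int(n ** 0.5) + 1):
        if n % i == 0:
            return False
    return True

primes = []
for number in range(2, 100):
    if is_prime(number):
        primes.append(number)
```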

CodeWhisperer can also help me to write code that accesses various AWS services. I start with # create S3 bucket and TAB-complete the rest:
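
Again, the screenshot is omitted here; a typical completion for that comment would look something like the following boto3 call (the bucket name and region are placeholders of my own):

```python
import boto3

s3 = boto3.client("s3", region_name="us-west-2")
s3.create_bucket(
    Bucket="example-codewhisperer-demo-bucket",
    # Outside us-east-1, a LocationConstraint matching the client region is required:
    CreateBucketConfiguration={"LocationConstraint": "us-west-2"},
)
```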

I could show you many more cool examples, but you will learn more by simply joining the preview and taking CodeWhisperer for a spin.

Join the Preview
The preview supports code written in Python, Java, and JavaScript, using VS Code, IntelliJ IDEA, PyCharm, WebStorm, and AWS Cloud9. Support for the AWS Lambda Console is in the works and should be ready very soon.

Join the CodeWhisperer preview and let me know what you think!

Jeff;

AWS Week in Review – June 13, 2022

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-week-in-review-june-13-2022/

This post is part of our Week in Review series. Check back each week for a quick roundup of interesting news and announcements from AWS!

Last Week’s Launches
I made a short trip to Austin, Texas last week in order to visit and learn from some customers. As is always the case, the days when I was traveling were filled with AWS launches; here’s my recap of a few that caught my eye:

R6id Instances – In her first post for the AWS News Blog, Senior Developer Advocate Veliswa Boya wrote about the new R6id instances. These are a variant of our sixth generation of x86-based R6i instances, and feature up to 7.6 TB of NVMe Local Instance Storage. Powered by 3rd Generation Intel Xeon Scalable (Ice Lake) processors, the instances offer higher compute performance, a new larger size (.32xlarge), always-on memory encryption, and double the EBS and network performance of the previous generation. The instances are available now in four AWS Regions.

AWS MGN Post-Launch Actions – AWS Application Migration Service (AWS MGN) helps you to migrate your existing servers to AWS, with automation that can handle a wide variety of applications. We launched a set of optional post-launch actions to provide additional support for your migration and modernization efforts. The initial set of actions installs the AWS Systems Manager agent, installs the AWS Elastic Disaster Recovery Service agent, migrates from CentOS to Rocky Linux, and converts SUSE Linux subscriptions to AWS-provided subscriptions. You can read my blog post to learn more.

Mainframe Modernization Service – This new service helps you to modernize your mainframe applications and to deploy them to AWS fully-managed runtime environments. As Seb notes in his post, Modernize Your Mainframe Applications & Deploy Them In The Cloud, the application modernization journey is composed of four phases: assessing the situation, mobilizing the project, migrating & modernizing, and operating & optimizing. The Mainframe Modernization Service provides assistance during each phase, and you can review each one in the blog post.

Amazon Aurora – We made multiple Amazon Aurora announcements (all for the PostgreSQL-compatible edition) including support for the Large Objects (LO) module, zero-downtime patching (ZDP), support for bug-fix versions 13.7, 12.11, 11.16, and 10.21, and updates to the pglogical and wal2json extensions.

Amazon SageMaker – There were also multiple announcements related to Amazon SageMaker and Amazon SageMaker Data Wrangler including the ability to split data into train and test sets with a few clicks, Data Wrangler support for model training with Amazon SageMaker Autopilot & the power to export features to Amazon SageMaker Feature Store, new interactive product tours & sample data sets for Amazon SageMaker Canvas, provisioning and management of ML models with CloudFormation templates, and Amazon SageMaker Experiments support for common chart types.

For a full list of AWS announcements, be sure to keep an eye on the What’s New at AWS page.

Other AWS News
Here are some other updates that caught my eye last week:

Open Source News – My colleague Ricardo Sueiras published installment #116 of his AWS Open Source News and Updates. He’s got the latest and greatest on tools for Lambda, Lambda@Edge, CDK, schema management, EMR, and a whole lot more!

aws-prod-infrastructure is an open source tool that generates Terraform code from an AWS production account. Terragen does the same thing, with pricing plans for hobbyists (free), professionals, and enterprises.

resoto creates an inventory of your cloud, provides deep visibility, and reacts to changes in your infrastructure.

green-boost from AWS Labs helps you to quickly build full stack serverless web apps on AWS.

Upcoming AWS Events
Here are some events that may be of interest to you:

re:Mars (June 21-24) – I’ll be heading to Las Vegas next week to attend re:Mars; there’s still some time to register and attend!

AWS Summits (June & July) – AWS Summits will take place in person in June (Toronto and Milan) and July (New York), with more cities on the list for later in the year.

And that’s all for this week!

Jeff;

AWS MGN Update – Configure DR, Convert CentOS Linux to Rocky Linux, and Convert SUSE Linux Subscription

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-mgn-update-configure-dr-convert-centos-linux-to-rocky-linux-and-convert-suse-linux-subscription/

Just about a year ago, Channy showed you How to Use the New AWS Application Migration Service for Lift-and-Shift Migrations. In his post, he introduced AWS Application Migration Service (AWS MGN) and said:

With AWS MGN, you can minimize time-intensive, error-prone manual processes by automatically replicating entire servers and converting your source servers from physical, virtual, or cloud infrastructure to run natively on AWS. The service simplifies your migration by enabling you to use the same automated process for a wide range of applications.

Since launch, we have added agentless replication along with support for Windows 10 and multiple versions of Windows Server (2003, 2008, and 2022). We also expanded into additional regions throughout 2021.

New Post-Launch Actions
As the title of Channy’s post stated, AWS MGN initially supported direct, lift-and-shift migrations. In other words, the selected disk volumes on the source servers were directly copied, bit-for-bit to EBS volumes attached to freshly launched Amazon Elastic Compute Cloud (Amazon EC2) instances.

Today we are adding a set of optional post-launch actions that provide additional support for your migration and modernization efforts. The actions are initiated and managed by the AWS Systems Manager agent, which can be automatically installed as the first post-launch action. We are launching with an initial set of four actions, and plan to add more over time:

Install Agent – This action installs the AWS Systems Manager agent, and is a prerequisite to the other actions.

Disaster Recovery – Installs the AWS Elastic Disaster Recovery Service agent on each server and configures replication to a specified target region.

CentOS Conversion – If the source server is running CentOS, the instances can be migrated to Rocky Linux.

SUSE Subscription Conversion – If the source server is running SUSE Linux via a subscription provided by SUSE, the instance is changed to use an AWS-provided SUSE subscription.

Using Post-Launch Actions
My AWS account has a post-launch settings template that serves as a starting point, and provides the default settings each time I add a new source server. I can use the values from the template as-is, or I can customize them as needed. I open the Application Migration Service Console and click Settings to view and edit my template:

I click Post-launch settings template, and review the default values. Then I click Edit to make changes:

As I noted earlier, the Systems Manager agent executes the other post-launch actions, and is a prerequisite, so I enable it:

Next, I choose to run the post-launch actions on both my test and cutover instances, since I want to test against the final migrated configuration:

I can now configure any or all of the post-launch options, starting with disaster recovery. I check Configure disaster recovery on migrated servers and choose a target region:

Next, I check Convert CentOS to Rocky Linux distribution. This action converts a CentOS 8 distribution to a Rocky Linux 8 distribution:

Moving right along, I check Change SUSE Linux Subscription to AWS provided SUSE Linux subscription, and then click Save template:

To learn more about pricing for the SUSE subscriptions, visit the Amazon EC2 On-Demand Pricing page.

After I have set up my template, I can view and edit the settings for each of my source servers. I simply select the server and choose Edit post-launch settings from the Replication menu:

The post-launch actions will be run at the appropriate time on the test or the cutover instances, per my selections. Any errors that arise during the execution of an action are written to the SSM execution log. I can also examine the Migration dashboard for each source server and review the Post-launch actions:

Available Now
The post-launch actions are available now and you can start using them today in all regions where AWS Application Migration Service (AWS MGN) is supported.

Jeff;

AWS Backup Now Supports Amazon FSx for NetApp ONTAP

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-backup-now-supports-amazon-fsx-for-netapp-ontap/

If you are a long-time reader of this blog, you know that I categorize some posts as “chocolate and peanut butter” in homage to an ancient (1970 or so) series of TV commercials for Reese’s Peanut Butter Cups. Today, I am happy to bring you the latest such post, combining AWS Backup and Amazon FSx for NetApp ONTAP. Before I dive into the specifics, let’s review each service:

AWS Backup helps you to automate and centrally manage your backups (read my post, AWS Backup – Automate and Centrally Manage Your Backups, for a detailed look). After you create policy-driven plans, you can monitor the status of on-going backups, verify compliance, and find/restore backups, all from a central console. We launched in 2019 with support for Amazon EBS volumes, Amazon EFS file systems, Amazon RDS databases, Amazon DynamoDB tables, and AWS Storage Gateway volumes. After that, we added support for EC2 instances, Amazon Aurora clusters, Amazon FSx for Lustre and Amazon FSx for Windows File Server file systems, Amazon Neptune databases, VMware workloads, Amazon DocumentDB clusters, and Amazon S3.

Amazon FSx for NetApp ONTAP gives you the features, performance, and APIs of NetApp ONTAP file systems with the agility, scalability, security, and resiliency of AWS (again, read my post, New – Amazon FSx for NetApp ONTAP to learn more). ONTAP is an enterprise data management product that is designed to provide high-performance storage suitable for use with Oracle, SAP, VMware, Microsoft SQL Server, and so forth. Each file system supports multi-protocol access and can scale up to 176 PiB, along with inline data compression, deduplication, compaction, thin provisioning, replication, and point-in-time cloning. We launched with a multi-AZ deployment type, and introduced a single-AZ deployment type earlier this year.

Chocolate and Peanut Butter
AWS Backup now supports Amazon FSx for NetApp ONTAP file systems. All of the existing AWS Backup features apply, and you can add this support to an existing backup plan or you can create a new one.

Suppose I have a couple of ONTAP file systems:

I go to the AWS Backup Console and click Create Backup plan to get started:

I decide to Start with a template, and choose Daily-Monthly-1yr-Retention, then click Create plan:

Next, I examine the Resource assignments section of my plan and click Assign resources:

I create a resource assignment (Jeff-ONTAP-Resources), and select the FSx resource type. I can leave the assignment as-is in order to include all of my Amazon FSx volumes in the assignment, or I can uncheck All file systems, and then choose volumes on the file systems that I showed you earlier:

I review all of my choices, and click Assign resources to proceed. My backups will be performed in accordance with the backup plan.

I can also create an on-demand backup. To do this, I visit the Protected resources page and click Create on-demand backup:

I choose a volume, set a one week retention period for my on-demand backup, and click Create on-demand backup:

The backup job starts within seconds, and is visible on the Backup jobs page:
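
The same on-demand backup can be started programmatically. Here's a hedged boto3 sketch; the vault name, ARNs, and role are placeholders rather than values from this walkthrough:

```python
import boto3

backup = boto3.client("backup")
job = backup.start_backup_job(
    BackupVaultName="Default",
    ResourceArn="arn:aws:fsx:us-east-1:111122223333:volume/fs-EXAMPLE/fsvol-EXAMPLE",
    IamRoleArn="arn:aws:iam::111122223333:role/service-role/AWSBackupDefaultServiceRole",
    Lifecycle={"DeleteAfterDays": 7},   # matches the one-week retention above
)
print(job["BackupJobId"])
```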

After the job completes I can examine the vault and see my backup. Then I can select it and choose Restore from the Actions menu:

To restore the backup, I choose one of the file systems from it, enter a new volume name, and click Restore backup.

Also of Interest
We recently launched two new features for AWS Backup that you may find helpful. Both features can now be used in conjunction with Amazon FSx for ONTAP:

AWS Backup Audit Manager – You can use this feature to monitor and evaluate the compliance status of your backups. This can help you to meet business and regulatory requirements, and lets you generate reports that you can use to demonstrate compliance to auditors and regulators. To learn more, read Monitor, Evaluate, and Demonstrate Backup Compliance with AWS Backup Audit Manager.

AWS Backup Vault Lock – This feature lets you prevent your backups from being accidentally or maliciously deleted, and also enhances protection against ransomware. You can use this feature to make selected backup vaults WORM (write-once-read-many) compliant. Once you have done this, the backups in the vault cannot be modified manually. You can also set minimum and maximum retention periods for each vault. To learn more, read Enhance the security posture of your backups with AWS Backup Vault Lock.
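
To give a concrete flavor of the vault lock feature, here's a hedged boto3 sketch that sets minimum and maximum retention on a vault; the vault name and values are placeholders of my own:

```python
import boto3

backup = boto3.client("backup")
backup.put_backup_vault_lock_configuration(
    BackupVaultName="example-ontap-vault",   # placeholder vault name
    MinRetentionDays=7,
    MaxRetentionDays=365,
    # ChangeableForDays=3,  # after this grace period, the lock becomes immutable
)
```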

Available Now
This new feature is available now and you can start using it today in all regions where AWS Backup and Amazon FSx for NetApp ONTAP are available.

Jeff;

New – Storage-Optimized Amazon EC2 Instances (I4i) Powered by Intel Xeon Scalable (Ice Lake) Processors

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-storage-optimized-amazon-ec2-instances-i4i-powered-by-intel-xeon-scalable-ice-lake-processors/

Over the years we have released multiple generations of storage-optimized Amazon Elastic Compute Cloud (Amazon EC2) instances including the HS1 (2012), D2 (2015), I2 (2013), I3 (2017), I3en (2019), D3/D3en (2020), and Im4gn/Is4gen (2021). These instances are used to host high-performance real-time relational databases, distributed file systems, data warehouses, key-value stores, and more.

New I4i Instances
Today I am happy to introduce the new I4i instances, powered by the latest generation Intel Xeon Scalable (Ice Lake) Processors with an all-core turbo frequency of 3.5 GHz.

The instances offer up to 30 TB of NVMe storage using AWS Nitro SSD devices that are custom-built by AWS, and are designed to minimize latency and maximize transactions per second (TPS) on workloads that need very fast access to medium-sized datasets on local storage. This includes transactional databases such as MySQL, Oracle DB, and Microsoft SQL Server, as well as NoSQL databases: MongoDB, Couchbase, Aerospike, Redis, and the like. They are also an ideal fit for workloads that can benefit from very high compute performance per TB of storage such as data analytics and search engines.

Here are the specs:

| Instance Name | vCPUs | Memory (DDR4) | Local NVMe Storage (AWS Nitro SSD) | Sequential Read Throughput (128 KB Blocks) | EBS-Optimized Bandwidth | Network Bandwidth |
|---------------|-------|---------------|------------------------------------|--------------------------------------------|-------------------------|-------------------|
| i4i.large | 2 | 16 GiB | 468 GB | 350 MB/s | Up to 10 Gbps | Up to 10 Gbps |
| i4i.xlarge | 4 | 32 GiB | 937 GB | 700 MB/s | Up to 10 Gbps | Up to 10 Gbps |
| i4i.2xlarge | 8 | 64 GiB | 1,875 GB | 1,400 MB/s | Up to 10 Gbps | Up to 12 Gbps |
| i4i.4xlarge | 16 | 128 GiB | 3,750 GB | 2,800 MB/s | Up to 10 Gbps | Up to 25 Gbps |
| i4i.8xlarge | 32 | 256 GiB | 7,500 GB (2 x 3,750 GB) | 5,600 MB/s | 10 Gbps | 18.75 Gbps |
| i4i.16xlarge | 64 | 512 GiB | 15,000 GB (4 x 3,750 GB) | 11,200 MB/s | 20 Gbps | 37.5 Gbps |
| i4i.32xlarge | 128 | 1,024 GiB | 30,000 GB (8 x 3,750 GB) | 22,400 MB/s | 40 Gbps | 75 Gbps |

In comparison to the Xen-based I3 instances, the Nitro-powered I4i instances give you:

  • Up to 60% lower storage I/O latency, along with up to 75% lower storage I/O latency variability.
  • A new, larger instance size (i4i.32xlarge).
  • Up to 30% better compute price/performance.

The i4i.16xlarge and i4i.32xlarge instances give you control over C-states, and the i4i.32xlarge instances support non-uniform memory access (NUMA). All of the instances support AVX-512, and use Intel Total Memory Encryption (TME) to deliver always-on memory encryption.

From Our Customers
AWS customers and AWS service teams have been putting these new instances to the test ahead of today’s launch. Here’s what they had to say:

Redis Enterprise powers mission-critical applications for over 8,000 organizations. According to Yiftach Shoolman (Co-Founder and CTO of Redis):

We are thrilled with the performance we are seeing from the Amazon EC2 I4i instances which use the new low latency AWS Nitro SSDs. Our testing shows I4i instances delivering an astonishing 2.9x higher query throughput than the previous generation I3 instances. We have also tested with various read and write mixes, and observed consistent and linearly scaling performance.

ScyllaDB is a high-performance NoSQL database that can take advantage of high-performance cloud computing instances. Avi Kivity (Co-Founder and CTO of ScyllaDB) told us:

When we tested I4i instances, we observed up to 2.7x increase in throughput per vCPU compared to I3 instances for reads. With an even mix of reads and writes, we observed 2.2x higher throughput per vCPU, with a 40% reduction in average latency than I3 instances. We are excited for the incredible performance and value these new instances will enable for our customers going forward.

Amazon QuickSight is a business intelligence service. After testing, Tracy Daugherty (General Manager, Amazon QuickSight) reported that:

I4i instances have demonstrated superior performance over previous generation I instances, with a 30% improvement across operations. We look forward to using I4i to further elevate performance for our customers.

Available Now
You can launch I4i instances today in the AWS US East (N. Virginia), US East (Ohio), US West (Oregon), and Europe (Ireland) Regions (with more to come) in On-Demand and Spot form. Savings Plans and Reserved Instances are available, as are Dedicated Instances and Dedicated Hosts.

In order to take advantage of the performance benefits of these new instances, be sure to use recent AMIs that include current ENA drivers and support for NVMe 1.4.
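
For reference, launching one of these instances from code looks like any other EC2 launch. Here's a minimal boto3 sketch with a placeholder AMI ID; pick a recent AMI with current ENA drivers and NVMe support, as noted above:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder: substitute a recent AMI
    InstanceType="i4i.large",
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])
```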

To learn more, visit the I4i instance home page.

Jeff;

AWS Week in Review – April 25, 2022

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-week-in-review-april-25-2022/

This post is part of our Week in Review series. Check back each week for a quick roundup of interesting news and announcements from AWS!

The first in this year’s series of AWS Summits took place in San Francisco this past week and we had a bunch of great announcements. Let’s take a closer look…

Last Week’s Launches
Here are some launches that caught my eye this week:

AWS Migration Hub Orchestrator – Building on AWS Migration Hub (launched in 2017), this service helps you to reduce migration costs by automating manual tasks, managing dependencies between tools, and providing better visibility into the migration progress. It makes use of workflow templates that you can modify and extend, and includes a set of predefined templates to get you started. We are launching with support for applications based on SAP NetWeaver with HANA databases, along with support for rehosting of applications using AWS Application Migration Service (AWS MGN). To learn more, read Channy’s launch post: AWS Migration Hub Orchestrator – New Migration Orchestration Capability with Customizable Workflow Templates.

Amazon DevOps Guru for Serverless – This is a new capability for Amazon DevOps Guru, our ML-powered cloud operations service which helps you to improve the availability of your application using models informed by years of Amazon.com and AWS operational excellence. This launch helps you to automatically detect operational issues in your Lambda functions and DynamoDB tables, giving you actionable recommendations that help you to identify root causes and fix issues as quickly as possible, often before they affect the performance of your serverless application. Among other insights you will be notified of concurrent executions that reach the account limit, lower than expected use of provisioned concurrency, and reads or writes to DynamoDB tables that approach provisioned limits. To learn more and to see the full list of insights, read Marcia’s launch post: Automatically Detect Operational Issues in Lambda Functions with Amazon DevOps Guru for Serverless.

AWS IoT TwinMaker – Launched in preview at re:Invent 2021 (Introducing AWS IoT TwinMaker), this service helps you to create digital twins of real-world systems and to use them in applications. There’s a flexible model builder that allows you to create workspaces that contain entity models and visual assets, connectors to bring in data from data stores to add context, a console-based 3D scene composition tool, and plugins to help you create Grafana and Amazon Managed Grafana dashboards. To learn more and to see AWS IoT TwinMaker in action, read Channy’s post, AWS IoT TwinMaker is now Generally Available.

AWS Amplify Studio – Also launched in preview at re:Invent 2021 (AWS Amplify Studio: Visually build full-stack web apps fast on AWS), this is a point-and-click visual interface that simplifies the development of frontends and backends for web and mobile applications. During the preview we added integration with Figma to make it easier for designers and front-end developers to collaborate on design and development tasks. As Steve described in his post (Announcing the General Availability of AWS Amplify Studio), you can easily pull component designs from Figma, attach event handlers, and extend the components with your own code. You can modify default properties, override child UI elements, extend collection items with additional data, and create custom business logic for events. On the visual side, you can use Figma’s Theme Editor plugin to match UI components to your organization’s brand and style.

Amazon Aurora Serverless v2 – Amazon Aurora separates compute and storage, and allows them to scale independently. The first version of Amazon Aurora Serverless was launched in 2018 as a cost-effective way to support workloads that are infrequent, intermittent, or unpredictable. As Marcia shared in her post (Amazon Aurora Serverless v2 is Generally Available: Instant Scaling for Demanding Workloads), the new version is ready to run your most demanding workloads, with instant, non-disruptive scaling, fine-grained capacity adjustments, read replicas, Multi-AZ deployments, and Amazon Aurora Global Database. You pay only for the capacity that you consume, and can save up to 90% compared to provisioning for peak load.

Amazon SageMaker Serverless Inference – Amazon SageMaker already makes it easy for you to build, train, test, and deploy your machine learning models. As Antje described in her post (Amazon SageMaker Serverless Inference – Machine Learning Inference without Worrying about Servers), different ML inference use cases pose varied requirements on the infrastructure that is used to host the models. For example, applications that have intermittent traffic patterns can benefit from the ability to automatically provision and scale compute capacity based on the volume of requests. The new serverless inferencing option that Antje wrote about provides this much-desired automatic provisioning and scaling, allowing you to focus on developing your model and your inferencing code without having to manage or worry about infrastructure.

Other AWS News
Here are a few other launches and news items that caught my eye:

AWS Open Source News and Updates – My colleague Ricardo Sueiras writes this weekly open-source newsletter where he highlights new open source projects, tools, and demos from the AWS community. Read edition #109 here.

Amazon Linux AMI – An Amazon Linux 2022 AMI that is optimized for Amazon ECS is now available. Read the What’s New to learn more.

AWS Step Functions – AWS Step Functions now supports over 20 new AWS SDK integrations and over 1000 new AWS API actions. Read the What’s New to learn more.

AWS CloudFormation Registry – There are 35 new resource types in the AWS CloudFormation Registry, including AppRunner, AppStream, Billing Conductor, ECR, EKS, Forecast, Lightsail, MSK, and Personalize. Check out the full list in the What’s New.

Upcoming AWS Events
The AWS Summit season is in full swing – The next AWS Summits are taking place in London (on April 27), Madrid (on May 4-5), Korea (online, on May 10-11), and Stockholm (on May 11). AWS Global Summits are free events that bring the cloud computing community together to connect, collaborate, and learn about AWS. Summits are held in major cities around the world. Besides in-person summits, we also offer a series of online summits across the regions. Find an AWS Summit near you, and get notified when registration opens in your area.

.NET Enterprise Developer Day EMEA – .NET Enterprise Developer Day EMEA 2022 is a free, one-day virtual conference providing enterprise developers with the most relevant information to swiftly and efficiently migrate and modernize their .NET applications and workloads on AWS. It takes place online on April 26. Attendees can also opt-in to attend the free, virtual DeveloperWeek Europe event, taking place April 27-28.

AWS Innovate – Data Edition Americas – AWS Innovate Online Conference – Data Edition is a free virtual event designed to inspire and empower you to make better decisions and innovate faster with your data. You learn about key concepts, business use cases, and best practices from AWS experts in over 30 technical and business sessions. This event takes place on May 11.

That’s all for this week. Check back again next week for another AWS Week in Review!

Jeff;

AWS Partner Network (APN) – 10 Years and Going Strong

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-partner-network-apn-10-years-and-going-strong/

Ten years ago we launched AWS Partner Network (APN) in beta form for our partners and our customers. In his post for the beta launch, my then-colleague Jinesh Varia noted that:

Partners are an integral part of the AWS ecosystem as they enable customers and help them scale their business globally. Some of our greatest wins, particularly with enterprises, have been influenced by our Partners.

A decade later, as our customers work toward digital transformation, their needs are becoming more complex. As part of their transformation they are looking for innovation and differentiating solutions, and they routinely ask us to refer them to partners with the right skills and the specialized capabilities that will help them make the best use of AWS services.

The partners, in turn, are stepping up to the challenge and driving innovation on behalf of their customers in ways that transform multiple industries. This includes migration of workloads, modernization of existing code & architectures, and the development of cloud-native applications.

Thank You, Partners
AWS Partners all around the world are doing amazing work! Integrators like Presidio in the US, NEC in Japan, Versent in Australia, T-Systems International in Germany, and Compasso UOL in Latin America are delivering some exemplary transformations on AWS. On the product side, companies like Megazone Cloud (Asia/Pacific) are partnering with global ISVs such as Databricks, Datadog, and New Relic to help them go to market. Many other ISV Partners are working to reinvent their offerings in order to take advantage of specific AWS services and features. The list of such partners is long, and includes Infor, VTEX, and Iron Mountain, to name a few.

In 2021, AWS and our partners worked together to address hundreds of thousands of customer opportunities. Partners like Snowflake, logz.io, and Confluent have told us that AWS Partner programs such as ISV Accelerate and the AWS Global Startup Program are having a measurable impact on their businesses.

These are just a few examples (we have many more success stories), but the overall trend should be pretty clear — transformation is essential, and AWS Partners are ready, willing, and able to make it happen.

As part of our celebration of this important anniversary, the APN Blog will be sharing a series of success stories that focus on partner-driven customer transformation!

A Decade of Partner-Driven Innovation and Evolution
We launched APN in 2012 with a few hundred partners. Today, AWS customers can choose offerings from more than 100,000 partners in more than 150 countries.

A lot of this growth can be traced back to our first Leadership Principle, Customer Obsession. Most of our services and major features have their origins in customer requests and APN is no different: we build programs that are designed to meet specific, expressed needs of our customers. Today, we continue to seek and listen to partner feedback, use that feedback to innovate and to experiment, and to get it to market as quickly as possible.

Let’s take a quick trip through history and review some of the most interesting APN milestones of the last decade:

In 2012, when APN came to life, we announced the partner type model (Consulting and Technology). With each partner type, partners could qualify for one of three tiers (Select, Advanced, and Premier) and receive benefits based on their tier.

In 2013, AWS Partners told us they wanted to stand out in the industry. To allow partners to differentiate their offerings to customers and show expertise in building solutions, we introduced the first two of what is now a very long list of competencies.

In 2014, we launched the AWS Managed Service Provider Program to help customers find partners who can help with migration to the AWS cloud, along with the AWS SaaS Factory program to support partners looking to build and accelerate delivery of SaaS (Software as a Service) solutions on behalf of their customers. We also launched the APN Blog channel to bring partner success stories with AWS and customers to life. Today, the APN Blog is one of the most popular blogs at AWS.

Next in 2016, customers started to ask us where to go when looking for a partner that can help design, migrate, manage, and optimize their workloads on AWS, or for partner-built tools that can help them achieve their goals. To help them more easily find the right partner and solution for their specific business needs, we launched the AWS Partner Solutions Finder, a new website where customers could search for, discover, and connect with AWS Partners.

In 2017, to allow partners to showcase their earned AWS designations to customers, we introduced the Badge Manager. The dynamic tool allows partners to build customized AWS Partner branded badges to highlight their success with AWS to customers.

In 2018, we launched several new programs and features to help our partners gain AWS expertise and promote their offerings to customers, including the AWS Device Qualification Program, the AWS Well-Architected Partner Program, and several competencies.

In 2019, we launched the AWS Global Startup Program for mid-to-late stage startups seeking support with product development, go-to-market, and co-selling. We also launched the AWS Service Ready Program to help customers find validated partner products that work with AWS services.

In 2020, we launched the AWS ISV Accelerate program to help organizations co-sell, drive new business, and accelerate sales cycles.

In 2021 our partners told us that they needed more (and faster) ways to work with AWS so that they could meet the ever-growing needs of their customers. We launched AWS Partner Paths in order to accelerate partner engagement with AWS.

Partner Paths replace the Technology and Consulting Partner type models, evolving to an offering-type model. We now offer five Partner Paths: the Software Path, Hardware Path, Training Path, Distribution Path, and Services Path (the latter representing consulting, professional, managed, and value-add resale services). This new framework provides a curated journey through partner resources, benefits, and programs.

Looking Ahead
As I mentioned earlier, Customer Obsession is central to everything that we do at AWS. We see partners as our customers, and we continue to obsess over ways to make it easier and more efficient for them to work with us. For example, we continue to focus on partner specialization and have developed a deep understanding of the ways that our customers find it to be of value.

Our goal is to empower partners with tools that make it easy for them to navigate through our selection of enablement resources, benefits, and programs and find those that help them to showcase their customer expertise and to get-to-market with AWS faster than ever. The new AWS Partner Central (login required) and AWS Partner Marketing Central experiences that we launched earlier this year are part of this focus.

To wrap up, I would like to once again thank our partner community for all of their support and feedback. We will continue to listen, learn, innovate, and work together with you to invent the future!

Jeff;

Amazon FSx for NetApp ONTAP Update – New Single-AZ Deployment Type

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/amazon-fsx-for-netapp-ontap-update-new-single-az-deployment-type/

Last year I told you about Amazon FSx for NetApp ONTAP and showed you how you can create a file system in minutes with just a couple of clicks. You can use these high-performance, scalable (up to 176 PiB) file systems to migrate your on-premises applications to the cloud and to build new, cloud-native applications. As I noted in my original post, your file systems can have up to 192 TiB of fast, SSD-based primary storage, and many pebibytes of cost-optimized capacity pool storage. Your file systems also support many of ONTAP’s unique features including multi-protocol (NFS, SMB, and iSCSI) access, built-in deduplication & compression, cloning, and replication.

We launched Amazon FSx for NetApp ONTAP with a Multi-AZ deployment type that has AWS infrastructure in a pair of Availability Zones in the same AWS region, data replication between them, and automated failover/failback that is typically complete within seconds. This option has a 99.99% SLA (Service Level Agreement) for availability, and is suitable for hosting the most demanding storage workloads.

New Deployment Type
Today we are launching a new single-AZ deployment type that is designed to provide high availability and durability within an AZ, at a level similar to an on-premises file system. It is a great fit for many use cases including dev & test workloads, disaster recovery, and applications that manage their own replication. It is also a great fit for storing secondary copies of data that is stored elsewhere (either on-premises or in AWS), or for data that can be recomputed if necessary.

The AWS infrastructure powering each single-AZ file system resides in separate fault domains within a single Availability Zone. As is the case with the multi-AZ option, the infrastructure is monitored and replaced automatically, and failover typically completes within seconds.

This new deployment type offers the same ease of use and data management capabilities as the multi-AZ option, with 50% lower storage costs and 40% lower throughput costs. File operations deliver sub-millisecond latency for SSD storage and tens of milliseconds for capacity pool storage, at up to hundreds of thousands of IOPS.

Creating a Single-AZ File System
I can create a single-AZ NetApp ONTAP file system using the Amazon FSx Console, the CLI (aws fsx create-file-system), or the Amazon FSx CreateFileSystem API function. From the console I click Create file system, select Amazon FSx for NetApp ONTAP, and enter a name. Then I select the Single-AZ deployment type, indicate the desired amount of storage, and click Next:

On the next page I review and confirm my choices, and then click Create file system. The file system Status starts out as Creating, then transitions to Available within 20 minutes or so, as detailed in my original post.

Depending on my architecture and use case, I can access my new file system in several different ways. I can simply mount it to an EC2 instance running in the same VPC. I can also access it from another VPC in the same region or in a different region across a peered (VPC or Transit Gateway) connection, and from my on-premises clients using AWS Direct Connect or AWS VPN.
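
If you prefer to script the same operation, here is a minimal boto3 sketch. The storage capacity, throughput capacity, and subnet ID are placeholder assumptions:

import boto3

fsx = boto3.client("fsx")

# Create a Single-AZ FSx for NetApp ONTAP file system.
response = fsx.create_file_system(
    FileSystemType="ONTAP",
    StorageCapacity=1024,                        # GiB of SSD primary storage (placeholder)
    SubnetIds=["subnet-0123456789abcdef0"],      # a subnet in the target AZ (placeholder)
    OntapConfiguration={
        "DeploymentType": "SINGLE_AZ_1",         # the new single-AZ deployment type
        "ThroughputCapacity": 128,               # MB/s (placeholder)
    },
    Tags=[{"Key": "Name", "Value": "single-az-demo"}],
)

print(response["FileSystem"]["FileSystemId"])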

Things to Know
Here are a couple of things to know:

Regions – The new deployment type is available in all regions where FSx for ONTAP is already available.

Pricing – Pricing is based on the same billing dimensions as the multi-AZ deployment type; see the Amazon FSx for NetApp ONTAP pricing page for more information.

Available Now
The new deployment type is available now and you can start using it today!

Jeff;

New – Cloud NGFW for AWS

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-cloud-ngfw-for-aws/

In 2018 I wrote about AWS Firewall Manager (Central Management for Your Web Application Portfolio) and showed you how you could host multiple applications, perhaps spanning multiple AWS accounts and regions, while maintaining centralized control over your organization’s security settings and profile. In the same way that Amazon Relational Database Service (RDS) supports multiple database engines, Firewall Manager supports multiple types of firewalls: AWS Web Application Firewall, AWS Shield Advanced, VPC security groups, AWS Network Firewall, and Amazon Route 53 DNS Resolver DNS Firewall.

Cloud NGFW for AWS
Today we are introducing support for Palo Alto Networks Cloud NGFW in Firewall Manager. You can now use Firewall Manager to centrally provision & manage your Cloud next-generation firewall resources (also called NGFWs) and monitor for non-compliant configurations, all across multiple accounts and Virtual Private Clouds (VPCs). You get the best-in-class security features offered by Cloud NGFW as a managed service wrapped inside a native AWS experience, with no hardware hassles, no software upgrades, and pay-as-you-go pricing. You can focus on keeping your organization safe and secure, even as you add, change, and remove AWS resources.

Palo Alto Networks pioneered the concept of deep packet inspection in their NGFWs. Cloud NGFW for AWS can decrypt network packets, look inside, and then identify applications using signatures, protocol decoding, behavioral analysis, and heuristics. This gives you the ability to implement fine-grained, application-centric security management that is more effective than simpler models that are based solely on ports, protocols, and IP addresses. Using Advanced URL Filtering, you can create rules that take advantage of curated lists of sites (known as feeds) that distribute viruses, spyware, and other types of malware, and you have many other options for identifying and handling desirable and undesirable network traffic. Finally, Threat Prevention stops known vulnerability exploits, malware, and command-and-control communication.

The integration lets you choose the deployment model that works best with your network architecture:

Centralized – One firewall running in a centralized “inspection” VPC.

Distributed – Multiple firewalls, generally one for each VPC within the scope managed by Cloud NGFW for AWS.

Cloud NGFW protects outbound, inbound, and VPC-to-VPC traffic. We are launching with support for all traffic directions.

AWS Inside
In addition to centralized provisioning and management via Firewall Manager, Cloud NGFW for AWS makes use of many other parts of AWS. For example:

AWS Marketplace – The product is available in SaaS form on AWS Marketplace with pricing based on hours of firewall usage, traffic processed, and security features used. Cloud NGFW for AWS is deployed on a highly available compute cluster that scales up and down with traffic.

AWS Organizations – To list and identify new and existing AWS accounts and to drive consistent, automated cross-account deployment.

AWS Identity and Access Management (IAM) – To create cross-account roles for Cloud NGFW to access log destinations and certificates in AWS Secrets Manager.

AWS Config – To capture changes to AWS resources such as VPCs, VPC route configurations, and firewalls.

AWS CloudFormation – To run a StackSet that onboards each new AWS account by creating the IAM roles.

Amazon S3, Amazon CloudWatch, Amazon Kinesis – Destinations for log files and records.

Gateway Load Balancer – To provide resiliency, scale, and availability for the NGFWs.

AWS Secrets Manager – To store SSL certificates in support of deep packet inspection.

Cloud NGFW for AWS Concepts
Before we dive in and set up a firewall, let’s review a few important concepts:

Tenant – An installation of Cloud NGFW for AWS associated with an AWS customer account. Each purchase from AWS Marketplace creates a new tenant.

NGFW – A firewall resource that spans multiple AWS Availability Zones and is dedicated to a single VPC.

Rulestack – A set of rules that defines the access controls and threat protections for one or more NGFWs.

Global Rulestack – Represented by an FMS policy, contains rules that apply to all of the NGFWs in an AWS Organization.

Getting Started with Cloud NGFW for AWS
Instead of my usual step-by-step walk-through, I am going to show you the highlights of the purchasing and setup process. For a complete guide, read Getting Started with Cloud NGFW for AWS.

I start by visiting the Cloud NGFW Pay-As-You-Go listing in AWS Marketplace. I review the pricing and terms, click Continue to Subscribe, and proceed through the subscription process.

After I subscribe, Cloud NGFW for AWS will send me an email with temporary credentials for the Cloud NGFW console. I use the credentials to log in, and then I replace the temporary password with a long-term one:

I click Add AWS Account and enter my AWS account Id. The console will show my account and any others that I subsequently add:

The NGFW console redirects me to the AWS CloudFormation console and prompts me to create a stack. This stack sets up cross-account IAM roles, designates (but does not create) logging destinations, and lets Cloud NGFW access certificates in Secrets Manager for packet decryption.

From here, I proceed to the AWS Firewall Manager console and click Settings. I can see that my cloud NGFW tenant is ready to be associated with my account. I select the radio button next to the name of the firewall, in this case “Palo Alto Networks Cloud NGFW” and then click the Associate button. Note that the subscription status will change to Active in a few minutes.

Screenshot showing the account association process
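
The association can also be scripted. Here is a minimal boto3 sketch, assuming the fms client in your SDK version exposes the third-party firewall calls that this launch introduced:

import boto3

fms = boto3.client("fms")

# Associate the Palo Alto Networks Cloud NGFW tenant with Firewall Manager.
fms.associate_third_party_firewall(
    ThirdPartyFirewall="PALO_ALTO_NETWORKS_CLOUD_NGFW"
)

# Check the association status; it transitions to an active state within a few minutes.
status = fms.get_third_party_firewall_association_status(
    ThirdPartyFirewall="PALO_ALTO_NETWORKS_CLOUD_NGFW"
)
print(status["ThirdPartyFirewallStatus"])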

Once the NGFW tenant is associated with my account I return to the AWS Firewall Manager console and click Security policies to proceed. There are no policies yet, and I click Create policy to make one:

I select Palo Alto Networks Cloud NGFW, choose the Distributed model, pick an AWS region, and click Next to proceed (this model will create a Cloud NGFW endpoint in each in-scope VPC):

I enter a name for my policy (Distributed-1), and select one of the Cloud NGFW firewall policies that are available to my account. I can also click Create firewall policy to navigate to the Palo Alto Networks console and step through the process of creating a new policy. Today I select grs-1:

I have many choices and options when it comes to logging. Each of the three types of logs (Traffic, Decryption, and Threat) can be routed to an S3 bucket, a CloudWatch log group, or a Kinesis Firehose delivery stream. I choose an S3 bucket and click Next to proceed:

A screenshot showing the choices for logging.

Now I choose the Availability Zones where I need endpoints. I have the option to select by name or by ID, and I can optionally designate a CIDR block within each AZ that will be used for the subnets:

The next step is to choose the scope: the set of accounts and resources that are covered by this policy. As I noted earlier, this feature works hand-in-hand with AWS Organizations and gives me multiple options to choose from:

The CloudFormation template linked above is used to create an essential IAM role in each member account. When I run it, I will need to supply values for the CloudNGFW Account ID and ExternalId parameters, both of which are available from within the Palo Alto Networks console. On the next page I can tag my newly created policy:

On the final page I review and confirm all of my choices, and click Create policy to do just that:

My policy is created right away, and it will start to list the in-scope accounts within minutes. Under the hood, AWS Firewall Manager calls Cloud NGFW APIs to create NGFWs for the VPCs in my in-scope accounts, and the global rules are automatically associated with the created NGFWs. When the NGFWs are ready to process traffic, AWS Firewall Manager creates the NGFW endpoints in the subnets.

As new AWS accounts join my organization, AWS Firewall Manager automatically ensures they are compliant by creating new NGFWs as needed.

Next I review the Cloud NGFW threat logs to see what threats are being blocked by Cloud NGFW. In this example Cloud NGFW protected my VPC against SIPVicious scanning activity:

Screenshot showing the threat log detecting SIPVicious activity

And in this example, Cloud NGFW protected my VPC against a malware download:

a screenshot showing the threat log of malware detection

Things to Know
Both AWS Firewall Manager and Cloud NGFW are regional services, and my AWS Firewall Manager policy is therefore regional. Cloud NGFW is currently available in the US East (N. Virginia) and US West (N. California) Regions, with plans to expand in the near future.

Jeff;

Welcome to AWS Pi Day 2022

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/welcome-to-aws-pi-day-2022/

We launched Amazon Simple Storage Service (Amazon S3) sixteen years ago today!

As I often told my audiences in the early days, I wanted them to think big thoughts and dream big dreams! Looking back, I think it is safe to say that the launch of S3 empowered them to do just that, and initiated a wave of innovation that continues to this day.

Bigger, Busier, and More Cost-Effective
Our customers count on Amazon S3 to provide them with reliable and highly durable object storage that scales to meet their needs, while growing more and more cost-effective over time. We’ve met those needs and many others; here are some new metrics that prove my point:

Object Storage – Amazon S3 now holds more than 200 trillion (2 × 10^14) objects. That’s almost 29,000 objects for each resident of planet Earth. Counting at one object per second, it would take 6.342 million years to reach this number! According to Ethan Siegel, there are about 2 trillion galaxies in the visible Universe, so that’s 100 objects per galaxy! Shortly after the 2006 launch of S3, I was happy to announce the then-impressive metric of 800 million stored objects, so the object count has grown by a factor of 250,000 in less than 16 years.

Request Rate – Amazon S3 now averages over 100 million requests per second.

Cost Effective – Over time we have added multiple storage classes to S3 in order to optimize cost and performance for many different workloads. For example, AWS customers are making great use of Amazon S3 Intelligent Tiering (the only cloud storage class that delivers automatic storage cost savings when data access patterns change), and have saved more than $250 million in storage costs as compared to Amazon S3 Standard. When I first wrote about this storage class in 2018, I said:

In order to make it easier for you to take advantage of S3 without having to develop a deep understanding of your access patterns, we are launching a new storage class, S3 Intelligent-Tiering.

With the improved cost optimizations for small and short-lived objects and the archiving capabilities that we launched late last year, you can now use S3 Intelligent-Tiering as the default storage class for just about every workload, especially data lakes, analytics use cases, and new applications.

Customer Innovation
As you can see from the metrics above, our customers use S3 to store and protect vast amounts of data in support of an equally vast number of use cases and applications. Here are just a few of the ways that our customers are innovating:

NASCAR
After spending 15 years collecting video, image, and audio assets representing over 70 years of motor sports history, NASCAR built a media library that encompassed over 8,600 LTO 6 tapes and a few thousand LTO 4 tapes, with a growth rate of between 1.5 PB and 2 PB per year. Over the course of 18 months they migrated all of this content (a total of 15 PB) to AWS, making use of the Amazon S3 Standard, Amazon S3 Glacier Flexible Retrieval, and Amazon S3 Glacier Deep Archive storage classes. To learn more about how they migrated this massive and invaluable archive, read Modernizing NASCAR’s multi-PB media archive at speed with AWS Storage.

Electronic Arts
This game maker’s core telemetry systems handle tens of petabytes of data, tens of thousands of tables, and over 2 billion objects. As their games became more popular and the volume of data grew, they were facing challenges around data growth, cost management, retention, and data usage. In a series of updates, they moved archival data to Amazon S3 Glacier Deep Archive, implemented tag-driven retention management, and implemented Amazon S3 Intelligent-Tiering. They have reduced their costs and made their data assets more accessible; read Electronic Arts optimizes storage costs and operations using Amazon S3 Intelligent-Tiering and S3 Glacier to learn more.

NRGene / CRISPR-IL
This team came together to build a best-in-class gene-editing prediction platform. CRISPR (A Crack In Creation is a great introduction) is a very new and very precise way to edit genes and effect changes to an organism’s genetic makeup. The CRISPR-IL consortium is built around an iterative learning process that allows researchers to send results to a predictive engine that helps to shape the next round of experiments. As described in A gene-editing prediction engine with iterative learning cycles built on AWS, the team identified five key challenges and then used AWS to build GoGenome, a web service that performs predictions and delivers the results to users. GoGenome stores over 20 terabytes of raw sequencing data, and hundreds of millions of feature vectors, making use of Amazon S3 and other AWS storage services as the foundation of their data lake.

Some other cool recent S3 success stories include Liberty Mutual (How Liberty Mutual built a highly scalable and cost-effective document management solution), Discovery (Discovery Accelerates Innovation, Cuts Linear Playout Infrastructure Costs by 61% on AWS), and Pinterest (How Pinterest worked with AWS to create a new way to manage data access).

Join Us Online Today
In celebration of AWS Pi Day 2022 we have put together an entire day of educational sessions, live demos, and even a launch or two. We will also take a look at some of the newest S3 launches including Amazon S3 Glacier Instant Retrieval, Amazon S3 Batch Replication and AWS Backup Support for Amazon S3.

Designed for system administrators, engineers, developers, and architects, our sessions will bring you the latest and greatest information on security, backup, archiving, certification, and more. Join us at 9:30 AM PT on Twitch for Kevin Miller’s kickoff keynote, and stick around for the entire day to learn a lot more about how you can put Amazon S3 to use in your applications. See you there!

Jeff;

AWS Week in Review – March 7, 2022

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-week-in-review-march-7-2022/

This post is part of our Week in Review series. Check back each week for a quick round up of interesting news and announcements from AWS!

Hello Again
The AWS Week in Review is back! Many years ago, I tried to write a weekly post that captured the most significant AWS activity. This was easy at first but quickly grew to consume a good fraction of a working day. After a lot of thought and planning, we are making a fresh start with the goal of focusing on some of the most significant AWS launches of the previous week. Each week, one member of the AWS News Blog team will write and publish a post similar to this one. We will do our best to make sure that our effort is scalable and sustainable.

Last Week’s Launches
Here are some launches that caught my eye last week:

AWS Health Dashboard – This new destination brings together the AWS Service Health Dashboard and the Personal Health Dashboard into a single connected experience. You get a more responsive and accurate view, better usability, and greater operational resilience. The new page is mobile-friendly and follows the latest AWS design standard. It includes a searchable history of events, fast page-load times, and automatic in-line refresh. It also provides a more responsive view when multiple AWS services are affected by a common underlying root cause. To learn more, read the blog post or just visit the AWS Health Dashboard.

AWS DeepRacer Student Virtual League – High school and undergraduate students 16 and older can now compete in the DeepRacer Student Virtual League for the chance to win prizes, glory, and a trip to AWS re:Invent 2022 in Las Vegas. The student league provides access to dozens of hours of free machine learning model training, along with educational materials that cover the theoretical and practical aspects of machine learning. Competitions run monthly until September 30; the top participants each month qualify for the Global AWS DeepRacer Student League Championships in October. To learn more, read the What’s New or visit AWS DeepRacer Student.

Customer Carbon Footprint Tool – This tool will help you to learn more about the carbon footprint of your cloud infrastructure, and will help you to meet your goals for sustainability. It is part of the AWS Billing console, and is available to all AWS customers at no cost. When you open the tool, you will see your carbon emissions in several forms, all with month-level granularity. You can also see your carbon emission statistics on a monthly, quarterly, or annual basis. To learn more, read my blog post.

RDS Multi-AZ Deployment Option – You can now take advantage of a new Amazon RDS deployment option that has a footprint in three AWS Availability Zones and gives you up to 2x faster transaction commit latency, automated failovers that typically take 35 seconds or less, and readable standby instances. This new option takes advantage of Graviton2 processors and fast NVME SSD storage; to learn more, read Seb’s blog post.

For a full list of AWS announcements, be sure to keep an eye on the What’s New at AWS page.

Other AWS News
Serverless Architecture Book – The second edition of Serverless Architectures on AWS is now available.

AWS Cookbook – AWS Cookbook: Recipes for Success on AWS is now available.

Upcoming AWS Events
Check your calendars and sign up for these AWS events:

AWS Pi Day (March 14) – We have an entire day of online content to celebrate 16 years of innovation with Amazon S3. Sessions will cover data protection, security, compliance, archiving, data lakes, backup, and more. Sign up today, and I will see you there!

.NET Application Modernization Webinar (March 23) – Learn about .NET modernization, what it is, and why you might want to modernize. See a deep dive that focuses on the AWS Microservice Extractor for .NET. Sign up today.

And that’s all for this week. Leave me a comment and let me know if this was helpful to you!

Jeff;

New – Customer Carbon Footprint Tool

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-customer-carbon-footprint-tool/

Carbon is the fourth-most abundant element in the universe, and is also a primary component of all known life on Earth. When combined with oxygen it creates carbon dioxide (CO2). Many industrial activities, including the burning of fossil fuels such as coal and oil, release CO2 into the atmosphere and cause climate change.

As part of Amazon’s efforts to increase sustainability and reduce carbon emissions, we co-founded The Climate Pledge in 2019. Along with the 216 other signatories to the Pledge, we are committed to reaching net-zero carbon by 2040, 10 years ahead of the Paris Agreement. We are driving carbon out of our business in a multitude of ways, as detailed on our Carbon Footprint page. When I share this information with AWS customers, they respond positively. They now understand that running their applications in the AWS Cloud can help them to lower their carbon footprint by 88% (when compared to the enterprise data centers that were surveyed), as detailed in The Carbon Reduction Opportunity of Moving to Amazon Web Services, published by 451 Research.

In addition to our efforts, organizations in many industries are working to set sustainability goals and to make commitments to reach them. In order to help them to measure progress toward their goals they are implementing systems and building applications to measure and monitor their carbon emissions data.

Customer Carbon Footprint Tool
After I share information about our efforts to decarbonize with our customers, they tell me that their organization is on a similar path, and that they need to know more about the carbon footprint of their cloud infrastructure. Today I am happy to announce the new Customer Carbon Footprint Tool. This tool will help you to meet your own sustainability goals, and is available to all AWS customers at no cost. To access the calculator, I open the AWS Billing Console and click Cost & Usage Reports:

Then I scroll down to Customer Carbon Footprint Tool and review the report:

Let’s review each section. The first one allows me to select a time period with month-level granularity, and shows my carbon emissions in summary, geographic, and per-service form. In all cases, emissions are in Metric Tons of Carbon Dioxide Equivalent, abbreviated as MTCO2e:

All of the values in this section reflect the selected time period. In this example (all of which is sample data), my AWS resources emit an estimated 0.3 MTCO2e from June to August of 2021. If I had run the same application in my own facilities instead of in the AWS Cloud, I would have used an additional 0.9 MTCO2e. Of this value, 0.7 MTCO2e was saved due to renewable energy purchases made by AWS, and an additional 0.2 MTCO2e was saved due to the fact that AWS uses resources more efficiently.

I can also see my emissions by geography (all in America for this time period), and by AWS service in this section.

The second section shows my carbon emission statistics on a monthly, quarterly, or annual basis:

The third and final section projects how the AWS path to 100% renewable energy for our data centers will have a positive effect on my carbon emissions over time:

If you are an AWS customer, then you are already benefiting from our efforts to decarbonize and to reach 100% renewable energy usage by 2025, five years ahead of our original target.

You should also take advantage of the new Sustainability Pillar of AWS Well-Architected. This pillar contains six design principles for sustainability in the cloud, and will show you how to understand impact and to get the best utilization from the minimal number of necessary resources, while also reducing downstream impacts.

Things to Know
Here are a couple of important facts to keep in mind:

Regions – The emissions displayed reflect your AWS usage in all commercial AWS regions.

Timing – Emissions are calculated monthly. However, there is a three month delay due to the underlying billing cycle of the electric utilities that supply us with power.

Scope – The calculator shows Scope 1 and Scope 2 emissions, as defined here.

Jeff;

New – Additional Checksum Algorithms for Amazon S3

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-additional-checksum-algorithms-for-amazon-s3/

Amazon Simple Storage Service (Amazon S3) is designed to provide 99.999999999% (11 9s) of durability for your objects and for the metadata associated with your objects. You can rest assured that S3 stores exactly what you PUT, and returns exactly what is stored when you GET. In order to make sure that the object is transmitted back-and-forth properly, S3 uses checksums, basically a kind of digital fingerprint.

S3’s PutObject function already allows you to pass the MD5 checksum of the object, and only accepts the operation if the value that you supply matches the one computed by S3. While this allows S3 to detect data transmission errors, it does mean that you need to compute the checksum before you call PutObject or after you call GetObject. Further, computing checksums for large (multi-GB or even multi-TB) objects can be computationally intensive, and can lead to bottlenecks. In fact, some large S3 users have built special-purpose EC2 fleets solely to compute and validate checksums.

New Checksum Support
Today I am happy to tell you about S3’s new support for four checksum algorithms. It is now very easy for you to calculate and store checksums for data stored in Amazon S3 and to use the checksums to check the integrity of your upload and download requests. You can use this new feature to implement the digital preservation best practices and controls that are specific to your industry. In particular, you can specify the use of any one of four widely used checksum algorithms (SHA-1, SHA-256, CRC-32, and CRC-32C) when you upload each of your objects to S3.

Here are the principal aspects of this new feature:

Object Upload – The newest versions of the AWS SDKs compute the specified checksum as part of the upload, and include it in an HTTP trailer at the conclusion of the upload. You also have the option to supply a precomputed checksum. Either way, S3 will verify the checksum and accept the operation if the value in the request matches the one computed by S3. In combination with the use of HTTP trailers, this feature can greatly accelerate client-side integrity checking.

Multipart Object Upload – The AWS SDKs now take advantage of client-side parallelism and compute checksums for each part of a multipart upload. The checksums for all of the parts are themselves checksummed and this checksum-of-checksums is transmitted to S3 when the upload is finalized.

Checksum Storage & Persistence – The verified checksum, along with the specified algorithm, are stored as part of the object’s metadata. If Server-Side Encryption with KMS Keys is requested for the object, then the checksum is stored in encrypted form. The algorithm and the checksum stick to the object throughout its lifetime, even if it changes storage classes or is superseded by a newer version. They are also transferred as part of S3 Replication.

Checksum Retrieval – The new GetObjectAttributes function returns the checksum for the object and (if applicable) for each part.

Checksums in Action
You can access this feature from the AWS Command Line Interface (CLI), AWS SDKs, or the S3 Console. In the console, I enable the Additional Checksums option when I prepare to upload an object:

Then I choose a Checksum function:

If I have already computed the checksum I can enter it, otherwise the console will compute it.

After the upload is complete I can view the object’s properties to see the checksum:

The checksum function for each object is also listed in the S3 Inventory Report.

From my own code, the SDK can compute the checksum for me:

import boto3

s3 = boto3.client('s3')

# Ask the SDK to compute a SHA-1 checksum during the upload; S3 verifies it on arrival.
with open(file_path, 'rb') as file:
    r = s3.put_object(
        Bucket=bucket,
        Key=key,
        Body=file,
        ChecksumAlgorithm='SHA1'
    )

Or I can compute the checksum myself and pass it to put_object:

# Supply a Base64-encoded SHA-1 checksum that I computed myself; S3 accepts the
# upload only if its own computation matches this value.
with open(file_path, 'rb') as file:
    r = s3.put_object(
        Bucket=bucket,
        Key=key,
        Body=file,
        ChecksumSHA1='fUM9R+mPkIokxBJK7zU5QfeAHSy='
    )

When I retrieve the object, I specify checksum mode to indicate that I want the returned object validated:

r = s3.get_object(Bucket=bucket, Key=key, ChecksumMode='ENABLED')

The actual validation happens when I read the object from r['Body'], and an exception will be raised if there’s a mismatch.
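
If I only want to inspect the stored checksum, without downloading the object, I can call the new GetObjectAttributes function. Here is a small sketch that reuses the client, bucket, and key from the snippets above:

# Retrieve the stored checksum (and part-level checksums, if any) without
# downloading the object body.
attrs = s3.get_object_attributes(
    Bucket=bucket,
    Key=key,
    ObjectAttributes=['Checksum', 'ObjectParts', 'ObjectSize'],
)

print(attrs['Checksum']['ChecksumSHA1'])   # present because the object was uploaded with SHA-1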

Watch the Demo
Here’s a demo (first shown at re:Invent 2021) of this new feature in action:

Available Now
The four additional checksums are now available in all commercial AWS Regions and you can start using them today at no extra charge.

Jeff;

Amazon Elastic File System Update – Sub-Millisecond Read Latency

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/amazon-elastic-file-system-update-sub-millisecond-read-latency/

Amazon Elastic File System (Amazon EFS) was announced in early 2015 and became generally available in 2016. We launched EFS in order to make it easier for you to build applications that need shared access to file data. EFS is (and always has been) simple and serverless: you simply create a file system, attach it to any number of EC2 instances, Lambda functions, or containers, and go about your work. EFS is highly durable and scalable, and gives you a strong read-after-write consistency model.

Since the 2016 launch we have added many new features and capabilities including encryption of data at rest and in transit, an Infrequent Access storage class, and several other lower cost storage classes. We have also worked to improve performance, delivering a 400% increase in read operations per second, a 100% increase in per-client throughput, and then a further tripling of read throughput.

Our customers use EFS file systems to support many different applications and use cases including home directories, build farms, content management (WordPress and Drupal), DevOps (Git, GitLab, Jenkins, and Artifactory), and machine learning inference, to name just a few.

Sub-Millisecond Read Latency
Faster is always better, and today I am thrilled to be able to tell you that your latency-sensitive EFS workloads can now run about twice as fast as before!

Up until today, EFS latency for read operations (both data and metadata) was typically in the low single-digit milliseconds. Effective today, new and existing EFS file systems now provide average latency as low as 600 microseconds for the majority of read operations on data and metadata.

This performance boost applies to One Zone and Standard General Purpose EFS file systems. New or old, you will still get the same availability, durability, scalability, and strong read-after-write consistency that you have come to expect from EFS, at no additional cost and with no configuration changes.

We “flipped the switch” and enabled this performance boost for all existing EFS General Purpose mode file systems over the course of the last few weeks, so you may already have noticed the improvement. Of course, any new file systems that you create will also benefit.

Learn More
To learn more about the performance characteristics of EFS, read Amazon EFS Performance.

Jeff;

PS – Our multi-year roadmap contains a bunch of short-term and long-term performance enhancements, so stay tuned for more good news!

New – Replication for Amazon Elastic File System (EFS)

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-replication-for-amazon-elastic-file-system-efs/

Amazon Elastic File System (Amazon EFS) allows EC2 instances, AWS Lambda functions, and containers to share access to a fully-managed file system. First announced in 2015 and generally available in 2016, Amazon EFS delivers low-latency performance for a wide variety of workloads and can scale to thousands of concurrent clients or connections. Since the 2016 launch we have continued to listen and to innovate, and have added many new features and capabilities in response to your feedback. These include on-premises access via Direct Connect (2016), encryption of data at rest (2017), provisioned throughput and encryption of data in transit (2018), an infrequent access storage class (2019), IAM authorization & access points (2020), lower-cost one zone storage classes (2021), and more.

Introducing Replication
Today I am happy to announce that you can now use replication to automatically maintain copies of your EFS file systems for business continuity or to help you to meet compliance requirements as part of your disaster recovery strategy. You can set this up in minutes for new or existing EFS file systems, with replication either within a single AWS region or between two AWS regions in the same AWS partition.

Once configured, replication begins immediately. All replication traffic stays on the AWS global backbone, and most changes are replicated within a minute, with an overall Recovery Point Objective (RPO) of 15 minutes for most file systems. Replication does not consume any burst credits and it does not count against the provisioned throughput of the file system.

Configuring Replication
To configure replication, I open the Amazon EFS Console, view the file system that I want to replicate, and select the Replication tab:

I click Create replication, choose the desired destination region, and select the desired storage (Regional or One Zone). I can use the default KMS key for encryption or I can choose another one. I review my settings and click Create replication to proceed:

Replication begins right away and I can see the new, read-only file system immediately:

A new CloudWatch metric, TimeSinceLastSync, is published when the initial replication is complete, and periodically after that:
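
If I want to watch replication lag programmatically, I can read that metric from CloudWatch. Here is a minimal sketch; the file system ID is a placeholder, and I am assuming the metric is published in the AWS/EFS namespace with a FileSystemId dimension:

from datetime import datetime, timedelta, timezone

import boto3

cloudwatch = boto3.client("cloudwatch")
now = datetime.now(timezone.utc)

# Fetch recent TimeSinceLastSync datapoints for the replicated file system.
resp = cloudwatch.get_metric_statistics(
    Namespace="AWS/EFS",
    MetricName="TimeSinceLastSync",
    Dimensions=[{"Name": "FileSystemId", "Value": "fs-0123456789abcdef0"}],  # placeholder ID
    StartTime=now - timedelta(hours=3),
    EndTime=now,
    Period=300,
    Statistics=["Maximum"],
)

for point in sorted(resp["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], point["Maximum"])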

The replica is created in the selected region. I create any necessary mount targets and mount the replica on an EC2 instance:

EFS tracks modifications to the blocks (currently 4 MB) that are used to store files and metadata, and replicates the changes at a rate of up to 300 MB per second. Because replication is block-based, it is not crash-consistent; if you need crash-consistency you may want to take a look at AWS Backup.

After I have set up replication, I can change the lifecycle management, intelligent tiering, throughput mode, and automatic backup setting for the destination file system. The performance mode is chosen when the file system is created, and cannot be changed.

Initiating a Fail-Over
If I need to fail over to the replica, I simply delete the replication. I can do this from either side (source or destination), by clicking Delete and confirming my intent:

I enter delete, and click Delete replication to proceed:

The former read-only replica is now a writable file system that I can use as part of my recovery process. To fail back, I create a replica in the original location, wait for replication to finish, and then delete the replication.

I can also use the command line and the EFS APIs to manage replication (a short boto3 sketch follows the list). For example:

create-replication-configuration / CreateReplicationConfiguration – Establish replication for an existing file system.

describe-replication-configurations / DescribeReplicationConfigurations – See the replication configuration for a source or destination file system, or for all replication configurations in an AWS account. The data returned for a destination file system also includes LastReplicatedTimestamp, the time of the last successful sync.

delete-replication-configuration / DeleteReplicationConfiguration – End replication for a file system.
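
Here is a minimal boto3 sketch of that workflow; the file system ID and destination Region are placeholders:

import boto3

efs = boto3.client("efs")

SOURCE_FS = "fs-0123456789abcdef0"   # placeholder source file system ID

# Start replicating the source file system to another Region (Regional storage by default).
efs.create_replication_configuration(
    SourceFileSystemId=SOURCE_FS,
    Destinations=[{"Region": "us-west-2"}],   # add "AvailabilityZoneName" for One Zone storage
)

# Inspect the configuration; the destination entry includes LastReplicatedTimestamp
# once the initial sync has completed.
config = efs.describe_replication_configurations(FileSystemId=SOURCE_FS)
print(config["Replications"][0]["Destinations"])

# To fail over, delete the replication; the replica then becomes writable.
# efs.delete_replication_configuration(SourceFileSystemId=SOURCE_FS)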

Available Now
This new feature is available now and you can start using it today in the AWS US East (N. Virginia), US East (Ohio), US West (N. California), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Osaka), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Paris), Europe (Stockholm), South America (São Paulo), and GovCloud Regions.

You pay the usual storage fees for the original and replica file systems and any applicable cross-region or intra-region data transfer charges.

Jeff;