All posts by Antje Barth

AWS Week In Review – June 6, 2022

Post Syndicated from Antje Barth original https://aws.amazon.com/blogs/aws/aws-week-in-review-june-6-2022/

This post is part of our Week in Review series. Check back each week for a quick roundup of interesting news and announcements from AWS!

I’ve just come back from a long (extended) holiday weekend here in the US and I’m still catching up on all the AWS launches that happened this past week. I’m particularly excited about some of the data, machine learning, and quantum computing news. Let’s have a look!

Last Week’s Launches
The launches that caught my attention last week are the following:

Amazon EMR Serverless is now generally available – Amazon EMR Serverless allows you to run big data applications using open-source frameworks such as Apache Spark and Apache Hive without configuring, managing, and scaling clusters. The new serverless deployment option for Amazon EMR automatically scales resources up and down to provide just the right amount of capacity for your application, and you only pay for what you use. To learn more, check out Channy’s blog post and listen to The Official AWS Podcast episode on EMR Serverless.
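
If you prefer to try it from code, here is a minimal sketch of submitting a Spark job to an existing EMR Serverless application with the AWS SDK for Python (Boto3); the application ID, execution role ARN, and S3 paths are placeholders.

import boto3

emr_serverless = boto3.client("emr-serverless")

# Submit a Spark job to an existing EMR Serverless application
response = emr_serverless.start_job_run(
    applicationId="<APPLICATION_ID>",
    executionRoleArn="<JOB_EXECUTION_ROLE_ARN>",
    jobDriver={
        "sparkSubmit": {
            "entryPoint": "s3://<BUCKET>/scripts/word_count.py",
            "entryPointArguments": ["s3://<BUCKET>/output/"],
        }
    },
)

print(response["jobRunId"])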

AWS PrivateLink is now supported by additional AWS services – AWS PrivateLink provides private connectivity between your virtual private cloud (VPC), AWS services, and your on-premises networks without exposing your traffic to the public internet. The following AWS services just added support for PrivateLink (a sketch of creating an interface VPC endpoint follows the list):

  • Amazon S3 on Outposts has added support for PrivateLink to perform management operations on your S3 storage by using private IP addresses in your VPC. This eliminates the need to use public IPs or proxy servers. Read the June 1 What’s New post for more information.
  • AWS Panorama now supports PrivateLink, allowing you to access AWS Panorama from your VPC without using public endpoints. AWS Panorama is a machine learning appliance and software development kit (SDK) that allows you to add computer vision (CV) to your on-premises cameras. Read the June 2 What’s New post for more information.
  • AWS Backup has added PrivateLink support for VMware workloads, providing direct access to AWS Backup from your VMware environment via a private endpoint within your VPC. Read the June 3 What’s New post for more information.
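
Regardless of the service, enabling PrivateLink access comes down to creating an interface VPC endpoint in your VPC. Here is a minimal sketch using the AWS SDK for Python (Boto3); the service name and all IDs are placeholders, so look up the exact endpoint service name for your service and Region before using it.

import boto3

ec2 = boto3.client("ec2")

# Create an interface VPC endpoint for a supported service; all values are placeholders
response = ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="<VPC_ID>",
    ServiceName="com.amazonaws.<region>.<service>",
    SubnetIds=["<SUBNET_ID>"],
    SecurityGroupIds=["<SECURITY_GROUP_ID>"],
    PrivateDnsEnabled=True,
)

print(response["VpcEndpoint"]["VpcEndpointId"])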

Amazon SageMaker JumpStart now supports incremental model training and automatic tuning – Besides ready-to-deploy solution templates for common machine learning (ML) use cases, SageMaker JumpStart also provides access to more than 300 pre-trained, open-source ML models. You can now incrementally train all the JumpStart models with new data without training from scratch. This fine-tuning process can shorten the time it takes to reach a better model. SageMaker JumpStart now also supports model tuning with SageMaker Automatic Model Tuning from its pre-trained models, solution templates, and example notebooks. Automatic tuning allows you to automatically search for the best hyperparameter configuration for your model.

Amazon Transcribe now supports automatic language identification for multi-lingual audio – Amazon Transcribe converts audio input into text using automatic speech recognition (ASR) technology. If your audio recording contains more than one language, you can now enable multi-language identification, which identifies all languages spoken in the audio file and creates a transcript using each identified language. Automatic language identification for multilingual audio is supported for all 37 languages that are currently supported for batch transcriptions. Read the What’s New post from Amazon Transcribe to learn more.
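
As a rough illustration, enabling multi-language identification for a batch transcription job with the AWS SDK for Python (Boto3) looks like the following; the job name, media URI, and candidate languages are placeholders.

import boto3

transcribe = boto3.client("transcribe")

# Start a batch transcription job with multi-language identification enabled
transcribe.start_transcription_job(
    TranscriptionJobName="<JOB_NAME>",
    Media={"MediaFileUri": "s3://<BUCKET>/audio/meeting.mp3"},
    IdentifyMultipleLanguages=True,
    LanguageOptions=["en-US", "de-DE"],
)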

Amazon Braket adds support for Borealis, the first publicly accessible quantum computer that is claimed to offer quantum advantage – If you are interested in quantum computing, you’ve likely heard the term “quantum advantage.” It refers to the technical milestone when a quantum computer outperforms the world’s fastest supercomputers on a well-defined task. Until now, none of the devices claimed to demonstrate quantum advantage have been accessible to the public. The Borealis device, a new photonic quantum processing unit (QPU) from Xanadu, is the first publicly available quantum computer that is claimed to have achieved quantum advantage. Amazon Braket, the quantum computing service from AWS, has just added support for Borealis. To learn more about how you can test a quantum advantage claim for yourself now on Amazon Braket, check out the What’s New post covering the addition of Borealis support.

For a full list of AWS announcements, be sure to keep an eye on the What’s New at AWS page.

Other AWS News
Some other updates and news that you may have missed:

New AWS Heroes – A warm welcome to our newest AWS Heroes! The AWS Heroes program is a worldwide initiative that acknowledges individuals who have truly gone above and beyond to share knowledge in technical communities. Get to know them in the June 2022 introduction blog post!

AWS open-source news and updates – My colleague Ricardo Sueiras writes this weekly open-source newsletter in which he highlights new open-source projects, tools, and demos from the AWS Community. Read edition #115 here.

Upcoming AWS Events
Join me in Las Vegas for Amazon re:MARS 2022. The conference takes place June 21–24 and is all about the latest innovations in machine learning, automation, robotics, and space. I will deliver a talk on how machine learning can help to improve disaster response. Say “Hi!” if you happen to be around and see me.

We also have more AWS Summits coming up over the next couple of months, both in-person and virtual, across Europe, North America, and South America.

Find an AWS Summit near you, and get notified when registration opens in your area.

Imagine Conference 2022 – You can now register for IMAGINE 2022 (August 3, Seattle). The IMAGINE 2022 conference is a no-cost event that brings together education, state, and local leaders to learn about the latest innovations and best practices in the cloud.

Sign up for the SQL Server Database Modernization webinar on June 21 to learn how to modernize and cost-optimize Microsoft SQL Server on AWS.

That’s all for this week. Check back next Monday for another Week in Review!

— Antje

Amazon SageMaker Serverless Inference – Machine Learning Inference without Worrying about Servers

Post Syndicated from Antje Barth original https://aws.amazon.com/blogs/aws/amazon-sagemaker-serverless-inference-machine-learning-inference-without-worrying-about-servers/

In December 2021, we introduced Amazon SageMaker Serverless Inference (in preview) as a new option in Amazon SageMaker to deploy machine learning (ML) models for inference without having to configure or manage the underlying infrastructure. Today, I’m happy to announce that Amazon SageMaker Serverless Inference is now generally available (GA).

Different ML inference use cases pose different requirements on your model hosting infrastructure. If you work on use cases such as ad serving, fraud detection, or personalized product recommendations, you are most likely looking for API-based, online inference with response times as low as a few milliseconds. If you work with large ML models, such as in computer vision (CV) applications, you might require infrastructure that is optimized to run inference on larger payload sizes in minutes. If you want to run predictions on an entire dataset, or larger batches of data, you might want to run an on-demand, one-time batch inference job instead of hosting a model-serving endpoint. And what if you have an application with intermittent traffic patterns, such as a chatbot service or an application to process forms or analyze data from documents? In this case, you might want an online inference option that is able to automatically provision and scale compute capacity based on the volume of inference requests. And during idle time, it should be able to turn off compute capacity completely so that you are not charged.

Amazon SageMaker, our fully managed ML service, offers different model inference options to support all of those use cases:

  • SageMaker Real-Time Inference for workloads with low latency requirements in the order of milliseconds
  • SageMaker Asynchronous Inference for inferences with large payload sizes or long processing times
  • SageMaker Batch Transform to run predictions on batches of data
  • SageMaker Serverless Inference for workloads with intermittent or infrequent traffic patterns

Amazon SageMaker Serverless Inference in More Detail
In a lot of conversations with ML practitioners, I’ve heard the request for a fully managed ML inference option that lets you focus on developing the inference code while it manages all things infrastructure for you. SageMaker Serverless Inference now delivers this ease of deployment.

Based on the volume of inference requests your model receives, SageMaker Serverless Inference automatically provisions, scales, and turns off compute capacity. As a result, you pay for only the compute time to run your inference code and the amount of data processed, not for idle time.

You can use SageMaker’s built-in algorithms and ML framework-serving containers to deploy your model to a serverless inference endpoint or choose to bring your own container. If traffic becomes predictable and stable, you can easily update from a serverless inference endpoint to a SageMaker real-time endpoint without the need to make changes to your container image. Using Serverless Inference, you also benefit from SageMaker’s features, including built-in metrics such as invocation count, faults, latency, host metrics, and errors in Amazon CloudWatch.
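
For illustration, here is a minimal sketch of that switch using the AWS SDK for Python (Boto3): it creates an instance-based endpoint configuration that reuses the same model and then points the existing endpoint at it. The endpoint, configuration, and model names as well as the instance settings are example values, not the ones used later in this post.

import boto3

sm = boto3.client("sagemaker")

# Create an instance-based endpoint configuration that reuses the same model
sm.create_endpoint_config(
    EndpointConfigName="my-realtime-ep-config",
    ProductionVariants=[{
        "ModelName": "my-model",
        "VariantName": "AllTraffic",
        "InitialInstanceCount": 1,
        "InstanceType": "ml.m5.xlarge",
    }],
)

# Update the existing (serverless) endpoint to use the new configuration
sm.update_endpoint(
    EndpointName="my-serverless-endpoint",
    EndpointConfigName="my-realtime-ep-config",
)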

Since its preview launch, SageMaker Serverless Inference has added support for the SageMaker Python SDK and model registry. SageMaker Python SDK is an open-source library for building and deploying ML models on SageMaker. SageMaker model registry lets you catalog, version, and deploy models to production.

New for the GA launch, SageMaker Serverless Inference has increased the maximum concurrent invocations per endpoint limit to 200 (from 50 during preview), allowing you to use Amazon SageMaker Serverless Inference for high-traffic workloads. Amazon SageMaker Serverless Inference is now available in all the AWS Regions where Amazon SageMaker is available, except for the AWS GovCloud (US) and AWS China Regions.

Several customers have already started enjoying the benefits of SageMaker Serverless Inference:

“Bazaarvoice leverages machine learning to moderate user-generated content to enable a seamless shopping experience for our clients in a timely and trustworthy manner. Operating at a global scale over a diverse client base, however, requires a large variety of models, many of which are either infrequently used or need to scale quickly due to significant bursts in content. Amazon SageMaker Serverless Inference provides the best of both worlds: it scales quickly and seamlessly during bursts in content and reduces costs for infrequently used models.” — Lou Kratz, PhD, Principal Research Engineer, Bazaarvoice

“Transformers have changed machine learning, and Hugging Face has been driving their adoption across companies, starting with natural language processing and now with audio and computer vision. The new frontier for machine learning teams across the world is to deploy large and powerful models in a cost-effective manner. We tested Amazon SageMaker Serverless Inference and were able to significantly reduce costs for intermittent traffic workloads while abstracting the infrastructure. We’ve enabled Hugging Face models to work out of the box with SageMaker Serverless Inference, helping customers reduce their machine learning costs even further.” — Jeff Boudier, Director of Product, Hugging Face

Now, let’s see how you can get started on SageMaker Serverless Inference.

For this demo, I’ve built a text classifier to turn e-commerce customer reviews, such as “I love this product!” into positive (1), neutral (0), and negative (-1) sentiments. I’ve used the Women’s E-Commerce Clothing Reviews dataset to fine-tune a RoBERTa model from the Hugging Face Transformers library and model hub. I will now show you how to deploy the trained model to an Amazon SageMaker Serverless Inference Endpoint.

Deploy Model to an Amazon SageMaker Serverless Inference Endpoint
You can create, update, describe, and delete a serverless inference endpoint using the SageMaker console, the AWS SDKs, the SageMaker Python SDK, the AWS CLI, or AWS CloudFormation. In this first example, I will use the SageMaker Python SDK as it simplifies the model deployment workflow through its abstractions. You can also use the SageMaker Python SDK to invoke the endpoint by passing the payload in line with the request. I will show you this in a bit.

First, let’s create the endpoint configuration with the desired serverless configuration. You can specify the memory size and maximum number of concurrent invocations. SageMaker Serverless Inference auto-assigns compute resources proportional to the memory you select. If you choose a larger memory size, your container has access to more vCPUs. As a general rule of thumb, the memory size should be at least as large as your model size. The memory sizes you can choose are 1024 MB, 2048 MB, 3072 MB, 4096 MB, 5120 MB, and 6144 MB. For my RoBERTa model, let’s configure a memory size of 5120 MB and a maximum of five concurrent invocations.

import sagemaker
from sagemaker.serverless import ServerlessInferenceConfig

serverless_config = ServerlessInferenceConfig(
	memory_size_in_mb=5120, 
	max_concurrency=5
)

Now let’s deploy the model. You can use the estimator.deploy() method to deploy the model directly from the SageMaker training estimator, together with the serverless inference endpoint configuration. I also provide my custom inference code in this example.


endpoint_name="roberta-womens-clothing-serverless-1"

estimator.deploy(
	endpoint_name = endpoint_name, 
	entry_point="inference.py",
	serverless_inference_config=serverless_config
)

SageMaker Serverless Inference also supports model registry when you use the AWS SDK for Python (Boto3). I will show you how to deploy the model from the model registry later in this post.

Let’s check the serverless inference endpoint settings and deployment status. Go to the SageMaker console and browse to the deployed inference endpoint:

Review Amazon SageMaker Serverless Endpoint configuration in the SageMaker Console

From the SageMaker console, you can also create, update, or delete serverless inference endpoints if needed. In Amazon SageMaker Studio, select the endpoint tab and your serverless inference endpoint to review the endpoint configuration details.

Review Amazon SageMaker Serverless Endpoint configuration in SageMaker Studio

Once the endpoint status shows InService, you can start sending inference requests.

Now, let’s run a few sample predictions. My fine-tuned RoBERTa model expects the inference requests in JSON Lines format with the review text to classify as the input feature. A JSON Lines text file comprises several lines where each individual line is a valid JSON object, delimited by a newline character. This is an ideal format for storing data that is processed one record at a time, such as in model inference. You can learn more about JSON Lines and other common data formats for inference in the Amazon SageMaker Developer Guide. Note that the following code might look different depending on your model’s accepted inference request format.
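
For illustration, two reviews serialized in this format look like the following (one JSON object per line):

{"features": ["I love this product!"]}
{"features": ["OK, but not great."]}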


import boto3
import sagemaker
from sagemaker.predictor import Predictor
from sagemaker.serializers import JSONLinesSerializer
from sagemaker.deserializers import JSONLinesDeserializer

# Create a SageMaker session backed by a Boto3 SageMaker client
sm = boto3.client("sagemaker")
sess = sagemaker.Session(sagemaker_client=sm)

inputs = [
    {"features": ["I love this product!"]},
    {"features": ["OK, but not great."]},
    {"features": ["This is not the right product."]},
]

predictor = Predictor(
    endpoint_name=endpoint_name,
    serializer=JSONLinesSerializer(),
    deserializer=JSONLinesDeserializer(),
    sagemaker_session=sess
)

predicted_classes = predictor.predict(inputs)

for predicted_class in predicted_classes:
    print("Predicted class {} with probability {}".format(predicted_class['predicted_label'], predicted_class['probability']))

The result will look similar to this, classifying the sample reviews into the corresponding sentiment classes.


Predicted class 1 with probability 0.9495596289634705
Predicted class 0 with probability 0.5395089387893677
Predicted class -1 with probability 0.7887083292007446

You can also deploy your model from the model registry to a SageMaker Serverless Inference endpoint. This is currently only supported through the AWS SDK for Python (Boto3). Let me walk you through another quick demo.

Deploy Model from the SageMaker Model Registry
To deploy the model from the model registry using Boto3, let’s first create a model object from the model version by calling the create_model() method. Then, I pass the Amazon Resource Name (ARN) of the model version as part of the containers for the model object.

import boto3
import sagemaker

sm = boto3.client(service_name='sagemaker')
role = sagemaker.get_execution_role()
model_name="roberta-womens-clothing-serverless"

container_list = [{'ModelPackageName': <MODEL_PACKAGE_ARN>}]

create_model_response = sm.create_model(
    ModelName = model_name,
    ExecutionRoleArn = role,
    Containers = container_list
)

Next, I create the serverless inference endpoint. Remember that you can create, update, describe, and delete a serverless inference endpoint using the SageMaker console, the AWS SDKs, the SageMaker Python SDK, the AWS CLI, or AWS CloudFormation. For consistency, I keep using Boto3 in this second example.

Similar to the first example, I start by creating the endpoint configuration with the desired serverless configuration. I specify the memory size of 5120 MB and a maximum number of five concurrent invocations for my endpoint.

endpoint_config_name="roberta-womens-clothing-serverless-ep-config"

create_endpoint_config_response = sm.create_endpoint_config(
    EndpointConfigName = endpoint_config_name,
    ProductionVariants=[{
        'ServerlessConfig':{
            'MemorySizeInMB' : 5120,
            'MaxConcurrency' : 5
        },
        'ModelName':model_name,
        'VariantName':'AllTraffic'}])

Next, I create the SageMaker Serverless Inference endpoint by calling the create_endpoint() method.


endpoint_name="roberta-womens-clothing-serverless-2"

create_endpoint_response = sm.create_endpoint(
    EndpointName=endpoint_name,
    EndpointConfigName=endpoint_config_name)
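
Creating the endpoint is asynchronous. One way to wait for it programmatically is the endpoint_in_service waiter in Boto3:

# Block until the endpoint has been created and is ready to serve traffic
waiter = sm.get_waiter("endpoint_in_service")
waiter.wait(EndpointName=endpoint_name)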

Once the endpoint status shows InService, you can start sending inference requests. Again, for consistency, I choose to run the sample prediction using Boto3 and the SageMaker runtime client invoke_endpoint() method.

sm_runtime = boto3.client("sagemaker-runtime")
response = sm_runtime.invoke_endpoint(
    EndpointName=endpoint_name,
    ContentType="application/jsonlines",
    Accept="application/jsonlines",
    Body=bytes('{"features": ["I love this product!"]}', 'utf-8')
)

print(response['Body'].read().decode('utf-8'))
{"probability": 0.966135561466217, "predicted_label": 1}

How to Optimize Your Model for SageMaker Serverless Inference
SageMaker Serverless Inference automatically scales the underlying compute resources to process requests. If the endpoint does not receive traffic for a while, it scales down the compute resources. If the endpoint suddenly receives new requests, you might notice that it takes some time for the endpoint to scale up the compute resources to process the requests.

This cold-start time greatly depends on your model size and the start-up time of your container. To optimize cold-start times, you can try to minimize the size of your model, for example, by applying techniques such as knowledge distillation, quantization, or model pruning.

Knowledge distillation uses a larger model (the teacher model) to train smaller models (student models) to solve the same task. Quantization reduces the precision of the numbers representing your model parameters from 32-bit floating-point numbers down to either 16-bit floating-point or 8-bit integers. Model pruning removes redundant model parameters that contribute little to the training process.
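
As a rough illustration of one of these techniques, here is a sketch of post-training dynamic quantization with PyTorch and Hugging Face Transformers; the model path is a placeholder, and this is not necessarily the optimization applied to the model in this post.

import torch
from transformers import AutoModelForSequenceClassification

# Load a fine-tuned classifier (the path is a placeholder) and quantize its
# linear layers from 32-bit floats to 8-bit integers to shrink the artifact
model = AutoModelForSequenceClassification.from_pretrained("<MODEL_DIR>")

quantized_model = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

torch.save(quantized_model.state_dict(), "model_quantized.pt")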

Availability and Pricing
Amazon SageMaker Serverless Inference is now available in all the AWS Regions where Amazon SageMaker is available except for the AWS GovCloud (US) and AWS China Regions.

With SageMaker Serverless Inference, you only pay for the compute capacity used to process inference requests, billed by the millisecond, and the amount of data processed. The compute capacity charge also depends on the memory configuration you choose. For detailed pricing information, visit the SageMaker pricing page.

Get Started Today with Amazon SageMaker Serverless Inference
To learn more about Amazon SageMaker Serverless Inference, visit the Amazon SageMaker machine learning inference webpage. Here are SageMaker Serverless Inference example notebooks that will help you get started right away. Give them a try from the SageMaker console, and let us know what you think.

Antje

AWS Week in Review – April 18, 2022

Post Syndicated from Antje Barth original https://aws.amazon.com/blogs/aws/aws-week-in-review-april-18-2022/

This post is part of our Week in Review series. Check back each week for a quick roundup of interesting news and announcements from AWS!

Here we are with another roundup of the most significant AWS launches from the previous week. Among the news, we have a new deployment option for Amazon FSx for NetApp ONTAP, performance and scaling improvements in AWS Fargate, and an update on the AWS AI & ML Scholarship program.

Last Week’s Launches
Here are some launches that caught my attention last week:

Amazon FSx for NetApp ONTAP introduces a single Availability Zone (AZ) deployment option – Amazon FSx for NetApp ONTAP allows you to launch and run fully managed ONTAP file systems in the cloud. With the new single-AZ deployment option, you can now implement use cases that need storage replicated within an Availability Zone but do not require resiliency across AZs, such as development and test workloads or storing secondary copies of data already stored on-premises or in other AWS Regions. Check out Jeff’s launch blog post to learn more.

Amazon FSx for NetApp ONTAP - Single AZ Deployment
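
If you prefer to create a single-AZ file system from code, a minimal sketch using the AWS SDK for Python (Boto3) might look like the following; the subnet ID and sizing values are placeholders.

import boto3

fsx = boto3.client("fsx")

# Create a single-AZ ONTAP file system; subnet ID and sizing are placeholders
fsx.create_file_system(
    FileSystemType="ONTAP",
    StorageCapacity=1024,           # GiB
    SubnetIds=["<SUBNET_ID>"],
    OntapConfiguration={
        "DeploymentType": "SINGLE_AZ_1",
        "ThroughputCapacity": 128,  # MBps
    },
)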

AWS Fargate now delivers faster scaling of applications – AWS Fargate is a serverless compute engine for containers that works with both Amazon Elastic Container Service (Amazon ECS) and Amazon Elastic Kubernetes Service (Amazon EKS). The team has made several improvements over the last year that enable you to scale applications up to 16X faster, making it easier to build and run applications at a larger scale on Fargate. Check out Nathan’s blog post to learn more.

AWS Fargate now delivers faster scaling of applications

AWS AI & ML Scholarship Program opens applications for underrepresented and underserved students – You can now apply for the AWS AI & ML Scholarship Program that will launch this summer. The scholarship program aims to help underserved and underrepresented high school and college students learn foundational ML concepts to prepare them for careers in AI and ML. The program uses AWS DeepRacer Student to teach foundational ML concepts, offer hands-on learning, and track scholarship prerequisites. Check out Anastacia’s blog post for more information and how to apply.

Apply for the AWS AI & ML Scholarship Program through AWS DeepRacer Student

AWS App Runner launches AWS X-Ray support – AWS App Runner is a fully managed service that developers can use to quickly deploy containerized web applications and APIs at scale with little to no infrastructure experience. App Runner now supports tracing as part of its observability suite. You can trace your containerized applications in AWS X-Ray by instrumenting applications with the AWS Distro for OpenTelemetry (ADOT). Check out Yiming’s blog post for more information.

For a full list of AWS announcements, be sure to keep an eye on the What’s New at AWS page.

Other AWS News
Here are some additional news items and a blog post that caught my attention:

AWS Open-Source News and Updates – My colleague Ricardo Sueiras writes this weekly open-source newsletter in which he highlights new open-source projects, tools, and demos from the AWS Community. Read edition #108 here.

Scheduling Jupyter Notebooks with AWS Orbit Workbench – In this blog post, Olalekan Elesin, Head of Data Platform & Data Architect at HRS Group and AWS Machine Learning Hero, describes how the HRS Group is scheduling Jupyter Notebooks with AWS Orbit Workbench. AWS Orbit Workbench is an open-source framework that provides a single, unified experience for your data, analytics and machine learning projects. Check out Olalekan’s blog post to learn more.

Upcoming AWS Events
Check your calendars and sign up for these AWS events:

The AWS Summit season is in full swing – The next AWS Summits are taking place in San Francisco (on April 20-21), London (on April 27), Madrid (on May 4-5), and Korea (online, on May 10-11). AWS Global Summits are free events that bring the cloud computing community together to connect, collaborate, and learn about AWS. Summits are held in major cities around the world. Besides in-person summits, we also offer a series of online summits across the regions. Find an AWS Summit near you, and get notified when registration opens in your area.

.NET Enterprise Developer Day EMEA – .NET Enterprise Developer Day EMEA 2022 is a free, one-day virtual conference providing enterprise developers with the most relevant information to swiftly and efficiently migrate and modernize their .NET applications and workloads on AWS. It takes place online on April 26. Attendees can also opt in to attend the free, virtual DeveloperWeek Europe event, taking place April 27-28.

AWS Innovate – Data Edition Americas – AWS Innovate Online Conference – Data Edition is a free virtual event designed to inspire and empower you to make better decisions and innovate faster with your data. You will learn about key concepts, business use cases, and best practices from AWS experts in over 30 technical and business sessions. This event takes place on May 11.

That’s all for this week. Check back next Monday for another Week in Review!

Antje

New AWS Scholarship Program Helps Underrepresented and Underserved Students Prep for Careers in AI and ML

Post Syndicated from Antje Barth original https://aws.amazon.com/blogs/aws/new-aws-scholarship-program-helps-underrepresented-and-underserved-students-prep-for-careers-in-ai-and-ml/

As a woman working in information technology (IT) for many years, it has always been close to my heart to challenge long-standing gender stereotypes and inspire more young learners to consider a career in tech. With artificial intelligence (AI) and machine learning (ML) defining the future of technology, this future also depends on diverse representation.

The World Economic Forum estimates that technological advances and automation will create 97 million new technology jobs by 2025, including in the field of AI and ML. Yet, according to their research, women make up just 32% of AI jobs globally. The Pew Research Center found that Black and Hispanic workers in the U.S. comprise just 9% and 8% of workers in science, technology, engineering, and mathematics (STEM) careers, respectively.

At Amazon, we believe that technology should be built in a way that’s inclusive, diverse, and equitable. We already offer STEM programs to enable underrepresented groups to build or advance their technical skills and open up new career possibilities. Programs such as We Power Tech, Amazon Future Engineer, AWS Girls’ Tech Day, and AWS GetIT aim to build a future of tech that is inclusive, diverse, and accessible.

Today, I am excited to announce the launch of the AWS AI & ML Scholarship program in collaboration with Intel and Udacity, designed to prepare underrepresented and underserved students globally for careers in ML.

AWS AI & ML Scholarship Program Debuts with AWS DeepRacer Student
The AWS AI & ML Scholarship program is launching as part of the all-new AWS DeepRacer Student service and Student League. This is a new student division of the popular AWS DeepRacer program, a cloud-based 3D car racing simulator that provides a fun way to learn about ML and reinforcement learning (RL). Through DeepRacer Student, you have access to free online trainings to learn the ML and RL basics. You also have access to 10 hours of model training and 5 GB of storage per month to participate in the DeepRacer Student League, a global autonomous racing competition exclusively for AWS AI & ML students.

As part of DeepRacer Student, the AWS AI & ML Scholarship program in collaboration with Intel and Udacity is geared toward underserved and underrepresented high school and college students globally who are at least 16 years old. These are students who may have faced financial barriers growing up or are part of underrepresented groups, including women, people with disabilities, LGBTQ+ persons, as well as people of color. Students in these groups who take part in the DeepRacer Student League are eligible for a chance to win one or two of 2,500 annual scholarships from Udacity, an online learning platform focused on technology skills.

How to Qualify for the AWS AI & ML Scholarship
To be considered for a scholarship, you must successfully finish all AWS DeepRacer Student learning modules and achieve a score of at least 80% on all course quizzes, reach a certain lap time performance with your DeepRacer car in the Student League, and submit an essay.

Each year, 2,000 students will win a scholarship to the AI Programming with Python Udacity Nanodegree program. Udacity Nanodegrees are massive open online courses (MOOCs) designed to bridge the gap between learning and career goals. The program aims to equip students with programming and ML fundamentals to solve real-world problems with ML. The top 500 participants in this first Nanodegree will be eligible to join a second, customized Nanodegree program curated specifically for AWS AI & ML Scholarship program recipients.

Coaching and Mentorship Opportunities for Scholarship Recipients
Scholarship recipients not only get access to the educational content, but also receive up to 85 hours of support from Udacity instructors to make sure that students successfully learn and progress through the Nanodegree. Instructors and students meet weekly in small groups, review the content for the week, answer questions, and work on a case study as a group exercise.

The top 500 participants who advance into the second Nanodegree will receive 12 months of mentorship from tenured technology leaders at Amazon and Intel to help prepare for a tech career.

All scholarship recipients will be given exclusive access to Ask-Me-Anythings (AMAs), fireside chats, and office hours with AI/ML professionals and diversity experts from Amazon, Intel, and AWS collaborators such as Girls in Tech. These will help familiarize students with different job functions in the AI/ML field.

Enroll via AWS DeepRacer Student
To enroll in the AWS AI & ML Scholarship program, first sign up at the AWS DeepRacer Student service with a valid email address. Note that this student player account is separate from an AWS account and doesn’t require you to provide any billing or credit card information.

AWS DeepRacer Student Sign Up

You will receive a verification code via email. Once you have verified your email address, log in to the DeepRacer Student service and complete the sign-up process.

AWS DeepRacer Student Complete Sign Up

The form will ask for some additional information, including your school, major, and planned year of graduation. You must also self-certify that you are a student enrolled in either high school, university, or community college.

AWS DeepRacer Student Complete Sign Up with Personal Information

Next, enroll in the AWS AI & ML Scholarship program by selecting the corresponding checkbox. The scholarship qualification will start on March 1, 2022.

AWS DeepRacer Student Opt In To Scholarship

Welcome to AWS DeepRacer Student! Start learning the fundamentals of ML through the provided online trainings. You can also participate in the pre-season DeepRacer Student League before the qualifying races start on March 1, 2022.

AWS DeepRacer Student Dashboard

Available Now
Start preparing for the AWS AI & ML Scholarship program by signing up for AWS DeepRacer Student, available globally today, and opt in to the scholarship program. Work through the learning modules, train your first AWS DeepRacer model, and see how it performs in the AWS DeepRacer Student League Pre-season, happening now. Join the AWS Slack community to connect with experts and ask questions.

Sign up for AWS DeepRacer Student and the AWS AI & ML Scholarship program today.

Antje

Now in Preview – Amazon SageMaker Studio Lab, a Free Service to Learn and Experiment with ML

Post Syndicated from Antje Barth original https://aws.amazon.com/blogs/aws/now-in-preview-amazon-sagemaker-studio-lab-a-free-service-to-learn-and-experiment-with-ml/

Our mission at AWS is to make machine learning (ML) more accessible. Through many conversations over the past years, I learned about barriers that many ML beginners face. Existing ML environments are often too complex for beginners, or too limited to support modern ML experimentation. Beginners want to quickly start learning and not worry about spinning up infrastructure, configuring services, or implementing billing alarms to avoid going over budget. This emphasizes another barrier for many people: the need to provide billing and credit card information at sign-up.

What if you could have a predictable and controlled environment for hosting your Jupyter notebooks in which you can’t accidentally run up a big bill? One that doesn’t require billing and credit card information at all at sign-up?

Today, I am extremely happy to announce the public preview of Amazon SageMaker Studio Lab, a free service that enables anyone to learn and experiment with ML without needing an AWS account, credit card, or cloud configuration knowledge.

At AWS, we believe technology has the power to solve the world’s most pressing issues. And, we proudly support the new and innovative ways that our customers are using these technologies to deliver social impacts.

This is why I am also excited to announce the launch of the AWS Disaster Response Hackathon using Amazon SageMaker Studio Lab. The hackathon, starting today and running through February 7, 2022, is a great way to start learning ML while doing good in the world. I will share more details on how to get involved at the end of the post.

Getting Started with Amazon SageMaker Studio Lab
Studio Lab is based on open-source JupyterLab and gives you free access to AWS compute resources to quickly start learning and experimenting with ML. Studio Lab is also simple to set up. In fact, the only configuration you have to do is one click to choose whether you need a CPU or GPU instance for your project. Let me show you.

The first step is to request a free Studio Lab account here.

Amazon SageMaker Studio Lab

When your account request is approved, you will receive an email with a link to the Studio Lab account registration page. You can now create your account with your approved email address and set a password and your username. This account is separate from an AWS account and doesn’t require you to provide any billing information.

Amazon SageMaker Studio Lab - Create Account

Once you have created your account and verified your email address, you can sign in to Studio Lab. Now, you can select the compute type for your project. You can choose between 12 hours of CPU or 4 hours of GPU per user session, with an unlimited number of user sessions available to you. Furthermore, you get a minimum of 15 GB of persistent storage per project. When your session expires, Studio Lab will take a snapshot of your environment. This enables you to pick up right where you left off. Let’s select CPU for this demo, and choose Start runtime.

Amazon SageMaker Studio Lab - Select Compute

Once the instance is running, select Open project to go to your free Studio Lab environment and start building. No additional configuration is required.

Amazon SageMaker Studio Lab - Open Project

Amazon SageMaker Studio Lab Environment

Customize your environment
Studio Lab comes with a Python base image to get you started. The image only has a few libraries pre-installed to save the available space for the frameworks and libraries that you actually need.

Amazon SageMaker Studio Lab - Select Kernel

You can customize the Conda environment and install additional packages using the %conda install <package> or %pip install <package> command right from within your notebook. You can also create entirely new, custom Conda environments, or install open-source JupyterLab and Jupyter Server extensions. For detailed instructions, see the Studio Lab documentation.
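
For example, a notebook cell that installs additional packages (the packages here are just examples) could look like this:

# Install an additional package into the active environment with pip
%pip install transformers

# Or install a package with Conda, optionally from a specific channel
%conda install -c conda-forge scikit-learn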

GitHub integration
Studio Lab is tightly integrated with GitHub and offers full support for the Git command line. This lets you easily clone, copy, and save your projects. Moreover, you can add an Open in Studio Lab badge to the README.md file or notebooks in your public GitHub repo to share your work with others.

Open in Amazon SageMaker Studio Lab Badge

This will let everyone open and view the notebook in Studio Lab. If they have a Studio Lab account, then they can also run the notebook. Add the following markdown to the top of your README.md file or notebook to add the Open in Studio Lab badge:

[![Open In Studio Lab](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/org/repo/blob/master/path/to/notebook.ipynb)

Replace org, repo, path and the notebook filename with those for your repo. Then, when you click the Open in Studio Lab badge, it will preview the notebook in Studio Lab. If your repo is private within a GitHub account or organization and you would like other people to use it, then you must additionally install the Amazon SageMaker GitHub App at the GitHub account or organization level.
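
For example, with a hypothetical organization, repository, branch, and notebook path filled in, the badge markdown would look like this:

[![Open In Studio Lab](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/my-org/my-repo/blob/main/notebooks/getting_started.ipynb)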

Amazon SageMaker Studio Lab Notebook Preview

If you have a Studio Lab account, you can click Copy to project and choose to either copy just the notebook or to clone the entire repo into your Studio Lab account. Moreover, Studio Lab can check if the repository contains a Conda environment file and build the custom Conda environment for you.

Learn the Fundamentals of ML
If you are new to ML, then Studio Lab provides access to free educational content to get you started. Dive into Deep Learning (D2L) is a free interactive book that teaches the ideas, the math, and the code behind ML and DL. The AWS Machine Learning University (MLU) gives you access to the same ML courses used to train Amazon’s own developers on ML. Hugging Face is a large open-source community and a hub for pre-trained deep learning (DL) models, aimed mainly at natural language processing. In just a few clicks, you can import the relevant notebooks from D2L, MLU, and Hugging Face into your Studio Lab environment.

Join the AWS Disaster Response Hackathon using Amazon SageMaker Studio Lab
The frequency and severity of natural disasters are increasing. This year alone, we have seen significant wildfires across the Western United States and in countries like Greece and Turkey; major floods across Europe; and Hurricane Ida’s impact to the coast of Louisiana. In response, governments, businesses, nonprofits, and international organizations are placing more emphasis on disaster preparedness and response than ever before.

AWS Disaster Response Hackathon

Through the AWS Disaster Response Hackathon, which offers a total of $54,000 USD in prizes, we hope to stimulate ways of applying ML to solve pressing challenges in natural disaster preparedness and response.

Join the hackathon today, start building, and don’t forget to submit your project before February 7, 2022. This hackathon is also an attempt to set the Guinness World Record for the “largest machine learning competition.”

Join the Preview
You can request a free Amazon SageMaker Studio Lab account starting today. The number of new account registrations will be limited to ensure a high quality of experience for all customers. You can find sample notebooks in the Studio Lab GitHub repository. Give it a try and let us know your feedback.

Request a free Amazon SageMaker Studio Lab account.

Antje