Tag Archives: AWS DeepLens

Amazon Kinesis Video Streams Adds Support For HLS Output Streams

Post Syndicated from Randall Hunt original https://aws.amazon.com/blogs/aws/amazon-kinesis-video-streams-adds-support-for-hls-output-streams/

Today I’m excited to announce and demonstrate the new HTTP Live Streaming (HLS) output feature for Amazon Kinesis Video Streams (KVS). If you’re not already familiar with KVS, Jeff covered the launch at AWS re:Invent 2017. In short, Amazon Kinesis Video Streams is a service for securely capturing, processing, and storing video for analytics and machine learning – from one device or millions. Customers are using Kinesis Video Streams with machine learning algorithms to power everything from home automation and smart cities to industrial automation and security.

After iterating on customer feedback, we’ve launched a number of features in the past few months, including a plugin for GStreamer, the popular open-source multimedia framework, and Docker containers that make it easy to start streaming video to Kinesis Video Streams. We could talk about each of those features at length, but today is all about the new HLS output feature! Fair warning, there are a few pictures of my incredibly messy office in this post.

HLS output is a convenient new feature that lets customers create HLS endpoints for their Kinesis Video Streams, which is handy for building custom UIs and tools that can play back live and on-demand video. The HLS-based playback capability is fully managed, so you don’t have to build any infrastructure to transmux the incoming media. You simply create a new streaming session (up to five at a time, for now) with the new GetHLSStreamingSessionURL API and you’re off to the races. The great thing about HLS is that it’s already an industry standard and easy to use with existing web players like JW Player, hls.js, Video.js, and Google’s Shaka Player, or to render natively in mobile apps with Android’s ExoPlayer and iOS’s AVFoundation. Let’s take a quick look at the API; feel free to skip to the walk-through below as well.

Kinesis Video HLS Output API

The documentation covers this in more detail than we can go into here on the blog, but I’ll outline the broad steps:

  1. Get an endpoint with the GetDataEndpoint API
  2. Use that endpoint to get an HLS streaming URL with the GetHLSStreamingSessionURL API
  3. Render the content in the HLS URL with whatever tools you want!

This is pretty easy in a Jupyter notebook with a quick bit of Python and boto3.

import boto3
STREAM_NAME = "RandallDeepLens"
kvs = boto3.client("kinesisvideo")
# Grab the endpoint from GetDataEndpoint
endpoint = kvs.get_data_endpoint(
    APIName="GET_HLS_STREAMING_SESSION_URL",
    StreamName=STREAM_NAME
)['DataEndpoint']
# Grab the HLS Stream URL from the endpoint
kvam = boto3.client("kinesis-video-archived-media", endpoint_url=endpoint)
url = kvam.get_hls_streaming_session_url(
    StreamName=STREAM_NAME,
    PlaybackMode="LIVE"
)['HLSStreamingSessionURL']
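
If you want a quick sanity check before wiring that URL into a player, you can fetch the master playlist directly. This is just an optional sketch and assumes the requests library is available in the notebook:

import requests
# The session URL points at an HLS master playlist (.m3u8); a 200 response
# with a playlist body means the session is up and serving media.
response = requests.get(url)
print(response.status_code)
print(response.text[:200])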

You can even visualize everything right away in Safari, which can render HLS streams natively:

from IPython.display import HTML
HTML(data='<video src="{0}" autoplay="autoplay" controls="controls" width="300" height="400"></video>'.format(url)) 

We can also stream directly from an AWS DeepLens with just a bit of code:

import DeepLens_Kinesis_Video as dkv
import time

aws_access_key = "super_fake"
aws_secret_key = "even_more_fake"
region = "us-east-1"
stream_name = "RandallDeepLens"
retention = 1  # Stream retention period, in minutes
wait_time_sec = 60 * 300  # How long to stream, in seconds (here, 300 minutes)

# The stream will be created if it does not already exist
producer = dkv.createProducer(aws_access_key, aws_secret_key, "", region)
my_stream = producer.createStream(stream_name, retention)
my_stream.start()
time.sleep(wait_time_sec)
my_stream.stop()

How to use Kinesis Video Streams HLS Output Streams

We definitely need a Kinesis Video Stream, which we can create easily in the Kinesis Video Streams Console.

Now, we need to get some content into the stream. We have a few options here. Perhaps the easiest is the Docker container. I decided to take the more adventurous route and compile the GStreamer plugin locally on my Mac, following the scripts on GitHub. Be warned, compiling this plugin takes a while and can cause your computer to transform into a space heater.

With freshly compiled GStreamer binaries like gst-launch-1.0 and the kvssink plugin, we can stream directly from my MacBook’s webcam, or any other GStreamer source, into Kinesis Video Streams. I just use the kvssink output plugin and my data winds up in the video stream. There are a few parameters to configure (the stream name, AWS Region, and on-device storage, among others), so pay attention.

Here’s an example command that I ran to stream my MacBook’s webcam to Kinesis Video Streams:

gst-launch-1.0 autovideosrc ! videoconvert \
! video/x-raw,format=I420,width=640,height=480,framerate=30/1 \
! vtenc_h264_hw allow-frame-reordering=FALSE realtime=TRUE max-keyframe-interval=45 bitrate=500 \
! h264parse \
! video/x-h264,stream-format=avc,alignment=au,width=640,height=480,framerate=30/1 \
! kvssink stream-name="BlogStream" storage-size=1024 aws-region=us-west-2 log-config=kvslog

Now that we’re streaming some data into Kinesis, I can use the getting started sample static website to test my HLS stream with a few different video players. I just fill in my AWS credentials and ask it to start playing. The GetHLSStreamingSessionURL API supports a number of parameters so you can play both on-demand segments and live streams from various timestamps.
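
As a rough sketch of the on-demand case, here’s how a request for a specific time window might look with boto3, continuing from the notebook snippet above (the kvam client and STREAM_NAME come from there; the ten-minute window and the Expires value are just illustrative assumptions):

from datetime import datetime, timedelta
# Request the last ten minutes of archived video as an on-demand HLS session.
end = datetime.utcnow()
start = end - timedelta(minutes=10)
on_demand_url = kvam.get_hls_streaming_session_url(
    StreamName=STREAM_NAME,
    PlaybackMode="ON_DEMAND",
    HLSFragmentSelector={
        "FragmentSelectorType": "SERVER_TIMESTAMP",
        "TimestampRange": {"StartTimestamp": start, "EndTimestamp": end}
    },
    Expires=3600  # keep the session URL valid for an hour
)['HLSStreamingSessionURL']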

Additional Info

Data consumed from Kinesis Video Streams using HLS is charged at $0.0119 per GB in US East (N. Virginia) and US West (Oregon); pricing for other regions is available on the service pricing page. This feature is available now in all regions where Kinesis Video Streams is available.

The Kinesis Video team told me they’re working hard on deeper integration with AWS Media Services, like MediaLive, which will make it easier to serve Kinesis Video Streams content to larger audiences.

As always, let us know what you think on Twitter or in the comments. I’ve had a ton of fun playing around with this feature over the past few days and I’m excited to see customers build some new tools with it!

Randall

DeepLens Challenge #1 Starts Today – Use Machine Learning to Drive Inclusion

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/deeplens-challenge-1-starts-today-use-machine-learning-to-drive-inclusion/

Are you ready to develop and show off your machine learning skills in a way that has a positive impact on the world? If so, get your hands on an AWS DeepLens video camera and join the AWS DeepLens Challenge!

About the Challenge
Working together with our friends at Intel, we are launching the first in a series of eight themed challenges today, all centered around improving the world in some way. Each challenge will run for two weeks and is designed to help you to get some hands-on experience with machine learning.

We will announce a fresh challenge every two weeks on the AWS Machine Learning Blog. Each challenge will have a real-world theme, a technical focus, a sample project, and a subject matter expert. You have 12 days to invent and implement a DeepLens project that resonates with the theme, and to submit a short, compelling video (four minutes or less) to represent and summarize your work.

We’re looking for cool submissions that resonate with the theme and that make great use of DeepLens. We will watch all of the videos and then share the most intriguing ones.

Challenge #1 – Inclusivity Challenge
The first challenge was inspired by the Special Olympics, which took place in Seattle last week. We invite you to use your DeepLens to create a project that drives inclusion, overcomes barriers, and strengthens the bonds between people of all abilities. You could gauge the physical accessibility of buildings, provide audio guidance using Polly for people with impaired sight, or create educational projects for children with learning disabilities. Any project that supports this theme is welcome.

For each project that meets the entry criteria, we will make a donation of $249 (the retail price of an AWS DeepLens) to the Northwest Center, a non-profit organization based in Seattle. This organization works to advance equal opportunities for children and adults of all abilities, and we are happy to be able to help them further their mission. Your work will directly benefit this very worthwhile goal!

As an example of what we are looking for, ASLens is a project created by Chris Coombs of Melbourne, Australia. It recognizes and understands American Sign Language (ASL) and plays the audio for each letter. Chris used Amazon SageMaker and Polly to implement ASLens (you can watch the video, learn more and read the code).

To learn more, visit the DeepLens Challenge page. Entries for the first challenge are due by midnight (PT) on July 22nd and I can’t wait to see what you come up with!

Jeff;

PS – The DeepLens Resources page is your gateway to tutorial videos, documentation, blog posts, and other helpful information.

AWS DeepLens Now Shipping – Order One Today!

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-deeplens-now-shipping-order-one-today/

AWS DeepLens is a video camera that runs deep learning models directly on the device, out in the field. I wrote about the hardware and system software in depth last year; here’s a quick recap:

Hardware – 4 megapixel camera (1080P video), 2D microphone array, Intel Atom® Processor, dual-band Wi-Fi, USB and micro HDMI ports, 8 GB of memory for models and code.

Software – Ubuntu 16.04, AWS Greengrass Core, device-optimized versions of MXNet and Intel® clDNN library, support for other deep learning frameworks.

The response to the announcement at AWS re:Invent was immediate and gratifying! Educators, students, and developers signed up for hands-on sessions and started to build and train models right away. Their enthusiasm continued throughout the preview period and into this year’s AWS Summit season, where we did our best to provide all interested parties with access to devices, tools, and training.

Hackathons and Challenges
We made DeepLens devices available to participants in last month’s HackTillDawn. I was fortunate enough to be able to attend the event and to help to choose the three winners. It was amazing to watch the teams, most with no previous machine learning or computer vision experience, dive right in and build interesting, sophisticated applications designed to enhance the attendee experience at large-scale music festivals. The three winners went on to compete at EDC Vegas, where the Grand Prize winner (Find Your Totem) was chosen. Congrats to the team, and have fun at EDC Orlando!

We also ran the AWS DeepLens Challenge, asking participants to build machine learning projects that made use of DeepLens, with bonus points for the use of Amazon SageMaker and/or AWS Lambda. The submissions were as diverse as they were interesting, with applications designed for children, adults, and animals. Details on all of the submissions, including demo videos and source code, are available on the Community Projects page. The three winning applications were ReadToMe (first place), Dee (second place), and SafeHaven (third place).

From what I can tell, DeepLens has proven itself as an excellent learning vehicle. While speaking to the attendees at HackTillDawn, I learned that many of them were eager to get some hands-on experience that they could use to broaden their skillsets and to help them to progress in their careers.

Preview Updates
During the preview period, the DeepLens team has stayed heads-down, focusing on making the device even more capable. Significant additions include:

Gluon Support – Computer vision models can be built using Gluon (an imperative interface to MXNet), trained, imported to DeepLens, and deployed; a rough sketch of this path follows the list below.

SageMaker Import – Models can be built and trained in Amazon SageMaker and then imported to DeepLens.

Model Optimizer – The optimizer runs on the device and optimizes downloaded MXNet models so that they run efficiently on the DeepLens GPU.
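
As a minimal illustration of the Gluon path mentioned above, here’s a sketch of the model-building and export side only (the DeepLens import step itself happens through the console and tooling). The tiny network and the "my-deeplens-model" prefix are purely illustrative stand-ins for a real trained computer vision model:

from mxnet import gluon, nd

# Toy stand-in for a trained computer vision model.
net = gluon.nn.HybridSequential()
net.add(gluon.nn.Conv2D(channels=16, kernel_size=3, activation="relu"),
        gluon.nn.GlobalAvgPool2D(),
        gluon.nn.Dense(2))
net.initialize()

# Hybridize and run one forward pass so the symbolic graph is built, then
# export it; this writes my-deeplens-model-symbol.json and -0000.params.
net.hybridize()
net(nd.zeros((1, 3, 224, 224)))
net.export("my-deeplens-model")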

Now Shipping
I am happy to report that DeepLens is now shipping and available to order from Amazon.com. You can get one of your very own and start building your own deep learning applications within days. Devices can be shipped to addresses in the United States, with additional destinations in the works.

We are also rounding out the initial feature set with the addition of some important new capabilities:

Expanded Framework Support – DeepLens now supports the TensorFlow and Caffe frameworks.

Expanded MXNet Layer Support – DeepLens now supports the Deconvolution, L2Normalization, and LRN layers provided by MXNet.

Kinesis Video Streams – The video stream from the DeepLens camera can now be used in conjunction with Amazon Kinesis Video Streams. You can stream the raw camera feed to the cloud and then use Amazon Rekognition Video to extract objects, faces, and content from the video; a rough sketch of this integration appears after the list below.

New Sample Project – DeepLens now includes a sample project for head pose detection (powered by TensorFlow). You can examine this sample to see how the model was constructed.
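
Circling back to the Kinesis Video Streams capability above, here’s a hedged sketch of pointing Amazon Rekognition Video at a DeepLens-fed stream with boto3 and a stream processor. This particular sketch uses the face search setting, and the processor name, ARNs, role, and collection ID are all placeholder assumptions:

import boto3

rekognition = boto3.client("rekognition")

# Placeholder ARNs, role, and collection; substitute your own resources.
rekognition.create_stream_processor(
    Name="deeplens-face-search",
    Input={"KinesisVideoStream": {
        "Arn": "arn:aws:kinesisvideo:us-east-1:123456789012:stream/RandallDeepLens/123"}},
    Output={"KinesisDataStream": {
        "Arn": "arn:aws:kinesis:us-east-1:123456789012:stream/deeplens-results"}},
    RoleArn="arn:aws:iam::123456789012:role/RekognitionKvsAccessRole",
    Settings={"FaceSearch": {"CollectionId": "my-collection",
                             "FaceMatchThreshold": 85.0}}
)

# Start analyzing the live feed; match records land in the Kinesis data stream.
rekognition.start_stream_processor(Name="deeplens-face-search")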

I am looking forward to seeing what you build with your very own DeepLens. Drop me a line and let me know!

Jeff;