Tag Archives: Amazon Rekognition

Things Go Better With Step Functions

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/things-go-better-with-step-functions/

I often give presentations on Amazon’s culture of innovation, and start out with a slide that features a revealing quote from Amazon founder Jeff Bezos:

I love to sit down with our customers to learn how we have empowered their creativity and helped them to pursue their dreams. Earlier this year I chatted with Patrick from The Coca-Cola Company in order to learn how they used AWS Step Functions and other AWS services to support the Coke.com Vending Pass program. This program includes drink rewards earned by purchasing products at vending machines equipped to support mobile payments using the Coca-Cola Vending Pass. Participants swipe their NFC-enabled phones to complete an Apple Pay or Android Pay purchase, identifying themselves to the vending machine and earning credit towards future free vending purchases in the process.

After the swipe, a combination of SNS topics and AWS Lambda functions initiated a pair of calls to some existing backend code to count the vending points and update the participant’s record. Unfortunately, the backend code was slow to react and had some timing dependencies, leading to missed updates that had the potential to confuse Vending Pass participants. The initial solution to this issue was very simple: modify the Lambda code to include a 90-second delay between the two calls. This solved the problem, but ate up compute time for no good reason (billing for the use of Lambda functions is based on the duration of the request, in 100 ms intervals).

In order to make their solution more cost-effective, the team turned to AWS Step Functions, building a very simple state machine. As I wrote in an earlier blog post, Step Functions coordinate the components of distributed applications and microservices at scale, using visual workflows that are easy to build.

Coke built a very simple state machine to simplify their business logic and reduce their costs. Yours can be equally simple, or it can make use of other Step Functions features such as sequential and parallel execution and the ability to make decisions and choose alternate states. The Coke state machine looks like this:

The FirstState and SecondState states (both Task states) call the appropriate Lambda functions, while Step Functions implements the 90-second delay (a Wait state). This modification simplified their logic and reduced their costs. Here’s how it all fits together:
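In Amazon States Language, a state machine along these lines might look like the following sketch (the Lambda function names and ARNs are placeholders, not Coke’s actual resources):

{
  "Comment": "Call the first function, wait 90 seconds, then call the second",
  "StartAt": "FirstState",
  "States": {
    "FirstState": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:123456789012:function:CountVendingPoints",
      "Next": "wait_90_seconds"
    },
    "wait_90_seconds": {
      "Type": "Wait",
      "Seconds": 90,
      "Next": "SecondState"
    },
    "SecondState": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:123456789012:function:UpdateParticipantRecord",
      "End": true
    }
  }
}

The Wait state replaces the in-function sleep, so neither Lambda function is billed for the 90 seconds of idle time.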

 

What’s Next
This initial success led them to take a closer look at serverless computing and to consider using it for other projects. Patrick told me that they have already seen a boost in productivity and developer happiness. Developers no longer need to wait for servers to be provisioned, and can now (as Jeff says) unleash their creativity and pursue their dreams. They expect to use Step Functions to improve the scalability, functionality, and reliability of their applications, going far beyond the initial use for the Coca-Cola Vending Pass. For example, Coke has built a serverless solution for publishing nutrition information to their food service partners using Lambda, Step Functions, and API Gateway.

Patrick and his team are now experimenting with machine learning and artificial intelligence. They built a prototype application to analyze a stream of photos from Instagram and extract trends in tastes and flavors. The application (built as a quick, one-day prototype) made use of Lambda, Amazon DynamoDB, Amazon API Gateway, and Amazon Rekognition and was, in Patrick’s words, a “big win and an enabler.”

In order to build serverless applications even more quickly, the development team has created an internal CI/CD reference architecture that builds on the Serverless Application Framework. The architecture includes a guided tour of Serverless and some boilerplate code to access internal services and assets. Patrick told me that this model allows them to easily scale promising projects from “a guy with a computer” to an entire development team.

Patrick will be on stage at AWS re:Invent next to my colleague Tim Bray. To meet them in person, be sure to attend SRV306 – State Machines in the Wild! How Customers Use AWS Step Functions.

Jeff;

AWS Hot Startups – August 2017

Post Syndicated from Tina Barr original https://aws.amazon.com/blogs/aws/aws-hot-startups-august-2017/

There’s no doubt about it – Artificial Intelligence is changing the world and how it operates. Across industries, organizations from startups to Fortune 500s are embracing AI to develop new products, services, and opportunities that are more efficient and accessible for their consumers. From driverless cars to better preventative healthcare to smart home devices, AI is driving innovation at a fast rate and will continue to play a more important role in our everyday lives.

This month we’d like to highlight startups using AI solutions to help companies grow. We are pleased to feature:

  • SignalBox – a simple and accessible deep learning platform to help businesses get started with AI.
  • Valossa – an AI video recognition platform for the media and entertainment industry.
  • Kaliber – innovative applications for businesses using facial recognition, deep learning, and big data.

SignalBox (UK)

In 2016, SignalBox founder Alain Richardt was hearing the same comment from developers, data scientists, and business leaders: they wanted to get into deep learning but didn’t know where to start. Alain saw an opportunity to commodify and apply deep learning by providing a platform that does the heavy lifting, with an easy-to-use web interface, blueprints for common tasks, and a single click to productize models. With SignalBox, companies can start building deep learning models with no coding at all – they just select a data set, choose a network architecture, and go. SignalBox also offers step-by-step tutorials, tips and tricks from industry experts, and consulting services for customers that want an end-to-end AI solution.

SignalBox offers a variety of solutions that are being used across many industries for energy modeling, fraud detection, customer segmentation, insurance risk modeling, inventory prediction, real estate prediction, and more. Existing data science teams are using SignalBox to accelerate their innovation cycle. One innovative UK startup, Energi Mine, recently worked with SignalBox to develop deep networks that predict anomalous energy consumption patterns and do time series predictions on energy usage for businesses with hundreds of sites.

SignalBox uses a variety of AWS services, including Amazon EC2, Amazon VPC, Amazon Elastic Block Store, and Amazon S3. The ability to rapidly provision EC2 GPU instances has been a critical factor in their success, both in keeping operational expenses low and in getting to market quickly. Amazon API Gateway has enabled operational automation, giving SignalBox the ability to control its infrastructure.

To learn more about SignalBox, visit here.

Valossa (Finland)

As students at the University of Oulu in Finland, the Valossa founders spent years doing research in the computer science and AI labs. During that time, the team witnessed how the world was moving beyond text, with video playing a greater role in day-to-day communication. This spawned an idea to use technology to automatically understand what an audience is viewing and share that information with a global network of content producers. Since 2015, Valossa has been building next generation AI applications to benefit the media and entertainment industry and is moving beyond the capabilities of traditional visual recognition systems.

Valossa’s AI can analyze any video stream. It studies a vast array of data within videos and automatically converts that information into descriptive tags, categories, and overviews. Basically, it sees, hears, and understands videos the way a human does. The Valossa AI can detect people, visual and auditory concepts, and key speech elements, and it labels explicit content to make moderating and filtering content simpler. Valossa’s solutions are designed to provide value throughout the content production workflow, from media asset management to end-user applications for content discovery. AI-annotated content allows online viewers to jump directly to their favorite scenes or to search for specific topics and actors within a video.

Valossa leverages AWS to deliver the industry’s first complete AI video recognition platform. Using Amazon EC2 GPU instances, Valossa can easily scale their computation capacity based on customer activity. High-volume video processing with GPU instances provides the necessary speed for time-sensitive workflows. The geo-located Availability Zones in EC2 allow Valossa to bring resources close to their customers to minimize network delays. Valossa also uses Amazon S3 for video ingestion and to provide end-user video analytics, which makes managing and accessing media data easy and highly scalable.

To see how Valossa works, check out www.WhatIsMyMovie.com or enable the Alexa Skill, Valossa Movie Finder. To try the Valossa AI, sign up for free at www.valossa.com.

Kaliber (San Francisco, CA)

Serial entrepreneurs Ray Rahman and Risto Haukioja founded Kaliber in 2016. The pair had previously worked in startups building smart cities and online privacy tools, and teamed up to bring AI to the workplace and change the hospitality industry. Our world is designed to appeal to our senses – stores and warehouses have clearly marked aisles, products are colorfully packaged, and we use these designs to differentiate one thing from another. We tell each other apart by our faces, and previously that was something only humans could measure or act upon. Kaliber is using facial recognition, deep learning, and big data to create solutions for business use. Markets and companies that aren’t typically associated with cutting-edge technology will be able to use their existing camera infrastructure in a whole new way, making them more efficient and better able to serve their customers.

Computer video processing is rapidly expanding, and Kaliber believes that video recognition will extend to far more than security cameras and robots. Using the clients’ network of in-house cameras, Kaliber’s platform extracts key data points and maps them to actionable insights using their machine learning (ML) algorithm. Dashboards connect users to the client’s BI tools via the Kaliber enterprise APIs, and managers can view these analytics to improve their real-world processes, taking immediate corrective action with real-time alerts. Kaliber’s Real Metrics are aimed at combining the power of image recognition with ML to ultimately provide a more meaningful experience for all.

Kaliber uses many AWS services, including Amazon Rekognition, Amazon Kinesis, AWS Lambda, Amazon EC2 GPU instances, and Amazon S3. These services have been instrumental in helping Kaliber meet the needs of enterprise customers in record time.

Learn more about Kaliber here.

Thanks for reading and we’ll see you next month!

-Tina

 

AWS GovCloud (US) and Amazon Rekognition – A Powerful Public Safety Tool

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-govcloud-us-and-amazon-rekognition-a-powerful-public-safety-tool/

I’ve already told you about Amazon Rekognition and described how it uses deep neural network models to analyze images by detecting objects, scenes, and faces.

Today I am happy to tell you that Rekognition is now available in the AWS GovCloud (US) Region. To learn more, read the Amazon Rekognition FAQ, and the Amazon Rekognition Product Details, review the Amazon Rekognition Customer Use Cases, and then build your app using the information on the Amazon Rekognition for Developers page.

Motorola Solutions for Public Safety
While I have your attention, I would love to tell you how Motorola Solutions is exploring how Rekognition can enhance real-time intelligence for public safety personnel in the field and at the command center.

Motorola Solutions provides over 100,000 public safety and commercial customers in more than 100 countries with software, services, and tools for mobile intelligence and digital evidence management, many powered by images captured using body, dashboard, and stationary cameras. Due to the exceptionally sensitive nature of these images, they must be stored in an environment that meets the stringent CJIS (Criminal Justice Information Services) security standards defined by the FBI.

For several years, researchers at Motorola Solutions have been exploring the use of artificial intelligence. For example, they have built prototype applications that use Rekognition, Lex, and Polly in conjunction with their own software to scan images from a body-worn camera for missing persons and to raise alerts without requiring continuous human attention or interaction. With approximately 100,000 missing people in the US alone, law enforcement agencies need to bring powerful tools to bear. At re:Invent 2016, Dan Law (Chief Data Scientist for Motorola Solutions) described how they use AWS to aid in this effort. Here’s the video (Dan’s section is titled AI for Public Safety):

AWS and CJIS
The applications that Dan described can run in AWS GovCloud (US). This is an isolated cloud built to protect and preserve sensitive IT data while meeting the FBI’s CJIS requirements (and many others). AWS GovCloud (US) resides on US soil and is managed exclusively by US citizens. AWS routinely signs CJIS security agreements with our customers and can either perform or allow background checks on our employees, as needed.

Here are some resources that you can use to learn more about AWS and CJIS:

Jeff;

 

 

Amazon Rekognition Update – Celebrity Recognition

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/amazon-rekognition-update-celebrity-recognition/

We launched Amazon Rekognition at re:Invent (Amazon Rekognition – Image Detection and Recognition Powered by Deep Learning) and added Image Moderation earlier this year.

Today we are adding celebrity recognition!

Rekognition has been trained to identify hundreds of thousands of people who are famous, noteworthy, or prominent in fields that include politics, sports, entertainment, business, and media. The list is global and is updated frequently.

To access this feature, simply call the new RecognizeCelebrities function. In addition to the bounding box and facial landmark features returned by the existing DetectFaces function, the new function returns information about any celebrities that it recognizes:

"Id": "3Ir0du6", 
"MatchConfidence": 97, 
"Name": "Jeff Bezos", 
"Urls": [ "www.imdb.com/name/nm1757263" ]

The Urls provide additional information about the celebrity. The API currently returns links to IMDb content; we may add other sources in the future.
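If you would like to try this from code, here is a minimal sketch using the AWS SDK for Python (boto3); the bucket and object names are placeholders:

import boto3

rekognition = boto3.client("rekognition")

# Analyze an image that is already stored in S3 (bucket and key are placeholders).
response = rekognition.recognize_celebrities(
    Image={"S3Object": {"Bucket": "my-photo-archive", "Name": "awards-night.jpg"}}
)

for celeb in response["CelebrityFaces"]:
    print(celeb["Name"], celeb["MatchConfidence"], celeb["Urls"])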

You can use the Celebrity Recognition Demo in the AWS Management Console to experiment with this feature:

If you have an image archive you can now index it by celebrity. You could also use a combination of celebrity recognition and object detection to build all kinds of search tools. If your images are already stored in S3, you can process them in-place.

I’m sure that you will come up with all sorts of interesting uses for this new feature. Leave me a comment and let me know what you build!

Jeff;

 

AWS Online Tech Talks – June 2017

Post Syndicated from Tara Walker original https://aws.amazon.com/blogs/aws/aws-online-tech-talks-june-2017/

As the sixth month of the year, June is significant in that it is not only my birth month (very special), but it also contains the summer solstice in the Northern Hemisphere, the day with the most daylight hours, and the winter solstice in the Southern Hemisphere, the day with the fewest daylight hours. In the United States, June is also the month in which we celebrate our dads with Father’s Day and have month-long celebrations of music, heritage, and the great outdoors.

The month of June can therefore be filled with lots of excitement. So why not add even more delight to the month by enhancing your cloud computing skills? This month’s AWS Online Tech Talks features sessions on Artificial Intelligence (AI), Storage, Big Data, and Compute, among other great topics.

June 2017 – Schedule

Below are the live, online technical sessions scheduled for the month of June. Make sure to register ahead of time so you won’t miss out on these free talks conducted by AWS subject matter experts. All session times are shown in the Pacific Time (PDT) time zone.

Webinars featured this month are:

Thursday, June 1

Storage

9:00 AM – 10:00 AM: Deep Dive on Amazon Elastic File System

Big Data

10:30 AM – 11:30 AM: Migrating Big Data Workloads to Amazon EMR

Serverless

12:00 Noon – 1:00 PM: Building AWS Lambda Applications with the AWS Serverless Application Model (AWS SAM)

 

Monday, June 5

Artificial Intelligence

9:00 AM – 9:40 AM: Exploring the Business Use Cases for Amazon Lex

 

Tuesday, June 6

Management Tools

9:00 AM – 9:40 AM: Automated Compliance and Governance with AWS Config and AWS CloudTrail

 

Wednesday, June 7

Storage

9:00 AM – 9:40 AM: Backing up Amazon EC2 with Amazon EBS Snapshots

Big Data

10:30 AM – 11:10 AM: Intro to Amazon Redshift Spectrum: Quickly Query Exabytes of Data in S3

DevOps

12:00 Noon – 12:40 PM: Introduction to AWS CodeStar: Quickly Develop, Build, and Deploy Applications on AWS

 

Thursday, June 8

Artificial Intelligence

9:00 AM – 9:40 AM: Exploring the Business Use Cases for Amazon Polly

10:30 AM – 11:10 AM: Exploring the Business Use Cases for Amazon Rekognition

 

Monday, June 12

Artificial Intelligence

9:00 AM – 9:40 AM: Exploring the Business Use Cases for Amazon Machine Learning

 

Tuesday, June 13

Compute

9:00 AM – 9:40 AM: DevOps with Visual Studio, .NET and AWS

IoT

10:30 AM – 11:10 AM: Create, with Intel, an IoT Gateway and Establish a Data Pipeline to AWS IoT

Big Data

12:00 Noon – 12:40 PM: Real-Time Log Analytics using Amazon Kinesis and Amazon Elasticsearch Service

 

Wednesday, June 14

Containers

9:00 AM – 9:40 AM: Batch Processing with Containers on AWS

Security & Identity

12:00 Noon – 12:40 PM: Using Microsoft Active Directory across On-premises and Cloud Workloads

 

Thursday, June 15

Big Data

12:00 Noon – 1:00 PM: Building Big Data Applications with Serverless Architectures

 

Monday, June 19

Artificial Intelligence

9:00 AM – 9:40 AM: Deep Learning for Data Scientists: Using Apache MxNet and R on AWS

 

Tuesday, June 20

Storage

9:00 AM – 9:40 AM: Cloud Backup & Recovery Options with AWS Partner Solutions

Artificial Intelligence

10:30 AM – 11:10 AM: An Overview of AI on the AWS Platform

 

The AWS Online Tech Talks series covers a broad range of topics at varying technical levels. These sessions feature live demonstrations & customer examples led by AWS engineers and Solution Architects. Check out the AWS YouTube channel for more on-demand webinars on AWS technologies.

Tara

AWS Lambda Support for AWS X-Ray

Post Syndicated from Randall Hunt original https://aws.amazon.com/blogs/aws/aws-lambda-support-for-aws-x-ray/

Today we’re announcing general availability of AWS Lambda support for AWS X-Ray. As you may already know from Jeff’s GA post, X-Ray is an AWS service for analyzing the execution and performance behavior of distributed applications. Traditional debugging methods don’t work so well for microservice-based applications, in which there are multiple, independent components running on different services. X-Ray lets you rapidly diagnose errors, slowdowns, and timeouts by breaking down the latency in your applications. In a moment, I’ll demonstrate how you can use X-Ray in your own applications by walking through the process of building and analyzing a simple Lambda-based application.

If you just want to get started right away you can easily turn on X-Ray for your existing Lambda functions by navigating to your function’s configuration page and enabling tracing:

Or in the AWS Command Line Interface (CLI), by updating the function’s tracing-config (be sure to pass in a --function-name as well):

$ aws lambda update-function-configuration --tracing-config '{"Mode": "Active"}'

When tracing mode is active, Lambda will attempt to trace your function (unless explicitly told not to trace by an upstream service). Otherwise, your function will be traced only if an upstream service explicitly tells Lambda to do so. Once tracing is enabled, you’ll start generating traces and you’ll get a visual representation of the resources in your application and the connections (edges) between them. One thing to note is that the X-Ray daemon does consume some of your Lambda function’s resources. If you’re getting close to your memory limit, Lambda will try to kill the X-Ray daemon to avoid throwing an out-of-memory error.

Let’s test this new integration out by building a quick application that uses a few different services.


As a twenty-something with a smartphone, I have a lot of selfies (10,000+!) and I thought it would be great to analyze all of them. We’ll write a simple Lambda function with the Java 8 runtime that responds to new images uploaded into an Amazon Simple Storage Service (S3) bucket. We’ll run Amazon Rekognition on the photos and store the detected labels in Amazon DynamoDB.

service map

First, let’s define a few quick X-Ray vocabulary words: subsegments, segments, and traces. Got that? X-Ray is easy to understand if you remember that subsegments and segments make up traces, which X-Ray processes to generate service graphs. Service graphs provide the nice visual representation you can see above (with different colors indicating various request responses). The compute resources that run your applications send data about the work they’re doing in the form of segments. You can add additional annotations about that data, and more granular timing of your code, by creating subsegments. The path of a request through your application is tracked with a trace; a trace collects all of the segments generated by a single request. That means you can easily trace Lambda events coming in from S3 all the way to DynamoDB and understand where errors and latencies are cropping up.

So, we’ll create an S3 bucket called selfies-bucket, a DynamoDB table called selfies-table, and a Lambda function. We’ll add a trigger to our Lambda function for the S3 bucket on ObjectCreated:All events. Our Lambda function code will be super simple and you can look at it in its entirety here. With no code changes, we can enable X-Ray in our Java function by including the aws-xray-recorder-sdk-core and aws-xray-recorder-sdk-aws-sdk-instrumentor packages in our JAR.
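Our function uses the Java 8 runtime, but for the sake of illustration, here is a rough Python sketch of the same flow using the X-Ray SDK for Python (aws-xray-sdk); the table’s "photo_key" partition key is an assumption:

import boto3
from aws_xray_sdk.core import patch_all, xray_recorder

# Patch the AWS SDK so calls to Rekognition and DynamoDB appear as
# subsegments in the X-Ray trace.
patch_all()

rekognition = boto3.client("rekognition")
table = boto3.resource("dynamodb").Table("selfies-table")

def handler(event, context):
    # Triggered by S3 ObjectCreated events; one record per uploaded photo.
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]

        labels = rekognition.detect_labels(
            Image={"S3Object": {"Bucket": bucket, "Name": key}}
        )["Labels"]

        # A custom subsegment around our own bookkeeping code.
        with xray_recorder.in_subsegment("store-labels"):
            # "photo_key" as the table's partition key is an assumption.
            table.put_item(Item={
                "photo_key": key,
                "labels": [label["Name"] for label in labels],
            })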

Let’s trigger some photo uploads and get a look at the traces in X-Ray.

We’ve got some data! We can click on one of these individual traces for a lot of detailed information on our invocation.

In the first AWS::Lambda segment we see the dwell time of the function (how long it spent waiting to execute), followed by the number of execution attempts.

In the second AWS::Lambda::Function segment there are a few possible subsegments:

  • The initialization subsegment, which includes all of the time spent before your function handler starts executing
  • The outbound service calls
  • Any of your custom subsegments (these are really easy to add)

Hmm, it seems like there’s a bit of an issue on the DynamoDB side. We can even dive deeper and get the full exception stacktrace by clicking on the error icon. You can see we’ve been throttled by DynamoDB because we’re out of write capacity units. Luckily we can add more with just a few clicks or a quick API call. As we do that we’ll see more and more green on our service map!
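That quick API call could be as simple as this sketch (the capacity values are arbitrary examples):

import boto3

dynamodb = boto3.client("dynamodb")

# Raise the table's provisioned write capacity so the label writes stop throttling.
dynamodb.update_table(
    TableName="selfies-table",
    ProvisionedThroughput={"ReadCapacityUnits": 5, "WriteCapacityUnits": 25},
)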

The X-Ray SDKs make it super easy to emit data to X-Ray, but you don’t have to use them to talk to the X-Ray daemon. For Python, you can check out fleece, a library from Rackspace. The X-Ray service is full of interesting stuff, and the best place to learn more is by hopping over to the documentation. I’ve been using X-Ray for my @awscloudninja bot and it’s working great! Just keep in mind that fleece isn’t an official library and isn’t supported by AWS.

Personally, I’m really excited to use X-Ray in all of my upcoming projects because it really will save me some time and effort debugging and operating. I look forward to seeing what our customers can build with it as well. If you come up with any cool tricks or hacks please let me know!

– Randall

AWS Online Tech Talks – May 2017

Post Syndicated from Tara Walker original https://aws.amazon.com/blogs/aws/aws-online-tech-talks-may-2017/

Spring has officially sprung. As you enjoy the blossoming of May flowers, it may also be worth noting some of the great tech talks blossoming online during the month of May. This month’s AWS Online Tech Talks features sessions on topics like AI, DevOps, Data, and Serverless, just to name a few.

May 2017 – Schedule

Below is the schedule for the live, online technical sessions being held during the month of May. Make sure to register ahead of time so you won’t miss out on these free talks conducted by AWS subject matter experts. All session times are shown in the Pacific Time (PDT) time zone.

Webinars featured this month are:

Monday, May 15

Artificial Intelligence

9:00 AM – 10:00 AM: Integrate Your Amazon Lex Chatbot with Any Messaging Service

 

Tuesday, May 16

Compute

10:30 AM – 11:30 AM: Deep Dive on Amazon EC2 F1 Instance

IoT

12:00 Noon – 1:00 PM: How to Connect Your Own Creations with AWS IoT

Wednesday, May 17

Management Tools

9:00 AM – 10:00 AM: OpsWorks for Chef Automate – Automation Made Easy!

Serverless

10:30 AM – 11:30 AM: Serverless Orchestration with AWS Step Functions

Enterprise & Hybrid

12:00 Noon – 1:00 PM: Moving to the AWS Cloud: An Overview of the AWS Cloud Adoption Framework

 

Thursday, May 18

Compute

9:00 AM – 10:00 AM: Scaling Up Tenfold with Amazon EC2 Spot Instances

Big Data

10:30 AM – 11:30 AM: Building Analytics Pipelines for Games on AWS

12:00 Noon – 1:00 PM: Serverless Big Data Analytics using Amazon Athena and Amazon QuickSight

 

Monday, May 22

Artificial Intelligence

9:00 AM – 10:00 AM: What’s New with Amazon Rekognition

Serverless

10:30 AM – 11:30 AM: Building Serverless Web Applications

 

Tuesday, May 23

Hands-On Lab

8:30 – 10:00 AM: Hands On Lab: Windows Workloads on AWS

Big Data

10:30 AM – 11:30 AM: Streaming ETL for Data Lakes using Amazon Kinesis Firehose

DevOps

12:00 Noon – 1:00 PM: Deep Dive: Continuous Delivery for AI Applications with ECS

 

Wednesday, May 24

Storage

9:00 – 10:00 AM: Moving Data into the Cloud with AWS Transfer Services

Containers

12:00 Noon – 1:00 PM: Building a CICD Pipeline for Container Deployment to Amazon ECS

 

Thursday, May 25

Mobile

9:00 – 10:00 AM: Test Your Android App with Espresso and AWS Device Farm

Security & Identity

10:30 AM – 11:30 AM: Advanced Techniques for Federation of the AWS Management Console and Command Line Interface (CLI)

 

Tuesday, May 30

Databases

9:00 – 10:00 AM: DynamoDB: Architectural Patterns and Best Practices for Infinitely Scalable Applications

Compute

10:30 AM – 11:30 AM: Deep Dive on Amazon EC2 Elastic GPUs

Security & Identity

12:00 Noon – 1:00 PM: Securing Your AWS Infrastructure with Edge Services

 

Wednesday, May 31

Hands-On Lab

8:30 – 10:00 AM: Hands On Lab: Introduction to Microsoft SQL Server in AWS

Enterprise & Hybrid

10:30 AM – 11:30 AM: Best Practices in Planning a Large-Scale Migration to AWS

Databases

12:00 Noon – 1:00 PM: Convert and Migrate Your NoSQL Database or Data Warehouse to AWS

 

The AWS Online Tech Talks series covers a broad range of topics at varying technical levels. These sessions feature live demonstrations & customer examples led by AWS engineers and Solution Architects. Check out the AWS YouTube channel for more on-demand webinars on AWS technologies.

Tara

AWS San Francisco Summit – Summary of Launches and Announcements

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-san-francisco-summit-summary-of-launches-and-announcements/

Many of my colleagues are in San Francisco for today’s AWS Summit. Here’s a summary of what we announced from the main stage and in the breakout sessions:

New Services

Newly Available

New Features

Jeff;

 

Amazon Rekognition Update – Image Moderation

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/amazon-rekognition-update-image-moderation/

We launched Amazon Rekognition late last year and I told you about it in my post Amazon Rekognition – Image Detection and Recognition Powered by Deep Learning. As I explained at the time, this service was built by our Computer Vision team over the course of many years and analyzes billions of images daily.

Today we are adding image moderation to Rekognition. If your web site or application allows users to upload profile photos or other imagery, you will love this new Rekognition feature.

Rekognition can now identify images that contain suggestive or explicit content that may not be appropriate for your site. The moderation labels provide detailed sub-categories, allowing you to fine-tune the filters that you use to determine what kinds of images you deem acceptable or objectionable. You can use this feature to improve photo sharing sites, forums, dating apps, content platforms for children, e-commerce platforms and marketplaces, and more.

To access this feature, call the DetectModerationLabels function from your code. The response will include a set of moderation labels drawn from a built-in taxonomy:

"ModerationLabels": [ 
  {
    "Confidence": 83.55088806152344, 
    "Name": "Suggestive",
    "ParentName": ""
   },
   {
    "Confidence": 83.55088806152344, 
    "Name": "Female Swimwear Or Underwear", 
    "ParentName": "Suggestive" 
   }
 ]
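
Here is a minimal sketch of the call using the AWS SDK for Python (boto3); the bucket and key names are placeholders, and the MinConfidence threshold is just an example:

import boto3

rekognition = boto3.client("rekognition")

# Screen an uploaded image before publishing it.
response = rekognition.detect_moderation_labels(
    Image={"S3Object": {"Bucket": "user-uploads", "Name": "profile-photo.jpg"}},
    MinConfidence=60,
)

flagged = [label["Name"] for label in response["ModerationLabels"]]
if flagged:
    print("Needs review:", flagged)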

You can use the Image Moderation Demo in the AWS Management Console to experiment with this feature:

Image moderation is available now and you can start using it today!

Jeff;

 

Amazon Rekognition Update – Estimated Age Range for Faces

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/amazon-rekognition-update-estimated-age-range-for-faces/

Amazon Rekognition is one of our artificial intelligence services. In addition to detecting objects, scenes, and faces in images, Rekognition can also search and compare faces. Behind the scenes, Rekognition uses deep neural network models to analyze billions of images daily (read Amazon Rekognition – Image Detection and Recognition Powered by Deep Learning to learn more).

Amazon Rekognition returns an array of attributes for each face that it locates in an image. Today we are adding a new attribute: an estimated age range. This value is expressed in years and is returned as a pair of integers. Age ranges can overlap; the face of a 5-year-old might have an estimated range of 4 to 6, while the face of a 6-year-old might have an estimated range of 4 to 8. You can use this new attribute to power public safety applications, collect demographics, or assemble a set of photos that span a desired time frame.
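Here is a minimal sketch of how you might read the new attribute using the AWS SDK for Python (boto3); the bucket and key names are placeholders:

import boto3

rekognition = boto3.client("rekognition")

# Request the full set of facial attributes, which includes AgeRange.
response = rekognition.detect_faces(
    Image={"S3Object": {"Bucket": "my-photo-archive", "Name": "family-photo.jpg"}},
    Attributes=["ALL"],
)

for face in response["FaceDetails"]:
    age = face["AgeRange"]
    print("Estimated age: {} to {} years".format(age["Low"], age["High"]))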

In order to have some fun with this new feature (I am writing this post on a Friday afternoon), I dug into my photo archives and asked Rekognition to estimate my age. Here are the results.

Let’s start at the beginning! I was probably about 2 years old here:

This picture was taken at my grandmother’s house in the spring of 1966:

I was 6 years old; Rekognition estimated that I was between 6 and 13:

My first official Amazon PR photo from 2003 when I was 43:

That’s a range of 17 years and my actual age was right in the middle.

And my most recent (late 2015) PR photo, age 55:

Again, a fairly wide range, and I’m right in the middle of it! In general, the actual age for each face will fall somewhere within the indicated range, but you should not count on it falling precisely in the middle.

This feature is available now and you can start using it today.

Jeff;

 

AWS Webinars – January 2017 (Bonus: December Recap)

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-webinars-january-2017-bonus-december-recap/

Have you had time to digest all of the announcements that we made at AWS re:Invent? Are you ready to debug with AWS X-Ray, analyze with Amazon QuickSight, or build conversational interfaces using Amazon Lex? Do you want to learn more about AWS Lambda, set up CI/CD with AWS CodeBuild, or use Polly to give your applications a voice?

January Webinars
In our continued quest to provide you with training and education resources, I am pleased to share the webinars that we have set up for January. These are free, but they do fill up and you should definitely register ahead of time. All times are PT and each webinar runs for one hour:

January 16:

January 17:

January 18:

January 19:

January 20

December Webinar Recap
The December webinar series is already complete; here’s a quick recap with links to the recordings:

December 12:

December 13:

December 14:

December 15:

Jeff;

PS – If you want to get a jump start on your 2017 learning objectives, the re:Invent 2016 Presentations and re:Invent 2016 Videos are just a click or two away.

Amazon Rekognition – Image Detection and Recognition Powered by Deep Learning

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/amazon-rekognition-image-detection-and-recognition-powered-by-deep-learning/

What do you see when you look at this picture?

You might simply see an animal. Maybe you see a pet, a dog, or a Golden Retriever. The association between the image and these labels is not hard-wired into your brain. Instead, you learned the labels after seeing hundreds or thousands of examples. Operating on a number of different levels, you learned to distinguish an animal from a plant, a dog from a cat, and a Golden Retriever from other dog breeds.

Deep Learning for Image Detection
Giving computers the same level of comprehension has proven to be a very difficult task. Over the course of decades, computer scientists have taken many different approaches to the problem. Today, a broad consensus has emerged that the best way to tackle this problem is via deep learning. Deep learning uses a combination of feature abstraction and neural networks to produce results that can be (as Arthur C. Clarke once said) indistinguishable from magic. However, it comes at a considerable cost. First, you need to put a lot of work into the training phase. In essence, you present the learning network with a broad spectrum of labeled examples (“this is a dog”, “this is a pet”, and so forth) so that it can correlate features in the image with the labels. This phase is computationally expensive due to the size and the multi-layered nature of the neural networks. After the training phase is complete, evaluating new images against the trained network is far easier. The results are traditionally expressed in confidence levels (0 to 100%) rather than as cold, hard facts. This allows you to decide just how much precision is appropriate for your applications.

Introducing Amazon Rekognition
Today I would like to tell you about Amazon Rekognition. Powered by deep learning and built by our Computer Vision team over the course of many years, this fully-managed service already analyzes billions of images daily. It has been trained on thousands of objects and scenes, and is now available for you to use in your own applications. You can use the Rekognition Demos to put the service through its paces before you dive in and start writing code that uses the Rekognition API.

Rekognition was designed from the get-go to run at scale. It comprehends scenes, objects, and faces. Given an image, it will return a list of labels. Given an image with one or more faces, it will return bounding boxes for each face, along with attributes. Let’s see what it has to say about the picture of my dog (her name is Luna, by the way):

As you can see, Rekognition labeled Luna as an animal, a dog, a pet, and as a golden retriever with a high degree of confidence. It is important to note that these labels are independent, in the sense that the deep learning model does not explicitly understand the relationship between, for example, dogs and animals. It just so happens that both of these labels were simultaneously present on the dog-centric training material presented to Rekognition.

Let’s see how it does with a picture of my wife and me:

Amazon Rekognition found our faces, set up bounding boxes, and let me know that my wife was happy (the picture was taken on her birthday, so I certainly hope she was).

You can also use Rekognition to compare faces and to see if a given image contains any one of a number of faces that you have asked it to recognize.

All of this power is accessible from a set of API functions (the console is great for quick demos). For example, you can call DetectLabels to programmatically reproduce my first example, or DetectFaces to reproduce my second one. You can make multiple calls to IndexFaces to prepare Rekognition to recognize some faces. Each time you do this, Rekognition extracts some features (known as face vectors) from the image, stores the vectors, and discards the image. You can create one or more Rekognition collections and store related groups of face vectors in each one.
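To make that concrete, here is a minimal sketch using the AWS SDK for Python (boto3); the collection, bucket, and key names are placeholders:

import boto3

rekognition = boto3.client("rekognition")

# One-time setup: create a collection to hold face vectors.
rekognition.create_collection(CollectionId="family-faces")

# Index a known face; Rekognition stores the face vector, not the image.
rekognition.index_faces(
    CollectionId="family-faces",
    Image={"S3Object": {"Bucket": "my-photo-archive", "Name": "jeff-portrait.jpg"}},
    ExternalImageId="jeff",
)

# Search for the largest face in a new image against the collection.
matches = rekognition.search_faces_by_image(
    CollectionId="family-faces",
    Image={"S3Object": {"Bucket": "my-photo-archive", "Name": "group-shot.jpg"}},
)["FaceMatches"]

for match in matches:
    print(match["Face"]["ExternalImageId"], match["Similarity"])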

Rekognition can directly process images stored in Amazon Simple Storage Service (S3). In fact, you can use AWS Lambda functions to process newly uploaded photos at any desired scale. You can use AWS Identity and Access Management (IAM) to control access to the Rekognition APIs.

Applications for Rekognition
So, what can you use this for? I’ve got plenty of ideas to get you started!

If you have a large collection of photos, you can tag and index them using Amazon Rekognition. Because Rekognition is a service, you can process millions of photos per day without having to worry about setting up, running, or scaling any infrastructure. You can implement visual search, tag-based browsing, and all sorts of interactive discovery models.

You can use Rekognition in several different authentication and security contexts. You can compare a face on a webcam to a badge photo before allowing an employee to enter a secure zone. You can perform visual surveillance, inspecting photos for objects or people of interest or concern.

You can build “smart” marketing billboards that collect demographic data about viewers.

Now Available
Rekognition is now available in the US East (Northern Virginia), US West (Oregon), and EU (Ireland) Regions and you can start using it today. As part of the AWS Free Tier, you can analyze up to 5,000 images per month and store up to 1,000 face vectors per month for an entire year. After that (and at higher volume), you will pay tiered pricing based on the number of images that you analyze and the number of face vectors that you store.

Jeff;