Tag Archives: robotics

Video Friday: India Sending Humanoid Robot Into Space

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/robotics/robotics-hardware/video-friday-india-space-humanoid-robot

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

Robotic Arena – January 25, 2020 – Wrocław, Poland
DARPA SubT Urban Circuit – February 18-27, 2020 – Olympia, Wash., USA
HRI 2020 – March 23-26, 2020 – Cambridge, U.K.
ICARSC 2020 – April 15-17, 2020 – Ponta Delgada, Azores
ICRA 2020 – May 31-June 4, 2020 – Paris, France

Let us know if you have suggestions for next week, and enjoy today’s videos.


NASA Begins Testing Next Moon Rover

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/robotics/space-robots/nasa-begins-testing-next-moon-rover

NASA has decided that humans are going back to the Moon. That’s great! Before that actually happens, a whole bunch of other things have to happen, and excitingly, many of those things involve robots. As a sort of first-ish step, NASA is developing a new lunar rover called VIPER (Volatiles Investigating Polar Exploration Rover). VIPER’s job is to noodle around the permanently shadowed craters at the Moon’s south pole looking for water ice, which can (eventually) be harvested and turned into breathable air and rocket fuel.

Water Vortex Suction Feet Help This Hexapod Sploosh Up Walls

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/robotics/robotics-hardware/water-vortex-suction-feet-help-this-hexapod-sploosh-up-walls

Suction is a useful tool in many robotic applications, as long as those applications are grasping objects that are suction-friendly—that is, objects that are impermeable and generally smooth-ish and flat-ish. If you can’t form a seal on a surface, your suction gripper is going to have a bad time, which is why you don’t often see suction systems working outside of an environment that’s at least semi-constrained. Warehouses? Yes. Kitchens? Maybe. The outdoors? Almost certainly not.

In general, getting robotic grippers (and robots themselves) to adhere to smooth surfaces and rough surfaces requires completely different technology. But researchers from Zhejiang University in China have come up with a new kind of suction gripper that can very efficiently handle surfaces like widely spaced tile and even rough concrete, by augmenting the sealing system with a spinning vortex of water.

Video Friday: This Japanese Robot Can Conduct a Human Orchestra and Sing Opera

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/robotics/humanoids/video-friday-alter-3-robot-conductor

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

Robotic Arena – January 25, 2020 – Wrocław, Poland
DARPA SubT Urban Circuit – February 18-27, 2020 – Olympia, Wash., USA
HRI 2020 – March 23-26, 2020 – Cambridge, U.K.
ICARSC 2020 – April 15-17, 2020 – Ponta Delgada, Azores
ICRA 2020 – May 31-June 4, 2020 – Paris, France

Let us know if you have suggestions for next week, and enjoy today’s videos.


PigeonBot Uses Real Feathers to Explore How Birds Fly

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/robotics/drones/pigeonbot-uses-real-feathers-to-explore-how-birds-fly

Birds have been doing their flying thing with flexible and feathery wings for about a hundred million years, give or take. And about a hundred years ago, give or take, humans decided that you know what, birds may be the experts but we’re just going to go off in our own direction with mostly rigid wings and propellers and stuff, because it’s easier or whatever. A century later, we’re still doing the rigid wings with discrete flappy bits, while birds (one has to assume) continue to judge us for our poor choices.

To be fair, the flexible wings and feathers that birds rely on aren’t something that they really have to construct on their own, and the few attempts at making artificial feathers that we’ve seen in the past have been sufficient for a few specific purposes but haven’t really come close to emulating the capabilities that real feathers bestow on the wings of birds.

In a paper published today in Science Robotics, researchers at Stanford University have presented some new work on understanding exactly how birds maintain control by morphing the shape of their wings. They put together a flying robot called PigeonBot with a pair of “biohybrid morphing wings” to test out new control principles, and instead of trying to develop some kind of fancy new artificial feather system, they did something that makes a lot more sense: they cheated, by just using real feathers instead. 

Converge Robotics Group Commercializing Immersive Telepresence

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/robotics/robotics-hardware/converge-robotics-group-commercializing-immersive-telepresence

At CES 2017, I got my mind blown by a giant mystery box from a company called AxonVR (now known as HaptX) that was able to generate astonishingly convincing tactile sensations of things like tiny animals running around on my palm in VR. An update in late 2017 traded the giant mystery box (and the ability to reproduce heat and cold) for a wearable glove with high resolution microfluidic haptics embedded inside of it. By itself, the HaptX system is constrained to virtual reality, but when combined with a pair of Universal Robots UR10 arms, Shadow dexterous robotic hands, and SynTouch tactile sensors, you end up with a system that can reproduce physical reality instead.

Video Friday: Samsung Unveils Ball-Shaped Personal Robot

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/robotics/home-robots/video-friday-samsung-ballie

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

Robotic Arena – January 25, 2020 – Wrocław, Poland
DARPA SubT Urban Circuit – February 18-27, 2020 – Olympia, Wash., USA
HRI 2020 – March 23-26, 2020 – Cambridge, U.K.
ICARSC 2020 – April 15-17, 2020 – Ponta Delgada, Azores
ICRA 2020 – May 31-June 4, 2020 – Paris, France

Let us know if you have suggestions for next week, and enjoy today’s videos.

10 Years of Automaton’s Most Popular Stories

Post Syndicated from Erico Guizzo original https://spectrum.ieee.org/automaton/robotics/robotics-hardware/automaton-most-popular-stories

We’ve been writing about robots here at IEEE Spectrum for a long, long time. Erico started covering robotics for Spectrum in 2007, about the same time that Evan started BotJunkie.com. We joined forces in 2011, and have published thousands of articles since then, chronicling as many aspects of the field as we could. Autonomous cars, humanoids, legged robots, drones, robots in space—the last decade in robotics has been incredible.

To kick off 2020, we’re taking a look back at our most popular posts of the last 10 years. Listed below, in order, are the 100 articles with the highest total page views, providing a cross-section of what was most interesting in robotics from 2010 until now.

Also, sometime in the next several weeks, we plan to post a selection of our favorite stories, focusing on what we think were the biggest developments in robotics of the past decade (including a few things that, surprisingly, did not make the list below). If you have suggestions of important robot stories we should include, let us know. Thank you for reading!


#1

How Google’s Self-Driving Car Works

Google engineers explain the technology behind their self-driving car and show videos of road tests

By Erico Guizzo
Posted 18 Oct 2011


#2

This Robot Can Do More Push-Ups Because It Sweats

A robot that uses artificial sweat can cool its motors without bulky radiators

By Evan Ackerman
Posted 13 Oct 2016


#3

Meet Geminoid F, a Smiling Female Android

Geminoid F displays facial expressions more naturally than previous androids

By Erico Guizzo
Posted 3 Apr 2010


#4

Latest Geminoid Is Incredibly Realistic

Geminoid DK is a realistic android nearly indistinguishable from a real human

By Evan Ackerman
Posted 5 Mar 2011


#5

The Next Generation of Boston Dynamics’ ATLAS Robot Is Quiet, Robust, and Tether Free

The latest ATLAS is by far the most advanced humanoid robot in existence

By Evan Ackerman & Erico Guizzo
Posted 23 Feb 2016


#6

The Uncanny Valley: The Original Essay by Masahiro Mori

“The Uncanny Valley” by Masahiro Mori is an influential essay in robotics. This is the first English translation authorized by Mori.

By Masahiro Mori
Posted 12 Jun 2012


#7

NASA JSC Unveils Valkyrie DRC Robot

NASA’s DARPA Robotics Challenge entry is much more than Robonaut with legs: it’s a completely new humanoid robot

By Evan Ackerman
Posted 10 Dec 2013


#8

Origami Robot Folds Itself Up, Does Cool Stuff, Dissolves Into Nothing

Tiny self-folding magnetically actuated robot creates itself when you want it, disappears when you don’t

By Evan Ackerman
Posted 28 May 2015


#9

Robots Bring Couple Together, Engagement Ensues

Yes, you really can find love at an IEEE conference

By Evan Ackerman & Erico Guizzo
Posted 31 Mar 2014


#10

Facebook AI Director Yann LeCun on His Quest to Unleash Deep Learning and Make Machines Smarter

The Deep Learning expert explains how convolutional nets work, why Facebook needs AI, what he dislikes about the Singularity, and more

By Lee Gomes
Posted 18 Feb 2015


#11

This Is the Most Amazing Biomimetic Anthropomorphic Robot Hand We’ve Ever Seen

Luke Skywalker, your new robotic hand is ready

By Evan Ackerman
Posted 17 Feb 2016


#12

Dutch Police Training Eagles to Take Down Drones

Attack eagles are training to become part of the Dutch National Police anti-drone arsenal

By Evan Ackerman
Posted 1 Feb 2016


#13

You (YOU!) Can Take Stanford’s ’Intro to AI’ Course Next Quarter, For Free

Sebastian Thrun and Peter Norvig are offering Stanford’s “Introduction to Artificial Intelligence” course online, for free, grades and all

By Evan Ackerman
Posted 4 Aug 2011


#14

Robot Hand Beats You at Rock, Paper, Scissors 100% Of The Time

Watch this high-speed robot hand cheat at rock, paper, scissors

By Evan Ackerman
Posted 26 Jun 2012


#15

You’ve Never Seen a Robot Drive System Like This Before

Using just a single spinning hemisphere mounted on a gimbal, this robot demonstrates some incredible agility

By Evan Ackerman
Posted 7 Jul 2011


#16

Fukushima Robot Operator Writes Tell-All Blog

An anonymous worker at Japan’s Fukushima Dai-ichi nuclear power plant has written dozens of blog posts describing his experience as a lead robot operator at the crippled facility

By Erico Guizzo
Posted 23 Aug 2011


#17

Should Quadrotors All Look Like This?

Researchers say we’ve been designing quadrotors the wrong way

By Evan Ackerman
Posted 13 Nov 2013


#18

Boston Dynamics’ PETMAN Humanoid Robot Walks and Does Push-Ups

Boston Dynamics releases stunning video showing off its most advanced humanoid robot

By Erico Guizzo
Posted 31 Oct 2011


#19

Boston Dynamics’ Spot Robot Dog Goes on Sale

Here’s everything we know about Boston Dynamics’ first commercial robot

By Erico Guizzo
Posted 24 Sep 2019


#20

Agility Robotics Introduces Cassie, a Dynamic and Talented Robot Delivery Ostrich

One day, robots like these will be scampering up your steps to drop off packages

By Evan Ackerman
Posted 9 Feb 2017


#21

Superfast Scanner Lets You Digitize Book By Flipping Pages

Tokyo University researchers develop scanner that can capture 200 pages in one minute

By Erico Guizzo
Posted 17 Mar 2010


#22

A Robot That Balances on a Ball

Masaaki Kumagai has built wheeled robots, crawling robots, and legged robots. Now he’s built a robot that rides on a ball

By Erico Guizzo
Posted 29 Apr 2010


#23

Top 10 Robotic Kinect Hacks

Microsoft’s Kinect 3D motion detector has been hacked into lots of awesome robots, and here are our 10 favorites

By Evan Ackerman
Posted 7 Mar 2011


#24

Latest AlphaDog Robot Prototypes Get Less Noisy, More Brainy

New video shows Boston Dynamics and DARPA putting AlphaDog through its paces

By Evan Ackerman
Posted 11 Sep 2012


#25

How South Korea’s DRC-HUBO Robot Won the DARPA Robotics Challenge

This transformer robot took first place because it was fast, adaptable, and didn’t fall down

By Erico Guizzo & Evan Ackerman
Posted 9 Jun 2015


#26

U.S. Army Considers Replacing Thousands of Soldiers With Robots

The U.S. Army could slash personnel numbers and toss in more robots instead

By Evan Ackerman
Posted 22 Jan 2014


#27

Google Acquires Seven Robot Companies, Wants Big Role in Robotics

The company is funding a major new robotics group and acquiring a bunch of robot startups

By Evan Ackerman & Erico Guizzo
Posted 4 Dec 2013


#28

Who Is SCHAFT, the Robot Company Bought by Google and Winner of the DRC?

Here’s everything we know about this secretive robotics startup

By Erico Guizzo & Evan Ackerman
Posted 6 Feb 2014


#29

Ground-Effect Robot Could Be Key To Future High-Speed Trains

Trains that levitate on cushions of air could be the future of fast and efficient travel, if this robot can figure out how to keep them stable

By Evan Ackerman
Posted 10 May 2011


#30

Hobby Robot Rides a Bike the Old-Fashioned Way

I don’t know where this little robot got its awesome bicycle, but it sure knows how to ride

By Evan Ackerman
Posted 24 Oct 2011


#31

SRI Demonstrates Abacus, the First New Rotary Transmission Design in 50 Years

Finally a gear system that could replace costly harmonic drives

By Evan Ackerman
Posted 19 Oct 2016


#32

Robotic Micro-Scallops Can Swim Through Your Eyeballs

Tiny robots modeled on scallops are able to swim through all the fluids in your body

By Evan Ackerman
Posted 4 Nov 2014


#33

Boston Dynamics Officially Unveils Its Wheel-Leg Robot: “Best of Both Worlds”

Handle is a humanoid robot on wheels, and it’s amazing

By Erico Guizzo & Evan Ackerman
Posted 27 Feb 2017


#34

iRobot Brings Visual Mapping and Navigation to the Roomba 980

The new robot vacuum uses VSLAM to navigate and clean larger spaces in satisfyingly straight lines

By Evan Ackerman & Erico Guizzo
Posted 16 Sep 2015


#35

When Will We Have Robots To Help With Household Chores?

Google, Microsoft, and Apple are investing in robots. Does that mean home robots are on the way?

By Satyandra K. Gupta
Posted 2 Jan 2014


#36

Robots Playing Ping Pong: What’s Real, and What’s Not?

Kuka’s robot vs. human ping pong match looks to be more hype than reality

By Evan Ackerman
Posted 12 Mar 2014


#37

BigDog Throws Cinder Blocks with Huge Robotic Face-Arm

I don’t know why BigDog needs a fifth limb to throw cinder blocks, but it’s incredibly awesome

By Evan Ackerman
Posted 28 Feb 2013


#38

Children Beating Up Robot Inspires New Escape Maneuver System

Japanese researchers show that children can act like horrible little brats towards robots

By Kate Darling
Posted 6 Aug 2015


#39

Boston Dynamics’ AlphaDog Quadruped Robot Prototype on Video

Boston Dynamics has just released some absolutely incredible video of their huge new quadruped robot, AlphaDog

By Evan Ackerman
Posted 30 Sep 2011


#40

Building a Super Robust Robot Hand

Researchers have built an anthropomorphic robot hand that can endure even strikes from a hammer without breaking into pieces

By Erico Guizzo
Posted 25 Jan 2011


#41

Who’s Afraid of the Uncanny Valley?

To design the androids of the future, we shouldn’t fear exploring the depths of the uncanny valley

By Erico Guizzo
Posted 2 Apr 2010


#42

Why AlphaGo Is Not AI

Google DeepMind’s artificial intelligence AlphaGo is a big advance but it will not get us to strong AI

By Jean-Christophe Baillie
Posted 17 Mar 2016


#43

Freaky Boneless Robot Walks on Soft Legs

This soft, inflatable, and totally creepy robot from Harvard can get up and walk on four squishy legs

By Evan Ackerman
Posted 29 Nov 2011


#44

Sweep Is a $250 LIDAR With Range of 40 Meters That Works Outdoors

Finally an affordable LIDAR for robots and drones

By Evan Ackerman
Posted 6 Apr 2016


#45

How Google Wants to Solve Robotic Grasping by Letting Robots Learn for Themselves

800,000 grasps is just the beginning for Google’s large-scale robotic grasping project

By Evan Ackerman
Posted 28 Mar 2016


#46

Whoa: Boston Dynamics Announces New WildCat Quadruped Robot

A new robot from Boston Dynamics can run outdoors, untethered, at up to 25 km/h

By Evan Ackerman
Posted 3 Oct 2013


#47

SCHAFT Unveils Awesome New Bipedal Robot at Japan Conference

SCHAFT demos a new bipedal robot designed to “help society”

By Evan Ackerman & Erico Guizzo
Posted 8 Apr 2016


#48

Riding Honda’s U3-X Unicycle of the Future

It only has one wheel, but Honda’s futuristic personal mobility device is no pedal-pusher

By Anne-Marie Corley
Posted 12 Apr 2010


#49

Lingodroid Robots Invent Their Own Spoken Language

These little robots make up their own words to tell each other where they are and where they want to go

By Evan Ackerman
Posted 17 May 2011


#50

Disney Robot With Air-Water Actuators Shows Off “Very Fluid” Motions

Meet Jimmy, a robot puppet powered by fluid actuators

By Erico Guizzo
Posted 1 Sep 2016


#51

Kilobots Are Cheap Enough to Swarm in the Thousands

What can you do with a $14 robot? Not much. What can you do with a thousand $14 robots? World domination

By Evan Ackerman
Posted 16 Jun 2011


#52

Honda Robotics Unveils Next-Generation ASIMO Robot

We heard some rumors that Honda was working on something big, and here it is: a brand new ASIMO

By Evan Ackerman
Posted 7 Nov 2011


#53

Cybernetic Third Arm Makes Drummers Even More Annoying

It keeps proper time and comes with an off switch, making this robotic third arm infinitely better than a human drummer

By Evan Ackerman
Posted 18 Feb 2016


#54

Chatbot Tries to Talk to Itself, Things Get Weird

“I am not a robot. I am a unicorn.”

By Evan Ackerman
Posted 29 Aug 2011


#55

Dean Kamen’s “Luke Arm” Prosthesis Receives FDA Approval

This advanced bionic arm for amputees has been approved for commercialization

By Erico Guizzo
Posted 13 May 2014


#56

Meet the Amazing Robots That Will Compete in the DARPA Robotics Challenge

Over the next two years, robotics will be revolutionized, and here’s how it’s going to happen

By Evan Ackerman
Posted 24 Oct 2012


#57

ReWalk Robotics’s New Exoskeleton Lets Paraplegic Stroll the Streets of NYC

Yesterday, a paralyzed man strapped on a pair of robotic legs and stepped out a hotel door in midtown Manhattan

By Eliza Strickland
Posted 15 Jul 2015


#58

Drone Uses AI and 11,500 Crashes to Learn How to Fly

Crashing into objects has taught this drone to fly autonomously, by learning what not to do

By Evan Ackerman
Posted 10 May 2017


#59

Lego Announces Mindstorms EV3, a More ’Hackable’ Robotics Kit

Lego’s latest Mindstorms kit has a new IR sensor, runs on Linux, and is compatible with Android and iOS apps

By Erico Guizzo & Stephen Cass
Posted 7 Jan 2013


#60

Boston Dynamics’ Marc Raibert on Next-Gen ATLAS: “A Huge Amount of Work”

The founder of Boston Dynamics describes how his team built one of the most advanced humanoids ever

By Erico Guizzo & Evan Ackerman
Posted 24 Feb 2016


#61

AR Drone That Infects Other Drones With Virus Wins DroneGames

Other projects included a leashed auto-tweeting drone, and code to control a swarm of drones all at once

By Evan Ackerman
Posted 6 Dec 2012


#62

DARPA Robotics Challenge: A Compilation of Robots Falling Down

Gravity is a bad thing for robots

By Erico Guizzo & Evan Ackerman
Posted 6 Jun 2015


#63

Bosch’s Giant Robot Can Punch Weeds to Death

A modular agricultural robot from Bosch startup Deepfield Robotics deals with weeds the old fashioned way: violently

By Evan Ackerman
Posted 12 Nov 2015


#64

How to Make a Humanoid Robot Dance

Japanese roboticists demonstrate a female android singing and dancing along with a troupe of human performers

By Erico Guizzo
Posted 2 Nov 2010


#65

What Technologies Enabled Drones to Proliferate?

Five years ago few people had even heard of quadcopters. Now they seem to be everywhere

By Markus Waibel
Posted 19 Feb 2010


#66

Video Friday: Professor Ishiguro’s New Robot Child, and More

Your weekly selection of awesome robot videos

By Evan Ackerman, Erico Guizzo & Fan Shi
Posted 3 Aug 2018


#67

Drone Provides Colorado Flooding Assistance Until FEMA Freaks Out

Drones can provide near real-time maps in weather that grounds other aircraft, but FEMA has banned them

By Evan Ackerman
Posted 16 Sep 2013


#68

A Thousand Kilobots Self-Assemble Into Complex Shapes

This is probably the most robots that have ever been in the same place at the same time, ever

By Evan Ackerman
Posted 14 Aug 2014


#69

Boston Dynamics’ SpotMini Is All Electric, Agile, and Has a Capable Face-Arm

A fun-sized version of Spot is the most domesticated Boston Dynamics robot we’ve seen

By Evan Ackerman
Posted 23 Jun 2016


#70

Kenshiro Robot Gets New Muscles and Bones

This humanoid is trying to mimic the human body down to muscles and bones

By Angelica Lim
Posted 10 Dec 2012


#71

Roomba Inventor Joe Jones on His New Weed-Killing Robot, and What’s So Hard About Consumer Robotics

The inventor of the Roomba tells us about his new solar-powered, weed-destroying robot

By Evan Ackerman
Posted 6 Jul 2017


#72

George Devol: A Life Devoted to Invention, and Robots

George Devol’s most famous invention—the first programmable industrial robot—started a revolution in manufacturing that continues to this day

By Bob Malone
Posted 26 Sep 2011


#73

World Robot Population Reaches 8.6 Million

Here’s an estimate of the number of industrial and service robots worldwide

By Erico Guizzo
Posted 14 Apr 2010


#74

U.S. Senator Calls Robot Projects Wasteful. Robots Call Senator Wasteful

U.S. Senator Tom Coburn criticizes the NSF for squandering “millions of dollars on wasteful projects,” including three that involve robots

By Erico Guizzo
Posted 14 Jun 2011


#75

Inception Drive: A Compact, Infinitely Variable Transmission for Robotics

A novel nested-pulley configuration forms the heart of a transmission that could make robots safer and more energy efficient

By Evan Ackerman & Celia Gorman
Posted 20 Sep 2017


#76

iRobot Demonstrates New Weaponized Robot

iRobot has released video showing a Warrior robot deploying an anti-personnel obstacle breaching system

By John Palmisano
Posted 30 May 2010


#77

Robotics Trends for 2012

Nearly a quarter of the year is already behind us, but we thought we’d spend some time looking at the months ahead and make some predictions about what’s going to be big in robotics

By Erico Guizzo & Travis Deyle
Posted 20 Mar 2012


#78

DRC Finals: CMU’s CHIMP Gets Up After Fall, Shows How Awesome Robots Can Be

The most amazing run we saw all day came from CHIMP, which was the only robot to fall and get up again

By Evan Ackerman & Erico Guizzo
Posted 5 Jun 2015


#79

Lethal Microdrones, Dystopian Futures, and the Autonomous Weapons Debate

The future of weaponized robots requires a reasoned discussion, not scary videos

By Evan Ackerman
Posted 15 Nov 2017


#80

Every Kid Needs One of These DIY Robotics Kits

For just $200, this kit from a CMU spinoff company is a great way for total beginners to get started building robots

By Evan Ackerman
Posted 11 Jul 2012


#81

Beautiful Fluid Actuators from Disney Research Make Soft, Safe Robot Arms

Routing forces through air and water allows for displaced motors and safe, high-performance arms

By Evan Ackerman
Posted 9 Oct 2014


#82

Boston Dynamics Sand Flea Robot Demonstrates Astonishing Jumping Skills

Watch a brand new video of Boston Dynamics’ Sand Flea robot jumping 10 meters into the air

By Evan Ackerman
Posted 28 Mar 2012


#83

Eyeborg: Man Replaces False Eye With Bionic Camera

Canadian filmmaker Rob Spence has replaced his false eye with a bionic camera eye. He showed us his latest prototype

By Tim Hornyak
Posted 11 Jun 2010


#84

We Should Not Ban ‘Killer Robots,’ and Here’s Why

What we really need is a way of making autonomous armed robots ethical, because we’re not going to be able to prevent them from existing

By Evan Ackerman
Posted 28 Jul 2015


#85

Yale’s Robot Hand Copies How Your Fingers Work to Improve Object Manipulation

These robotic fingers can turn friction on and off to make it easier to manipulate objects with one hand

By Evan Ackerman
Posted 12 Sep 2018


#86

France Developing Advanced Humanoid Robot Romeo

Nao, the small French humanoid robot, is getting a big brother

By Erico Guizzo
Posted 13 Dec 2010


#87

DARPA Wants to Give Soldiers Robot Surrogates, Avatar Style

Soldiers controlling bipedal robot surrogates on the battlefield? It’s not science fiction, it’s DARPA’s 2012 budget

By Evan Ackerman
Posted 17 Feb 2012


#88

Whoa: Quadrotors Play Catch With Inverted Pendulum

Watch these quadrotors balance a stick on its end, and then toss it back and forth

By Evan Ackerman
Posted 21 Feb 2013


#89

Why We Should Build Humanlike Robots

Humans are brilliant, beautiful, compassionate, and capable of love. Why shouldn’t we aspire to make robots humanlike in these ways?

By David Hanson
Posted 1 Apr 2011


#90

DARPA Robotics Challenge Finals: Know Your Robots

All 25 robots in a single handy poster-size image

By Erico Guizzo & Evan Ackerman
Posted 3 Jun 2015


#91

Here’s That Extra Pair of Robot Arms You’ve Always Wanted

MIT researchers develop wearable robotic arms that can give you an extra hand (or two)

By Evan Ackerman
Posted 2 Jun 2014


#92

Rat Robot Beats on Live Rats to Make Them Depressed

A robotic rat can be used to depress live rats to make them suitable for human drug trials

By Evan Ackerman
Posted 13 Feb 2013


#93

MIT Cheetah Robot Bounds Off Tether, Outdoors

The newest version of MIT’s Cheetah is fast, it’s quiet, and it jumps

By Evan Ackerman
Posted 15 Sep 2014


#94

Bizarre Soft Robots Evolve to Run

These simulated robots may be wacky looking, but they’ve evolved on their own to be fast and efficient

By Evan Ackerman
Posted 11 Apr 2013


#95

Robot Car Intersections Are Terrifyingly Efficient

In the future, robots will blow through intersections without stopping, and you won’t be able to handle it

By Evan Ackerman
Posted 13 Mar 2012


#96

iRobot’s New Roomba 800 Series Has Better Vacuuming With Less Maintenance

A redesigned cleaning system makes the new vacuum way better at dealing with hair (and everything else)

By Evan Ackerman
Posted 12 Nov 2013


#97

Sawyer: Rethink Robotics Unveils New Robot

It’s smaller, faster, stronger, and more precise: meet Sawyer, Rethink Robotics’ new manufacturing robot

By Evan Ackerman & Erico Guizzo
Posted 19 Mar 2015


#98

Cynthia Breazeal Unveils Jibo, a Social Robot for the Home

The famed MIT roboticist is launching a crowdfunding campaign to bring social robots to consumers

By Erico Guizzo
Posted 16 Jul 2014


#99

These Robots Will Teach Kids Programming Skills

Startup Play-i says its robots can make computer programming fun and accessible

By Erico Guizzo
Posted 30 Oct 2013


#100

Watch a Swarm of Flying Robots Build a 6-Meter Brick Tower

This is what happens when a bunch of roboticists and architects get together in an art gallery

By Erico Guizzo
Posted 2 Dec 2011


Hiro-chan Is a Faceless Robot Baby

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/robotics/humanoids/faceless-robot-baby

This robot is Hiro-chan. It’s made by Vstone, a Japanese robotics company known for producing a variety of totally normal educational and hobby robotics kits and parts. Hiro-chan is not what we would call totally normal, since it very obviously does not have a face. Vstone calls Hiro-chan a “healing communication device,” and while the whole faceless aspect is definitely weird, there is a reason for it, which unsurprisingly involves Hiroshi Ishiguro and his ATR Lab.

Anki’s Robots May Have a Future

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/robotics/home-robots/anki-robots-may-have-a-future

When Anki abruptly shut down in April of last year, things looked bleak for Vector, Cozmo, and the little Overdrive racing cars. Usually, abrupt shutdowns don’t end well, with assets and intellectual property getting liquidated and effectively disappearing forever. Despite some vague promises (more like hopes, really) from Anki at the time that their cloud-dependent robots would continue to operate, it was pretty clear that Anki’s robots wouldn’t have much of a future—at best, they’d continue to work only as long as there was money to support the cloud servers that gave them their spark of life.

A few weeks ago, The Robot Report reported that Anki’s intellectual property (patents, trademarks, and data) was acquired by Digital Dream Labs, an education tech startup based in Pittsburgh. Over the weekend, a new post on the Vector Kickstarter page (the campaign happened in 2018) from Digital Dream Labs CEO Jacob Hanchar announced that not only will Vector’s cloud servers keep running indefinitely, but that the next few months will see a new Kickstarter to add new features and future-proofing to Vectors everywhere.

Building an AWS IoT Core device using AWS Serverless and an ESP32

Post Syndicated from Moheeb Zara original https://aws.amazon.com/blogs/compute/building-an-aws-iot-core-device-using-aws-serverless-and-an-esp32/

Using a simple Arduino sketch, an AWS Serverless Application Repository application, and a microcontroller, you can build a basic serverless workflow for communicating with an AWS IoT Core device.

A microcontroller is a programmable chip that acts as the brain of an electronic device. It has input and output pins for reading from and writing to digital or analog components. Those components could be sensors, relays, actuators, or various other devices. It can be used to build remote sensors, home automation products, robots, and much more. The ESP32 is a powerful, low-cost microcontroller with Wi-Fi and Bluetooth built in, and it is the device used in this walkthrough.

The Arduino IDE, a lightweight development environment for hardware, now includes support for the ESP32. There is a large collection of community and officially supported libraries, from addressable LED strips to spectral light analysis.

The following walkthrough demonstrates connecting an ESP32 to AWS IoT Core to allow it to publish and subscribe to topics. This means that the device can send any arbitrary information, such as sensor values, into AWS IoT Core while also being able to receive commands.

Solution overview

This post walks through deploying an application from the AWS Serverless Application Repository. This allows an AWS IoT device to be messaged using a REST endpoint powered by Amazon API Gateway and AWS Lambda. The AWS SAR application also configures an AWS IoT rule that forwards any messages published by the device to a Lambda function that updates an Amazon DynamoDB table, demonstrating basic bidirectional communication.

The last section explores how to build an IoT project with real-world application. By connecting a thermal printer module and modifying a few lines of code in the example firmware, the ESP32 device becomes an AWS IoT–connected printer.

All of this can be accomplished within the AWS Free Tier; an AWS account is required to follow the instructions below.

An example of an AWS IoT project using an ESP32, AWS IoT Core, and an Arduino thermal printer.

Required steps

To complete the walkthrough, follow these steps:

  • Create an AWS IoT device.
  • Install and configure the Arduino IDE.
  • Configure and flash an ESP32 IoT device.
  • Deploy the lambda-iot-rule AWS SAR application.
  • Monitor and test.
  • Create an IoT thermal printer.

Creating an AWS IoT device

To communicate with the ESP32 device, it must connect to AWS IoT Core with device credentials. You must also specify the topics it has permissions to publish and subscribe on.

  1. In the AWS IoT console, choose Register a new thing, Create a single thing.
  2. Name the new thing. Use this exact name later when configuring the ESP32 IoT device. Leave the remaining fields set to their defaults. Choose Next.
  3.  Choose Create certificate. Only the thing cert, private key, and Amazon Root CA 1 downloads are necessary for the ESP32 to connect. Download and save them somewhere secure, as they are used when programming the ESP32 device.
  4. Choose Activate, Attach a policy.
  5. Skip adding a policy, and choose Register Thing.
  6. In the AWS IoT console side menu, choose Secure, Policies, Create a policy.
  7. Name the policy Esp32Policy. Choose the Advanced tab.
  8. Paste in the following policy template.
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": "iot:Connect",
          "Resource": "arn:aws:iot:REGION:ACCOUNT_ID:client/THINGNAME"
        },
        {
          "Effect": "Allow",
          "Action": "iot:Subscribe",
          "Resource": "arn:aws:iot:REGION:ACCOUNT_ID:topicfilter/esp32/sub"
        },
    	{
          "Effect": "Allow",
          "Action": "iot:Receive",
          "Resource": "arn:aws:iot:REGION:ACCOUNT_ID:topic/esp32/sub"
        },
        {
          "Effect": "Allow",
          "Action": "iot:Publish",
          "Resource": "arn:aws:iot:REGION:ACCOUNT_ID:topic/esp32/pub"
        }
      ]
    }
  9. Replace REGION with the AWS Region you’re currently operating in. This can be found in the top right corner of the AWS console window.
  10. Replace ACCOUNT_ID with your own account ID, which can be found in Account Settings.
  11. Replace THINGNAME with the name of your device.
  12. Choose Create.
  13. In the AWS IoT console, choose Secure, Certification. Select the one created for your device and choose Actions, Attach policy.
  14. Choose Esp32Policy, Attach.

Your AWS IoT device is now configured to have permission to connect to AWS IoT Core. It can also publish to the topic esp32/pub and subscribe to the topic esp32/sub. For more information on securing devices, see AWS IoT Policies.
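
As a concrete illustration of the substitutions in steps 9 through 11 (the Region, account ID, and thing name below are hypothetical placeholders, not values taken from this walkthrough), the Connect statement for a thing named MyESP32 in us-east-1 under account 123456789012 would read:

{
  "Effect": "Allow",
  "Action": "iot:Connect",
  "Resource": "arn:aws:iot:us-east-1:123456789012:client/MyESP32"
}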

Installing and configuring the Arduino IDE

The Arduino IDE is an open-source development environment for programming microcontrollers. It supports a continuously growing number of platforms including most ESP32-based modules. It must be installed along with the ESP32 board definitions, MQTT library, and ArduinoJson library.

  1. Download the Arduino installer for the desired operating system.
  2. Start Arduino and open the Preferences window.
  3. For Additional Board Manager URLs, add
    https://dl.espressif.com/dl/package_esp32_index.json.
  4. Choose Tools, Board, Boards Manager.
  5. Search esp32 and install the latest version.
  6. Choose Sketch, Include Library, Manage Libraries.
  7. Search MQTT, and install the latest version by Joel Gaehwiler.
  8. Repeat the library installation process for ArduinoJson.

The Arduino IDE is now installed and configured with all the board definitions and libraries needed for this walkthrough.

Configuring and flashing an ESP32 IoT device

A collection of various ESP32 development boards.

For this section, you need an ESP32 device. To check if your board is compatible with the Arduino IDE, see the boards.txt file. The following code connects to AWS IoT Core securely using MQTT, a publish and subscribe messaging protocol.

This project has been tested on several common ESP32 development boards, including the SparkFun ESP32 Thing.

  1. Install the required serial drivers for your device. Some boards use different USB/FTDI chips for interfacing, so check your board’s documentation for the correct driver.
  2. Open the Arduino IDE and choose File, New to create a new sketch.
  3. Add a new tab and name it secrets.h.
  4. Paste the following into the secrets file.
    #include <pgmspace.h>
    
    #define SECRET
    #define THINGNAME ""
    
    const char WIFI_SSID[] = "";
    const char WIFI_PASSWORD[] = "";
    const char AWS_IOT_ENDPOINT[] = "xxxxx.amazonaws.com";
    
    // Amazon Root CA 1
    static const char AWS_CERT_CA[] PROGMEM = R"EOF(
    -----BEGIN CERTIFICATE-----
    -----END CERTIFICATE-----
    )EOF";
    
    // Device Certificate
    static const char AWS_CERT_CRT[] PROGMEM = R"KEY(
    -----BEGIN CERTIFICATE-----
    -----END CERTIFICATE-----
    )KEY";
    
    // Device Private Key
    static const char AWS_CERT_PRIVATE[] PROGMEM = R"KEY(
    -----BEGIN RSA PRIVATE KEY-----
    -----END RSA PRIVATE KEY-----
    )KEY";
  5. Enter the name of your AWS IoT thing, as it is in the console, in the field THINGNAME.
  6. To connect to Wi-Fi, add the SSID and PASSWORD of the desired network. Note: The network name should not include spaces or special characters.
  7. The AWS_IOT_ENDPOINT can be found from the Settings page in the AWS IoT console.
  8. Copy the Amazon Root CA 1, Device Certificate, and Device Private Key to their respective locations in the secrets.h file.
  9. Choose the tab for the main sketch file, and paste the following.
    #include "secrets.h"
    #include <WiFiClientSecure.h>
    #include <MQTTClient.h>
    #include <ArduinoJson.h>
    #include "WiFi.h"
    
    // The MQTT topics that this device should publish/subscribe
    #define AWS_IOT_PUBLISH_TOPIC   "esp32/pub"
    #define AWS_IOT_SUBSCRIBE_TOPIC "esp32/sub"
    
    WiFiClientSecure net = WiFiClientSecure();
    MQTTClient client = MQTTClient(256);
    
    void connectAWS()
    {
      WiFi.mode(WIFI_STA);
      WiFi.begin(WIFI_SSID, WIFI_PASSWORD);
    
      Serial.println("Connecting to Wi-Fi");
    
      while (WiFi.status() != WL_CONNECTED){
        delay(500);
        Serial.print(".");
      }
    
      // Configure WiFiClientSecure to use the AWS IoT device credentials
      net.setCACert(AWS_CERT_CA);
      net.setCertificate(AWS_CERT_CRT);
      net.setPrivateKey(AWS_CERT_PRIVATE);
    
      // Connect to the MQTT broker on the AWS endpoint we defined earlier
      client.begin(AWS_IOT_ENDPOINT, 8883, net);
    
      // Create a message handler
      client.onMessage(messageHandler);
    
      Serial.print("Connecting to AWS IOT");
    
      while (!client.connect(THINGNAME)) {
        Serial.print(".");
        delay(100);
      }
    
      if(!client.connected()){
        Serial.println("AWS IoT Timeout!");
        return;
      }
    
      // Subscribe to a topic
      client.subscribe(AWS_IOT_SUBSCRIBE_TOPIC);
    
      Serial.println("AWS IoT Connected!");
    }
    
    void publishMessage()
    {
      StaticJsonDocument<200> doc;
      doc["time"] = millis();
      doc["sensor_a0"] = analogRead(0);
      char jsonBuffer[512];
      serializeJson(doc, jsonBuffer); // print to client
    
      client.publish(AWS_IOT_PUBLISH_TOPIC, jsonBuffer);
    }
    
    void messageHandler(String &topic, String &payload) {
      Serial.println("incoming: " + topic + " - " + payload);
    
    //  StaticJsonDocument<200> doc;
    //  deserializeJson(doc, payload);
    //  const char* message = doc["message"];
    }
    
    void setup() {
      Serial.begin(9600);
      connectAWS();
    }
    
    void loop() {
      publishMessage();
      client.loop();
      delay(1000);
    }
  10. Choose File, Save, and give your project a name.

Flashing the ESP32

  1. Plug the ESP32 board into a USB port on the computer running the Arduino IDE.
  2. Choose Tools, Board, and then select the matching type of ESP32 module. In this case, a Sparkfun ESP32 Thing was used.
  3. Choose Tools, Port, and then select the matching port for your device.
  4. Choose Upload. Arduino reads Done uploading when the upload is successful.
  5. Choose the magnifying lens icon to open the Serial Monitor. Set the baud rate to 9600.

Keep the Serial Monitor open. When connected to Wi-Fi and then AWS IoT Core, any messages received on the topic esp32/sub are logged to this console. The device is also now publishing to the topic esp32/pub.
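
Each published message is a small JSON document built by the sketch’s publishMessage() function; a representative payload looks like the following (the values shown are illustrative):

{"time": 42000, "sensor_a0": 512}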

The topics are set at the top of the sketch. When changing or adding topics, remember to add permissions in the device policy.

// The MQTT topics that this device should publish/subscribe
#define AWS_IOT_PUBLISH_TOPIC   "esp32/pub"
#define AWS_IOT_SUBSCRIBE_TOPIC "esp32/sub"
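
For example, permitting the device to publish on a second, hypothetical topic named esp32/status would mean adding one more statement like this to Esp32Policy, with REGION and ACCOUNT_ID substituted as before:

{
  "Effect": "Allow",
  "Action": "iot:Publish",
  "Resource": "arn:aws:iot:REGION:ACCOUNT_ID:topic/esp32/status"
}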

Within this sketch, the relevant functions are publishMessage() and messageHandler().

The publishMessage() function creates a JSON object with the current time in milliseconds and the analog value of pin A0 on the device. It then publishes this JSON object to the topic esp32/pub.

void publishMessage()
{
  StaticJsonDocument<200> doc;
  doc["time"] = millis();
  doc["sensor_a0"] = analogRead(0);
  char jsonBuffer[512];
  serializeJson(doc, jsonBuffer); // print to client

  client.publish(AWS_IOT_PUBLISH_TOPIC, jsonBuffer);
}

The messageHandler() function prints out the topic and payload of any message from a subscribed topic. To see all the ways to parse JSON messages in Arduino, see the deserializeJson() example.

void messageHandler(String &topic, String &payload) {
  Serial.println("incoming: " + topic + " - " + payload);

//  StaticJsonDocument<200> doc;
//  deserializeJson(doc, payload);
//  const char* message = doc["message"];
}
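
As a minimal sketch of that parsing (assuming the incoming payload is a JSON object with a message field, as in the thermal printer example later in this post), the handler can check for deserialization errors before using the result:

void messageHandler(String &topic, String &payload) {
  Serial.println("incoming: " + topic + " - " + payload);

  StaticJsonDocument<200> doc;

  // Bail out early if the payload is not valid JSON
  DeserializationError error = deserializeJson(doc, payload);
  if (error) {
    Serial.print("deserializeJson() failed: ");
    Serial.println(error.c_str());
    return;
  }

  // The lookup returns null if the "message" field is absent
  const char* message = doc["message"];
  if (message != nullptr) {
    Serial.println(message);
  }
}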

Additional topic subscriptions can be added within the connectAWS() function by adding another line similar to the following.

// Subscribe to a topic
  client.subscribe(AWS_IOT_SUBSCRIBE_TOPIC);

  Serial.println("AWS IoT Connected!");

Deploying the lambda-iot-rule AWS SAR application

Now that an ESP32 device has been connected to AWS IoT, the following steps walk through deploying an AWS Serverless Application Repository application. This is a base for building serverless integration with a physical device.

  1. On the lambda-iot-rule AWS Serverless Application Repository application page, make sure that the Region is the same as the AWS IoT device.
  2. Choose Deploy.
  3. Under Application settings, for PublishTopic, enter esp32/sub. This is the topic to which the ESP32 device is subscribed. It receives messages published to this topic. Likewise, set SubscribeTopic to esp32/pub, the topic on which the device publishes.
  4. Choose Deploy.
  5. When creation of the application is complete, choose Test app to navigate to the application page. Keep this page open for the next section.

Monitoring and testing

At this stage, two Lambda functions, a DynamoDB table, and an AWS IoT rule have been deployed. The IoT rule forwards messages on the topic esp32/pub to TopicSubscriber, a Lambda function that inserts the messages into the DynamoDB table.

  1. On the application page, under Resources, choose MyTable. This is the DynamoDB table that the TopicSubscriber Lambda function updates.
  2. Choose Items. If the ESP32 device is still active and connected, messages that it has published appear here.

The TopicPublisher Lambda function is invoked by the API Gateway endpoint and publishes to the AWS IoT topic esp32/sub.

  1. On the application page, find the Application endpoint.
  2. To test that the TopicPublisher function is working, enter the following into a terminal or command-line utility, replacing ENDPOINT with the URL from above.
    curl -d '{"text":"Hello world!"}' -H "Content-Type: application/json" -X POST https://ENDPOINT/publish

Upon success, the request returns a copy of the message.

Back in the Serial Monitor, the message published to the topic esp32/sub prints out.
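
Given the messageHandler() in the sketch, the corresponding line in the Serial Monitor should look roughly like this:

incoming: esp32/sub - {"text":"Hello world!"}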

Creating an IoT thermal printer

With the completion of the previous steps, the ESP32 device currently logs incoming messages to the serial console.

The following steps demonstrate how the code can be modified to use incoming messages to interact with a peripheral component. This is done by wiring a thermal printer to the ESP32 in order to physically print messages. The REST endpoint from the previous section can be used as a webhook in third-party applications to interact with this device.

A wiring diagram depicting an ESP32 connected to a thermal printer.

  1. Follow the product instructions for powering, wiring, and installing the correct Arduino library.
  2. Ensure that the thermal printer is working by holding the power button on the printer while connecting the power. A sample receipt prints. On that receipt, the default baud rate is specified as either 9600 or 19200.
  3. In the Arduino code from earlier, include the following lines at the top of the main sketch file. The second line defines what interface the thermal printer is connected to. &Serial2 is used to set the third hardware serial interface on the ESP32. For this example, the pins on the Sparkfun ESP32 Thing, GPIO16/GPIO17, are used for RX/TX respectively.
    #include "Adafruit_Thermal.h"
    
    Adafruit_Thermal printer(&Serial2);
  4. Replace the setup() function with the following to initialize the printer on device bootup. Change the baud rate of Serial2.begin() to match what is specified in the test print. The default is 19200.
    void setup() {
      Serial.begin(9600);
    
      // Start the thermal printer
      Serial2.begin(19200);
      printer.begin();
      printer.setSize('S');
    
      connectAWS();
    }
    
  5. Replace the messageHandler() function with the following. On any incoming message, it parses the JSON and prints the message on the thermal printer.
    void messageHandler(String &topic, String &payload) {
      Serial.println("incoming: " + topic + " - " + payload);
    
      // deserialize json
      StaticJsonDocument<200> doc;
      deserializeJson(doc, payload);
      String message = doc["message"];
    
      // Print the message on the thermal printer
      printer.println(message);
      printer.feed(2);
    }
  6. Choose Upload.
  7. After the firmware has successfully uploaded, open the Serial Monitor to confirm that the board has connected to AWS IoT.
  8. Enter the following into a command-line utility, replacing ENDPOINT, as in the previous section.
    curl -d '{"message": "Hello World!"}' -H "Content-Type: application/json" -X POST https://ENDPOINT/publish

If successful, the device prints the message “Hello World!” on the attached thermal printer. This is a fully serverless IoT printer that can be triggered remotely from a webhook. As an example, this can be used with GitHub Webhooks to print a physical readout of events.

Conclusion

Using a simple Arduino sketch, an AWS Serverless Application Repository application, and a microcontroller, this post demonstrated how to build a basic serverless workflow for communicating with a physical device. It also showed how to expand that into an IoT thermal printer with real-world applications.

With AWS serverless, advanced compute and extensibility can be added to an IoT device, from machine learning to translation services and beyond. And the Arduino programming environment, with its vast collection of open-source libraries, projects, and code examples, opens up a world of possibilities. The next step is to explore what can be done with an Arduino and the capabilities of AWS serverless. The sample Arduino code for this project and more can be found at this GitHub repository.

Video Friday: How to Train Your Robot to Pull an Airplane

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/robotics/robotics-hardware/video-friday-iit-robot-airplane

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

Robotic Arena – January 25, 2020 – Wrocław, Poland
DARPA SubT Urban Circuit – February 18-27, 2020 – Olympia, Wash., USA
HRI 2020 – March 23-26, 2020 – Cambridge, U.K.
ICARSC 2020 – April 15-17, 2020 – Ponta Delgada, Azores
ICRA 2020 – May 31-June 4, 2020 – Paris, France

Let us know if you have suggestions for next week, and enjoy today’s videos.


Your Next Salad Could Be Grown by a Robot

Post Syndicated from Erico Guizzo original https://spectrum.ieee.org/robotics/industrial-robots/your-next-salad-could-be-grown-by-a-robot

At first glance, the crops don’t look any different from other crops blanketing the Salinas Valley, in California, which is often called “America’s salad bowl.” All you see are rows and rows of lettuce, broccoli, and cauliflower stretching to the horizon. But then the big orange robots roll through.

The machines are on a search-and-destroy mission. Their target? Weeds. Equipped with tractorlike wheels and an array of cameras and environmental sensors, they drive autonomously up and down the rows of produce, hunting for any leafy green invaders. Rather than spraying herbicides, they deploy a retractable hoe that kills the weeds swiftly and precisely.

The robots belong to FarmWise, a San Francisco startup that wants to use robotics and artificial intelligence to make agriculture more sustainable—and tastier. The company has raised US $14.5 million in a recent funding round, and in 2020 it plans to deploy its first commercial fleet of robots, with more than 10 machines serving farmers in the Salinas Valley.

FarmWise says that although its robots are currently optimized for weeding, future designs will do much more. “Our goal is to become a universal farming platform,” says cofounder and CEO Sébastien Boyer. “We want to automate pretty much all tasks from seeding all the way to harvesting.”

Boyer envisions the robots collecting vast amounts of data, including detailed images of the crops and parameters that affect their health such as temperature, humidity, and soil conditions. But it’s what the robots will do with the data that makes them truly remarkable. Using machine learning, they’ll identify each plant individually, determine whether it’s thriving, and tend to it accordingly. Thanks to these AI-powered robots, every broccoli stalk will get the attention it needs to be the best broccoli it can be.

Automation is not new to agriculture. Wheeled harvesters are increasingly autonomous, and farmers have long been flying drones to monitor their crops from above. Also under development are robots designed to pick fruits and vegetables—apples, peppers, strawberries, tomatoes, grapes, cucumbers, asparagus. More recently, a number of robotics companies have turned their attention to ways they can improve the quality or yield of crops.

Farming robots are still a “very nascent market,” says Rian Whitton, a senior analyst at ABI Research, in London, but it’s one that will “expand significantly over the next 10 years.” ABI forecasts that annual shipments of mobile robots for agriculture will exceed 100,000 units globally by 2030, 100 times the volume deployed today.

It’s still a small number compared with the millions of tractors and other farming vehicles sold each year, but Whitton notes that demand for automation will likely accelerate due to labor shortages in many parts of the world.

FarmWise says it has worked closely with farmers to understand their needs and develop its robots based on their feedback. So how do they work? Boyer is not prepared to reveal specifics about the company’s technology, but he says the machines operate in three steps.

First, the sensor array captures images and other relevant data about the crops and stores that information on both onboard computers and cloud servers. The second step is the decision-making process, in which specialized deep-learning algorithms analyze the data. There’s an algorithm trained to detect plants in an image, and the robots combine that output with GPS and other location data to precisely identify each plant. Another algorithm is trained to decide whether a plant is, say, a lettuce head or a weed. The final step is the physical action that the machines perform on the crops—for example, deploying the weeding hoe.

Boyer says the robots perform the three steps in less than a second. Indeed, the robots can drive through the fields clearing the soil at a pace that would be virtually impossible for humans to match. FarmWise says its robots have removed weeds from more than 10 million plants to date.

Whitton, the ABI analyst, says focusing on weeding as an initial application makes sense. “There are potentially billions of dollars to be saved from less pesticide use, so that’s the fashionable use case,” he says. But he adds that commercial success for agriculture automation startups will depend on whether they can expand their services to perform additional farming tasks as well as operate in a variety of regions and climates.

Already FarmWise has a growing number of competitors. Deepfield Robotics, a spin-out of the German conglomerate Robert Bosch, is testing an autonomous vehicle that kills weeds by punching them into the ground. The Australian startup Agerris is developing mobile robots for monitoring and spraying crops. And Sunnyvale, Calif.–based Blue River Technology, acquired by John Deere in 2017, is building robotic machines for weeding large field crops like cotton and soybeans.

FarmWise says it has recently completed a redesign of its robots. The new version is better suited to withstand the harsh conditions often found in the field, including mud, dust, and water. The company is now expanding its staff as it prepares to deploy its robotic fleet in California, and eventually in other parts of the United States and abroad.

Boyer is confident that farms everywhere will one day be filled with robots—and that they’ll grow some of the best broccoli you’ve ever tasted.

Video Friday: These Robots Wish You Happy Holidays!

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/robotics/robotics-hardware/video-friday-robots-wish-happy-holidays

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

Robotic Arena – January 25, 2020 – Wrocław, Poland
DARPA SubT Urban Circuit – February 18-27, 2020 – Olympia, Wash., USA
ICARSC 2020 – April 15-17, 2020 – Ponta Delgada, Azores

Let us know if you have suggestions for next week, and enjoy today’s videos.

Thank you to our readers and Happy Holidays from IEEE Spectrum’s robotics team!
—Erico, Evan, and Fan


Smash This Robot

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/robotics/robotics-hardware/smash-this-robot

Researchers at EPFL have developed a soft robotic insect that uses artificial soft muscles called dielectric elastomer actuators to drive tiny feet that propel the little bot along at a respectable speed. And since the whole thing is squishy and already mostly flat, you can repeatedly smash it into the ground with a fly swatter, and then peel it off and watch it start running again. Get ready for one of the most brutal robot abuse videos you’ve ever seen.

A Robot That Explains Its Actions Is a First Step Towards AI We Can (Maybe) Trust

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/robotics/artificial-intelligence/a-robot-that-explains-its-actions

For the most part, robots are a mystery to end users. And that’s part of the point: Robots are autonomous, so they’re supposed to do their own thing (presumably the thing that you want them to do) and not bother you about it. But as humans start to work more closely with robots, in collaborative tasks or social or assistive contexts, it’s going to be hard for us to trust them if their autonomy is such that we find it difficult to understand what they’re doing.

In a paper published in Science Robotics, researchers from UCLA describe a robotic system that can generate different kinds of real-time, human-readable explanations of its actions; they then ran tests to figure out which explanations were most effective at improving a human’s trust in the system. Does this mean we can totally understand and trust robots now? Not yet—but it’s a start.

The Next Frontier in AI: Nothing

Post Syndicated from Max Versace original https://spectrum.ieee.org/tech-talk/robotics/artificial-intelligence/the-next-frontier-in-ai-nothing

This is a guest post. The views expressed here are solely those of the author and do not represent positions of IEEE Spectrum or the IEEE.

At an early age, as we take our first steps into the world of math and numbers, we learn that one apple plus another apple equals two apples. We learn to count real things. Only later are we introduced to a weird concept: zero… or the number of apples in an empty box.

The concept of “zero” revolutionized math after Hindu-Arabic scholars and then the Italian mathematician Fibonacci introduced it into our modern numbering system. While today we comfortably use zero in all our mathematical operations, the concept of “nothing” has yet to enter the realm of artificial intelligence.

In a sense, AI and deep learning still need to learn how to recognize and reason with nothing.

Is it an apple or a banana? Neither!

Traditionally, deep learning algorithms such as deep neural networks (DNNs) are trained in a supervised fashion to recognize specific classes of things.

In a typical task, a DNN might be trained to visually recognize a certain number of classes, say pictures of apples and bananas. Deep learning algorithms, when fed data of sufficient quantity and quality, are really good at coming up with precise, low-error, confident classifications.

The problem arises when a third, unknown object appears in front of the DNN. If an object that was not present in the training set is introduced, such as an orange, the network is forced to “guess” and classify it as whichever known class comes closest—an apple!

Basically, the world for a DNN trained on apples and bananas is completely made of apples and bananas. It can’t conceive of the whole fruit basket.
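A few lines of code make the failure mode visible. This is a toy example with invented logits, but the arithmetic is the point: a softmax over two classes has to hand all of its probability to "apple" and "banana," so an orange comes out looking like a confident apple.

```python
import math

# Toy closed-set classifier: two classes, no way to say "neither."
# The logits below are made-up stand-ins for a network's raw outputs.

def softmax(logits):
    exps = [math.exp(z) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

classes = ["apple", "banana"]
orange_logits = [2.1, 1.3]  # hypothetical outputs when shown an orange

probs = softmax(orange_logits)
best = max(range(len(classes)), key=lambda i: probs[i])
print(f"prediction: {classes[best]} with p = {probs[best]:.2f}")
# -> prediction: apple with p = 0.69 -- the orange cannot be "nothing"
```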

Enter the world of nothing

While it may not be obvious at first, the idea of “nothing,” or a “class zero,” is useful in several ways when training and deploying a DNN.

During the training process, if a DNN has the ability to classify items as “apple,” “banana,” or “nothing,” the algorithm’s developers can determine if it hasn’t effectively learned to recognize a particular class. For example, if pictures of fruit continue to yield “nothing” responses, perhaps the developers need to add another “class” of fruit to identify, such as oranges.

Meanwhile, in a deployment scenario, a DNN trained to recognize healthy apples and bananas can answer “nothing” if there is a deviation from the prototypical fruit it has learned to recognize. In this sense, the DNN may act as an anomaly detection network—aside from classifying apples and bananas, it can also, without further changes, signal when it sees something that deviates from the norm.

As of today, there are no easy ways to train a standard DNN so that it can provide the functionality above.

One new approach, called a lifelong DNN, naturally incorporates the concept of “nothing” in its architecture. A lifelong DNN does this by using feedback mechanisms to determine whether an input is a close match, or instead a mismatch, with what it has learned in the past.

This mechanism resembles how humans learn: we subconsciously and continuously check if our predictions match our world. For example, if somebody plays a trick on you and changes the height of your office chair, you’ll immediately notice it. That’s because you have a “model” of the height of your office chair that you’ve learned over time—if that model is disconfirmed, you realize the anomaly right away. We humans continuously check that our classifications match reality. If they don’t, our brains notice and emit an alert. For us, there are not only apples and bananas; there’s also the ability to reason that “I thought it was an apple, but it isn’t.”

A lifelong DNN captures this mechanism in its functioning, so it can output “nothing” when the model it has learned is disconfirmed.
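Neurala hasn't published the lifelong DNN's internals, so the following is only an illustration of the match/mismatch idea, assuming a simple prototype-distance scheme: store one learned feature prototype per class and answer "nothing" whenever an input sits far from all of them.

```python
# Illustrative match/mismatch check, not Neurala's actual architecture:
# one learned prototype (feature average) per class, plus a mismatch radius.

def euclidean(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

prototypes = {             # toy 2-D feature vectors "learned" per class
    "apple": [1.0, 0.2],
    "banana": [0.1, 1.1],
}
THRESHOLD = 0.5            # assumed mismatch radius; tuned in practice

def classify(features):
    label, dist = min(
        ((name, euclidean(features, proto)) for name, proto in prototypes.items()),
        key=lambda pair: pair[1],
    )
    # The feedback check: does the input confirm the learned model?
    return label if dist <= THRESHOLD else "nothing"

print(classify([0.9, 0.3]))  # close to the apple prototype -> "apple"
print(classify([0.6, 0.6]))  # disconfirms every learned model -> "nothing"
```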

Nothing to work with, no problem

Armed with a basic understanding of “nothing” using the example of apples and bananas, let’s now consider how this would play out in real-world applications beyond fruit identification.

Consider the manufacturing sector, where machines are tasked with producing massive volumes of products. Training a traditional computer-vision system to recognize different abnormalities in a product—say, surface scratches—is very challenging. On a well-run manufacturing line there aren’t many examples of what “bad” products look like, and “bad” can take an endless number of forms. There simply isn’t an abundance of data about bad products that can be used to train the system.

But with a lifelong DNN, a developer could train the computer-vision system to recognize what different examples of “good” products look like. Then, when the system detects a product that doesn’t match its definition of good, it can categorize that item as an anomaly to be handled appropriately.
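A deliberately tiny sketch shows the train-on-good-only idea, assuming a single scalar measurement in place of learned image features (the readings are fabricated): model the statistics of known-good products, then flag anything outside the learned band.

```python
import statistics

# Anomaly detection trained only on "good" examples: learn the normal
# band of a measurement, flag whatever falls outside it. The readings
# are fabricated; a real system would use learned image features.

good_samples = [0.98, 1.02, 1.00, 0.99, 1.01, 1.03, 0.97]
mu = statistics.mean(good_samples)
sigma = statistics.stdev(good_samples)

def inspect(reading, k=3.0):
    """Flag readings more than k standard deviations from the good mean."""
    return "anomaly" if abs(reading - mu) > k * sigma else "good"

print(inspect(1.01))  # inside the learned band -> good
print(inspect(1.40))  # unlike any good product seen -> anomaly
```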

For manufacturers, lifelong DNNs and the ability to detect anomalies can save time and improve efficiency in the production line. There may be similar benefits for countless other industries that are increasingly relying on AI.

Who knew that “nothing” could be so important?

Max Versace is CEO and co-founder of AI-powered visual inspections company Neurala.

Robot With Liquid Metal Tendons Can Heal Itself

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/robotics/robotics-hardware/robot-with-liquid-metal-tendons-can-heal-itself

The more dynamic robots get, the more likely they are to break. Or rather, all robots are 100 percent guaranteed to break eventually (this is one of their defining characteristics). More dynamic robots will also break more violently. While they’re in the lab, this isn’t a big deal, but for long-term real-world use, wouldn’t it be great if we could rely on robots to repair themselves?

Video Friday: This Robot Is Learning About Personal Space to Avoid Pesky Humans

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/robotics/robotics-hardware/video-friday-robot-personal-space

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

Robotic Arena – January 25, 2020 – Wrocław, Poland
DARPA SubT Urban Circuit – February 18-27, 2020 – Olympia, Wash., USA
ICARSC 2020 – April 15-17, 2020 – Ponta Delgada, Azores

Let us know if you have suggestions for next week, and enjoy today’s videos.


Facebook AI Launches Its Deepfake Detection Challenge

Post Syndicated from Eliza Strickland original https://spectrum.ieee.org/tech-talk/robotics/artificial-intelligence/facebook-ai-launches-its-deepfake-detection-challenge

In September, Facebook sent out a strange casting call: We need all types of people to look into a webcam or phone camera and say very mundane things. The actors stood in bedrooms, hallways, and backyards, and they talked about topics such as the perils of junk food and the importance of arts education. It was a quick and easy gig—with an odd caveat. Facebook researchers would be altering the videos, extracting each person’s face and fusing it onto another person’s head. In other words, the participants had to agree to become deepfake characters. 

Facebook’s artificial intelligence (AI) division put out this casting call so it could ethically produce deepfakes—a term that originally referred to videos that had been modified using a certain face-swapping technique but is now a catchall for manipulated video. The Facebook videos are part of a training data set that the company assembled for a global competition called the Deepfake Detection Challenge. In this competition—produced in cooperation with Amazon, Microsoft, the nonprofit Partnership on AI, and academics from eight universities—researchers around the world are vying to create automated tools that can spot fraudulent media.

The competition launched today, with an announcement at the AI conference NeurIPS, and will accept entries through March 2020. Facebook has dedicated more than US $10 million to awards and grants.

Cristian Canton Ferrer helped organize the challenge as research manager for Facebook’s AI Red Team, which analyzes the threats that AI poses to the social media giant. He says deepfakes are a growing danger not just to Facebook but to democratic societies. Manipulated videos that make politicians appear to do and say outrageous things could go viral before fact-checkers have a chance to step in.

While such a full-blown synthetic scandal has yet to occur, the Italian public recently got a taste of the possibilities. In September, a satirical news show aired a deepfake video featuring a former Italian prime minister apparently lavishing insults on other politicians. Most viewers realized it was a parody, but a few did not.

The U.S. presidential elections in 2020 are an added incentive to get ahead of the problem, says Canton Ferrer. He believes that media manipulation will become much more common over the coming year, and that the deepfakes will get much more sophisticated and believable. “We’re thinking about what will be happening a year from now,” he says. “It’s a cat-and-mouse approach.” Canton Ferrer’s team aims to give the cat a head start, so it will be ready to pounce.

The growing threat of deepfakes

Just how easy is it to make deepfakes? A recent audit of online resources for altering videos found that the available open-source software still requires a good amount of technical expertise. However, the audit also turned up apps and services that are making it easier for almost anyone to get in on the action. In China, a deepfake app called Zao took the country by storm in September when it offered people a simple way to superimpose their own faces onto those of actors like Leonardo DiCaprio and Marilyn Monroe.

It may seem odd that the data set compiled for Facebook’s competition is filled with unknown people doing unremarkable things. But a deepfake detector that works on those mundane videos should work equally well for videos featuring politicians. To make the Facebook challenge as realistic as possible, Canton Ferrer says his team used the most common open-source techniques to alter the videos—but he won’t name the methods, to avoid tipping off contestants. “In real life, they will not be able to ask the bad actors, ‘Can you tell me what method you used to make this deepfake?’” he says.

In the current competition, detectors will be scanning for signs of facial manipulation. However, the Facebook team is keeping an eye on new and emerging attack methods, such as full-body swaps that change the appearance and actions of a person from head to toe. “There are some of those out there, but they’re pretty obvious now,” Canton Ferrer says. “As they get better, we’ll add them to the data set.” Even after the detection challenge concludes in March, he says, the Facebook team will keep working on the problem of deepfakes.

As for how the winning detection methods will be used and whether they’ll be integrated into Facebook’s operations, Canton Ferrer says those decisions aren’t up to him. The Partnership on AI’s steering committee on AI and media integrity, which is overseeing the competition, will decide on the next steps, he says. Claire Leibowicz, who leads that steering committee, says the group will consider “coordinated efforts” to fight back against the global challenge of synthetic and manipulated media.

DARPA’s efforts on deepfake detection

The Facebook challenge is far from the only effort to counter deepfakes. DARPA’s Media Forensics program launched in 2016, a year before the first deepfake videos surfaced on Reddit. Program manager Matt Turek says that as the technology took off, the researchers working under the program developed a number of detection technologies, generally looking for “digital integrity, physical integrity, or semantic integrity.”

Digital integrity is defined by the patterns in an image’s pixels that are invisible to the human eye. These patterns can arise from cameras and video processing software, and any inconsistencies that appear are a tip-off that a video has been altered. Physical integrity refers to the consistency in lighting, shadows, and other physical attributes in an image. Semantic integrity considers the broader context. If a video shows an outdoor scene, for example, a deepfake detector might check the time stamp and location to look up the weather report from that time and place. The best automated detector, Turek says, would “use all those techniques to produce a single integrity score that captures everything we know about a digital asset.”
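Turek doesn't spell out how that fusion works, so here is a minimal sketch assuming the simplest possible combiner, a weighted average over the three detector families; the weights and scores are invented, and a production system would presumably learn the fusion rather than hand-set it.

```python
# Hypothetical fusion of the three integrity checks into one score.
# Each input is a 0-to-1 authenticity estimate from one detector family;
# the weights are invented for illustration.

def integrity_score(digital, physical, semantic, weights=(0.4, 0.3, 0.3)):
    scores = (digital, physical, semantic)
    return sum(w * s for w, s in zip(weights, scores))

# A video with clean pixels but inconsistent lighting and a weather
# mismatch still ends up with a middling overall score:
print(f"{integrity_score(digital=0.9, physical=0.3, semantic=0.2):.2f}")  # 0.51
```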

Turek says his team has created a prototype Web portal (restricted to its government partners) to demonstrate a sampling of the detectors developed during the program. When the user uploads a piece of media via the Web portal, more than 20 detectors employ a range of different approaches to try to determine whether an image or video has been manipulated. Turek says his team continues to add detectors to the system, which is already better than humans at spotting fakes.

A successor to the Media Forensics program will launch in mid-2020: the Semantic Forensics program. This broader effort will cover all types of media—text, images, videos, and audio—and will go beyond simply detecting manipulation. It will also seek methods to understand the importance of the manipulations, which could help organizations decide which content requires human review. “If you manipulate a vacation photo by adding a beach ball, it really doesn’t matter,” Turek says. “But if you manipulate an image about a protest and add an object like a flag, that could change people’s understanding of who was involved.”

The Semantic Forensics program will also try to develop tools to determine if a piece of media really comes from the source it claims. Eventually, Turek says, he’d like to see the tech community embrace a system of watermarking, in which a digital signature would be embedded in the media itself to help with the authentication process. One big challenge of this idea is that every software tool that interacts with the image, video, or other piece of media would have to “respect that watermark, or add its own,” Turek says. “It would take a long time for the ecosystem to support that.”
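As a toy version of the authentication idea, the sketch below checks a cryptographic signature carried alongside a file, which is much simpler than the in-media watermark Turek envisions: it assumes a shared key rather than a publisher's asymmetric key pair, and it breaks on any re-encoding, which is precisely the ecosystem problem he describes.

```python
import hashlib
import hmac

# Toy provenance check: sign the media bytes, verify before trusting.
# The key and media bytes are placeholders; a real system would use
# asymmetric signatures and a watermark robust to re-encoding.

SIGNING_KEY = b"publisher-secret-key"

def sign(media_bytes: bytes) -> str:
    return hmac.new(SIGNING_KEY, media_bytes, hashlib.sha256).hexdigest()

def verify(media_bytes: bytes, signature: str) -> bool:
    return hmac.compare_digest(sign(media_bytes), signature)

original = b"...video bytes..."
tag = sign(original)
print(verify(original, tag))                 # True: untouched media
print(verify(b"...tampered bytes...", tag))  # False: any edit breaks the tag
```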

A deepfake detection tool for consumers

In the meantime, the AI Foundation has a plan. This nonprofit is building a tool called Reality Defender that’s due to launch in early 2020. “It will become your personal AI guardian who’s watching out for you,” says Rob Meadows, president and chief technology officer for the foundation.

Reality Defender is a plug-in for Web browsers and an app for mobile phones. It scans everything on the screen using a suite of automatic detectors, then alerts the user about altered media. Detection alone won’t make for a useful tool, since Photoshop and other editing tools are widely used in fashion, advertising, and entertainment. If Reality Defender draws attention to every altered piece of content, Meadows notes, “it will flood consumers to the point where they say, ‘We don’t care anymore, we have to tune it out.’”

To avoid that problem, users will be able to dial the tool’s sensitivity up or down, depending on how many alerts they want. Meadows says beta testers are currently training the system, giving it feedback on which types of manipulations they care about. Once Reality Defender launches, users will be able to personalize their AI guardian by giving it a thumbs-up or thumbs-down on alerts, until it learns their preferences. “A user can say, ‘For my level of paranoia, this is what works for me,’ ” Meadows says.
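Reality Defender's internals aren't public, so the following is a purely hypothetical sketch of how a sensitivity dial plus thumbs-up/thumbs-down feedback could move an alert threshold; every name and constant is an assumption.

```python
# Hypothetical user-tunable alert filter; not Reality Defender's code.

class AlertFilter:
    def __init__(self, sensitivity=0.5):
        self.threshold = 1.0 - sensitivity  # higher sensitivity -> more alerts

    def should_alert(self, manipulation_score: float) -> bool:
        """Alert when a detector's 0-to-1 manipulation score clears the bar."""
        return manipulation_score >= self.threshold

    def feedback(self, manipulation_score: float, useful: bool, step=0.05):
        """Thumbs-up lowers the bar slightly; thumbs-down raises it past the score."""
        if useful:
            self.threshold = max(0.0, self.threshold - step)
        else:
            self.threshold = min(1.0, max(self.threshold, manipulation_score) + step)

guardian = AlertFilter(sensitivity=0.4)  # starting threshold: 0.6
print(guardian.should_alert(0.7))        # True -> the user sees an alert
guardian.feedback(0.7, useful=False)     # "I don't care about these"
print(guardian.should_alert(0.7))        # False after the adjustment
```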

He sees the software as a useful stopgap solution, but ultimately he hopes that his group’s technologies will be integrated into platforms such as Facebook, YouTube, and Twitter. He notes that Biz Stone, cofounder of Twitter, is a member of the AI Foundation’s board. To truly protect society from fake media, Meadows says, we need tools that prevent falsehoods from getting hosted on platforms and spread via social media. Debunking them after they’ve already spread is too late.

The researchers at Jigsaw, a unit of Alphabet that works on technology solutions for global challenges, would tend to agree. Technical research manager Andrew Gully says his team identified synthetic media as a societal threat some years back. To contribute to the fight, Jigsaw teamed up with sister company Google AI to produce a deepfake data set of its own in late 2018, which they contributed to the FaceForensics data set hosted by the Technical University of Munich.

Gully notes that while we haven’t yet seen a political crisis triggered by a deepfake, these videos are also used for bullying and “revenge porn,” in which a targeted woman’s face is pasted over an actor’s face in a pornographic video. (While pornographic deepfakes could in theory target men, a recent audit of deepfake content found that 100 percent of the pornographic videos focused on women.) What’s more, Gully says people are more likely to believe videos featuring unknown individuals than videos featuring famous politicians.

But it’s the threat to free and fair elections that feels most crucial in this U.S. election year. Gully says systems that detect deepfakes must take a careful approach in communicating the results to users. “We know already how difficult it is to convince people in the face of their own biases,” Gully says. “Detecting a deepfake video is hard enough, but that’s easy compared to how difficult it is to convince people of things they don’t want to believe.”