Tag Archives: robotics

Watch World Champion Soccer Robots Take on Humans at RoboCup

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/robotics/robotics-hardware/watch-world-champion-soccer-robots-take-on-humans-at-robocup

Humans may not be doomed at soccer quite yet

RoboCup 2019 took place earlier this month down in Sydney, Australia. While there are many different events, including RoboCup@Home, RoboCup Rescue, and a bunch of different soccer leagues, one of the most compelling is the Middle Size League (MSL), in which mobile robots, each about the size of a fire hydrant, play soccer with a regulation FIFA ball. The robots are fully autonomous, making their own decisions in real time about when to dribble, pass, and shoot.

The long-term goal of RoboCup is this:

By the middle of the 21st century, a team of fully autonomous humanoid robot soccer players shall win a soccer game, complying with the official rules of FIFA, against the winner of the most recent World Cup.

While the robots are certainly not there yet, they’re definitely getting closer.

Intel’s Neuromorphic System Hits 8 Million Neurons, 100 Million Coming by 2020

Post Syndicated from Samuel K. Moore original https://spectrum.ieee.org/tech-talk/robotics/artificial-intelligence/intels-neuromorphic-system-hits-8-million-neurons-100-million-coming-by-2020

Researchers can use the 64-chip Pohoiki Beach system to make systems that learn and see the world more like humans

At the DARPA Electronics Resurgence Initiative Summit today in Detroit, Intel plans to unveil an 8-million-neuron neuromorphic system comprising 64 Loihi research chips—codenamed Pohoiki Beach. Loihi chips are built with an architecture that more closely matches the way the brain works than do chips designed to do deep learning or other forms of AI. For the set of problems that such “spiking neural networks” are particularly good at, Loihi is about 1,000 times as fast as a CPU and 10,000 times as energy efficient. The new 64-Loihi system represents the equivalent of 8 million neurons, but that’s just a step toward a 768-chip, 100-million-neuron system that the company plans for the end of 2019.
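To make “spiking neural network” a bit more concrete, here is a minimal leaky integrate-and-fire neuron in Python. This is the generic textbook model of the dynamics Loihi implements in silicon, not Intel’s actual implementation, and all parameter values are illustrative:

```python
import numpy as np

def simulate_lif(input_current, dt=1e-3, tau=20e-3, v_rest=0.0,
                 v_thresh=1.0, v_reset=0.0, r=1.0):
    """Leaky integrate-and-fire neuron: the membrane potential leaks toward
    rest while integrating input, and the neuron emits a spike (then resets)
    whenever the potential crosses threshold. Returns spike times in seconds."""
    v = v_rest
    spike_times = []
    for step, i_in in enumerate(input_current):
        v += (dt / tau) * (v_rest - v + r * i_in)  # leak plus integration
        if v >= v_thresh:
            spike_times.append(step * dt)  # record the spike
            v = v_reset                    # reset after firing
    return spike_times

# A constant suprathreshold current makes the neuron fire periodically;
# information is carried in the timing of these discrete spikes rather than
# in continuous activations, which is where the efficiency gains come from.
print(simulate_lif(np.full(1000, 1.5)))  # 1 second of input at 1-ms steps
```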

Intel and its research partners are just beginning to test what massive neural systems like Pohoiki Beach can do, but so far the evidence points to even greater performance and efficiency, says Mike Davies, director of neuromorphic research at Intel.

“We’re quickly accumulating results and data that there are definite benefits… mostly in the domain of efficiency. Virtually every one that we benchmark…we find significant gains in this architecture,” he says.

Video Friday: This NASA Robot Uses “Fishhook Grippers” to Climb Rock Walls

Post Syndicated from Evan Ackerman, Erico Guizzo and Fan Shi original https://spectrum.ieee.org/automaton/robotics/robotics-hardware/video-friday-nasa-lemur-robot

Your weekly selection of awesome robot videos

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

ICRES 2019 – July 29-30, 2019 – London, U.K.
DARPA SubT Tunnel Circuit – August 15-22, 2019 – Pittsburgh, Pa., USA
IEEE Africon 2019 – September 25-27, 2019 – Accra, Ghana
ISRR 2019 – October 6-10, 2019 – Hanoi, Vietnam

Let us know if you have suggestions for next week, and enjoy today’s videos.


First the E-Bike, Next the Flying Car

Post Syndicated from Tekla S. Perry original https://spectrum.ieee.org/view-from-the-valley/robotics/industrial-robots/a-creator-of-3d-printing-technology-rolls-into-the-bike-manufacturing-business-with-carbon-fiber-frames

This company thinks its 3D-printing technology for carbon fiber can do anything

Carbon fiber composites are incredibly strong for their weight; that’s why they’re key to the newest aircraft designs. However, they’re only strong in one direction, so they’re generally layered or woven in grid patterns before being shaped into structures. That means one set of fibers carries the load some of the time, and another set carries it at other times—which is not the most efficient use of the material.

In 2014, Hemant Bheda was CEO of Quantum Polymers, a company that makes extruded plastic rods, plates, and other shapes for machined parts. The company used chopped carbon fiber in some of its materials, but a potential customer asked for something more demanding: continuous carbon fiber embedded in a polymer along carefully laid paths, giving the material exceptional mechanical properties.

“I said that we couldn’t do it,” Bheda recalls, but he kept thinking about that request. As was typical for him, an EE whose career has focused on software, particularly image compression and video, he kept wondering whether the issue was really a software problem rather than a hardware problem.

Then, he says, he came across a paper about 3D printing. Why, he thought, “can’t we come up with an algorithm for optimal orientation of the fiber in 3-D space and only use the fiber in the directions you need it?”
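Bheda’s intuition, putting fiber only where the load is, maps onto a standard idea from structural optimization: align each filament with the local principal stress direction. Here’s a minimal sketch of that step, assuming a stress tensor from a finite-element solve; this illustrates the general approach, not the company’s proprietary algorithm:

```python
import numpy as np

def fiber_direction(stress_tensor):
    """Given the symmetric 3x3 stress tensor at one point of a part (e.g.,
    from an FEA solve), return the unit vector of the dominant principal
    stress, i.e., the direction along which fiber should be laid."""
    eigvals, eigvecs = np.linalg.eigh(stress_tensor)  # symmetric matrix
    dominant = np.argmax(np.abs(eigvals))             # largest |stress|
    return eigvecs[:, dominant]

# Example: mostly uniaxial tension along x, with a little xy shear.
sigma = np.array([[100.0, 10.0, 0.0],
                  [ 10.0,  5.0, 0.0],
                  [  0.0,  0.0, 0.0]])
print(fiber_direction(sigma))  # close to [1, 0, 0]: lay fiber along x
```

Repeating this at every point yields a field of fiber directions for the printer to follow, so the material is strong exactly where, and in the direction that, it needs to be.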

How High Fives Help Us Get in Touch With Robots

Post Syndicated from Naomi Fitter original https://spectrum.ieee.org/automaton/robotics/robotics-hardware/how-high-fives-help-us-get-in-touch-with-robots

Social touch is a cornerstone of human interaction, and robots are learning how to do it too

The human sense of touch is so naturally ingrained in our everyday lives that we often don’t notice its presence. Even so, touch is a crucial sensing ability that helps people to understand the world and connect with others. As the market for robots grows, and as robots become more ingrained into our environments, people will expect robots to participate in a wide variety of social touch interactions. At Oregon State University’s Collaborative Robotics and Intelligent Systems (CoRIS) Institute, I research how to equip everyday robots with better social-physical interaction skills—from playful high-fives to challenging physical therapy routines.  

Robots Have a Hard Time Grasping These “Adversarial Objects”

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/robotics/robotics-hardware/uc-berkeley-adversarial-objects-for-robots

To make robot grasping more robust, researchers are designing objects that are as difficult as possible for robots to manipulate

There’s been a bunch of research recently into adversarial images, which are images of things that have been modified to be particularly difficult for computer vision algorithms to accurately identify. The idea is that these kinds of images can be used to help design more robust computer vision algorithms, because their “adversarial” nature is sort of a deliberate worst-case scenario—if your algorithm can handle adversarial images, then it can probably handle most other things.

Researchers at UC Berkeley have been extending this concept to robot grasping, with physical adversarial objects carefully designed to be tricky for conventional robot grippers to pick up. All it takes is a slight tweak to straightforward three-dimensional shapes, and a standard two-finger gripper will have all kinds of trouble finding a solid grasp.
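The Berkeley group drives its designs with analytic grasp metrics, but the basic adversarial recipe can be sketched much more simply: perturb an object’s geometry a little at a time, keeping any change that makes it harder to grasp. The grasp-quality function below is a crude stand-in for illustration, not the researchers’ actual metric:

```python
import numpy as np

def grasp_quality(vertices):
    """Stand-in score for graspability (higher = easier to grasp). A real
    pipeline would evaluate an analytic metric over candidate antipodal
    contact pairs; here we just penalize irregularity of the shape."""
    centered = vertices - vertices.mean(axis=0)
    return -np.var(np.linalg.norm(centered, axis=1))

def adversarial_perturb(vertices, steps=200, eps=0.002, seed=0):
    """Random-search 'attack': accept small vertex perturbations whenever
    they lower the grasp-quality score, yielding a shape that looks almost
    unchanged but frustrates the gripper."""
    rng = np.random.default_rng(seed)
    best, best_q = vertices.copy(), grasp_quality(vertices)
    for _ in range(steps):
        candidate = best + rng.normal(scale=eps, size=best.shape)
        q = grasp_quality(candidate)
        if q < best_q:  # worse for the gripper = better adversary
            best, best_q = candidate, q
    return best

cube = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)],
                dtype=float)
tricky_cube = adversarial_perturb(cube)  # subtly tweaked, harder to grasp
```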

Watch This Drone Explode Into Maple Seed Microdrones in Midair

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/robotics/drones/watch-this-drone-explode-into-maple-seed-microdrones-in-midair

Starting out together and then splitting apart makes these bio-inspired drones fly farther and more precisely

As useful as conventional fixed-wing and quadrotor drones have become, they still tend to be relatively complicated, expensive machines that you really want to be able to use more than once. When a one-way trip is all that you have in mind, you want something simple, reliable, and cheap, and we’ve seen a bunch of different designs for drone gliders that more or less fulfill those criteria. 

For an even simpler gliding design, you want to minimize both airframe mass and control surfaces, and the maple tree provides some inspiration in the form of samara, those distinctive seed pods that whirl to the ground in the fall. Samara are essentially just an unbalanced wing that spins, and while the natural ones don’t steer, adding an actuated flap to the robotic version and moving it at just the right time results in enough controllability to aim for a specific point on the ground.
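Steering a spinning wing comes down to timing: deflect the flap only during the slice of each rotation when the wing points toward where you want to go, and the force averaged over a revolution tilts the descent in that direction. Here is a minimal sketch of that once-per-revolution logic, with illustrative parameters rather than SUTD’s actual controller:

```python
import math

def flap_command(heading, target_bearing, window=math.radians(60)):
    """Cyclic control for a spinning samara wing: return 1.0 (flap deflected)
    only while the wing's heading is within +/- window/2 of the bearing to
    the target, and 0.0 otherwise."""
    # Smallest signed angle between current heading and desired bearing.
    error = math.atan2(math.sin(target_bearing - heading),
                       math.cos(target_bearing - heading))
    return 1.0 if abs(error) < window / 2 else 0.0

# As the wing spins, the flap fires once per revolution near the target
# bearing (90 degrees here), nudging the glide path toward the target.
for step in range(12):
    heading = (step * math.radians(30)) % (2 * math.pi)
    print(step, flap_command(heading, target_bearing=math.radians(90)))
```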

Roboticists at the Singapore University of Technology and Design (SUTD) have been experimenting with samara-inspired drones, and in a new paper in IEEE Robotics and Automation Letters they explore what happens if you attach five of the drones together and then separate them in midair.

This Is the Most Powerful Robot Arm Ever Installed on a Mars Rover

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/robotics/space-robots/robot-arm-mars-2020-rover

How NASA’s Jet Propulsion Laboratory designed the robot arm for the Mars 2020 rover

Last month, engineers at NASA’s Jet Propulsion Laboratory wrapped up the installation of the Mars 2020 rover’s 2.1-meter-long robot arm. This is the most powerful arm ever installed on a Mars rover. Even though the Mars 2020 rover shares much of its design with Curiosity, the new arm was redesigned to be able to do much more complex science, drilling into rocks to collect samples that can be stored for later recovery.

JPL is well known for developing robots that do amazing work in incredibly distant and hostile environments. The Opportunity Mars rover, to name just one example, had a 90-day planned mission but remained operational for 5,498 days in a robot-unfriendly place full of dust and wild temperature swings where even the most basic maintenance or repair is utterly impossible. (Its twin rover, Spirit, operated for 2,269 days.)

To learn more about the process behind designing robotic systems that are capable of feats like these, we talked with Matt Robinson, one of the engineers who designed the Mars 2020 rover’s new robot arm.

Robots Made Out of Branches Use Deep Learning to Walk

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/robotics/robotics-hardware/robots-tree-branches-deep-learning-walk

Researchers used deep reinforcement learning to teach these strange robots how to move

Designing robots is a finicky process, requiring an exhaustive amount of thought and care. It’s usually necessary to have a very clear idea of what you want your robot to do and how you want it to do it, and then you build a prototype, discover everything that’s wrong with it, build something different and better, and repeat until you run out of time and/or money.

But robots don’t necessarily have to be this complicated, as long as your expectations for what they should be able to do are correspondingly low. In a paper presented at a NeurIPS workshop last December, a group of researchers from the University of Tokyo and Preferred Networks experimented with building mobile robots out of a couple of generic servos plus stuff you can find on the ground, like tree branches. 
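The training recipe is standard deep reinforcement learning: a policy maps the robot’s joint states to servo commands and is rewarded for forward progress. Below is a bare-bones policy-gradient (REINFORCE) sketch of that loop; the dynamics and reward are a toy stand-in, not the authors’ simulator:

```python
import numpy as np

rng = np.random.default_rng(0)
theta = np.zeros((2, 2))  # linear policy: 2 servo commands from 2 joint angles
SIGMA = 0.1               # exploration noise

def rollout(theta, steps=50):
    """Run one episode; return forward progress and log-policy gradients.
    The state update and reward below are fake placeholder dynamics."""
    state, position, log_grads = np.zeros(2), 0.0, []
    for _ in range(steps):
        mean = theta @ state
        action = mean + rng.normal(scale=SIGMA, size=2)   # Gaussian policy
        log_grads.append(np.outer(action - mean, state) / SIGMA**2)
        state = np.clip(state + 0.1 * action, -1.0, 1.0)  # fake joint dynamics
        position += 0.05 * action.sum()                   # fake forward reward
    return position, log_grads

for episode in range(200):
    ret, log_grads = rollout(theta)
    theta += 1e-3 * ret * sum(log_grads)  # REINFORCE: reinforce what worked
```

The appeal of this approach for found-object robots is that nothing in the update depends on knowing the robot’s shape in advance; the policy simply learns whatever gait the particular branch affords.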

Video Friday: NASA Is Sending This Flying Robot to Saturn’s Moon Titan

Post Syndicated from Evan Ackerman, Erico Guizzo and Fan Shi original https://spectrum.ieee.org/automaton/robotics/robotics-hardware/video-friday-nasa-flying-robot-saturn-moon-titan

Your weekly selection of awesome robot videos

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

MARSS 2019 – July 1-5, 2019 – Helsinki, Finland
ICRES 2019 – July 29-30, 2019 – London, U.K.
DARPA SubT Tunnel Circuit – August 15-22, 2019 – Pittsburgh, Pa., USA

Let us know if you have suggestions for next week, and enjoy today’s videos.


Robot Squid and Robot Scallop Showcase Bio-inspired Underwater Propulsion

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/robotics/robotics-hardware/robot-squid-and-robot-scallop-showcase-bioinspired-underwater-propulsion

Animals have lots of creative ways of moving through the water, and robots are stealing them

Most underwater robots use one of two ways of getting around. Way one is with propellers, and way two is with fins. But animals have shown us that there are many more kinds of underwater locomotion, potentially offering unique benefits to robots. We’ll take a look at two papers from ICRA this year that showed bioinspired underwater robots moving in creative new ways: A jet-powered squid robot that can leap out of the water, plus a robotic scallop that moves just like the real thing.

Solar-Powered RoboBee X-Wing Flies Untethered

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/robotics/robotics-hardware/solar-powered-robobee-xwing-flies-untethered

Just this week, in this very galaxy, X-Wing achieves liftoff

The first generation of robotic bees were designed to be very bee-like, featuring two flapping wings at bee scale. After all, bees can do a lot with two wings, so why can’t robots? It turns out there are a lot of reasons why little winged robots can’t do what bees do, at least for now. Yaw control, for example, has proved to be somewhat tricky, which is one reason why less explicitly bee-like designs that use four wings instead of two are appealing.

We saw some impressive research at ICRA this year showing that yaw control with two wings is possible, but four wings have additional advantages—namely, more wings means more power for lifting more stuff. And with more lifting power, it’s possible to have a completely self-contained robot insect, even if it’s slightly weird looking.

In Nature this week, researchers from Harvard’s Microrobotics Lab, led by Professor Robert J. Wood, are presenting a four-winged version of their RoboBee platform. They are calling this version RoboBee X-Wing, and it’s capable of untethered flight thanks to solar cells and a light source that would put high noon(s) on Tatooine to shame.

Harnessing the Public’s Smartphones to Track Drones

Post Syndicated from Michelle Hampson original https://spectrum.ieee.org/tech-talk/robotics/drones/harnessing-the-publics-smartphones-to-track-drones

This proposed crowdsensing approach for tracking drones allows participants to make some side cash

Because they’re so useful for so many things, drones will undoubtedly become a more common sight in the next few years. And as the number of drones in the sky increases, the need to track these mini-flying machines as they move from one spot to another will become more important.

In a recent study in IEEE Transactions on Mobile Computing, a team of scientists in China proposed an intriguing way to track unfamiliar drones through crowdsensing. Their approach leverages participants’ smartphones to detect the Wi-Fi signals of drones.

Tracking drones would be especially helpful in situations where the devices are being used for ill-intentioned purposes, such as spying on people or transporting illegal substances. But as Zhiguo Shi of Zhejiang University notes, “Detecting drones, especially in urban environments, is not easy. Traditional approaches are of huge cost, since the corresponding equipment, such as radars, cameras, and microphone arrays, are very expensive.”

His team sought a cheaper method. They realized that most drones use Wi-Fi technology to communicate with ground control stations. At the same time, virtually all smartphones can detect Wi-Fi signals, and phones are abundant, especially in urban settings.
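A toy version of the sensing step might look like the sketch below: a phone scans nearby Wi-Fi networks, flags any whose names match known drone defaults, and reports signal strength along with its own location so a server can fuse many sightings into a track. The SSID patterns and report format here are my assumptions for illustration, not the protocol from the paper:

```python
import re
from dataclasses import dataclass

# SSID prefixes that some consumer drones broadcast by default.
# Illustrative examples only, not a list taken from the paper.
DRONE_SSID_PATTERNS = [re.compile(p, re.I)
                       for p in (r"^Mavic", r"^Phantom", r"^Tello-", r"^Bebop")]

@dataclass
class Sighting:
    ssid: str
    rssi_dbm: int  # received signal strength, a rough proxy for distance
    lat: float     # the reporting phone's own GPS fix
    lon: float

def detect_drones(scan_results, lat, lon):
    """Filter a Wi-Fi scan down to probable drone access points.
    scan_results: iterable of (ssid, rssi_dbm) pairs from the phone's radio."""
    return [Sighting(ssid, rssi, lat, lon)
            for ssid, rssi in scan_results
            if any(p.search(ssid) for p in DRONE_SSID_PATTERNS)]

# A central server can aggregate sightings from many phones, estimate the
# drone's position from RSSI, and pay participants for useful reports.
scan = [("HomeNetwork", -60), ("Mavic-Air-1234", -47)]
print(detect_drones(scan, lat=30.27, lon=120.16))
```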

Boing Goes the Trampoline Robot

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/robotics/robotics-hardware/quadruped-robot-on-a-trampoline

If you can’t afford a fancy quadruped with springy legs, just use a springy floor instead

There are a handful of quadrupedal robots out there that are highly dynamic, with the ability to run and jump, but those robots tend to be rather expensive and complicated, requiring powerful actuators and legs with elasticity. Boxing Wang, a Ph.D. student in the College of Control Science and Engineering at Zhejiang University in China, contacted us to share a project he’s been working on to investigate quadrupedal jumping with simple, affordable hardware.

“The motivation for this project is quite simple,” Boxing says. “I wanted to study quadrupedal jumping control, but I didn’t have custom-made powerful actuators, and I didn’t want to have to design elastic legs. So I decided to use a trampoline to make a normal servo-driven quadruped robot to jump.”

This AI Watched 100 Films to Learn How to Recognize a Kiss

Post Syndicated from Jeremy Hsu original https://spectrum.ieee.org/tech-talk/robotics/artificial-intelligence/ai-learns-how-to-pucker-up-from-hollywood

A senior data scientist at Netflix trained an AI to detect kissing scenes in films—and had to take precautions to make sure the model didn’t confuse kissing with sex

Like someone who has never been kissed, AI began learning the basics by binge-watching romantic film clips to see how Hollywood stars lock lips. By training deep learning algorithms that have already proven adept at recognizing faces and objects to also recognize steamy kissing scenes dramatized by professional actors, a data scientist has shown how AI systems could gain greater insight into the most intimate human activities.
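At its core this is transfer learning: take a network pretrained on generic images, swap in a new binary head, and fine-tune on frames labeled kissing or not-kissing. Here is a minimal frame-level sketch in PyTorch, where the backbone choice and training details are my illustrative assumptions rather than the data scientist’s actual model:

```python
import torch
import torch.nn as nn
from torchvision import models

# Pretrained image backbone with a fresh two-class head (kiss / no kiss).
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = nn.Linear(backbone.fc.in_features, 2)

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-4)  # head only
loss_fn = nn.CrossEntropyLoss()

def train_step(frames, labels):
    """frames: (batch, 3, 224, 224) normalized tensor; labels: 0 or 1."""
    optimizer.zero_grad()
    loss = loss_fn(backbone(frames), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# Clip-level predictions can average frame scores over a sliding window,
# which helps reject single misleading frames (a close-up that merely
# resembles a kiss) and reduces confusion with more explicit scenes.
```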

Video Friday: This Robot Is Like a Roomba for Your Garden

Post Syndicated from Evan Ackerman, Erico Guizzo and Fan Shi original https://spectrum.ieee.org/automaton/robotics/robotics-hardware/video-friday-tertill-weeding-robot

Your weekly selection of awesome robot videos

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

RSS 2019 – June 22-26, 2019 – Freiburg, Germany
Hamlyn Symposium on Medical Robotics – June 23-26, 2019 – London, U.K.
ETH Robotics Summer School – June 27-July 1, 2019 – Zurich, Switzerland
MARSS 2019 – July 1-5, 2019 – Helsinki, Finland
ICRES 2019 – July 29-30, 2019 – London, U.K.
DARPA SubT Tunnel Circuit – August 15-22, 2019 – Pittsburgh, Pa., USA

Let us know if you have suggestions for next week, and enjoy today’s videos.


iRobot Acquires Root Robotics to Boost STEM Education for Kids

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/robotics/robotics-hardware/irobot-acquires-root-robotics-to-boost-stem-education-for-kids

Root promises to teach coding skills to children, starting as young as 4 years old

In 2016, Harvard’s Wyss Institute introduced Root, a robot designed as a practical tool for teaching kids how to code. Root had been under development at Harvard for a solid three years at that point, and after a massive $400,000 Kickstarter showed that they really had something, Root Robotics was spun out in 2017 to take the little coding robot commercial.

Today, iRobot is announcing the acquisition of Root Robotics, in order to “support iRobot’s plans to diversify its educational robot product offerings, further demonstrating its commitment to make robotic technology more accessible to educators, students and parents.” This makes a lot of sense for iRobot, which has historically been a big supporter of STEM education—National Robotics Week was pretty much their idea, after all. But iRobot itself only really had the iRobot Create and Create 2 to advance STEM education directly, and those robots are really not for beginners.

As of right now, iRobot is selling Root for $200, along with a companion app and integrated K-12 curriculum. We’ll take a quick look at everything this robot can do, and hear a bit from both iRobot CEO Colin Angle and Root Robotics co-founder Zee Dubrovsky on exactly what this new partnership means. 

Robot Fish Powered by Synthetic Blood Just Keeps Swimming

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/robotics/robotics-hardware/robot-fish-synthetic-blood

A liquid battery that doubles as hydraulic fluid helps this robot swim for up to 36 hours

Living things are stupendously complicated, and when we make robots (even bio-inspired robots), we mostly just try to do the best we can to match the functionality of animals, rather than the details of their structure. One exception to this is hydraulic robots, which operate on the same principle as spiders do, by pumping pressurized fluid around to move limbs. This is more of a side effect than actual bio-inspiration, though, and spiders still beat robots here: they use their blood both as a hydraulic fluid and for everything else that blood does, like transporting nutrients and oxygen where they’re needed.

In a paper published in Nature this week, researchers from Cornell and the University of Pennsylvania are presenting a robotic fish that uses synthetic blood pumped through an artificial circulatory system to provide both hydraulic power for muscles and a distributed source of electrical power. The system they came up with “combines the functions of hydraulic force transmission, actuation and energy storage into a single integrated design that geometrically increases the energy density of the robot to enable operation for long durations,” which sounds bloody amazing, doesn’t it?

Dishcraft Robotics Takes Over Dishwashing From Humans

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/robotics/industrial-robots/dishcraft-robotics-takes-over-dishwashing-from-humans

Washing dishes is a problem that robots can solve, at least in commercial kitchens

The kinds of jobs that robots are ideal for are the kinds of jobs that humans just straight up do not want to do. This is where the whole “dull, dirty, dangerous” thing comes in, but even in those categories, some jobs are duller, dirtier, or more dangerous than others. These are the jobs that we should be focusing on robotizing—not just jobs that are possible to automate, but jobs that need to be automated because you simply can’t find enough humans to reliably do them. 

One of these jobs is commercial dishwasher. It’s dull and dirty, and turnover is very high, with the average human quitting after just over a month and around 30 percent of dishwashing jobs going unfilled, according to one estimate. And if your dishwasher doesn’t show up for work, everyone else in the kitchen has to pitch in to make sure that there are enough clean dishes, slowing everything down.

Today, a startup called Dishcraft Robotics is announcing a new robotic dish cleaning system, designed to minimize the time and effort that humans spend scrubbing dishes. Brought to you by some of the folks behind Neato Robotics and Dash Robotics, the San Carlos, Calif.-based Dishcraft uses some clever engineering and practical constraints to make sure that dishes are done cleaner, faster, better, and cheaper.

Massive 3D Dataset Helps Robots Understand What Things Are

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/robotics/artificial-intelligence/partnet-helps-robots-understand-what-things-are

PartNet is a new semantic database of common objects that brings a new level of real-world understanding to robots

One of the things that makes humans so great at adapting to the world around us is our ability to understand entire categories of things all at once, and then use that general understanding to make sense of specific things that we’ve never seen before. For example, consider something like a lamp. We’ve all seen some lamps. Nobody has seen every single lamp there is. But in most cases, we can walk into someone’s house for the first time and easily identify all their lamps and how they work. Every once in a while, of course, there will be something incredibly weird that’ll cause you to have to ask, “Uh, is that a lamp? How do I turn it on?” But most of the time, our generalized mental model of lamps keeps us out of trouble. 

It’s helpful that lamps, along with other categories of objects, have (by definition) lots of pieces in common with each other. Lamps usually have bulbs in them. They often have shades. There’s probably also a base to keep it from falling over, a body to get it off the ground, and a power cord. If you see something with all of those characteristics, it’s probably a lamp, and once you know that, you can make educated guesses about how to usefully interact with it.
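That kind of part-level reasoning is exactly what PartNet encodes: each object is annotated as a hierarchy of labeled parts. As a toy illustration of how part annotations support category inference (my example, not PartNet’s actual schema or API):

```python
# Typical part inventories for two categories; illustrative, not PartNet data.
CATEGORY_PARTS = {
    "lamp":  {"bulb", "shade", "base", "body", "power cord"},
    "chair": {"seat", "back", "leg", "armrest"},
}

def guess_category(observed_parts, min_overlap=0.5):
    """Score each category by the fraction of its typical parts that were
    observed; return the best match above a threshold, or None."""
    best, best_score = None, 0.0
    for category, parts in CATEGORY_PARTS.items():
        score = len(parts & observed_parts) / len(parts)
        if score > best_score:
            best, best_score = category, score
    return best if best_score >= min_overlap else None

print(guess_category({"bulb", "shade", "base"}))  # -> "lamp" (3 of 5 parts)
```

A robot that can segment an unfamiliar object into parts like these can make the same educated guesses people do, both about what the object is and about which part to interact with (the bulb or switch, not the shade).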

This level of understanding is something that robots tend to be particularly bad at, which is a real shame because of how useful it is. You might even argue that robots will have to understand objects on a level close to this if we’re ever going to trust them to operate autonomously in unstructured environments. At the 2019 Conference on Computer Vision and Pattern Recognition (CVPR) this week, a group of researchers from Stanford, UCSD, SFU, and Intel are announcing PartNet, a huge database of common 3D objects that are broken down and annotated at the level required to, they hope, teach a robot exactly what a lamp is.