Tag Archives: robotics/robotics-software

Video Friday: These Robots Have Made 1 Million Autonomous Deliveries

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/robotics/robotics-software/video-friday-starship-robots-1-million-deliveries

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

HRI 2021 – March 8-11, 2021 – [Online Conference]
RoboSoft 2021 – April 12-16, 2021 – [Online Conference]

Let us know if you have suggestions for next week, and enjoy today’s videos.


UX Design – A Competitive Advantage for Robotics Companies

Post Syndicated from Rocos original https://spectrum.ieee.org/robotics/robotics-software/ux-design-a-competitive-advantage-for-robotics-companies

UX design differentiates great products from average products, and helps you stand out from the rest. Now, in the age of robotics, your teams may be creating products with no blueprint to build on.

Find out why a UX-led approach to designing and building your products is critical, learn from the masters who get this right the first time, speed up your time to market, and deliver products users love.


Coordinated Robotics Wins DARPA SubT Virtual Cave Circuit

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/robotics/robotics-software/coordinated-robotics-wins-darpa-subt-virtual-cave-circuit

DARPA held the Virtual Cave Circuit event of the Subterranean Challenge on Tuesday in the form of a several-hour-long livestream. We got to watch (along with all of the competing teams) as virtual robots explored virtual caves fully autonomously, dodging rockfalls, spotting artifacts, scoring points, and sometimes running into stuff and falling over.

Expert commentary was provided by DARPA, and we were able to watch multiple teams running at once, skipping from highlight to highlight. It was really very well done (you can watch an archive of the entire stream here), but they made us wait until the very end to learn who won: First place went to Coordinated Robotics, with BARCS taking second, and third place going to newcomer Team Dynamo.

How We Won the DARPA SubT Challenge: Urban Circuit Virtual Track

Post Syndicated from Richard Chase original https://spectrum.ieee.org/automaton/robotics/robotics-software/darpa-subt-challenge-urban-circuit-virtual

This is a guest post. The views expressed here are those of the authors and do not necessarily represent positions of IEEE or its organizational units.

“Do you smell smoke?” It was three days before the qualification deadline for the Virtual Tunnel Circuit of the DARPA Subterranean Challenge Virtual Track, and our team was barrelling through last-minute updates to our robot controllers in a small conference room at the Michigan Tech Research Institute (MTRI) offices in Ann Arbor, Mich. That’s when we noticed the smell. We’d assumed that one of the benefits of entering a virtual disaster competition was that we wouldn’t be exposed to any actual disasters, but equipment in the basement of the building MTRI shares had started to smoke. We evacuated. The fire department showed up. And as soon as we could, the team went back into the building, hunkered down, and tried to make up for the unexpected loss of several critical hours.

17 Teams to Take Part in DARPA’s SubT Cave Circuit Competition

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/robotics/robotics-software/darpa-subt-cave-circuit-competition

Among all of the other in-person events that have been totally wrecked by COVID-19 is the Cave Circuit of the DARPA Subterranean Challenge. DARPA has already hosted the in-person events for the Tunnel and Urban SubT circuits (see our previous coverage here), and the plan had always been for a trio of events representing three uniquely different underground environments in advance of the SubT Finals, which will somehow combine everything into one bonkers course.

While the SubT Urban Circuit event snuck in just under the lockdown wire in late February, DARPA made the difficult (but prudent) decision to cancel the in-person Cave Circuit event. What this means is that there will be no Systems Track Cave competition, which is a serious disappointment—we were very much looking forward to watching teams of robots navigating through an entirely unpredictable natural environment with a lot of verticality. Fortunately, DARPA is still running a Virtual Cave Circuit, and 17 teams will be taking part in this competition featuring a simulated cave environment that’s as dynamic and detailed as DARPA can make it.

Video Friday: Researchers 3D Print Liquid Crystal Elastomer for Soft Robots

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/robotics/robotics-software/video-friday-3d-printed-liquid-crystal-elastomer

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

ICRES 2020 – September 28-29, 2020 – Taipei, Taiwan
AUVSI EXPONENTIAL 2020 – October 5-8, 2020 – [Online]
IROS 2020 – October 25-29, 2020 – [Online]
ROS World 2020 – November 12, 2020 – [Online]
CYBATHLON 2020 – November 13-14, 2020 – [Online]
ICSR 2020 – November 14-16, 2020 – Golden, Colo., USA

Let us know if you have suggestions for next week, and enjoy today’s videos.


Autonomous Robots Could Mine the Deep Seafloor

Post Syndicated from Maria Gallucci original https://spectrum.ieee.org/automaton/robotics/robotics-software/autonomous-robots-could-mine-the-deep-seafloor

A battle is brewing over the fate of the deep ocean. Huge swaths of seafloor are rich in metals—nickel, copper, cobalt, zinc—that are key to making electric vehicle batteries, solar panels, and smartphones. Mining companies have proposed scraping and vacuuming the dark expanse to provide supplies for metal-intensive technologies. Marine scientists and environmentalists oppose such plans, warning of huge and potentially permanent damage to fragile ecosystems.

Pietro Filardo is among the technology developers who are working to find common ground.

His company, Pliant Energy Systems, has built what looks like a black mechanical stingray. Its soft, rippling fins use hyperbolic geometry to move in a traveling wave pattern, propelling the skateboard-sized device through water. From an airy waterfront lab in Brooklyn, New York, Filardo’s team is developing tools and algorithms to transform the robot into an autonomous device equipped with grippers. Their goal is to pluck polymetallic nodules—potato-sized deposits of precious ores—off the seafloor without disrupting fragile habitats.

“On the one hand, we need these metals to electrify and decarbonize. On the other hand, people worry we’re going to destroy deep ocean ecosystems that we know very little about,” Filardo said. He described deep sea mining as the “killer app” for Pliant’s robot—a potentially lucrative use for the startup’s minimally invasive design.

How deep seas will be mined, and where, is ultimately up to the International Seabed Authority (ISA), a group of 168 member countries. In October, the intergovernmental body is expected to adopt a sweeping set of technical and environmental standards, known as the Mining Code, that could pave the way for private companies to access large tracts of seafloor. 

The ISA has already awarded 30 exploratory permits to contractors in sections of the Atlantic, Pacific, and Indian Oceans. Over half the permits are for prospecting polymetallic nodules, primarily in the Clarion-Clipperton Zone, a hotspot south of Hawaii and west of Mexico.

Researchers have tested nodule mining technology since the 1970s, mainly in national waters. Existing approaches include sweeping the seafloor with hydraulic suction dredges to pump up sediment, filter out minerals, and dump the resulting slurry in the ocean or tailing ponds. In India, the National Institute of Ocean Technology is building a tracked “crawler” vehicle with a large scoop to collect, crush, and pump nodules up to a mother ship.

Mining proponents say such techniques are better for people and the environment than dangerous, exploitative land-based mining practices. Yet ocean experts warn that stirring up sediment and displacing organisms that live on nodules could destroy deep sea habitats that took millions of years to develop. 

“One thing I often talk about is, ‘How do we fix it if we break it? How are we going to know we broke it?’” said Cindy Lee Van Dover, a deep sea biologist and professor at Duke University’s Nicholas School of the Environment. She said much more research is required to understand the potential effects on ocean ecosystems, which foster fisheries, absorb carbon dioxide, and produce most of the Earth’s oxygen.

Significant work is also needed to transform robots into metal collectors that can operate some 6,000 meters below the ocean surface.

Pliant’s first prototype, called Velox, can navigate the depths of a swimming pool and the shallow ocean “surf zone” where waves crash into the sand. Inside Velox, an onboard CPU distributes power to actuators that drive the undulating motions in the flexible fins. Unlike a propeller thruster, which uses a rapidly rotating blade to move small jets of water at high velocity, Pliant’s undulating fins move large volumes of water at low velocity. Because the fins present a large surface area to the water, the robot can make rapid local maneuvers using relatively little battery power, allowing the device to operate for longer periods before needing to recharge, Filardo said.
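
To see why moving a large volume of water slowly beats moving a small jet of water quickly, it helps to do the ideal momentum-and-energy bookkeeping. The sketch below uses arbitrary numbers, not Pliant’s actual figures: for the same thrust, the kinetic power that must be dumped into the wake scales with the exhaust velocity.

```python
# Ideal propulsive power needed to produce the same thrust two ways: a small
# jet of water pushed fast (propeller-like) versus a large volume pushed
# slowly (undulating fin). All numbers are illustrative, not Pliant's.
def jet_power(thrust_N, delta_v):
    """Ideal power to produce `thrust_N` of thrust at wake speed `delta_v`.
    Thrust = mdot * delta_v; kinetic power imparted = 0.5 * mdot * delta_v**2."""
    mdot = thrust_N / delta_v          # required mass flow of water, kg/s
    return 0.5 * mdot * delta_v**2     # watts

thrust = 50.0                          # newtons, arbitrary target
print(jet_power(thrust, delta_v=4.0))  # small, fast jet  -> 100.0 W
print(jet_power(thrust, delta_v=0.5))  # large, slow flow ->  12.5 W
```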

The design also stirs up less sediment on the seafloor, a potential advantage in sensitive deep sea environments, he added.

The Brooklyn company is partnering with the Massachusetts Institute of Technology to develop a larger next-generation robot, called C-Ray. The highly maneuverable device will twist and roll like a sea otter. Using metal detectors and a mix of camera hardware and computer algorithms, C-Ray will likely be used to surveil the surf zone for potential hazards to the U.S. Navy, which is sponsoring the research program.

The partners ultimately aim to deploy “swarms” of autonomous C-Rays that communicate via a “hive mind”—applications that would also serve to mine polymetallic nodules. Pliant envisions launching hundreds of gripper-equipped robots that roam the seafloor and place nodules in cages that float to the surface on gas-filled lift bags. Filardo suggested that C-Ray could also swap nodules with lower-value stones, allowing organisms to regrow on the seafloor.

A separate project in Italy may also yield new tools for plucking the metal-rich orbs.

SILVER2 is a six-legged robot that can feel its way around the dark and turbid seafloor, without the aid of cameras or lasers, by pushing its legs in repeated, frequent cycles.

“We started by looking at what crabs did underwater,” said Marcello Calisti, an assistant professor at the BioRobotics Institute, in the Sant’Anna School of Advanced Studies. He likened the movements to people walking waist-deep in water and using the sand as leverage, or the “punter” on a flat-bottomed river boat who uses a long wooden pole to propel the vessel forward.

Calisti and colleagues spent most of July at a seaside lab in Livorno, Italy, testing the 20-kilogram prototype in shallow water. SILVER2 is equipped with a soft elastic gripper that gently envelopes objects, as if cupping them in the palm of a hand. Researchers used the crab-like robot to collect plastic litter on the seabed and deposit the debris in a central collection bin.

Although SILVER2 isn’t intended for deep sea mining, Calisti said he could foresee potential applications in the sector if his team can scale the technology.

For developers like Pliant, the ability to raise funding and bring their mining robots to market will largely depend on the International Seabed Authority’s next meeting. Opponents of ocean mining are pushing to pause discussions on the Mining Code to give scientists more time to evaluate risks, and to allow companies like Tesla or Apple to devise technologies that require fewer or different metal parts. Such regulatory uncertainty could dissuade investors from backing new mining approaches that might never be used.

The biologist Van Dover said she doesn’t outright oppose the Mining Code; rather, rules should include stringent stipulations, such as requirements to monitor environmental impacts and immediately stop operations once damage is detected. “I don’t see why the code couldn’t be so well-written that it would not allow the ISA to make a mistake,” she said.

Clearpath Robotics Now Supporting ROS on Windows

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/robotics/robotics-software/clearpath-robotics-now-supporting-ros-on-windows

Part of the appeal of the Robot Operating System, or ROS, is that it’s much faster and easier to get started with robots because so much of the difficult and annoying groundwork is already done for you. This is totally true, but for many people, getting started with ROS adds a bunch of difficult and annoying groundwork of its own, in the form of Linux. All kinds of people will tell you, “Oh just learn Linux, it’s not so bad!” But sometimes, it really kind of is so bad, especially if all you want is for your robot to do cool stuff.

Since 2018, Microsoft has been working on getting ROS to run on Windows, the operating system used by those of us who mostly just want our computers to work without having to think about them all that much (a statement that Linux users will surely laugh at). For that to be really useful to the people who need it, though, there needs to be robot support, and today, Clearpath Robotics is adding ROS for Windows support to their fleet of friendly yellow and black ground robots.

World Turtle Day Celebrates Final Release of ROS 1

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/robotics/robotics-software/world-turtle-day-celebrates-final-release-of-ros-1

This past Saturday, May 23, was World Turtle Day, the day that celebrates the newest release of the turtle-themed Robot Operating System (ROS)—and also probably some actual turtles—and so we reached out to Open Robotics CEO Brian Gerkey and developer advocate Katherine Scott. We wanted to talk to them because this particular World Turtle Day also marked the release of the very last version of ROS 1: Noetic Ninjemys. From here on out, if you want some new ROS, it’s going to be ROS 2 whether you like it or not.

For folks whose robots have been running ROS 1 at any point in the 4,581 days since the very first commit, it might be a tough transition, but Open Robotics says it’ll be worth it. Below is our brief Q&A with Gerkey and Scott, where they discuss whether it’s really going to be worth all the hassle to switch to ROS 2, and also what life will be like for ROS users 4,581 days from now.

Dogs Obey Commands Given by Social Robots

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/robotics/robotics-software/dogs-obey-commands-given-by-social-robots

One of the things that sets robots apart from intermittently animated objects like toasters is that humans generally see robots as agents. That is, when we look at a robot, and especially a social robot (a robot designed for human interaction), we tend to ascribe some amount of independent action to them, along with motivation at varying levels of abstraction. Robots, in other words, have agency in a way that toasters just don’t. 

Agency is something that designers of robots intended for human interaction can to some extent exploit to make the robots more effective. But humans aren’t the only species that robots interact with. At the ACM/IEEE International Conference on Human-Robot Interaction (HRI 2020), researchers at Yale University’s Social Robotics Lab led by Brian Scassellati presented a paper taking the first step towards determining whether dogs, which are incredibly good at understanding social behaviors in humans, see human-ish robots as agents—or more specifically, whether dogs see robots more like humans (which they obey), or more like speaker systems (which they don’t).

It’s Robot Versus Human as Shimon Performs Real-Time Rap Battles

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/robotics/robotics-software/robot-versus-human-shimon-real-time-rap-battles

Last month, Georgia Tech’s Center for Music Technology introduced the latest version of Shimon, its head-bopping, marimba-playing robot. Along with a new face, Shimon has learned to compose and sing original music with a voice and style of its own. Today, Shimon’s full album will be released on Spotify, along with a series of music videos and a demonstration of a new talent: real-time robot-on-human rap battles.

UC Berkeley’s AI-Powered Robot Teaches Itself to Drive Off-Road

Post Syndicated from Gregory Kahn original https://spectrum.ieee.org/automaton/robotics/robotics-software/uc-berkeley-badgr-robot

This article was originally published on UC Berkeley’s BAIR Blog.

Look at the images above. If I asked you to bring me a picnic blanket to the grassy field, would you be able to? Of course. If I asked you to bring over a cart full of food for a party, would you push the cart along the paved path or on the grass? Obviously the paved path.

While the answers to these questions may seem obvious, today’s mobile robots would likely fail at these tasks: They would think the tall grass is the same as a concrete wall, and wouldn’t know the difference between a smooth path and bumpy grass. This is because most mobile robots think purely in terms of geometry: They detect where obstacles are, and plan paths around these perceived obstacles in order to reach the goal. This purely geometric view of the world is insufficient for many navigation problems. Geometry is simply not enough.

Scientists Can Work From Home When the Lab Is in the Cloud

Post Syndicated from Tekla S. Perry original https://spectrum.ieee.org/view-from-the-valley/robotics/robotics-software/scientists-work-from-home-lab-cloud

Working from home is the new normal, at least for those of us whose jobs mostly involve tapping on computer keys. But what about researchers who are synthesizing new chemical compounds or testing them on living tissue or on bacteria in petri dishes? What about those scientists rushing to develop drugs to fight the new coronavirus? Can they work from home?

Silicon Valley-based startup Strateos says its robotic laboratories allow scientists doing biological research and testing to do so right now. Within a few months, the company believes it will have remote robotic labs available for use by chemists synthesizing new compounds. And, the company says, those new chemical synthesis lines will connect with some of its existing robotic biology labs so a remote researcher can seamlessly transfer a new compound from development into testing.

The company’s first robotic labs, up and running in Menlo Park, Calif., since 2012, were developed by one of Strateos’ predecessor companies, Transcriptic. Last year Transcriptic merged with 3Scan, a company that produces digital 3D histological models from scans of tissue samples, to form Strateos. This facility has four robots that run experiments in large, pod-like laboratories for a number of remote clients, including DARPA and the California Pacific Medical Center Research Institute.

Strateos CEO Mark Fischer-Colbrie explains Strateos’ process:

“It starts with an intake kit,” he says, in which the researchers match standard lab containers with a web-based labeling system. Then scientists use Strateos’ graphical user interface to select various tests to run. These can include tests of the chemical properties of compounds, biochemical processes including how compounds react to enzymes or where compounds bind to molecules, and how synthetic yeast organisms respond to stimuli. Soon the company will be adding the capability to do toxicology tests on living cells.

“Our approach is fully automated and programmable,” Fischer-Colbrie says. “That means that scientists can pick a standard workflow, or decide how a workflow is run. All the pieces of equipment, which include acoustic liquid handlers, spectrophotometers, real-time quantitative polymerase chain reaction instruments, and flow cytometers are accessible.

“The scientists can define every step of the experiment with various parameters, for example, how long the robot incubates a sample and whether it does it fast or slow.”

To develop the system, Strateos’ engineers had to “connect the dots, that is, connect the lab automation to the web,” rather than dramatically push technology’s envelope, Fischer-Colbrie explains, “bringing the concepts of web services and the sharing economy to the life sciences.”

Nobody had done it before, he says, simply because researchers in the life sciences had been using traditional laboratory techniques for so long that it didn’t seem like there could be a real substitute for physically being in the lab.

Late last year, in a partnership with Eli Lilly, Strateos added four more biology lab modules in San Diego and by July plans to integrate these with eight chemistry robots that will, according to a press release, “physically and virtually integrate several areas of the drug discovery process—including design, synthesis, purification, analysis, sample management, and hypothesis testing—into a fully automated platform. The lab includes more than 100 instruments and storage for over 5 million compounds, all within a closed-loop and automated drug discovery platform.”

Some of the capacity will be used exclusively by Lilly scientists, but Fischer-Colbrie says, Strateos capped that usage and will be selling lab capacity beyond the cap to others. It currently prices biological assays on a per plate basis and will price chemical reactions per compound.

The company plans to add labs in additional cities as demand for the services increases, in much the same way that Amazon Web Services adds data centers in multiple locales.

It has also started selling access to its software systems directly to companies looking to run their own, dedicated robotic biology labs.

Strateos, of course, had developed this technology long before the new coronavirus pushed people into remote work. Fischer-Colbrie says it has several advantages over traditional lab experiments in addition to enabling scientists to work from home. Experiments run via robots are easier to standardize, he says, and record more metadata than is customary or even possible during a manual experiment. This will likely make repeating research easier, allow geographically separated scientists to work together, and create a shorter path to bringing AI into the design and analysis of experiments. “Because we can easily repeat experiments and generate clean datasets, training data for AI systems is cleaner,” he said.

And, he says, robotic labs open up the world of drug discovery to small companies and individuals who don’t have funding for expensive equipment, expanding startup opportunities in the same way software companies boomed when they could turn to cloud services for computing capacity instead of building their own server farms.

Says Alok Gupta, Strateos senior vice president of engineering, “This allows scientists to focus on the concept, not on buying equipment, setting it up, calibrating it; they can just get online and start their work.”

“It’s frictionless science,” says CEO Fischer-Colbrie, “giving scientists the ability to concentrate on their ideas and hypotheses.”

Musical Robot Learns to Sing, Has Album Dropping on Spotify

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/robotics/robotics-software/musical-robot-shimon-sing-album-dropping-on-spotify

We’ve been writing about the musical robots from Georgia Tech’s Center for Music Technology for many, many years. Over that time, Gil Weinberg’s robots have progressed from being able to dance along to music that they hear, to being able to improvise along with it, to now being able to compose, play, and sing completely original songs.

Shimon, the marimba-playing robot that has performed in places like the Kennedy Center, will be going on a new tour to promote an album that will be released on Spotify next month, featuring songs written (and sung) entirely by the robot.

New Ideas for the Automation of Tomorrow

Post Syndicated from Automatica original https://spectrum.ieee.org/robotics/robotics-software/new_ideas_for_the_automation_of_tomorrow

 

Being the only event worldwide that brings together all visionary key technologies in one place, automatica is the leading exhibition for smart automation and robotics. Benefit from know-how transfer with leading experts and expand your network with people who can show you the way to Industry 4.0.

automatica will open its doors June 16-19, 2020 to 890 international exhibitors and over 45,584 visitors from all over the world in Munich, Germany.

Save time and money now and conveniently purchase your ticket in advance online!

Automatica Autonomous

Patrick Schwarzkopf, Managing Director of VDMA Robotics + Automation, explained: “Robotics and automation is the key technology for increased competitiveness, quality and sustainability. If you want to make the best use of intelligent automation and robotics as well as find out about all new trends, you will find answers at automatica in Munich. It is clearly the leader in this topic area.”

Autonomous Vehicles Should Start Small, Go Slow

Post Syndicated from Shaoshan Liu original https://spectrum.ieee.org/robotics/robotics-software/autonomous-vehicles-should-start-small-go-slow

Many young urbanites don’t want to own a car, and unlike earlier generations, they don’t have to rely on mass transit. Instead they treat mobility as a service: When they need to travel significant distances, say, more than 5 miles (8 kilometers), they use their phones to summon an Uber (or a car from a similar ride-sharing company). If they have less than a mile or so to go, they either walk or use various “micromobility” services, such as the increasingly ubiquitous Lime and Bird scooters or, in some cities, bike sharing.

The problem is that today’s mobility-as-a-service ecosystem often doesn’t do a good job covering intermediate distances, say a few miles. Hiring an Uber or Lyft for such short trips proves frustratingly expensive, and riding a scooter or bike more than a mile or so can be taxing to many people. So getting yourself to a destination that is from 1 to 5 miles away can be a challenge. Yet such trips account for about half of the total passenger miles traveled.

Many of these intermediate-distance trips take place in environments with limited traffic, such as university campuses and industrial parks, where it is now both economically reasonable and technologically possible to deploy small, low-speed autonomous vehicles powered by electricity. We’ve been involved with a startup that intends to make this form of transportation popular. The company, PerceptIn, has autonomous vehicles operating at tourist sites in Nara and Fukuoka, Japan; at an industrial park in Shenzhen, China; and is just now arranging for its vehicles to shuttle people around Fishers, Ind., the location of the company’s headquarters.

Because these diminutive autonomous vehicles never exceed 20 miles (32 kilometers) per hour and don’t mix with high-speed traffic, they don’t engender the same kind of safety concerns that arise with autonomous cars that travel on regular roads and highways. While autonomous driving is a complicated endeavor, the real challenge for PerceptIn was not about making a vehicle that can drive itself in such environments—the technology to do that is now well established—but rather about keeping costs down.

Given how expensive autonomous cars still are in the quantities that they are currently being produced—an experimental model can cost you in the neighborhood of US $300,000—you might not think it possible to sell a self-driving vehicle of any kind for much less. Our experience over the past few years shows that, in fact, it is possible today to produce a self-driving passenger vehicle much more economically: PerceptIn’s vehicles currently sell for about $70,000, and the price will surely drop in the future. Here’s how we and our colleagues at PerceptIn brought the cost of autonomous driving down to earth.

Let’s start by explaining why autonomous cars are normally so expensive. In a nutshell, it’s because the sensors and computers they carry are very pricey.

The suite of sensors required for autonomous driving normally includes a high-end satellite-navigation receiver, lidar (light detection and ranging), one or more video cameras, radar, and sonar. The vehicle also requires at least one very powerful computer.

The satellite-navigation receivers used in this context aren’t the same as the one found in your phone. The kind built into autonomous vehicles have what is called real-time kinematic capabilities for high-precision position fixes—down to 10 centimeters. These devices typically cost about $4,000. Even so, such satellite-navigation receivers can’t be entirely relied on to tell the vehicle where it is. The fixes it gets could be off in situations where the satellite signals bounce off of nearby buildings, introducing noise and delays. In any case, satellite navigation requires an unobstructed view of the sky. In closed environments, such as tunnels, that just doesn’t work.

Fortunately, autonomous vehicles have other ways to figure out where they are. In particular they can use lidar, which determines distances to things by bouncing a laser beam off them and measuring how long it takes for the light to reflect back. A typical lidar unit for autonomous vehicles covers a range of 150 meters and samples more than 1 million spatial points per second.

Such lidar scans can be used to identify different shapes in the local environment. The vehicle’s computer then compares the observed shapes with the shapes recorded in a high-definition digital map of the area, allowing it to track the exact position of the vehicle at all times. Lidar can also be used to identify and avoid transient obstacles, such as pedestrians and other cars.
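
The map-matching step can be pictured with a toy example: transform the scan into map coordinates under each candidate pose and keep the pose where the most scan points land on mapped structure. The sketch below is a deliberately simplified illustration that assumes a 2D occupancy grid and a brute-force pose search; production localizers use far more sophisticated registration methods such as ICP or NDT.

```python
import numpy as np

def localize(scan_points, map_grid, resolution, candidate_poses):
    """Score candidate poses by how well a 2D lidar scan overlaps an occupancy grid.
    scan_points:     (N, 2) array of points in the vehicle frame, in meters.
    map_grid:        2D boolean array, True where the HD map records structure.
                     Row index corresponds to x, column index to y.
    resolution:      meters per grid cell.
    candidate_poses: iterable of (x, y, heading) hypotheses in the map frame.
    Returns the best-scoring pose (a crude stand-in for real scan matching)."""
    best_pose, best_score = None, -1
    for x, y, theta in candidate_poses:
        c, s = np.cos(theta), np.sin(theta)
        # Rotate and translate the scan from the vehicle frame into the map frame.
        world = scan_points @ np.array([[c, -s], [s, c]]).T + np.array([x, y])
        cells = np.floor(world / resolution).astype(int)
        # Count how many transformed points fall on mapped structure.
        valid = ((cells[:, 0] >= 0) & (cells[:, 0] < map_grid.shape[0]) &
                 (cells[:, 1] >= 0) & (cells[:, 1] < map_grid.shape[1]))
        score = map_grid[cells[valid, 0], cells[valid, 1]].sum()
        if score > best_score:
            best_pose, best_score = (x, y, theta), score
    return best_pose
```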

Lidar is a wonderful technology, but it suffers from two problems. First, these units are extremely expensive: A high-end lidar for autonomous driving can easily cost more than $80,000, although costs are dropping, and for low-speed applications a suitable unit can be purchased for about $4,000. Also, lidar, being an optical device, can fail to provide reasonable measurements in bad weather, such as heavy rain or fog.

The same is true for the cameras found on these vehicles, which are mostly used to recognize and track different objects, such as the boundaries of driving lanes, traffic lights, and pedestrians. Usually, multiple cameras are mounted around the vehicle. These cameras typically run at 60 frames per second, and the multiple cameras used can generate more than 1 gigabyte of raw data each second. Processing this vast amount of information, of course, places very large computational demands on the vehicle’s computer. On the plus side, cameras aren’t very expensive.
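
That gigabyte-per-second figure is easy to sanity-check. The arithmetic below assumes four 1080p color cameras, since the article specifies only the frame rate; the camera count and resolution are assumptions.

```python
# Rough check of the raw camera bandwidth quoted above.
cameras, width, height, bytes_per_pixel, fps = 4, 1920, 1080, 3, 60
rate = cameras * width * height * bytes_per_pixel * fps
print(f"{rate / 1e9:.2f} GB/s of raw image data")   # ~1.49 GB/s
```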

The radar and sonar systems found in autonomous vehicles are used for obstacle avoidance. The data sets they generate show the distance from the nearest object in the vehicle’s path. The major advantage of these systems is that they work in all weather conditions. Sonar usually covers a range of up to 10 meters, whereas radar typically has a range of up to 200 meters. Like cameras, these sensors are relatively inexpensive, often costing less than $1,000 each.

The many measurements such sensors supply are fed into the vehicle’s computers, which have to integrate all this information to produce an understanding of the environment. Artificial neural networks and deep learning, an approach that has grown rapidly in recent years, play a large role here. With these techniques, the computer can keep track of other vehicles moving nearby, as well as of pedestrians crossing the road, ensuring the autonomous vehicle doesn’t collide with anything or anyone.

Of course, the computers that direct autonomous vehicles have to do a lot more than just avoid hitting something. They have to make a vast number of decisions about where to steer and how fast to go. For that, the vehicle’s computers generate predictions about the upcoming movement of nearby vehicles before deciding on an action plan based on those predictions and on where the occupant needs to go.

Lastly, an autonomous vehicle needs a good map. Traditional digital maps are usually generated from satellite imagery and have meter-level accuracy. Although that’s more than sufficient for human drivers, autonomous vehicles demand higher accuracy for lane-level information. Therefore, special high-definition maps are needed.

Just like traditional digital maps, these HD maps contain many layers of information. The bottom layer is a map with grid cells that are about 5 by 5 cm; it’s generated from raw lidar data collected using special cars. This grid records elevation and reflection information about the objects in the environment.

On top of that base grid, there are several layers of additional information. For instance, lane information is added to the grid map to allow autonomous vehicles to determine whether they are in the correct lane. On top of the lane information, traffic-sign labels are added to notify the autonomous vehicles of the local speed limit, whether they are approaching traffic lights, and so forth. This helps in cases where cameras on the vehicle are unable to read the signs.
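
In code, such a layered map might be organized along the lines of the illustrative schema below. This is a generic sketch of the idea, not the format used by any particular mapping vendor.

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class HDMapTile:
    """One tile of a layered HD map (illustrative schema only)."""
    resolution_m: float = 0.05                 # ~5 cm grid cells in the base layer
    elevation: np.ndarray = None               # per-cell ground height from lidar
    reflectivity: np.ndarray = None            # per-cell lidar return intensity
    lanes: list = field(default_factory=list)  # lane-boundary polylines in the map frame
    signs: list = field(default_factory=list)  # (position, label) pairs, e.g. speed limits

tile = HDMapTile(
    elevation=np.zeros((200, 200)),            # 200 x 200 cells = a 10 m x 10 m tile
    reflectivity=np.zeros((200, 200)),
    lanes=[[(0.0, 1.75), (10.0, 1.75)]],       # one straight lane edge
    signs=[((5.0, 3.0), "SPEED_LIMIT_20")],
)
```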

Traditional digital maps are updated every 6 to 12 months. To make sure the maps that autonomous vehicles use contain up-to-date information, HD maps should be refreshed weekly. As a result, generating and maintaining HD maps can cost millions of dollars per year for a midsize city.

All that data on those HD maps has to be stored on board the vehicle in solid-state memory for ready access, adding to the cost of the computing hardware, which needs to be quite powerful. To give you a sense, an early computing system that Baidu employed for autonomous driving used an Intel Xeon E5 processor and four to eight Nvidia K80 GPU accelerators. The system was capable of delivering 64.5 trillion floating-point operations per second, but it consumed around 3,000 watts and generated an enormous amount of heat. And it cost about $30,000.

Given that the sensors and computers alone can easily cost more than $100,000, it’s not hard to understand why autonomous vehicles are so expensive, at least today. Sure, the price will come down as the total number manufactured increases. But it’s still unclear how the costs of creating and maintaining HD maps will be passed along. In any case, it will take time for better technology to address all the obvious safety concerns that come with autonomous driving on normal roads and highways.

We and our colleagues at PerceptIn have been trying to address these challenges by focusing on small, slow-speed vehicles that operate in limited areas and don’t have to mix with high-speed traffic—university campuses and industrial parks, for example.

The main tactic we’ve used to reduce costs is to do away with lidar entirely and instead use more affordable sensors: cameras, inertial measurement units, satellite positioning receivers, wheel encoders, radars, and sonars. The data that each of these sensors provides can then be combined through a process called sensor fusion.

Each of these sensors has its own strengths and weaknesses, and together they complement one another. When one fails or malfunctions, others can take over to ensure that the system remains reliable. With this sensor-fusion approach, sensor costs could eventually drop to something like $2,000.
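
The core of sensor fusion is weighting each source by how much you trust it. The one-dimensional sketch below shows the idea for a single position coordinate, blending a dead-reckoned estimate with a hypothetical GNSS fix; real systems run multidimensional Kalman or particle filters, so treat this only as an illustration of the principle.

```python
def fuse(prior_mean, prior_var, meas_mean, meas_var):
    """One-dimensional Bayesian fusion of two position estimates.
    Combines, e.g., a dead-reckoned position (prior) with a GNSS fix
    (measurement); the result is weighted toward the less noisy source."""
    k = prior_var / (prior_var + meas_var)            # Kalman gain
    mean = prior_mean + k * (meas_mean - prior_mean)
    var = (1 - k) * prior_var                         # fused estimate is more certain
    return mean, var

# Example: odometry says x = 10.0 m (variance 0.5), GNSS says x = 10.8 m (variance 0.09).
print(fuse(10.0, 0.5, 10.8, 0.09))   # estimate lands close to the more precise GNSS fix
```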

Because our vehicle runs at a low speed, it takes at the very most 7 meters to stop, making it much safer than a normal car, which can take tens of meters to stop. And with the low speed, the computing systems have less severe latency requirements than those used in high-speed autonomous vehicles.
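
That 7-meter figure is consistent with basic kinematics: stopping distance grows with the square of speed. The check below assumes a braking deceleration of about 6 m/s² (roughly what tires on dry pavement allow), which is our assumption rather than a number from PerceptIn.

```python
# Stopping-distance sanity check: d = v^2 / (2 * a).
v = 20 * 0.44704          # 20 mph expressed in m/s (~8.9 m/s)
a = 6.0                   # assumed braking deceleration, m/s^2
print(v**2 / (2 * a))     # ~6.7 m, consistent with the "at most 7 meters" figure
```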

PerceptIn’s vehicles use satellite positioning for initial localization. While not as accurate as the systems found on highway-capable autonomous cars, these satellite-navigation receivers still provide submeter accuracy. Using a combination of camera images and data from inertial measurement units (in a technique called visual inertial odometry), the vehicle’s computer further improves the accuracy, fixing position down to the decimeter level.

For imaging, PerceptIn has integrated four cameras into one hardware module. One pair faces the front of the vehicle, and another pair faces the rear. Each pair of cameras provides binocular vision, allowing it to capture the kind of spatial information normally given by lidar. What’s more, the four cameras together can capture a 360-degree view of the environment, with enough overlapping spatial regions between frames to ensure that visual odometry works in any direction.
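
The reason a stereo pair can stand in for lidar is the standard disparity-to-depth relation: depth equals focal length times baseline divided by disparity. The numbers below are illustrative, not PerceptIn’s actual camera geometry.

```python
# Stereo depth from disparity: depth = focal_length_px * baseline_m / disparity_px.
focal_length_px = 700.0    # focal length expressed in pixels (assumed)
baseline_m = 0.12          # distance between the two lenses of a pair (assumed)
for disparity_px in (70, 14, 7):
    depth = focal_length_px * baseline_m / disparity_px
    print(f"disparity {disparity_px:3d} px -> depth {depth:5.1f} m")
# 70 px -> 1.2 m, 14 px -> 6.0 m, 7 px -> 12.0 m: range precision falls with distance
```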

Even if visual odometry were to fail and satellite-positioning signals were to drop out, all wouldn’t be lost. The vehicle could still work out position updates using rotary encoders attached to its wheels—following a general strategy that sailors used for centuries, called dead reckoning.
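
Dead reckoning from wheel encoders is simple enough to sketch in a few lines. The differential-drive-style update below is a generic illustration of the idea, with made-up wheel and encoder parameters; it is not PerceptIn’s vehicle model.

```python
import math

def dead_reckon(x, y, heading, ticks_left, ticks_right,
                ticks_per_rev=1024, wheel_radius=0.15, track_width=1.2):
    """Update a planar pose (x, y, heading) from wheel-encoder ticks.
    The geometry numbers in the defaults are purely illustrative."""
    per_tick = 2 * math.pi * wheel_radius / ticks_per_rev   # meters traveled per tick
    d_left, d_right = ticks_left * per_tick, ticks_right * per_tick
    d_center = (d_left + d_right) / 2.0                     # distance of the vehicle center
    d_heading = (d_right - d_left) / track_width            # change in heading, radians
    x += d_center * math.cos(heading + d_heading / 2.0)
    y += d_center * math.sin(heading + d_heading / 2.0)
    return x, y, heading + d_heading

# Example: both wheels advance 500 ticks -> the pose moves straight ahead ~0.46 m.
print(dead_reckon(0.0, 0.0, 0.0, 500, 500))
```

As with any dead reckoning, small errors accumulate over time, which is why the wheel-encoder estimate is only a fallback until visual odometry or satellite positioning returns.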

Data sets from all these sensors are combined to give the vehicle an overall understanding of its environment. Based on this understanding, the vehicle’s computer can make the decisions it requires to ensure a smooth and safe trip.

The vehicle also has an anti-collision system that operates independently of its main computer, providing a last line of defense. This uses a combination of millimeter-wave radars and sonars to sense when the vehicle is within 5 meters of objects, in which case it’s immediately stopped.
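
Conceptually, that last line of defense is just a watchdog on raw range readings, something like the sketch below. The sensor inputs and the brake hook are placeholders, not PerceptIn’s actual interfaces.

```python
STOP_RANGE_M = 5.0   # threshold quoted in the article

def watchdog_step(radar_ranges_m, sonar_ranges_m, brake):
    """Stop the vehicle if any forward-facing range sensor reports an object
    closer than STOP_RANGE_M. `brake` is whatever actuator hook the platform
    provides; the planner and main computer are deliberately not consulted."""
    nearest = min(list(radar_ranges_m) + list(sonar_ranges_m))
    if nearest < STOP_RANGE_M:
        brake()          # immediate stop
        return True
    return False

# Example: radar sees an object 4.2 m ahead -> the vehicle is commanded to stop.
print(watchdog_step([4.2, 12.0], [9.5], brake=lambda: None))
```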

Relying on less expensive sensors is just one strategy that PerceptIn has pursued to reduce costs. Another has been to push computing to the sensors to reduce the demands on the vehicle’s main computer, a normal PC with a total cost less than $1,500 and a peak system power of just 400 W.

PerceptIn’s camera module, for example, can generate 400 megabytes of image information per second. If all this data were transferred to the main computer for processing, that computer would have to be extremely complex, which would have significant consequences in terms of reliability, power, and cost. PerceptIn instead has each sensor module perform as much computing as possible. This reduces the burden on the main computer and simplifies its design.

More specifically, a GPU is embedded into the camera module to extract features from the raw images. Then, only the extracted features are sent to the main computer, reducing the data-transfer rate a thousandfold.
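
The size of that reduction is easy to estimate: a feature is a handful of bytes, while a raw frame is megabytes. The arithmetic below uses assumed frame and feature sizes; the exact factor depends on how many features the module keeps per frame.

```python
# Rough arithmetic behind sending features instead of frames.
frame_bytes = 1920 * 1080 * 3                  # one raw 1080p color frame, ~6.2 MB
features_per_frame = 200                       # assumed budget for an embedded extractor
bytes_per_feature = 32 + 8                     # e.g., a 32-byte descriptor + two float coords
feature_bytes = features_per_frame * bytes_per_feature
print(frame_bytes / feature_bytes)             # ~778x less data per frame
```

Keep fewer features per frame and the reduction approaches the thousandfold figure quoted above.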

Another way to limit costs involves the creation and maintenance of the HD maps. Rather than using vehicles outfitted with lidar units to provide map data, PerceptIn enhances existing digital maps with visual information to achieve decimeter-level accuracy.

The resultant high-precision visual maps, like the lidar-based HD maps they replace, consist of multiple layers. The bottom layer can be any existing digital map, such as one from the OpenStreetMap project. This bottom layer has a resolution of about 1 meter. The second layer records the visual features of the road surfaces to improve mapping resolution to the decimeter level. The third layer, also saved at decimeter resolution, records the visual features of other parts of the environment—such as signs, buildings, trees, fences, and light poles. The fourth layer is the semantic layer, which contains lane markings, traffic sign labels, and so forth.

While there’s been much progress over the past decade, it will probably be another decade or more before fully autonomous cars start taking to most roads and highways. In the meantime, a practical approach is to use low-speed autonomous vehicles in restricted settings. Several companies, including Navya, EasyMile, and May Mobility, along with PerceptIn, have been pursuing this strategy intently and are making good progress.

Eventually, as the relevant technology advances, the types of vehicles and deployments can expand, ultimately to include vehicles that can equal or surpass the performance of an expert human driver.

PerceptIn has shown that it’s possible to build small, low-speed autonomous vehicles for much less than it costs to make a highway-capable autonomous car. When the vehicles are produced in large quantities, we expect the manufacturing costs to be less than $10,000. Not too far in the future, it might be possible for such clean-energy autonomous shuttles to be carrying passengers in city centers, such as Manhattan’s central business district, where the average speed of traffic now is only 7 miles per hour. Such a fleet would significantly reduce the cost to riders, improve traffic conditions, enhance safety, and improve air quality to boot. Tackling autonomous driving on the world’s highways can come later.

This article appears in the March 2020 print issue as “Autonomous Vehicles Lite.”

About the Authors

Shaoshan Liu is the cofounder and CEO of PerceptIn, an autonomous vehicle startup in Fishers, Ind. Jean-Luc Gaudiot is a professor of electrical engineering and computer science at the University of California, Irvine.

How to Program Sony’s Robot Dog Aibo

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/robotics/robotics-software/how-to-program-sony-aibo

For an astonishing 20 years, the Sony Aibo has been the most sophisticated home robot that you can buy. The first Aibo went on sale in 1999, and even though a dozen years passed between 2005’s ERS-7 and the latest ERS-1000, no consumer robot released in the intervening time seriously challenged the Aibo.

Part of what made Aibo special was how open Sony was to user customization and programmability. Aibo served as the RoboCup Standard Platform for a decade, providing an accessible hardware platform that leveled the playing field for robotic soccer. Designed to stand up to the rigors of use by unsupervised consumers (and, presumably, their kids), Aibo offered both durability and versatility that compared fairly well to later, much more expensive robots like Nao.

Aibo ERS-1000: The newest model

The newest Aibo, the ERS-1000, was announced in late 2017 and is now available for US $2,900 in the United States and 198,000 yen in Japan. It’s faithful to the Aibo family, while benefiting from years of progress in robotics hardware and software. However, it wasn’t until last November that Sony opened up Aibo to programmers, by providing visual programming tools as well as access to an API (application programming interface). And over the holidays, Sony lent us an Aibo to try it out for ourselves.

This is not (I repeat not) an Aibo review: I’m not going to talk about how cute it is, how to feed it, how to teach it to play fetch, how weird it is that it pretends to pee sometimes, or how it feels to have it all snuggled up in your lap while you’re working at your computer. Instead, I’m going to talk about how to (metaphorically) rip it open and access its guts to get it to do exactly what you want.

Help Rescuers Find Missing Persons With Drones and Computer Vision

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/tech-talk/robotics/robotics-software/help-rescuers-find-missing-persons-through-emergency-response-contest

There’s a definite sense that robots are destined to become a critical part of search and rescue missions and disaster relief efforts, working alongside humans to help first responders move faster and more efficiently. And we’ve seen all kinds of studies that include the claim “this robot could potentially help with disaster relief,” to varying degrees of plausibility.

But it takes a long time, and a lot of extra effort, for academic research to actually become anything useful—especially for first responders, where there isn’t a lot of financial incentive for further development.

It turns out that if you actually ask first responders what they most need for disaster relief, they’re not necessarily interested in the latest and greatest robotic platform or other futuristic technology. They’re using commercial off-the-shelf drones, often consumer-grade ones, because they’re simple and cheap and great at surveying large areas. The challenge is doing something useful with all of the imagery that these drones collect. Computer vision algorithms could help with that, as long as those algorithms are readily accessible and nearly effortless to use. 

The IEEE Robotics and Automation Society and the Center for Robotic-Assisted Search and Rescue (CRASAR) at Texas A&M University have launched a contest to bridge this gap between the kinds of tools that roboticists and computer vision researchers might call “basic” and a system that’s useful to first responders in the field. It’s a simple and straightforward idea, and somewhat surprising that no one had thought of it before now. And if you can develop such a system, it’s worth some cash.