Tag Archives: robotics

Video Friday: Nanotube-Powered Insect Robots

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/robotics/robotics-hardware/video-friday-nanotubepowered-insect-robots

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

HRI 2021 – March 8-11, 2021 – [Online Conference]
RoboSoft 2021 – April 12-16, 2021 – [Online Conference]
ICRA 2021 – May 30-June 5, 2021 – Xi’an, China

Let us know if you have suggestions for next week, and enjoy today’s videos.

Visible Touch: How Cameras Can Help Robots Feel

Post Syndicated from Payal Dhar original https://spectrum.ieee.org/tech-talk/robotics/robotics-hardware/visible-touch-cameras-robots-feel

The dawn of the robot revolution is already here, and it is not the dystopian nightmare we imagined. Instead, it comes in the form of social robots: Autonomous robots in homes and schools, offices and public spaces, able to interact with humans and other robots in a socially acceptable, human-perceptible way to resolve tasks related to core human needs. 

To design social robots that “understand” humans, robotics scientists are delving into the psychology of human communication. Researchers from Cornell University posit that embedding the sense of touch in social robots could teach them to detect physical interactions and gestures. They describe a way of doing so by relying not on touch but on vision.

A USB camera inside the robot captures shadows of hand gestures on the robot’s surface and classifies them with machine-learning software. They call this method ShadowSense, which they define as a modality between vision and touch, bringing “the high resolution and low cost of vision-sensing to the close-up sensory experience of touch.” 

Touch-sensing in social or interactive robots is usually achieved with force sensors or capacitive sensors, says study co-author Guy Hoffman of the Sibley School of Mechanical and Aerospace Engineering at Cornell University. The drawback to that approach is that, even to achieve coarse spatial resolution, many sensors are needed in a small area.

However, working with non-rigid, inflatable robots, Hoffman and his co-researchers installed a consumer-grade USB camera to which they attached a fisheye lens for a wider field of vision. 

“Given that the robot is already hollow, and has a soft and translucent skin, we could do touch interaction by looking at the shadows created by people touching the robot,” says Hoffman. They used deep neural networks to interpret the shadows. “And we were able to do it with very high accuracy,” he says. The robot was able to interpret six different gestures, including one- or two-handed touch, pointing, hugging and punching, with an accuracy of 87.5 to 96 percent, depending on the lighting.

This is not the first time that computer vision has been used for tactile sensing, though the scale and application of ShadowSense is unique. “Photography has been used for touch mainly in robotic grasping,” says Hoffman. By contrast, Hoffman and collaborators wanted to develop a sense that could be “felt” across the whole of the device. 

The potential applications for ShadowSense include mobile robot guidance using touch, and interactive screens on soft robots. A third concerns privacy, especially in home-based social robots. “We have another paper currently under review that looks specifically at the ability to detect gestures that are further away [from the robot’s skin],” says Hoffman. This way, users would be able to cover their robot’s camera with a translucent material and still allow it to interpret actions and gestures from shadows. Even though the camera would be prevented from capturing a high-resolution image of the user or the surrounding environment, with the right kind of training datasets the robot could continue to monitor some kinds of non-tactile activities.

In its current iteration, Hoffman says, ShadowSense doesn’t do well in low-light conditions. Environmental noise, or shadows from surrounding objects, also interfere with image classification. Relying on one camera also means a single point of failure. “I think if this were to become a commercial product, we would probably [have to] work a little bit better on image detection,” says Hoffman.

As it was, the researchers used transfer learning—reusing a pre-trained deep-learning model in a new problem—for image analysis. “One of the problems with multi-layered neural networks is that you need a lot of training data to make accurate predictions,” says Hoffman. “Obviously, we don’t have millions of examples of people touching a hollow, inflatable robot. But we can use pre-trained networks trained on general images, which we have billions of, and we only retrain the last layers of the network using our own dataset.”
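
The retrain-only-the-last-layers recipe Hoffman describes can be sketched with a stand-in for the pre-trained network. Everything below (the shapes, the toy data, the random-projection “backbone”) is illustrative rather than taken from the study; the point is only that the backbone’s weights stay frozen while the final classification layer is fit to the small gesture dataset:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a frozen, pre-trained backbone: a fixed random projection.
# (In the study this would be a deep network trained on general images.)
W_backbone = rng.normal(size=(64, 16))

def features(x):
    """Frozen feature extractor: its weights are never updated."""
    return np.tanh(x @ W_backbone)

# Toy "shadow image" dataset: 6 gesture classes, flattened 64-pixel inputs.
n, n_classes = 300, 6
X = rng.normal(size=(n, 64))
y = rng.integers(0, n_classes, size=n)

# Only the final classification layer is trained (the "retrained last layers").
W_head = np.zeros((16, n_classes))
for _ in range(200):
    logits = features(X) @ W_head
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    onehot = np.eye(n_classes)[y]
    W_head -= 0.1 * features(X).T @ (p - onehot) / n  # softmax-loss gradient step

pred = (features(X) @ W_head).argmax(axis=1)
train_acc = (pred == y).mean()
```

The same split applies in any deep-learning framework: load pretrained weights, freeze everything but the head, and fit the head on the small task-specific dataset.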

Using AI to Find Essential Battery Materials

Post Syndicated from Maria Gallucci original https://spectrum.ieee.org/tech-talk/robotics/artificial-intelligence/ai-tools-find-essential-battery-materials

Demand for battery-making metals is projected to soar as more of the world’s cars, buses, and ships run on electricity. The coming mining boom is raising concerns of environmental damage and labor abuses—and it’s driving a search for more sustainable ways of making batteries and cutting-edge electronics.

Artificial intelligence could help improve the way battery metals are mined, or replace them altogether. KoBold Metals is developing an AI agent to find the most desirable ore deposits in the least problematic locations. IBM Research, meanwhile, is harnessing AI techniques to identify alternative materials that already exist and also develop new chemistries.

KoBold, a mining exploration startup, says its technology could reduce the need for costly and invasive exploration missions, which often involve scouring the Earth many times over to find rare, high-quality reserves. 

“All the stuff poking out of the ground has already been found,” said Kurt House, co-founder and CEO of the San Francisco Bay area company. “At the same time, we’ve realized we need to massively change the energy system, which requires all these new minerals.”

KoBold is partnering with Stanford University’s Center for Earth Resource Forecasting to develop an AI agent that can make decisions about how and where explorers should focus their work. The startup is mainly looking for copper, cobalt, nickel, and lithium—metals key to making electric vehicle batteries as well as solar panels, smartphones, and many other devices.

Jef Caers, a professor of geological sciences at Stanford, said the idea is to accelerate the decision-making process and enable explorers to evaluate multiple sites at once. He likened the AI agent to a self-driving car: The vehicle not only gathers and processes data about its surrounding environment but also acts upon that information to, say, navigate traffic or change speeds.

“We can’t wait another 10 or 20 years to make more discoveries,” Caers said. “We need to make them in the next few years if we want to have an impact on [climate change] and go away from fossil fuels.”

Light-duty cars alone will have a significant need for metals. The global fleet of battery-powered cars could expand from 7.5 million in 2019 to potentially 2 billion cars by 2050 as countries work to reduce greenhouse gas emissions, according to a December paper in the journal Nature. Powering those vehicles would require 12 terawatt-hours of annual battery capacity—roughly 10 times the current U.S. electricity generating capacity—and mean a “drastic expansion” of metal supply chains, the paper’s authors said. 

Almost all lithium-ion batteries use cobalt, a material that is primarily supplied by the Democratic Republic of Congo, where young children and adults often work in dangerous conditions. Copper, another important EV material, requires huge volumes of water to mine, yet much of the global supply comes from water-scarce regions near Chile’s Atacama desert. 

For mining companies, the challenge is to expand operations without wreaking havoc in the name of sustainable transportation. 

KoBold’s AI-driven approach begins with its data platform, which stores all available forms of information about a particular area, including soil samples, satellite-based hyperspectral imaging, and century-old handwritten drilling reports. The company then applies machine learning methods to make predictions about the location of compositional anomalies—that is, unusually high concentrations of ore bodies in the Earth’s subsurface.

Working with Stanford, KoBold is refining sequential decision-making algorithms to determine how explorers should next proceed to gather data. Perhaps they should fly a plane over a site or collect drilling samples; maybe companies should walk away from what is likely to be a dud. Such steps are currently risky and expensive, and companies move slowly to avoid wasting resources. 

The AI agent could make such decisions roughly 20 times faster than humans can, while also reducing the rate of false positives in mining exploration, Caers said. “This is completely new within the Earth sciences,” he added.
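
As a toy illustration of this kind of sequential decision-making, consider choosing which of several candidate sites to survey next so that a single noisy measurement shrinks uncertainty the most. All the numbers and the survey-accuracy model below are hypothetical; KoBold’s actual models and decision criteria are not public:

```python
import numpy as np

SENS, SPEC = 0.8, 0.9                  # assumed aerial-survey accuracy (hypothetical)
p_ore = np.array([0.10, 0.35, 0.05])   # prior P(deposit) at three candidate sites

def entropy(p):
    """Binary entropy in bits: our uncertainty about whether a deposit is present."""
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

def posterior(prior, positive):
    """Bayes update of P(deposit) after one noisy survey reading."""
    if positive:
        num, den = SENS * prior, SENS * prior + (1 - SPEC) * (1 - prior)
    else:
        num, den = (1 - SENS) * prior, (1 - SENS) * prior + SPEC * (1 - prior)
    return num / den

def expected_info_gain(prior):
    """How much one survey reading is expected to shrink uncertainty at a site."""
    p_pos = SENS * prior + (1 - SPEC) * (1 - prior)
    exp_post = (p_pos * entropy(posterior(prior, True))
                + (1 - p_pos) * entropy(posterior(prior, False)))
    return entropy(prior) - exp_post

# Greedy policy: gather data where one measurement is most informative.
gains = np.array([expected_info_gain(p) for p in p_ore])
best_site = int(np.argmax(gains))
```

Here the most uncertain site (prior 0.35) is worth surveying first; a full agent would also weigh the cost of each action, such as flying a plane versus drilling.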

KoBold, which is backed by the Bill Gates-led Breakthrough Energy Ventures, is already exploring three sites in Australia, North America, and Sub-Saharan Africa. Field data collected this year will provide the first validations of the company’s predictions, House said.

As the startup searches for metals, IBM researchers are searching for solvents and other materials to reduce the use of battery ingredients such as cobalt and lithium.

Research teams are using AI techniques to identify and test solvents that offer higher safety and performance potential than current lithium-ion battery options. The project focuses on existing and commercially available materials that can be tested immediately. A related research effort, however, aims to create brand-new molecules entirely.

Using “generative models,” experts train AI to learn the molecular structure of known materials, as well as characteristics such as viscosity, melting point, or electronic conductivity.  

“For example, if we want a generative model to design new electrolyte materials for batteries— such as electrolyte solvents or appropriate monomers to form ion-conducting polymers—we should train the AI with known electrolyte material data,” Seiji Takeda and Young-hye Na of IBM Research said in an email. 

Once the AI training is completed, researchers can input a query such as “design a new molecular electrolyte material that meets the characteristics of X, Y, and Z,” they said. “And then the model designs a material candidate by referring to the structure-characteristics relation.”

IBM has already used this AI-boosted approach to create new molecules called photoacid generators that could eventually help produce more environmentally friendly computing devices. Researchers also designed polymer membranes that appear to absorb carbon dioxide better than membranes currently used in carbon capture technologies.

Designing a more sustainable battery “is our next challenge,” Takeda and Na said.

Review: DJI’s New FPV Drone is Effortless, Exhilarating Fun

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/robotics/drones/review-djis-new-fpv-drone-is-effortless-exhilarating-fun

In my experience, there are three different types of consumer drone pilots. You’ve got people for whom drones are a tool for taking pictures and video, where flying the drone is more or less just a necessary component of that. You’ve also got people who want a drone that can be used to take pictures or video of themselves, where they don’t want to be bothered flying the drone at all. Then you have people for whom flying the drone itself is the appealing part— people who like flying fast and creatively because it’s challenging and exciting and fun. And that typically means flying in First Person View, or FPV, where it feels like you’re a tiny little human sitting inside of a virtual cockpit in your drone.

For that last group of folks, the barrier to entry is high. Or rather, the barriers are high, because there are several. Not only is the equipment expensive, you often have to build your own system of drone, FPV goggles, and transmitter and receiver. And on top of that, it takes a lot of skill to fly an FPV drone well, and all of the inevitable crashes just add to the expense.

Today, DJI is announcing a new consumer first-person view drone system that includes everything you need to get started. You get an expertly designed and fully integrated high-speed FPV drone, a pair of FPV goggles with exceptional image quality and latency that’s some of the best we’ve ever seen, plus a physical controller to make it all work. Most importantly, though, there’s on-board obstacle avoidance plus piloting assistance that means even a complete novice can be zipping around with safety and confidence on day one.

Video Friday: A Blimp For Your Cat

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/robotics/robotics-hardware/video-friday-a-blimp-for-your-cat

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

HRI 2021 – March 8-11, 2021 – [Online Conference]
RoboSoft 2021 – April 12-16, 2021 – [Online Conference]
ICRA 2021 – May 30-June 5, 2021 – Xi’an, China

Let us know if you have suggestions for next week, and enjoy today’s videos.

AI Teaches Itself Diplomacy

Post Syndicated from Matthew Hutson original https://spectrum.ieee.org/tech-talk/robotics/artificial-intelligence/ai-learns-diplomacy-gaming

Now that DeepMind has taught AI to master the game of Go—and furthered its advantage in chess—they’ve turned their attention to another board game: Diplomacy. Unlike Go, it is seven-player, it requires a combination of competition and cooperation, and on each turn players make moves simultaneously, so they must reason about what others are reasoning about them, and so on.

“It’s a qualitatively different problem from something like Go or chess,” says Andrea Tacchetti, a computer scientist at DeepMind. In December, Tacchetti and collaborators presented a paper at the NeurIPS conference on their system, which advances the state of the art, and may point the way toward AI systems with real-world diplomatic skills—in negotiating with strategic or commercial partners or simply scheduling your next team meeting. 

Diplomacy is a strategy game played on a map of Europe divided into 75 provinces. Players build and mobilize military units to occupy provinces until someone controls a majority of supply centers. Each turn, players write down their moves, which are then executed simultaneously. They can attack or defend against opposing players’ units, or support opposing players’ attacks and defenses, building alliances. In the full version, players can negotiate. DeepMind tackled the simpler No-Press Diplomacy, devoid of explicit communication. 

Historically, AI has played Diplomacy using hand-crafted strategies. In 2019, the Montreal research institute Mila beat the field with a system using deep learning. They trained a neural network they called DipNet to imitate humans, based on a dataset of 150,000 human games. DeepMind started with a version of DipNet and refined it using reinforcement learning, a kind of trial-and-error. 

Exploring the space of possibility purely through trial-and-error would pose problems, though. They calculated that a 20-move game can be played nearly 10^868 ways—yes, that’s a 1 followed by 868 zeroes.

So they tweaked their reinforcement-learning algorithm. During training, on each move, they sample likely moves of opponents, calculate the move that works best on average across these scenarios, then train their net to prefer this move. After training, it skips the sampling and just works from what its learning has taught it. “The message of our paper is: we can make reinforcement learning work in such an environment,” Tacchetti says. One of their AI players versus six DipNets won 30 percent of the time (with 14 percent being chance). One DipNet against seven of theirs won only 3 percent of the time.
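
The training loop they describe (sample likely opponent moves, find the move that works best on average, nudge the policy toward it) can be sketched on a toy simultaneous-move matrix game. The payoff matrix and every hyperparameter below are made up for illustration; the real system does this with neural policies over Diplomacy positions:

```python
import numpy as np

rng = np.random.default_rng(1)

# payoff[i, j]: our reward when we play action i and the opponent plays j.
# Action 2 weakly dominates here, so the sampled best response should find it.
payoff = np.array([[0.0, 1.0, -1.0],
                   [-1.0, 0.0, 1.0],
                   [1.0, 1.0, 0.0]])

policy = np.ones(3) / 3      # our trainable policy
opp_policy = np.ones(3) / 3  # fixed model of the opponent's policy

for _ in range(500):
    # 1. Sample likely opponent moves from their policy.
    opp_moves = rng.choice(3, size=32, p=opp_policy)
    # 2. Find which of our moves works best on average across those samples.
    best = payoff[:, opp_moves].mean(axis=1).argmax()
    # 3. Train the policy to prefer that move (a small step toward it).
    policy += 0.05 * (np.eye(3)[best] - policy)
    policy /= policy.sum()

# After training, the sampling is skipped: play straight from the learned policy.
preferred_action = int(policy.argmax())
```

The expensive sampling happens only at training time, which is the point of the tweak: at play time the network answers directly from what it has learned.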

In April, Facebook will present a paper at the ICLR conference describing their own work on No-Press Diplomacy. They also built on a human-imitating network similar to DipNet. But instead of adding reinforcement learning, they added search—the techniques of taking extra time to plan ahead and reason about what every player is likely to do next. On each turn, SearchBot computes an equilibrium, a strategy for each player that the player can’t improve by switching only its own strategy. To do this, SearchBot evaluates each potential strategy for a player by playing the game out a few turns (assuming everyone chooses subsequent moves based on the net’s top choice). A strategy consists not of a single best move but a set of probabilities across 50 likely moves (suggested by the net), to avoid being too predictable to opponents. 
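
The equilibrium computation at the heart of such search can be illustrated with regret matching, a standard iterative method: each player shifts probability toward the actions it regrets not having played, and the average strategies approach an equilibrium. The rock-paper-scissors example below is mine, not from the paper, and SearchBot’s actual procedure is considerably more elaborate:

```python
import numpy as np

# Row player's payoff for rock-paper-scissors; the column player gets -payoff.
payoff = np.array([[0.0, -1.0, 1.0],
                   [1.0, 0.0, -1.0],
                   [-1.0, 1.0, 0.0]])

def strategy_from(regrets):
    """Play in proportion to positive regret; uniform if there is none."""
    pos = np.maximum(regrets, 0.0)
    return pos / pos.sum() if pos.sum() > 0 else np.ones(3) / 3

# Slightly asymmetric starting regrets so the dynamics actually move.
r1, r2 = np.array([1.0, 0.0, 0.0]), np.zeros(3)
avg1 = np.zeros(3)
T = 5000
for _ in range(T):
    s1, s2 = strategy_from(r1), strategy_from(r2)
    avg1 += s1
    u1 = payoff @ s2        # expected payoff of each of our pure actions
    u2 = -payoff.T @ s1     # likewise for the opponent
    r1 += u1 - s1 @ u1      # regret: how much better each action would have done
    r2 += u2 - s2 @ u2

avg1 /= T  # the average strategy approaches the uniform (1/3, 1/3, 1/3) equilibrium
```

The current-round strategies cycle, but the running average settles near the Nash equilibrium, which is why such methods report the average rather than the final iterate.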

Conducting such exploration during a real game slows SearchBot down, but allows it to beat DipNet by an even greater margin than DeepMind’s system does. SearchBot also played anonymously against humans on a Diplomacy website and ranked in the top 2 percent of players. “This is the first bot that’s demonstrated to be competitive with humans,” says Adam Lerer, a computer scientist at Facebook and paper co-author.

“I think the most important point is that search is often underestimated,” Lerer says. One of his Facebook collaborators, Noam Brown, implemented search in a superhuman poker bot. Brown says the most surprising finding was that their method could find equilibria, a computationally difficult task.

“I was really happy when I saw their paper,” Tacchetti says, “because of just how different their ideas were to ours, which means that there’s so much stuff that we can try still.” Lerer sees a future in combining reinforcement learning and search, which worked well for DeepMind’s AlphaGo.

Both teams found that their systems were not easily exploitable. Facebook, for example, invited two top human players to each play 35 straight games against SearchBot, probing for weaknesses. The humans won only 6 percent of the time. Both groups also found that their systems didn’t just compete, but also cooperated, sometimes supporting opponents. “They get that in order to win, they have to work with others,” says Yoram Bachrach, from the DeepMind team.

That’s important, Bachrach, Lerer, and Tacchetti say, because games that combine competition and cooperation are much more realistic than purely competitive games like Go. Mixed motives occur in all realms of life: driving in traffic, negotiating contracts, and arranging times to Zoom. 

How close are we to AI that can play Diplomacy with “press,” negotiating all the while using natural language?

“For Press Diplomacy, as well as other settings that mix cooperation and competition, you need progress,” Bachrach says, “in terms of theory of mind, how they can communicate with others about their preferences or goals or plans. And, one step further, you can look at the institutions of multiple agents that human society has. All of this work is super exciting, but these are early days.”

When Robots Enter the World, Who Is Responsible for Them?

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/robotics/industrial-robots/when-robots-enter-the-world-who-is-responsible-for-them

Over the last half decade or so, the commercialization of autonomous robots that can operate outside of structured environments has dramatically increased. But this relatively new transition of robotic technologies from research projects to commercial products comes with its share of challenges, many of which relate to the rapidly increasing visibility that these robots have in society.

Whether it’s because of their appearance of agency, or because of their history in popular culture, robots frequently inspire people’s imagination. Sometimes this is a good thing, like when it leads to innovative new use cases. And sometimes this is a bad thing, like when it leads to use cases that could be classified as irresponsible or unethical. Can the people selling robots do anything about the latter? And even if they can, should they?

Search-and-Rescue Drone Locates Victims By Homing in on Their Phones

Post Syndicated from Michelle Hampson original https://spectrum.ieee.org/tech-talk/robotics/drones/searchandrescue-drone-locates-victims-by-homing-in-on-their-phones


When a natural disaster strikes, first responders must move quickly to search for survivors. To support the search-and-rescue efforts, one group of innovators in Europe has succeeded in harnessing the power of drones, AI, and smartphones, all in one novel combination.

Their idea is to use a single drone as a moving cellular base station, which can do large sweeps over disaster areas and locate survivors using signals from their phones. AI helps the drone methodically survey the area and even estimate the trajectory of survivors who are moving.

The team built its platform, called Search-And-Rescue DrOne based solution (SARDO), using off-the-shelf hardware and tested it in field experiments and simulations. They describe the results in a study published 13 January in IEEE Transactions on Mobile Computing.

“We built SARDO to provide first responders with an all-in-one victims localization system capable of working in the aftermath of a disaster without existing network infrastructure support,” explains Antonio Albanese, a Research Associate at NEC Laboratories Europe GmbH, which is headquartered in Heidelberg, Germany.

The point is that a natural disaster may knock out cell towers along with other infrastructure. SARDO, which is equipped with a lightweight cellular base station, is a mobile solution that could be implemented regardless of what infrastructure remains after a natural disaster. 

To detect and map out the locations of victims, SARDO performs time-of-flight measurements (using the timing of signals emitted by the users’ phones to estimate distance). 

A machine learning algorithm is then applied to the time-of-flight measurements to calculate the positions of victims. The algorithm compensates for when signals are blocked by rubble.
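
As a concrete baseline for turning time-of-flight measurements into a position, here is classical least-squares multilateration. The drone waypoints and phone position below are invented for illustration; SARDO replaces this with a learned estimator that also copes with rubble and noisy readings:

```python
import numpy as np

C = 3e8  # speed of light, m/s

# Hypothetical setup: the drone records time-of-flight to the phone from
# four waypoints; the phone's position is unknown to the estimator.
drone_positions = np.array([[0.0, 0.0], [80.0, 0.0], [0.0, 80.0], [80.0, 80.0]])
phone = np.array([55.0, 30.0])  # ground truth, used only to simulate measurements

tof = np.linalg.norm(drone_positions - phone, axis=1) / C  # seconds
ranges = tof * C  # time-of-flight converts directly to distance

# Linearize: subtracting the first range equation from the others turns
# ||x - p_i||^2 = d_i^2 into a linear system A x = b in the unknown x.
p0, d0 = drone_positions[0], ranges[0]
A = 2 * (drone_positions[1:] - p0)
b = (d0**2 - ranges[1:]**2
     + np.sum(drone_positions[1:]**2, axis=1) - np.sum(p0**2))
estimate, *_ = np.linalg.lstsq(A, b, rcond=None)  # least-squares position fix
```

With noiseless measurements the least-squares fix recovers the phone’s position exactly; the learned model earns its keep when signals are delayed or blocked by debris.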

If a victim is on the move in the wake of a disaster, a second machine learning algorithm, tasked with estimating the person’s trajectory based on their current movement, kicks in—potentially helping first responders locate the person sooner.   

After sweeping an area, the drone is programmed to automatically maneuver closer to the position of a suspected victim to retrieve more accurate distance measurements. If too many errors are interfering with the drone’s ability to locate victims, it’s programmed to enlarge the scanning area.

In their study, Albanese and his colleagues tested SARDO in several field experiments without rubble, and used simulations to test the approach in a scenario where rubble interfered with some signals. In the field experiments, the drone was able to pinpoint the location of missing people to within a few tens of meters, requiring approximately three minutes to locate each victim (within a field roughly 200 meters squared). As would be expected, SARDO was less accurate when rubble was present or when the drone was flying at higher speeds or altitudes.

Albanese notes that a limitation of SARDO, as is the case with all drone-based approaches, is the battery life of the drone. But, he says, the energy consumption of the NEC team’s design remains relatively low.

The group is consulting the laboratory’s business experts on the possibility of commercializing this tech. Says Albanese: “There is interest, especially from the public safety divisions, but still no final decision has been taken.”

In the meantime, SARDO may undergo further advances. “We plan to extend SARDO to emergency indoor localization so [it is] capable of working in any emergency scenario where buildings might not be accessible [to human rescuers],” says Albanese.

Onboard Video of the Perseverance Rover Landing is the Most Incredible Thing I’ve Ever Seen

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/robotics/space-robots/onboard-video-of-the-perseverance-rover-landing-is-the-most-incredible-thing-ive-ever-seen

At a press conference this afternoon, NASA released a new video showing, in real-time and full color, the entire descent and landing of the Perseverance Mars rover. The video begins with the deployment of the parachute, and ends with the Skycrane cutting the rover free and flying away. It’s the most mind-blowing three minutes of video I have ever seen. 

Folding Drone Can Drop Into Inaccessible Mines

Post Syndicated from Rahul Rao original https://spectrum.ieee.org/automaton/robotics/drones/folding-drone-can-drop-into-inaccessible-mines

Inspecting old mines is a dangerous business. For humans, mines can be lethal: prone to rockfalls and filled with noxious gases. Robots can go where humans might suffocate, but even robots can only do so much when mines are inaccessible from the surface.

Now, researchers in the UK, led by Headlight AI, have developed a drone that could cast a light in the darkness. Named Prometheus, this drone can enter a mine through a borehole not much larger than a football, before unfurling its arms and flying around the void. Once down there, it can use its payload of scanning equipment to map mines where neither humans nor robots can presently go. This, the researchers hope, could make mine inspection quicker and easier. The team behind Prometheus published its design in November in the journal Robotics.

Mine inspection might seem like a peculiarly specific task to fret about, but old mines can collapse, causing the ground to sink and damaging nearby buildings. It’s a far-reaching threat: the geotechnical engineering firm Geoinvestigate, based in Northeast England, estimates that around 8 percent of all buildings in the UK are at risk from any of the thousands of abandoned coal mines near the country’s surface. It’s also a threat to transport, such as road and rail. Indeed, Prometheus is backed by Network Rail, which operates Britain’s railway infrastructure.

Such grave dangers mean that old mines need periodic check-ups. To enter depths that are forbidden to traditional wheeled robots—such as those featured in the DARPA SubT Challenge—inspectors today drill boreholes down into the mine and lower scanners into the darkness.

But that can be an arduous and often fruitless process. Inspecting the entirety of a mine can take multiple boreholes, and that still might not be enough to chart a complete picture. Mines are jagged, labyrinthine places, and much of the void might lie out of sight. Furthermore, many old mines aren’t well-mapped, so it’s hard to tell where best to enter them.

Prometheus can fly around some of those challenges. Inspectors can lower Prometheus, tethered to a docking apparatus, down a single borehole. Once inside the mine, the drone can undock and fly around, using LIDAR scanners—common in mine inspection today—to generate a 3D map of the unknown void. Prometheus can fly through the mine autonomously, using infrared data to plot out its own course.

Other drones exist that can fly underground, but they’re either too small to carry a relatively heavy payload of scanning equipment, or too large to easily fit down a borehole. What makes Prometheus unique is its ability to fold its arms, allowing it to squeeze down spaces its counterparts cannot.

It’s that ability to fold and enter a borehole that makes Prometheus remarkable, says Jason Gross, a professor of mechanical and aerospace engineering at West Virginia University. Gross calls Prometheus “an exciting idea,” but he does note that it has a relatively short flight window and few abilities beyond scanning.

The researchers have conducted a number of successful test flights, both in a basement and in an old mine near Shrewsbury, England. Not only was Prometheus able to map out its space, the drone was able to plot its own course in an unknown area.

The researchers’ next steps, according to Puneet Chhabra, co-founder of Headlight AI, will be to test Prometheus’s ability to unfold in an actual mine. Following that, researchers plan to conduct full-scale test flights by the end of 2021.

Soft Legged Robot Uses Pneumatic Circuitry to Walk Like a Turtle

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/robotics/robotics-hardware/soft-legged-robot-pneumatic-circuitry

Soft robots are inherently safe, highly resilient, and potentially very cheap, making them promising for a wide array of applications. But development on them has been a bit slow relative to other areas of robotics, at least partially because soft robots can’t directly benefit from the massive increase in computing power and sensor and actuator availability that we’ve seen over the last few decades. Instead, roboticists have had to get creative to find ways of achieving the functionality of conventional robotics components using soft materials and compatible power sources.

In the current issue of Science Robotics, researchers from UC San Diego demonstrate a soft walking robot with four legs that moves with a turtle-like gait controlled by a pneumatic circuit system made from tubes and valves. This air-powered nervous system can actuate multiple degrees of freedom in sequence from a single source of pressurized air, offering a huge reduction in complexity and bringing a very basic form of decision making onto the robot itself.

Video Friday: Perseverance Lands on Mars

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/robotics/robotics-hardware/video-friday-perseverance-lands-on-mars

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

HRI 2021 – March 8-11, 2021 – [Online Conference]
RoboSoft 2021 – April 12-16, 2021 – [Online Conference]
ICRA 2021 – May 30-June 5, 2021 – Xi’an, China

Let us know if you have suggestions for next week, and enjoy today’s videos.

Modified Laser Cutter Fabricates a Ready to Fly Drone

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/robotics/industrial-robots/modified-laser-cutter-fabricates-a-ready-to-fly-drone

It’s been very cool to watch 3D printers and laser cutters evolve into fairly common tools over the last decade-ish, finding useful niches across research, industry, and even with hobbyists at home. Capable as these fabricators are, they tend to be good at just one specific thing: making shapes out of polymer. Which is great! But we have all kinds of other techniques for making things that are even more useful, like by adding computers and actuators and stuff like that. You just can’t do that with your 3D printer or laser cutter, because it just does its one thing—which is too bad.

At CHI this year, researchers from MIT CSAIL are presenting LaserFactory, an integrated fabrication system that turns any laser cutter into a device that can (with just a little bit of assistance) build you an entire drone at a level of finish that means when the fabricator is done, the drone can actually fly itself off of the print bed. Sign me up.

Video Friday: Digit Takes a Hike

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/robotics/robotics-hardware/video-friday-digit-takes-a-hike

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

HRI 2021 – March 8-11, 2021 – [Online Conference]
RoboSoft 2021 – April 12-16, 2021 – [Online Conference]
ICRA 2021 – May 30 – June 5, 2021 – Xi’an, China

Let us know if you have suggestions for next week, and enjoy today’s videos.

Hyundai Motor Group Introduces Two New Robots

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/robotics/industrial-robots/hyundai-boston-dynamics-new-robots

Over the past few weeks, we’ve seen a couple of new robots from Hyundai Motor Group. This is a couple more robots than I think I’ve seen from Hyundai Motor Group, like, ever. We’re particularly interested in them right now mostly because Hyundai Motor Group are the new owners of Boston Dynamics, and so far, these robots represent one of the most explicit indications we’ve got about exactly what Hyundai Motor Group wants their robots to be doing.

We know it would be a mistake to read too much into these new announcements, but we can’t help reading something into them, right? So let’s take a look at what Hyundai Motor Group has been up to recently. This first robot is DAL-e, what HMG is calling an “Advanced Humanoid Robot.”

According to Hyundai, DAL-e is “designed to pioneer the future of automated customer services,” and is equipped with “state-of-the-art artificial intelligence technology for facial recognition as well as an automatic communication system based on a language-comprehension platform.” You’ll find it in car showrooms, but only in Seoul, for now.

We don’t normally write about robots like these because they tend not to represent much that’s especially new or interesting in terms of robotic technology, capabilities, or commercial potential. There’s certainly nothing wrong with DAL-e—it’s moderately cute and appears to be moderately functional. We’ve seen other platforms (like Pepper) take on similar roles, and our impression is that the long-term cost effectiveness of these greeter robots tends to be somewhat limited. And unless there’s some hidden functionality that we’re not aware of, this robot doesn’t really seem to be pushing the envelope, but we’d love to be wrong about that.

The other new robot, announced yesterday, is TIGER (Transforming Intelligent Ground Excursion Robot). It’s a bit more interesting, although you’ll have to skip ahead about 1:30 in the video to get to it.

We’ve talked about how adding wheels can make legged robots faster and more efficient, but I’m honestly not sure that it works all that well going the other way (adding legs to wheeled robots). Rather than adding a little complexity to get a multi-modal system that you can use much of the time, you’re instead adding a lot of complexity to get a multi-modal system that you’re only going to use sometimes.

You could argue, as perhaps Hyundai would, that the multi-modal system is critical to get TIGER to do what they want it to do, which seems to be primarily remote delivery. They mention operating in urban areas as well, where TIGER could use its legs to climb stairs, but I think it would be beaten by more traditional wheeled platforms, or even whegged platforms, that are almost as capable while being much simpler and cheaper. For remote delivery, though, legs might be a necessary feature.

That is, if you assume that using a ground-based system is really the best way to go.

The TIGER concept can be integrated with a drone to transport it from place to place, so why not just use the drone to make the remote delivery instead? I guess maybe if you’re dealing with a thick tree canopy, the drone could drop TIGER off in a clearing and the robot could drive to its destination, but now we’re talking about developing a very complex system for a very specific use case. Even though Hyundai has said that they’re going to attempt to commercialize TIGER over the next five years, I think it’ll be tricky for them to successfully do so.

The best part about these robots from Hyundai is that between the two of them, they suggest that the company is serious about developing commercial robots as well as willing to invest in something that seems a little crazy. And you know who else is both of those things? Boston Dynamics. To be clear, it’s almost certain that both of Hyundai’s robots were developed well before the company was even thinking about acquiring Boston Dynamics, so the real question is: Where do these two companies go from here?

New Drone Software Handles Motor Failures Even Without GPS

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/robotics/drones/new-drone-software-handles-motor-failures-even-without-gps

Good as some drones are becoming at obstacle avoidance, accidents do still happen. And as far as robots go, drones are very much on the fragile side of things. Any sort of significant contact between a drone and almost anything else usually results in a catastrophic, out-of-control spin followed by a death plunge to the ground. Bad times. Bad, expensive times.

A few years ago, we saw some interesting research into software that can keep the most common drone form factor, the quadrotor, aloft and controllable even after the failure of one motor. The big caveat to that software was that it relied on GPS for state estimation, meaning that without a GPS signal, the drone is unable to get the information it needs to keep itself under control. In a paper recently accepted to RA-L, researchers at the University of Zurich report that they have developed a vision-based system that brings state estimation completely on-board. The upshot: potentially any drone with some software and a camera can keep itself safe even under the most challenging conditions.
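To see why on-board state estimation matters, here is a minimal complementary filter: a fast-but-drifting gyro integration corrected by a slower, drift-free absolute estimate such as one derived from a camera. This is a generic textbook technique sketched for illustration, not the University of Zurich pipeline, and every parameter below is a placeholder.

```python
def fuse_attitude(gyro_rates, cam_angles, dt, alpha=0.98):
    """Complementary filter: integrate the gyro for fast updates, then
    pull toward the camera's absolute angle to cancel gyro drift."""
    angle = cam_angles[0]  # initialize from the first camera reading
    estimates = []
    for rate, cam in zip(gyro_rates, cam_angles):
        predicted = angle + rate * dt                  # gyro integration (drifts)
        angle = alpha * predicted + (1 - alpha) * cam  # vision correction
        estimates.append(angle)
    return estimates

# A biased gyro (constant 0.05 rad/s error) against a truthful camera:
# pure integration would drift to 0.1 rad over 200 steps, while the
# fused estimate stays bounded near zero.
est = fuse_attitude([0.05] * 200, [0.0] * 200, dt=0.01)
```

Without the camera term, the drone has no way to bound the gyro's drift, which is the gap the on-board vision-based estimator closes when GPS is unavailable.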

Video Friday: New Entertainment Robot Sings and Does Impressions

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/robotics/robotics-hardware/video-friday-engineered-arts-mesmer-robot-cleo

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

HRI 2021 – March 8-11, 2021 – [Online Conference]
RoboSoft 2021 – April 12-16, 2021 – [Online Conference]
ICRA 2021 – May 30 – June 5, 2021 – Xi’an, China

Let us know if you have suggestions for next week, and enjoy today’s videos.

Boston Dynamics’ Spot Robot Is Now Armed

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/robotics/industrial-robots/boston-dynamics-spot-robot-arm

Boston Dynamics has been working on an arm for its Spot quadruped for at least five years now. There have been plenty of teasers along the way, including this 45-second clip from early 2018 of Spot using its arm to open a door, which at 85 million views seems to be Boston Dynamics’ most popular video ever by a huge margin. Obviously, there’s a substantial amount of interest in turning Spot from a highly dynamic but mostly passive sensor platform into a mobile manipulator that can interact with its environment. 

As anyone who’s done mobile manipulation will tell you, actually building an arm is just the first step—the really tricky part is getting that arm to do exactly what you want it to do. In particular, Spot’s arm needs to be able to interact with the world with some amount of autonomy in order to be commercially useful, because you can’t expect a human (remote or otherwise) to spend all their time positioning individual joints or whatever to pick something up. So the real question about this arm is whether Boston Dynamics has managed to get it to a point where it’s autonomous enough that users with relatively little robotics experience will be able to get it to do useful tasks without driving themselves nuts.

Today, Boston Dynamics is announcing commercial availability of the Spot arm, along with some improved software called Scout plus a self-charging dock that’ll give the robot even more independence. And to figure out exactly what Spot’s new arm can do, we spoke with Zachary Jackowski, Spot Chief Engineer at Boston Dynamics.

Video Friday: These Robots Have Made 1 Million Autonomous Deliveries

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/robotics/robotics-software/video-friday-starship-robots-1-million-deliveries

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

HRI 2021 – March 8-11, 2021 – [Online Conference]
RoboSoft 2021 – April 12-16, 2021 – [Online Conference]

Let us know if you have suggestions for next week, and enjoy today’s videos.

Meet Blueswarm, a Smart School of Robotic Fish

Post Syndicated from Lawrence Ulrich original https://spectrum.ieee.org/automaton/robotics/industrial-robots/blueswarm-robotic-fish

Anyone who’s seen an undersea nature documentary has marveled at the complex choreography that schooling fish display, a darting, synchronized ballet with a cast of thousands.

Those instinctive movements have inspired researchers at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS), and the Wyss Institute for Biologically Inspired Engineering. The results could improve the performance and dependability of not just underwater robots, but other vehicles that require decentralized locomotion and organization, such as self-driving cars and robotic space exploration. 

The fish collective called Blueswarm was created by a team led by Radhika Nagpal, whose lab is a pioneer in self-organizing systems. The oddly adorable robots can sync their movements like biological fish, taking cues from their plastic-bodied neighbors with no external controls required. Nagpal told IEEE Spectrum that this marks a milestone, demonstrating complex 3D behaviors with implicit coordination in underwater robots. 

“Insights from this research will help us develop future miniature underwater swarms that can perform environmental monitoring and search in visually-rich but fragile environments like coral reefs,” Nagpal said. “This research also paves a way to better understand fish schools, by synthetically recreating their behavior.”

The research is published in Science Robotics, with Florian Berlinger as first author. Berlinger said the “Bluebot” robots integrate a trio of blue LED lights, a lithium-polymer battery, a pair of cameras, a Raspberry Pi computer and four controllable fins within a 3D-printed hull. The fish-lens cameras detect the LEDs of their fellow swimmers, and apply a custom algorithm to calculate distance, direction and heading. 

Based on that simple production and detection of LED light, the team proved that Blueswarm could self-organize behaviors, including aggregation, dispersal and circle formation—basically, synchronized clockwise swimming. Researchers also simulated a successful search mission, an autonomous Finding Nemo. Using their dispersion algorithm, the robot school spread out until one robot could detect a red light in the tank. Its blue LEDs then flashed, triggering the aggregation algorithm to gather the school around it. Such a robot swarm might prove valuable in search-and-rescue missions at sea, covering miles of open water and reporting back to its mates. 
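The flavor of that kind of implicit coordination can be sketched in a few lines: each robot steers toward the centroid of the neighbors it can see (or away from it, for dispersal), with no central controller. This is an illustrative toy rule under those assumptions, not the published Blueswarm controller.

```python
def aggregate_step(pos, neighbor_positions, gain=0.1):
    """One update of a simple implicit-coordination rule: steer toward
    the centroid of visible neighbors. Use a negative gain to disperse."""
    n = len(neighbor_positions)
    cx = sum(p[0] for p in neighbor_positions) / n
    cy = sum(p[1] for p in neighbor_positions) / n
    return (pos[0] + gain * (cx - pos[0]), pos[1] + gain * (cy - pos[1]))

# Three robots repeatedly applying the local rule drift toward a common
# point, even though no robot knows the swarm's global state:
swarm = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
for _ in range(50):
    swarm = [aggregate_step(p, [q for q in swarm if q is not p])
             for p in swarm]
```

Each robot only ever uses relative neighbor positions, which is exactly the kind of information the Bluebots' cameras recover from their neighbors' LEDs.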

“Each Bluebot implicitly reacts to its neighbors’ positions,” Berlinger said. The fish—RoboCod, perhaps?—also integrate a Wi-Fi module to allow uploading new behaviors remotely. The lab’s previous efforts include a 1,000-strong army of “Kilobots,” and a robotic construction crew inspired by termites. Both projects operated in two-dimensional space. But a 3D environment like air or water posed a tougher challenge for sensing and movement.

In nature, Berlinger notes, there’s no scaly CEO to direct the school’s movements. Nor do fish communicate their intentions. Instead, so-called “implicit coordination” guides the school’s collective behavior, with individual members executing high-speed moves based on what they see their neighbors doing. That decentralized, autonomous organization has long fascinated scientists, including in robotics. 

“In these situations, it really benefits you to have a highly autonomous robot swarm that is self-sufficient. By using implicit rules and 3D visual perception, we were able to create a system with a high degree of autonomy and flexibility underwater where things like GPS and WiFi are not accessible.”

Berlinger adds the research could one day translate to anything that requires decentralized robots, from self-driving cars and Amazon warehouse vehicles to exploration of faraway planets, where poor latency makes it impossible to transmit commands quickly. Today’s semi-autonomous cars face their own technical hurdles in reliably sensing and responding to their complex environments, including when foul weather obscures onboard sensors or road markers, or when they can’t fix position via GPS. An entire subset of autonomous-car research involves vehicle-to-vehicle (V2V) communications that could give cars a hive mind to guide individual or collective decisions—avoiding snarled traffic, driving safely in tight convoys, or taking group evasive action during a crash that’s beyond their sensory range.

“Once we have millions of cars on the road, there can’t be one computer orchestrating all the traffic, making decisions that work for all the cars,” Berlinger said.

The miniature robots could also work long hours in places that are inaccessible to humans and divers, or even large tethered robots. Nagpal said the synthetic swimmers could monitor and collect data on reefs or underwater infrastructure 24/7, and work into tiny places without disturbing fragile equipment or ecosystems. 

“If we could be as good as fish in that environment, we could collect information and be non-invasive, in cluttered environments where everything is an obstacle,” Nagpal said.