Tag Archives: robotics

Video Friday: These Giant Robots Are Made of Air, Fabric

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/robotics/robotics-hardware/video-friday-air-giant-soft-robots

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

ACRA 2020 – December 8-10, 2020 – [Online]

Let us know if you have suggestions for next week, and enjoy today’s videos.


Boston Dynamics’ Spot Is Helping Chernobyl Move Towards Safe Decommissioning

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/robotics/robotics-hardware/boston-dynamics-spot-chernobyl

In terms of places where you absolutely want a robot to go instead of you, what remains of the utterly destroyed Chernobyl Reactor 4 should be very near the top of your list. The reactor, which suffered a catastrophic meltdown in 1986, has been covered up in almost every way possible in an effort to keep its nuclear core contained. But eventually, that nuclear material is going to have to be dealt with somehow, and in order to do that, it’s important to understand which bits of it are just really bad, and which bits are the actual worst. And this is where Spot is stepping in to help.

Video Friday: Matternet Launches Urban Drone Delivery in Berlin

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/robotics/robotics-hardware/video-friday-matternet-drone-delivery-berlin

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

IROS 2020 – October 25-25, 2020 – [Online]
Bay Area Robotics Symposium – November 20, 2020 – [Online]
ACRA 2020 – December 8-10, 2020 – [Online]

Let us know if you have suggestions for next week, and enjoy today’s videos.


Why We Need a Robot Registry


Post Syndicated from Stacey Higginbotham original https://spectrum.ieee.org/robotics/humanoids/why-we-need-a-robot-registry

I have a confession to make: A robot haunts my nightmares. For me, Boston Dynamics’ Spot robot is 32.5 kilograms (71.7 pounds) of pure terror. It can climb stairs. It can open doors. Seeing it in a video cannot prepare you for the moment you cross paths on a trade-show floor. Now that companies can buy a Spot robot for US $74,500, you might encounter Spot anywhere.

Spot robots now patrol public parks in Singapore to enforce social distancing during the pandemic. They meet with COVID-19 patients at Boston’s Brigham and Women’s Hospital so that doctors can conduct remote consultations. Imagine coming across Spot while walking in the park or returning to your car in a parking garage. Wouldn’t you want to know why this hunk of metal is there and who’s operating it? Or at least whom to call to report a malfunction?

Robots are becoming more prominent in daily life, which is why I think governments need to create national registries of robots. Such a registry would let citizens and law enforcement look up the owner of any roaming robot, as well as learn that robot’s purpose. It’s not a far-fetched idea: The U.S. Federal Aviation Administration already has a registry for drones.

Governments could create national databases that require any companies operating robots in public spaces to report the robot make and model, its purpose, and whom to contact if the robot breaks down or causes problems. To allow anyone to use the database, all public robots would have an easily identifiable marker or model number on their bodies. Think of it as a license plate or pet microchip, but for bots.
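To make that concrete, here is a rough sketch of what a single registry entry might hold. The fields and names below are purely my own illustration, not a proposed standard:

```python
from dataclasses import dataclass

# Illustrative only: a hypothetical registry record, not a proposed standard.
@dataclass
class RobotRegistryEntry:
    registration_id: str        # printed on the robot, like a license plate
    make: str                   # e.g., "Boston Dynamics"
    model: str                  # e.g., "Spot"
    operator: str               # the company responsible for the robot
    purpose: str                # why the robot is in a public space
    contact: str                # whom to call if it breaks down or causes problems
    collects_camera_data: bool  # lets the public see who is gathering footage

entry = RobotRegistryEntry(
    registration_id="SG-PARK-0042",
    make="Boston Dynamics",
    model="Spot",
    operator="Example Parks Authority",
    purpose="Social-distancing announcements",
    contact="robots@example.gov",
    collects_camera_data=True,
)
```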

There are some smaller-scale registries today. San Jose’s Department of Transportation (SJDOT), for example, is working with Kiwibot, a delivery robot manufacturer, to get real-time data from the robots as they roam the city’s streets. The Kiwibots report their location to SJDOT using the open-source Mobility Data Specification, which was originally developed by Los Angeles to track Bird scooters.
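For a rough sense of what that real-time reporting involves, here is a simplified, hypothetical location report in the spirit of MDS. The real specification defines its own, richer schema; these field names are illustrative only:

```python
import json
import time

# Illustrative only: a simplified location report in the spirit of MDS.
# The actual Mobility Data Specification defines a richer, versioned schema.
report = {
    "device_id": "kiwibot-0012",   # hypothetical identifier
    "provider": "kiwibot",
    "timestamp_ms": int(time.time() * 1000),
    "location": {"lat": 37.3382, "lng": -121.8863},  # downtown San Jose
    "status": "on_trip",
}

print(json.dumps(report, indent=2))
```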

Real-time location reporting makes sense for Kiwibots and Spots wandering the streets, but it’s probably overkill for bots confined to cleaning floors or patrolling parking lots. That said, any robots that come in contact with the general public should clearly provide basic credentials and a way to hold their operators accountable. Given that many robots use cameras, people may also be interested in looking up who’s collecting and using that data.

I started thinking about robot registries after Spot became available in June for anyone to purchase. The idea gained specificity after listening to Andra Keay, founder and managing director at Silicon Valley Robotics, discuss her five rules of ethical robotics at an Arm event in October. I had already been thinking that we needed some way to track robots, but her suggestion to tie robot license plates to a formal registry made me realize that people also need a way to clearly identify individual robots.

Keay pointed out that in addition to sating public curiosity and keeping an eye on robots that could cause harm, a registry could also track robots that have been hacked. For example, robots at risk of being hacked and running amok could be required to report their movements to a database, even if they’re typically restricted to a grocery store or warehouse. While we’re at it, Spot robots should be required to have sirens, because there’s no way I want one of those sneaking up on me.

This article appears in the December 2020 print issue as “Who’s Behind That Robot?”

Coordinated Robotics Wins DARPA SubT Virtual Cave Circuit

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/robotics/robotics-software/coordinated-robotics-wins-darpa-subt-virtual-cave-circuit

DARPA held the Virtual Cave Circuit event of the Subterranean Challenge on Tuesday in the form of a several-hour-long livestream. We got to watch (along with all of the competing teams) as virtual robots explored virtual caves fully autonomously, dodging rockfalls, spotting artifacts, scoring points, and sometimes running into stuff and falling over.

Expert commentary was provided by DARPA, and we were able to watch multiple teams running at once, skipping from highlight to highlight. It was really very well done (you can watch an archive of the entire stream here), but they made us wait until the very end to learn who won: First place went to Coordinated Robotics, with BARCS taking second, and third place going to newcomer Team Dynamo.

Ending the COVID-19 Pandemic

Post Syndicated from Susan Hassler original https://spectrum.ieee.org/robotics/artificial-intelligence/ending-the-covid19-pandemic


The approach of a new year is always a time to take stock and be hopeful. This year, though, reflection and hope are more than de rigueur—they’re rejuvenating. We’re coming off a year in which doctors, engineers, and scientists took on the most dire public threat in decades, and in the new year we’ll see the greatest results of those global efforts. COVID-19 vaccines are just months away, and biomedical testing is being revolutionized.

At IEEE Spectrum we focus on the high-tech solutions: Can artificial intelligence (AI) be used to diagnose COVID-19 using cough recordings? Can mathematical modeling determine whether preventive measures against COVID-19 work? Can big data and AI provide accurate pandemic forecasting?

Consider our story “AI Recognizes COVID-19 in the Sound of a Cough,” reported by Megan Scudellari in our Human OS blog. Using a cellphone-recorded cough, machine-learning models can now detect coronavirus with 90 percent accuracy, even in people with no symptoms. It’s a remarkable research milestone. This AI model sifts through hundreds of factors to distinguish the COVID-19 cough from those of bronchitis, whooping cough, and asthma.

But while such high-tech triumphs give us hope, the no-tech solutions are mostly what we have to work with. Soon, as our Numbers Don’t Lie columnist, Vaclav Smil, pointed out in a recent email, we will have near-instantaneous home testing, and we will have an ability to use big data to crunch every move and every outbreak. But we are nowhere near that yet. So let’s use, as he says, some old-fashioned kindergarten epidemiology, the no-tech measures, while we work to get there:

Masks: Wear them. If we all did so, we could cut transmission by two-thirds, perhaps even 80 percent.

Hands: Wash them.

Social distancing: If we could all stay home for two weeks, we could see enormous declines in COVID-19 transmission.

These are all time-tested solutions, proven effective ages ago in countless outbreaks of diseases including typhoid and cholera. They’re inexpensive and easy to prescribe, and the regimens are easy to follow.

The conflict between public health and individual rights and privacy, however, is less easy to resolve. Even during the pandemic of 1918–19, there was widespread resistance to mask wearing and social distancing. Fifty million people died—675,000 in the United States alone. Today, we are up to 240,000 deaths in the United States, and the end is not in sight. Antiflu measures were framed in 1918 as a way to protect the troops fighting in World War I, and people who refused to wear masks were called out as “dangerous slackers.” There was a world war, and yet it was still hard to convince people of the need for even such simple measures.

Personally, I have found the resistance to these easy fixes startling. I wouldn’t want maskless, gloveless doctors taking me through a surgical procedure. Or waltzing in from lunch without washing their hands. I’m sure you wouldn’t, either.

Science-based medicine has been one of the world’s greatest and most fundamental advances. In recent years, it has been turbocharged by breakthroughs in genetics technologies, advanced materials, high-tech diagnostics, and implants and other electronics-based interventions. Such leaps have already saved untold lives, but there’s much more to be done. And there will be many more pandemics ahead for humanity.

The Right Robotic Solution for Your Unique Operation

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/whitepaper/the-right-robotic-solution-for-your-unique-operation

Robotic solutions can help your operation keep up with the demands of today’s changing e-commerce market. Honeywell Robotics is helping distribution centers (DCs) evaluate solutions with powerful physics-based simulation tools to ensure that everything works together in an integrated ecosystem.

Put more than a quarter-century of automation expertise to work for you.

Download White Paper

Watch DARPA’s SubT Cave Circuit Virtual Competition

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/robotics/robotics-hardware/darpa-subt-cave-circuit-virtual-competition

While we’re super bummed that COVID forced the cancellation of the Systems Track event of the DARPA Subterranean Challenge Cave Circuit, the good news is that the Virtual Track (being virtual) is 100 percent coronavirus-free, and the final event is taking place tomorrow, November 17, right on schedule. And honestly, it’s about time the Virtual Track gets the attention that it deserves—we’re as guilty as anyone of focusing more heavily on the Systems Track, being full of real robots that alternate between amazingly talented and amazingly klutzy, but the Virtual Track is just as compelling, in a very different way.

DARPA has scheduled the Cave Circuit Virtual Track live event for Tuesday starting at 2 p.m. ET, and we’ve got all the details.

Video Friday: Aquanaut Robot Takes to the Ocean

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/robotics/robotics-hardware/video-friday-aquanaut-robot

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

IROS 2020 – October 25-25, 2020 – [Online]
ICSR 2020 – November 14-16, 2020 – Golden, Colo., USA
Bay Area Robotics Symposium – November 20, 2020 – [Online]
ACRA 2020 – December 8-10, 2020 – [Online]

Let us know if you have suggestions for next week, and enjoy today’s videos.


Friday Squid Blogging: Underwater Robot Uses Squid-Like Propulsion

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/11/friday-squid-blogging-underwater-robot-uses-squid-like-propulsion.html

This is neat:

By generating powerful streams of water, UCSD’s squid-like robot can swim untethered. The “squidbot” carries its own power source, and has the room to hold more, including a sensor or camera for underwater exploration.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

How We Won the DARPA SubT Challenge: Urban Circuit Virtual Track

Post Syndicated from Richard Chase original https://spectrum.ieee.org/automaton/robotics/robotics-software/darpa-subt-challenge-urban-circuit-virtual

This is a guest post. The views expressed here are those of the authors and do not necessarily represent positions of IEEE or its organizational units.​

“Do you smell smoke?” It was three days before the qualification deadline for the Virtual Tunnel Circuit of the DARPA Subterranean Challenge Virtual Track, and our team was barrelling through last-minute updates to our robot controllers in a small conference room at the Michigan Tech Research Institute (MTRI) offices in Ann Arbor, Mich. That’s when we noticed the smell. We’d assumed that one of the benefits of entering a virtual disaster competition was that we wouldn’t be exposed to any actual disasters, but equipment in the basement of the building MTRI shares had started to smoke. We evacuated. The fire department showed up. And as soon as we could, the team went back into the building, hunkered down, and tried to make up for the unexpected loss of several critical hours.

Video Friday: Snugglebot Is What We All Need Right Now

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/robotics/robotics-hardware/video-friday-snugglebot

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

IROS 2020 – October 25-25, 2020 – [Online]
Robotica 2020 – November 10-14, 2020 – [Online]
ROS World 2020 – November 12, 2020 – [Online]
CYBATHLON 2020 – November 13-14, 2020 – [Online]
ICSR 2020 – November 14-16, 2020 – Golden, Colo., USA
Bay Area Robotics Symposium – November 20, 2020 – [Online]

Let us know if you have suggestions for next week, and enjoy today’s videos.


AI-Directed Robotic Hand Learns How to Grasp

Post Syndicated from Michelle Hampson original https://spectrum.ieee.org/automaton/robotics/humanoids/robotic-hand-uses-artificial-neural-network-to-learn-how-to-grasp-different-objects

Reaching for a nearby object seems like a mindless task, but the action requires a sophisticated neural network that took humans millions of years to evolve. Now, robots are acquiring that same ability using artificial neural networks. In a recent study, a robotic hand “learns” to pick up objects of different shapes and hardness using three different grasping motions.  

The key to this development is something called a spiking neuron. Like real neurons in the brain, artificial neurons in a spiking neural network (SNN) fire together to encode and process temporal information. Researchers study SNNs because this approach may yield insights into how biological neural networks function, including our own. 
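As a rough illustration of the concept (and not the researchers’ actual model), here is a minimal leaky integrate-and-fire neuron, the textbook building block of an SNN, showing how a stream of input current becomes a train of spikes over time:

```python
import numpy as np

# Minimal leaky integrate-and-fire (LIF) neuron: a textbook SNN building block,
# not the model used in the paper. Parameters are illustrative.
def lif_spikes(input_current, dt=1e-3, tau=0.02, v_thresh=1.0, v_reset=0.0):
    """Return a binary spike train for a sequence of input currents."""
    v = 0.0
    spikes = []
    for i in input_current:
        # The membrane potential leaks toward rest while integrating the input.
        v += dt * (-v + i) / tau
        if v >= v_thresh:      # threshold crossing: emit a spike
            spikes.append(1)
            v = v_reset        # reset after spiking
        else:
            spikes.append(0)
    return np.array(spikes)

# A constant input produces a regular spike train whose rate encodes its strength.
train = lif_spikes(np.full(200, 1.5))
print(train.sum(), "spikes in 200 ms")
```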

“The programming of humanoid or bio-inspired robots is complex,” says Juan Camilo Vasquez Tieck, a research scientist at FZI Forschungszentrum Informatik in Karlsruhe, Germany. “And classical robotics programming methods are not always suitable to take advantage of their capabilities.”

Conventional robotic systems must perform extensive calculations, Tieck says, to track trajectories and grasp objects. A robotic system like Tieck’s, which relies on an SNN, instead first trains its neural net to better model system and object motions, after which it can grasp items more autonomously by adapting to motion in real time.

The new robotic system by Tieck and his colleagues uses an existing robotic hand, called a Schunk SVH 5-finger hand, which has the same number of fingers and joints as a human hand.

The researchers incorporated an SNN into their system, dividing it into several sub-networks. One sub-network controls each finger individually, either flexing or extending the finger. Another handles each type of grasping movement, for example whether the robotic hand needs to make a pinching, spherical, or cylindrical motion.

For each finger, a neural circuit detects contact with an object using the currents of the motors and the velocity of the joints. When contact with an object is detected, a controller is activated to regulate how much force the finger exerts.

“This way, the movements of generic grasping motions are adapted to objects with different shapes, stiffness and sizes,” says Tieck. The system can also adapt its grasping motion quickly if the object moves or deforms.
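Purely as a sketch of that per-finger logic (detect contact from motor current and joint velocity, then regulate grip force), here is a conventional, non-spiking version of the loop. The thresholds and gains are invented for illustration and do not come from the paper:

```python
# A conventional, non-spiking sketch of the per-finger logic described above.
# All thresholds and gains are illustrative, not values from the paper.
CURRENT_SPIKE = 0.8    # amps: motor current rises when the finger meets resistance
VELOCITY_STALL = 0.05  # rad/s: the joint nearly stops moving at contact
TARGET_FORCE = 2.0     # newtons: desired grip force once contact is made
KP = 0.5               # proportional gain for the force controller

def contact_detected(motor_current: float, joint_velocity: float) -> bool:
    return motor_current > CURRENT_SPIKE and abs(joint_velocity) < VELOCITY_STALL

def finger_command(motor_current, joint_velocity, measured_force, flex_speed=0.2):
    if not contact_detected(motor_current, joint_velocity):
        return {"mode": "flex", "velocity": flex_speed}   # keep closing the finger
    # After contact, a simple proportional controller regulates grip force.
    correction = KP * (TARGET_FORCE - measured_force)
    return {"mode": "force", "adjustment": correction}

print(finger_command(0.3, 0.4, 0.0))   # still closing: no contact yet
print(finger_command(1.1, 0.01, 1.2))  # contact made: squeeze a bit harder toward 2 N
```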

The robotic grasping system is described in a study published October 24 in IEEE Robotics and Automation Letters. The researchers’ robotic hand used its three different grasping motions on objects without knowing their properties. Target objects included a plastic bottle, a soft ball, a tennis ball, a sponge, a rubber duck, different balloons, a pen, and a tissue pack. The researchers found, for one, that pinching motions required more precision than cylindrical or spherical grasping motions.

“For this approach, the next step is to incorporate visual information from event-based cameras and integrate arm motion with SNNs,” says Tieck. “Additionally, we would like to extend the hand with haptic sensors.”

The long-term goal, he says, is to develop “a system that can perform grasping similar to humans, without intensive planning for contact points or intense stability analysis, and [that is] able to adapt to different objects using visual and haptic feedback.”
 

A Swarm of Cyborg Cockroaches That Lives in Your House

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/robotics/robotics-hardware/swarm-of-cyborg-cockroaches

Digital Nature Group at the University of Tsukuba in Japan is working towards a “post ubiquitous computing era consisting of seamless combination of computational resources and non-computational resources.” By “non-computational resources,” they mean leveraging the natural world, which for better or worse includes insects.

At small scales, the capabilities of insects far exceed the capabilities of robots. I get that. And I get that turning cockroaches into an army of insect cyborgs could be useful in a variety of ways. But what makes me fundamentally uncomfortable is the idea that “in the future, they’ll appear out of nowhere without us recognizing it, fulfilling their tasks and then hiding.” In other words, you’ll have cyborg cockroaches hiding all over your house, all the time.

Disney Research Makes Robotic Gaze Interaction Eerily Lifelike

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/robotics/humanoids/disney-research-robotic-gaze

While it’s not totally clear to what extent human-like robots are better than conventional robots for most applications, one area where I’m personally comfortable with them is entertainment. The folks over at Disney Research, who are all about entertainment, have been working on this sort of thing for a very long time, and some of their animatronic attractions are actually quite impressive.

The next step for Disney is to have its animatronic figures, which currently feature scripted behaviors, perform interactively with visitors. The challenge is that this is where you start to get into potential Uncanny Valley territory, the risk you run whenever you try to create “the illusion of life,” which is exactly what Disney (they explicitly say) is trying to do.

In a paper presented at IROS this month, a team from Disney Research, Caltech, University of Illinois at Urbana-Champaign, and Walt Disney Imagineering is trying to nail that illusion of life with a single, and perhaps most important, social cue: eye gaze.

Video Friday: Attack of the Hexapod Robots

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/robotics/robotics-hardware/video-friday-attack-hexapod-robots

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

IROS 2020 – October 25-25, 2020 – [Online]
ROS World 2020 – November 12, 2020 – [Online]
CYBATHLON 2020 – November 13-14, 2020 – [Online]
ICSR 2020 – November 14-16, 2020 – Golden, Colo., USA

Let us know if you have suggestions for next week, and enjoy today’s videos.


Metal Spheres Swarm Together to Create Freeform Modular Robots

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/robotics/robotics-hardware/freebots-spheres-swarm-robots

Swarms of modular, self-reconfigurable robots have a lot going for them, at least in theory: they’re resilient and easy to scale, since big robots can be made on demand from lots of little robots. One of the trickiest bits about modular robots is figuring out a simple and reliable way of getting them to connect to each other, without having to rely on some kind of dedicated connectivity system.

This week at the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), a research team at the Chinese University of Hong Kong, Shenzhen, led by Tin Lun Lam is presenting a new kind of modular robot that solves this problem by using little robotic vehicles inside of iron spheres that can stick together wherever you need them to.

Dart-Shooting Drone Attacks Trees for Science

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/robotics/drones/dart-shooting-drone

We all know how robots are great at going to places where you can’t (or shouldn’t) send a human. We also know how robots are great at doing repetitive tasks. These characteristics have the potential to make robots ideal for setting up wireless sensor networks in hazardous environments—that is, they could deploy a whole bunch of self-contained sensor nodes that create a network that can monitor a very large area for a very long time.

When it comes to using drones to set up sensor networks, you’ve generally got two options: A drone that just drops sensors on the ground (easy but inaccurate and limited locations), or using a drone with some sort of manipulator on it to stick sensors in specific places (complicated and risky). A third option, under development by researchers at Imperial College London’s Aerial Robotics Lab, provides the accuracy of direct contact with the safety and ease of use of passive dropping by instead using the drone as a launching platform for laser-aimed, sensor-equipped darts. 

New AI Inferencing Records

Post Syndicated from Samuel K. Moore original https://spectrum.ieee.org/tech-talk/robotics/artificial-intelligence/new-ai-inferencing-records

MLPerf, a consortium of AI experts and computing companies, has released a new set of machine learning records. The records were set on a series of benchmarks that measure the speed of inferencing: how quickly an already-trained neural network can accomplish its task with new data. For the first time, benchmarks for mobiles and tablets were contested. According to David Kanter, executive director of MLPerf’s parent organization, a downloadable app is in the works that will allow anyone to test the AI capabilities of their own smartphone or tablet.
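At its core, inference benchmarking means timing an already-trained model on fresh inputs. Here is a bare-bones sketch of that idea using a stock PyTorch model (assuming torch and torchvision are installed); it is not the MLPerf harness, which also fixes accuracy targets, query scenarios, and third-party auditing:

```python
import time
import torch
import torchvision.models as models

# Bare-bones inference timing, not the MLPerf benchmark suite.
model = models.resnet18()             # stand-in for "an already-trained network"
model.eval()

batch = torch.randn(1, 3, 224, 224)   # one "new" input image

with torch.no_grad():
    for _ in range(5):                # warm up before measuring
        model(batch)
    latencies = []
    for _ in range(50):               # time repeated single-image queries
        start = time.perf_counter()
        model(batch)
        latencies.append(time.perf_counter() - start)

latencies.sort()
print(f"median latency: {latencies[len(latencies) // 2] * 1000:.1f} ms")
```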

MLPerf’s goal is to present a fair and straightforward way to compare AI systems. Twenty-three organizations—including Dell, Intel, and Nvidia—submitted a total of 1200 results, which were peer reviewed and subjected to random third-party audits. (Google was conspicuously absent this round.) As with the MLPerf records for training AIs released over the summer, Nvidia was the dominant force, besting what competition there was in all six categories for both datacenter and edge computing systems. Including submissions by partners like Cisco and Fujitsu, 1029 results, or 85 percent of the total for edge and data center categories, used Nvidia chips, according to the company.

“Nvidia outperforms by a wide range on every test,” says Paresh Kharya, senior director of product management, accelerated computing at Nvidia. Nvidia’s A100 GPUs powered its wins in the datacenter categories, while its Xavier was behind the GPU-maker’s edge-computing victories. According to Kharya, on one of the new MLPerf benchmarks, Deep Learning Recommendation Model (DLRM), a single DGX A100 system was the equivalent of 1000 CPU-based servers.

There were four new inferencing benchmarks introduced this year, adding to the two carried over from the previous round:

  • BERT, for Bidirectional Encoder Representations from Transformers, is a natural language processing AI contributed by Google. Given a question input, BERT predicts a suitable answer.
  • DLRM, for Deep Learning Recommendation Model, is a recommender system that is trained to optimize click-through rates. It’s used to recommend items for online shopping and rank search results and social media content. Facebook was the major contributor of the DLRM code.
  • 3D U-Net is used in medical imaging systems to tell which 3D voxels in an MRI scan are part of a tumor and which are healthy tissue. It’s trained on a dataset of brain tumors.
  • RNN-T, for Recurrent Neural Network Transducer, is a speech recognition model. Given a sequence of speech input, it predicts the corresponding text.

In addition to those new metrics, MLPerf put together the first set of benchmarks for mobile devices, which were used to test smartphone and tablet platforms from MediaTek, Qualcomm, and Samsung as well as a notebook from Intel. The new benchmarks included:

  • MobileNetEdgeTPU is an image classification benchmark; image classification is considered the most ubiquitous task in computer vision. It’s representative of how a photo app might be able to pick out the faces of you or your friends.
  • SSD-MobileNetV2, for Single Shot multibox Detection with MobileNetv2, is trained to detect 80 different object categories in input frames with 300×300 resolution. It’s commonly used to identify and track people and objects in photography and live video.
  • DeepLabv3+ MobileNetV2: This is used to understand a scene for things like VR and navigation, and it plays a role in computational photography apps. 
  • MobileBERT is a mobile-optimized variant of the larger natural language processing BERT model that is fine-tuned for question answering. Given a question input, MobileBERT generates an answer.

The benchmarks were run on a purpose-built app that should be available to everyone within months, according to Kanter. “We want something people can put into their hands for newer phones,” he says.

The results released this week were dubbed version 0.7, as the consortium is still ramping up. Version 1.0 is likely to be complete in 2021.

IROS Robotics Conference Is Online Now and Completely Free

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/robotics/robotics-hardware/iros-2020-online

The 2020 International Conference on Intelligent Robots and Systems (IROS) was originally going to be held in Las Vegas this week. Like ICRA last spring, IROS has transitioned to a completely online conference, which is wonderful news: Now everyone everywhere can participate in IROS without having to spend a dime on travel.

IROS officially opened yesterday, and the best news is that registration is entirely free! We’ll take a quick look at what IROS has on offer this year, which includes some stuff that’s brand new to IROS.