Tag Archives: robotics

How Robots Helped Out After the Surfside Condo Collapse

Post Syndicated from Robin R. Murphy original https://spectrum.ieee.org/building-collapse-surfside-robots

Editor’s Note: Along with Robin Murphy, the authors of this article include David Merrick, Justin Adams, Jarrett Broder, Austin Bush, Laura Hart, and Rayne Hawkins. This team is with Florida State University’s Disaster Incident Response Team, which was in Surfside for 24 days at the request of Florida US&R Task Force 1 (Miami Dade Fire Rescue Department).

On June 24, 2021, at 1:25 AM, portions of the 12-story Champlain Towers South condominium in Surfside, Florida, collapsed, killing 98 people and injuring 11, making it the third-largest fatal collapse in US history. The life-saving and mitigation Response Phase, the phase during which responders from local, state, and federal agencies searched for survivors, spanned June 24 to July 7, 2021. This article summarizes what is known about the use of robots at Champlain Towers South and offers insights into challenges for unmanned systems.


Small unmanned aerial systems (drones) were used immediately upon arrival by the Miami Dade Fire Rescue (MDFR) Department to survey the roughly 2.68-acre affected area. Drones, such as the DJI Mavic Enterprise Dual with a spotlight payload and thermal imaging, flew in the dark to determine the scope of the collapse and search for survivors. Regional and state emergency management drone teams were requested later that day to supplement the effort of flying day and night for tactical life-saving operations and to add flights for strategic operations supporting management of the overall response.


View of a Phantom 4 Pro in use for mapping the collapse on July 2, 2021. Two other drones were also in the airspace conducting other missions but not visible. Photo: Robin R. Murphy

The teams brought at least 9 models of rotorcraft drones, including the DJI Mavic 2 Enterprise Dual, Mavic 2 Enterprise Advanced, DJI Mavic 2 Zoom, DJI Mavic Mini, DJI Phantom 4 Pro, DJI Matrice 210, Autel Dragonfish, and Autel EVO II Pro, plus a tethered Fotokite drone. The picture above shows a DJI Phantom 4 Pro in use, with one of the multiple cranes on the site visible. The number of flights for tactical operations was not recorded, but drones were flown for 304 missions for strategic operations alone, making the Surfside collapse the largest and longest recorded use of drones at a disaster, exceeding the records set by Hurricane Harvey (112 missions) and Hurricane Florence (260).

Unmanned ground bomb squad robots were reportedly used on at least two occasions in the standing portion of the structure during the response, once to investigate and document the garage and once on July 9 to hold a repeater for a drone flying in the standing portion of the garage. Note that details about the ground robots are not yet available and there may have been more missions, though not on the order of magnitude of the drone use. Bomb squad robots tend to be too large for use in areas other than the standing portions of the collapse.

We concentrate on the use of drones for tactical and strategic operations, as the authors were directly involved in those operations, and we offer a preliminary analysis of the lessons learned. The full details of the response will not be available for many months due to the nature of an active investigation into the causes of the collapse and the privacy of the victims and their families.

Drone Use for Tactical Operations

Tactical operations were carried out primarily by MDFR, with other drone teams supporting when necessary to meet the workload. Drones were first used by the MDFR drone team, which arrived within minutes of the collapse as part of the escalating calls. The drone effort started with night operations for direct life-saving and mitigation activities. Small DJI Mavic 2 Enterprise Dual drones with thermal camera and spotlight payloads were used for general situation awareness, helping responders understand the extent of the collapse beyond what could be seen from the street side. The built-in thermal imager lacked the resolution to show useful detail, as much of the material was at the same temperature and heat emissions appeared fuzzy. The spotlight with the standard visible-light camera was more effective, though the view was constricted. The drones were also used to look for survivors or trapped victims, help determine safety hazards to responders, and provide task force leaders with overwatch of the responders. During daylight, DJI Mavic Zoom drones were added because of their higher-resolution zoom cameras. When fires started in the rubble, drones streaming video to bucket truck operators were used to help optimize the placement of water. Drones were also used to locate civilians entering the restricted area or flying their own drones to take pictures.

In a novel use of drones for physical interaction, MDFR squads flew drones to attempt to find and pick up items in the standing portion of the structure with immense value to survivors.

As the response evolved, the use of drones was expanded to missions where the drones would fly in close proximity to structures and objects, fly indoors, and physically interact with the environment. For example, drones were used to read license plates to help identify residents, search for pets, and document belongings inside parts of the standing structure for families. In a novel use of drones for physical interaction, MDFR squads flew drones to attempt to find and pick up items in the standing portion of the structure with immense value to survivors. Before the demolition of the standing portion of the tower, MDFR used a drone to remove an American flag that had been placed on the structure during the initial search.

Drone Use for Strategic Operations


An orthomosaic of the collapse constructed from imagery collected by a drone on July 1, 2021.

Strategic operations were carried out by the Disaster Incident Research Team (DIRT) from the Florida State University Center for Disaster Risk Policy. The DIRT team is a state of Florida asset and was requested by Florida Task Force 1 when it was activated to assist later on June 24. FSU supported tactical operations but was solely responsible for collecting and processing imagery for use in managing the response. This data consisted primarily of orthomosaic maps (a single high-resolution image of the collapse created by stitching together individual high-resolution images, as in the image above) and digital elevation maps (created from structure from motion, below).


Digital elevation map constructed from imagery collected by a drone on June 27, 2021. Photo: Robin R. Murphy

These maps were collected every two to four hours during daylight, with FSU flying an average of 15.75 missions per day for the first two weeks of the response. The latest orthomosaic maps were downloaded at the start of a shift by the tactical responders for use as base maps on their mobile devices. In addition, a 3D reconstruction of the state of the collapse on July 4 was flown the afternoon before the standing portion was demolished, shown below.


GeoCam 3D reconstruction of the collapse on July 4, 2021. Photo: Robin R. Murphy

The mapping functions are notable because they require specialized software for data collection and post-processing, and the speed of post-processing relied on wireless connectivity. In order to stitch and fuse images without gaps or major misalignments, dedicated software packages are used to generate flight paths and to autonomously fly and trigger image capture with sufficient coverage of the collapse and overlap between images.
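
As a rough sketch of the geometry such flight-planning packages automate, the example below computes flight-line spacing and photo-trigger distance from altitude, camera field of view, and desired overlap, assuming flat terrain and a downward-facing camera. The function name and the numbers are illustrative assumptions, not settings from any package used at Surfside.

```python
import math

def grid_plan(altitude_m, hfov_deg, vfov_deg, side_overlap, front_overlap):
    """Flight-line spacing and photo-trigger distance for a nadir mapping
    mission, assuming flat terrain and a downward-facing camera."""
    # Ground footprint of a single image at the given altitude.
    footprint_w = 2 * altitude_m * math.tan(math.radians(hfov_deg) / 2)
    footprint_h = 2 * altitude_m * math.tan(math.radians(vfov_deg) / 2)
    # Keep the requested fraction of each image overlapping its neighbors.
    line_spacing = footprint_w * (1 - side_overlap)
    trigger_dist = footprint_h * (1 - front_overlap)
    return line_spacing, trigger_dist

# Illustrative numbers only: 60 m altitude, 70 x 50 degree camera, 70%/80% overlap.
spacing, trigger = grid_plan(60, 70, 50, side_overlap=0.70, front_overlap=0.80)
print(f"Fly lines {spacing:.1f} m apart, trigger a photo every {trigger:.1f} m")
```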

Coordination of Drones on Site

The aerial assets were loosely coordinated through social media. All drone teams and Federal Aviation Administration (FAA) officials shared a WhatsApp group chat managed by MDFR. WhatsApp offered ease of use, compatibility with everyone’s smartphones and mobile devices, and ease of adding pilots. Ease of adding pilots was important because many were not from MDFR and thus would not be in any personnel-oriented coordination system. The pilots did not have physical meetings or briefings as a whole, though the tactical and strategic operations teams did share a common space (nicknamed “Drone Zone”) while the National Institute of Standards and Technology teams worked from a separate staging location. If a pilot was approved by the MDFR drone captain, who served as the “air boss,” they were invited to the WhatsApp group chat and could then begin flying immediately without physically meeting the other pilots.

The teams flew concurrently and independently without rigid, pre-specified altitude or area restrictions. A team would post which area of the collapse they were taking off to fly over and at what altitude, and then post when they landed. The easiest solution was for the pilots to be aware of each others’ drones and adjust their missions, pause, or temporarily defer flights. If a pilot forgot to post, someone would send a teasing chat eliciting a rapid apology.

Incursions by civilian manned and unmanned aircraft in the restricted airspace did occur. If FAA observers or other pilots saw a drone flying that was not accounted for in the chat (for example, five drones visible over the area but only four posted), or if a drone pilot saw a drone in an unexpected area, they would post a query asking if someone had forgotten to post or update a flight. If the drone remained unaccounted for, the FAA would assume that a civilian drone had violated the temporary flight restrictions and search the surrounding area for the offending pilot.

Preliminary Lessons Learned

While the drone data and performance are still being analyzed, some lessons learned have emerged that may be of value to the robotics, AI, and engineering communities.

Tactical and strategic operations during the response phase favored small, inexpensive, easy-to-carry platforms with cameras supporting coarse structure from motion rather than larger, more expensive lidar systems. The added accuracy of lidar was not needed for those missions, though the greater accuracy and resolution of such systems were valuable for the forensic structural analysis. For tactical and strategic operations, the benefits of lidar were not worth the capital costs and logistical burden. Indeed, general-purpose consumer/prosumer drones that could fly day or night, indoors and outdoors, and for both mapping and first-person-view missions were highly preferred over specialized drones. The reliability of a drone was another major factor in choosing a specific model to field, again favoring consumer/prosumer drones, as they typically have hundreds of thousands of hours more flight time than specialized or novel drones. Tethered drones offer some advantages for overwatch, but many tactical missions require a great deal of mobility, and strategic mapping necessitates flying directly over the entire area being mapped.

While small, inexpensive general-purpose drones offered many advantages, they could be further improved for flying at night and indoors. A wider area of lighting would be helpful. A 360-degree (spherical) area of coverage for obstacle avoidance would also be useful for working indoors, at low altitudes, in close proximity to irregular work envelopes, and near people, especially at night. Systems such as the Flyability ELIOS 2 are designed to fly in narrow and highly cluttered indoor areas, but no models were available for the immediate response. Drone camera systems need to be able to look straight up to inspect the underside of structures or ceilings. Mechanisms for determining the accurate GPS location of a pixel in an image, not just the GPS location of the drone, are becoming increasingly desirable.
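
To make that last point concrete, here is a minimal sketch of pixel-level georeferencing in the simplest possible case: a nadir-pointing camera over flat terrain, a pinhole camera model, and no lens distortion or gimbal tilt. The function and its parameters are illustrative assumptions; fielded systems fuse terrain models and the full camera pose.

```python
import math

def pixel_to_ground(lat, lon, alt_m, yaw_deg, px, py, img_w, img_h, focal_px):
    """Rough ground coordinates of an image pixel for a nadir-pointing camera
    over flat terrain (pinhole model, no distortion, no gimbal tilt)."""
    # Pixel offset from the image center, scaled to meters on the ground.
    right_m = (px - img_w / 2) * alt_m / focal_px    # toward image right
    fwd_m = -(py - img_h / 2) * alt_m / focal_px     # toward image top (heading)
    # Rotate the camera-frame offsets into north/east using the drone heading.
    yaw = math.radians(yaw_deg)
    north = fwd_m * math.cos(yaw) - right_m * math.sin(yaw)
    east = fwd_m * math.sin(yaw) + right_m * math.cos(yaw)
    # Small-offset conversion from meters to degrees of latitude/longitude.
    return (lat + north / 111_320.0,
            lon + east / (111_320.0 * math.cos(math.radians(lat))))
```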

Other technologies could be of benefit to the enterprise but face challenges. Computer vision/machine learning (CV/ML) for searching for victims in rubble is often mentioned as a possible goal, but a search for victims who are not on the surface of the collapse is not visually directed. The portions of victims that are not covered by rubble are usually camouflaged with gray dust, so searches tend to favor canines using scent. Another challenge for CV/ML methods is the lack of access to training data. Privacy and ethical concerns pose barriers to the research community gaining access to imagery with victims in the rubble, and simulations may not have sufficient fidelity.

The collapse supplies motivation for how informatics, human-computer interaction, and human-robot interaction research can contribute to the effective use of robots during a disaster, and it illustrates that a response does not follow a strictly centralized, hierarchical command structure and that the agencies and members of the response are not known in advance. Proposed systems must be flexible, robust, and easy to use. Furthermore, it is not clear that responders will accept a totally new software app versus making do with a general-purpose app such as WhatsApp that the majority routinely use for other purposes.

The biggest lesson learned is that robots are helpful and warrant more investment, particularly as many US states are proposing to terminate purchases of the very models of drones that were so effective over cybersecurity concerns.

However, the biggest lesson learned is that robots are helpful and warrant more investment, particularly as many US states are proposing to terminate purchases of the very models of drones that were so effective over cybersecurity concerns. There remains much work to be done by researchers, manufacturers, and emergency management to make these critical technologies more useful for extreme environments. Our current work is focusing on creating open source datasets and documentation and conducting a more thorough analysis to accelerate the process.

Value of Drones

The pervasive use of the drones indicates their implicit value in responding to, and documenting, the disaster. It is difficult to quantify the impact of drones, similar to the difficulties in quantifying the impact of a fire truck on firefighting or the use of mobile devices in general. Simply put, drones would not have been used beyond a few flights if they were not valuable.

The impact of the drones on tactical operations was immediate, as upon arrival MDFR flew drones to assess the extent of the collapse. Lighting on fire trucks primarily illuminated the street side of the standing portion of the building, while the drones, unrestricted by streets or debris, quickly expanded situation awareness of the disaster. The drones were used to optimize placement of water to suppress the fires in the debris. The impact of the use of drones for other tactical activities is harder to quantify, but the frequent flights and pilots remaining on standby 24/7 indicate their value.

The impact of the drones on strategic operations was also considerable. The data collected by the drones and then processed into 2D maps and 3D models became a critical part of the US&R operations, as well as one part of the nascent investigation into why the building failed. During initial operations, DIRT provided 2D maps to the US&R teams four times per day. These maps became the base layers for the mobile apps used on the pile to mark the locations of human remains, structural members of the building, personal effects, and other identifiable information. Updated orthophotos were critical to the accuracy of these reports. The apps running on mobile devices suffered from GPS accuracy issues, often with errors as high as ten meters. By having base imagery that was only hours old, mobile app users were able to ‘drag the pin’ on the mobile app to a more accurate report location on the pile, all by visualizing where they were standing compared to fresh UAS imagery. Without this capability, none of the GPS field data would be of use to US&R or investigators looking at why the structural collapse occurred. In addition to serving as a base layer on mobile applications, the updated map imagery was used in all tactical, operational, and strategic dashboards by the individual US&R teams as well as the FEMA US&R Incident Support Team (IST) on site to assist in the management of the incident.

Aside from the 2D maps and orthophotos, 3D models were created from the drone data and used by structural experts to plan operations, including identifying areas with high probabilities of finding survivors or victims. Three-dimensional data created through post-processing also supported the demand for up-to-date volumetric estimates: how much material was being removed from the pile, and how much remained. These metrics provided clear indications of progress throughout the operations.
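
As a minimal sketch of how such volumetric estimates can be computed, assuming two co-registered digital elevation models sampled on the same grid (the variable names and the 0.1 m cell size below are illustrative, not values from the response):

```python
import numpy as np

def volume_change(dem_before, dem_after, cell_size_m):
    """Estimate material removed and added between two aligned digital
    elevation models (2D arrays of elevations in meters on the same grid)."""
    diff = dem_before - dem_after            # positive where material was removed
    cell_area = cell_size_m ** 2             # square meters per grid cell
    removed = np.nansum(np.clip(diff, 0, None)) * cell_area
    added = np.nansum(np.clip(-diff, 0, None)) * cell_area
    return removed, added                    # cubic meters

# Illustrative usage with two DEM tiles from successive mapping flights:
# removed, added = volume_change(dem_morning, dem_evening, cell_size_m=0.1)
```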

Acknowledgments

Portions of this work were supported by NSF grants IIS-1945105 and CMMI-2140451. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.

The authors express their sincere condolences to the families of the victims.

Nothing Can Keep This Drone Down

It uses a beetle-inspired set of wings to self-right itself

Post Syndicated from IEEE Spectrum original https://spectrum.ieee.org/tech-talk/robotics/drones/nothing-can-keep-this-drone-down


When life knocks you down, you’ve got to get back up. Ladybugs take this advice seriously in the most literal sense. If caught on their backs, the insects are able to use their tough exterior wings, called elytra (of late made famous in the game Minecraft), to self-right themselves in just a fraction of a second.

Inspired by this approach, researchers have created self-righting drones with artificial elytra. Simulations and experiments show that the artificial elytra can not only help salvage fixed-wing drones from compromising positions, but also improve the aerodynamics of the vehicles during flight. The results are described in a study published July 9 in IEEE Robotics and Automation Letters.

Charalampos Vourtsis is a doctoral assistant at the Laboratory of Intelligent Systems, Ecole Polytechnique Federale de Lausanne in Switzerland who co-created the new design. He notes that beetles, including ladybugs, have existed for tens of millions of years. “Over that time, they have developed several survival mechanisms that we found to be a source of inspiration for applications in modern robotics,” he says.

His team was particularly intrigued by beetles’ elytra, which for ladybugs are their famous black-spotted, red exterior wings. Underneath the elytra are the hind wings, the semi-transparent appendages that are actually used for flight.

When stuck on their backs, ladybugs use their elytra to stabilize themselves, and then thrust their legs or hind wings in order to pitch over and self-right. Vourtsis’ team designed Micro Aerial Vehicles (MAVs) that use a similar technique, but with actuators to provide the self-righting force. “Similar to the insect, the artificial elytra feature degrees of freedom that allow them to reorient the vehicle if it flips over or lands upside down,” explains Vourtsis.

The researchers created and tested artificial elytra of different lengths (11, 14 and 17 centimeters) and torques to determine the most effective combination for self-righting a fixed-wing drone. While torque had little impact on performance, the length of elytra was found to be influential.

On a flat, hard surface, the shorter elytra lengths yielded mixed results. However, the longer length was associated with a perfect success rate. The longer elytra were then tested on different inclines of 10°, 20° and 30°, and at different orientations. The drones used the elytra to self-right themselves in all scenarios, except for one position at the steepest incline.  

The design was also tested on seven different terrains: pavement, coarse sand, fine sand, rocks, shells, wood chips, and grass. The drones were able to self-right with a perfect success rate across all terrains, with the exception of grass and fine sand. Vourtsis notes that the current design was made from widely available materials and a simple scale model of the beetle’s elytra—but further optimization may help the drones self-right on these more difficult terrains.

As an added bonus, the elytra were found to add non-negligible lift during flight, which offsets their weight.  

Vourtsis says his team hopes to benefit from other design features of the beetles’ elytra. “We are currently investigating elytra for protecting folding wings when the drone moves on the ground among bushes, stones, and other obstacles, just like beetles do,” explains Vourtsis. “That would enable drones to fly long distances with large, unfolded wings, and safely land and locomote in a compact format in narrow spaces.”

12 Robotics Teams Will Hunt For (Virtual) Subterranean Artifacts

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/robotics/robotics-software/twelve-teams-to-compete-in-darpa-subt-virtual-finals

Last week, DARPA announced the twelve teams who will be competing in the Virtual Track of the DARPA Subterranean Challenge Finals, scheduled to take place in September in Louisville, KY. The robots and the environment may be virtual, but the prize money is very real, with $1.5 million of DARPA cash on the table for the teams who are able to find the most subterranean artifacts in the shortest amount of time.

New AI-Based Augmented Innovation Tool Promises to Transform Engineer Problem Solving

Post Syndicated from IP Innovation original https://spectrum.ieee.org/robotics/artificial-intelligence/new-aibased-augmented-innovation-tool-promises-to-transform-engineer-problem-solving

You’re an engineer trying to work out a solution to a complicated problem.  You have been at this problem for the last three days. You’ve been leveraging your expertise in innovative methods and other disciplined processes, but you still haven’t gotten to where you need to be.

Imagine if you could forgo the last thirty hours of work and instead reach a novel solution in just 30 minutes. In addition to saving yourself nearly a week of time, you would have not only arrived at a solution to your vexing engineering issue but also prepared all the necessary documentation to apply for intellectual property (IP) protection for it.

This is now what’s available from IP.com with its latest suite of workflow solutions, dubbed IQ Ideas Plus™. IQ Ideas Plus makes it easy for inventors to submit, refine, and collaborate on ideas that are then delivered to the IP team for review. This new workflow solution is built on IP.com’s AI natural language processing engine, Semantic Gist™, which the company has been refining since 1994. The IQ Ideas Plus portfolio was introduced earlier this year in the U.S. and has started rolling out worldwide.

“The great thing about Semantic Gist is that it is set up to do a true semantic search,” explained Dr. William Fowlkes, VP Analytics and Workflow Solutions at IP.com and developer of the IQ Ideas Plus solution. “It works off of your description. It does not require you to use arcane codes to define subject matters, to use keywords, or rely on complex Boolean constructs to find the key technology that you’re looking for.”

The program leverages AI to analyze your words: the description of your problem is turned into a query. The AI engine then analyzes the query for its technical content and, using essentially cosine-similarity techniques and vector math, searches eight or nine million patents, from any field, for ones that are similar to your problem.
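
For readers unfamiliar with this style of retrieval, the sketch below shows the core of a cosine-similarity search over precomputed document embeddings. It is a generic illustration of the technique, not IP.com’s implementation; the function and variable names are assumptions.

```python
import numpy as np

def top_matches(problem_vec, patent_vecs, k=10):
    """Rank patent documents by cosine similarity to a problem description.
    `problem_vec` is the embedding of the engineer's text; `patent_vecs` is
    an (N, d) matrix of precomputed patent embeddings."""
    p = problem_vec / np.linalg.norm(problem_vec)
    P = patent_vecs / np.linalg.norm(patent_vecs, axis=1, keepdims=True)
    scores = P @ p                        # cosine similarity for every patent
    order = np.argsort(-scores)[:k]       # indices of the k most similar
    return order, scores[order]
```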

“Even patents that look like they’re in a different field sometimes have some pieces, some key technology nuggets, that are actually similar to your problem and it will find those,” added Fowlkes.

In a typical session, you might spend 10 – 15 minutes describing your problem on the IQ Ideas Plus template, which includes root cause analysis, when you need to fix a specific problem, or system improvement analysis, when you are asked to develop the next big thing for an existing product. The template lists those elements that you need to include so that you describe all the relevant factors and how they work together. 

The template involves a graphical user interface (GUI) that starts by asking you to name your new analysis and to describe the type of analysis you’ll be conducting: “Solve a Problem”, or “Improve a System”. 

After you’ve chosen to ‘Solve a Problem’, for example, you are given a drop-down menu that asks what field the problem resides in, such as mechanical engineering or electrical engineering. The next drop-down menu then asks what sub-group the field belongs to, such as aerospace. After you’ve chosen your fields, you write a fairly simple description of your problem and ask for a solution (How do I fix…?). 

You then press the button, and three to five seconds later you’re provided two lists: “Functional Concepts” and “Inventive Principles”. One can think of the Functional Concepts list as a thorough catalogue of all the prior art in this area. What really distinguishes the IQ Ideas Plus process is the “Inventive Principles” list, which consists of abstractions from previous patents or patent applications. 

The semantic engine returns ordered lists with the most relevant results at the top. Of course, as you scroll down through the list, after the first 10 to 20, the results become less and less relevant.

What will often happen is that as you work through both the “Functional Concepts” and “Inventive Principles” lists, you begin to realize that you’ve omitted elements from your description, or that your description should go in a slightly different direction based on the results. While this makes the process slightly iterative, each iteration is just as fast as the first. In fact, it’s faster because you no longer need to spend 10 minutes writing down your changes. All along the process, there’s a workbook, similar to an electronic lab notebook, for you to jot down your ideas. 

As you jot down your ideas based on the recommendations from the AI, it will offer you the ability to run a concept evaluation, telling you whether the concept is “marginally acceptable” or “good”, for example.  You can use this concept evaluation tool to understand whether you have written your problem and solution in a way that it’s unique or novel, or whether you should consider going back to the drawing board to keep iterating on it. 

When you get a high-scoring idea, the next module, called “Inventor’s Aide,” helps you write a very clear invention disclosure. In many organizations, drafting and submitting disclosures can be a pain point for engineers. Inventor’s Aide makes the process fast and easy by providing suggestions to make the language clear and concise.

With the IQ Ideas Plus suite of tools, all of the paperwork (i.e., a list of related or similar patents, companies active in the field, a full technology landscape review, etc.) is included as attachments to your invention disclosure so that when it gets sent to the patent committee, they can look at the idea and know what art is already there and what technologies are in the space. They can then vet your idea, which has been delivered in a clear, concise manner with no jargon, so they understand the idea you have written. 

The cycle time between a patent review committee looking at your disclosure and you getting it back can sometimes take weeks.  IQ Ideas Plus shortens the cycle time, drives efficiencies and reduces a lot of frustration on both ends of the equation.  Moving more complete disclosures through the system improves the grant rate of the applications because the tool has helped document necessary legwork during the process. 

“IQ Ideas does a great job of both helping you to find novel solutions using the brainstorming modules, and then analyzing those new ideas using the Inventor’s Aide module,” Fowlkes said.

Fowlkes argues that this really benefits both sides of the invention process: product development engineers and IP teams. For the engineers, filing invention disclosures is a very burdensome task. For the patent review committees or IP counsel, getting clear, concise disclosures, free of jargon and acronyms and complete with documentation of prior art attached, makes the review faster and more efficient.

Professor Greg Gdowski, Executive Director of the Center for Medical Technology & Innovation at the University of Rochester, deployed IQ Ideas Plus to his students earlier this year. According to Gdowski, IQ Ideas Plus is very valuable.

“We train our students in carrying out technology landscapes on unmet clinical needs that are observed in our surgical operating rooms. Despite our best efforts, the students always miss technologies that are out there in the form of patents or patent applications. IQ Ideas Plus not only helped us brainstorm additional solutions, but it also revealed existing technologies that would have complicated the solution space had they not been identified.”

Gdowski said another important advantage of using IQ Ideas Plus was that it helped the team understand the distribution of patents and companies working on technology related to a specific unmet clinical need (or problem).   “IQ Ideas Plus gives engineers a new lens by which to evaluate solutions to problems and to execute intellectual property landscapes,” Gdowski added.

IQ Ideas Plus enables faster idea generation and collaboration and more complete documents for submission and review, so the best ideas surface sooner and great ideas get to market faster.

Footnote:
Greg Gdowski is the IEEE Region 1 Director-Elect Candidate
Dr. William Fowlkes is an IEEE Senior Member

AI and Robots Are a Minefield of Cognitive Biases

Post Syndicated from Sangbae Kim original https://spectrum.ieee.org/automaton/robotics/robotics-software/humans-cognitive-biases-facing-ai

This is a guest post. The views expressed here are solely those of the author and do not represent positions of IEEE Spectrum or the IEEE.

Most people associate artificial intelligence with robots as an inseparable pair. In fact, the term “artificial intelligence” is rarely used in research labs. Terminology specific to certain kinds of AI and other smart technologies is more relevant. Whenever I’m asked the question “Is this robot operated by AI?”, I hesitate to answer—wondering whether it would be appropriate to call the algorithms we develop “artificial intelligence.”

First used by scientists such as John McCarthy and Marvin Minsky in the 1950s, and frequently appearing in sci-fi novels or films for decades, AI is now being used in smartphone virtual assistants and autonomous vehicle algorithms. Both historically and today, AI can mean many different things—which can cause confusion.

However, people often express the preconception that AI is an artificially realized version of human intelligence. And that preconception might come from our cognitive bias as human beings. 

We judge robots’ or AI’s tasks in comparison to humans

If you happened to follow this news in 2016, how did you feel when AlphaGo, an AI developed by DeepMind, defeated 9-dan Go player Lee Sedol? You may have been surprised or terrified, thinking that AI had surpassed the ability of geniuses. Still, winning a game with an exponential number of possible moves like Go only means that AI has exceeded a very limited part of human intelligence. The same goes for IBM’s AI, Watson, which competed on ‘Jeopardy!’, the television quiz show.

I believe many were impressed to see the Mini Cheetah, developed in my MIT Biomimetic Robotics Laboratory, perform a backflip. While jumping backwards and landing on the ground is very dynamic, eye-catching, and, of course, difficult for humans, the algorithm for that particular motion is incredibly simple compared to one that enables stable walking, which requires much more complex feedback loops. Achieving robot tasks that are seemingly easy for us is often extremely difficult and complicated. This gap occurs because we tend to think of a task’s difficulty based on human standards.

We tend to generalize AI functionality after watching a single robot demonstration. When we see someone on the street doing backflips, we tend to assume this person would be good at walking and running, and also be flexible and athletic enough to be good at other sports. Very likely, such judgement about this person would not be wrong.

However, can we also apply this judgement to robots? It’s easy for us to generalize and determine AI performance based on an observation of a specific robot motion or function, just as we do with humans. By watching a video of a robot hand solving a Rubik’s Cube at OpenAI, an AI research lab, we think that the AI can perform all other simpler tasks because it can perform such a complex one. We overlook the fact that this AI’s neural network was trained for only a limited type of task: solving the Rubik’s Cube in that configuration. If the situation changes—for example, holding the cube upside down while manipulating it—the algorithm does not work as well as might be expected.

Unlike AI, humans can combine individual skills and apply them to multiple complicated tasks. Once we learn how to solve a Rubik’s Cube, we can quickly work on the cube even when we’re told to hold it upside down, though it may feel strange at first. Human intelligence can naturally combine the objectives of not dropping the cube and solving the cube. Most robot algorithms will require new data or reprogramming to do so. A person who can spread jam on bread with a spoon can do the same using a fork. It is obvious. We understand the concept of “spreading” jam, and can quickly get used to using a completely different tool. Also, while autonomous vehicles require actual data for each situation, human drivers can make rational decisions based on pre-learned concepts to respond to countless situations. These examples show one characteristic of human intelligence in stark contrast to robot algorithms, which cannot perform tasks with insufficient data.

Mammals have continuously been evolving for more than 65 million years. The entire time humans spent on learning math, using languages, and playing games would sum up to a mere 10,000 years. In other words, humanity spent a tremendous amount of time developing abilities directly related to survival, such as walking, running, and using our hands. Therefore, it may not be surprising that computers can compute much faster than humans, as they were developed for this purpose in the first place. Likewise, it is natural that computers cannot easily obtain the ability to freely use hands and feet for various purposes as humans do. These skills have been attained through evolution for over 10 million years.

This is why it is unreasonable to compare robot or AI performance from demonstrations to that of an animal or human’s abilities. It would be rash to believe that robot technologies involving walking and running like animals are complete, while watching videos of the Cheetah robot running across fields at MIT and leaping over obstacles. Numerous robot demonstrations still rely on algorithms set for specialized tasks in bounded situations. There is a tendency, in fact, for researchers to select demonstrations that seem difficult, as it can produce a strong impression. However, this level of difficulty is from the human perspective, which may be irrelevant to the actual algorithm performance.

Humans are easily influenced by instantaneous and reflective perception before any logical thoughts. And this cognitive bias is strengthened when the subject is very complicated and difficult to analyze logically—for example, a robot that uses machine learning. 

So where does our human cognitive bias come from? I believe it comes from our psychological tendency to subconsciously anthropomorphize the subjects we see. Humans have evolved as social animals, probably developing the ability to understand and empathize with each other in the process. Our tendency to anthropomorphize subjects would have come from the same evolutionary process. People tend to use the expression “teaching robots” when they refer to programming algorithms. Nevertheless, we are used to using anthropomorphized expressions. As the 18th century philosopher David Hume said, “There is a universal tendency among mankind to conceive all beings like themselves. We find human faces in the moon, armies in the clouds.”

Of course, we not only anthropomorphize subjects’ appearance but also their state of mind. For example, when Boston Dynamics released a video of its engineers kicking a robot, many viewers reacted by saying “this is cruel,” and that they “pity the robot.” A comment saying, “one day, robots will take revenge on that engineer” received likes. In reality, the engineer was simply testing the robot’s balancing algorithm. However, before any thought process can comprehend the situation, the aggressive motion of kicking combined with the struggling of the animal-like robot is instantaneously transmitted to our brains, leaving a strong impression. Such instantaneous anthropomorphism has a deep effect on our cognitive process.

Humans process information qualitatively, and computers, quantitatively

Looking around, our daily lives are filled with algorithms, as can be seen by the machines and services that run on them. All algorithms operate on numbers. We use terms such as “objective function,” which is a numerical function that represents a certain objective. Many algorithms have the sole purpose of reaching the maximum or minimum value of this function, and an algorithm’s characteristics differ based on how it achieves this.

The goal of a task such as winning a game of Go or chess is relatively easy to quantify. The easier quantification is, the better the algorithms work. On the contrary, humans often make decisions without quantitative thinking.
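
As a toy illustration of what “reaching the minimum of an objective function” means in code, here is a sketch of gradient descent on a one-dimensional objective; the target value and step size are arbitrary choices for the example.

```python
def objective(x):
    """Toy objective function: squared distance from a target value of 3."""
    return (x - 3.0) ** 2

def gradient(x):
    return 2.0 * (x - 3.0)

# Gradient descent: repeatedly step against the gradient to drive the
# objective toward its minimum, the only notion of "success" the algorithm has.
x = 0.0
for _ in range(100):
    x -= 0.1 * gradient(x)
print(x, objective(x))   # x approaches 3.0, the objective approaches 0
```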

As an example, consider cleaning a room. The way we clean differs subtly from day to day, depending on the situation, depending on whose room it is, and depending on how one feels. Were we trying to maximize a certain function in this process? We did no such thing. The act of cleaning has been done with an abstract objective of “clean enough.” Besides, the standard for how much is “enough” changes easily. This standard may be different among people, causing conflicts particularly among family members or roommates. 

There are many other examples. When you wash your face every day, which quantitative indicators do you intend to maximize with your hand movements? How hard do you rub? When choosing what to wear? When choosing what to have for dinner? When choosing which dish to wash first? The list goes on. We are used to making decisions that are good enough by putting together information we already have. However, we often do not check whether every single decision is optimized. Most of the time, it is impossible to know because we would have to satisfy numerous contradicting indicators with limited data. When selecting groceries with a friend at the store, we cannot each quantify standards for groceries and make a decision based on these numerical values. Usually, when one picks something out, the other will either say “OK!” or suggest another option. This is very different from saying this vegetable “is the optimal choice!” It is more like saying “this is good enough” 

This operational difference between people and algorithms may cause trouble when designing work or services we expect robots to perform. This is because while algorithms perform tasks based on quantitative values, humans’ satisfaction, the outcome of the task, is difficult to quantify completely. It is not easy to quantify the goal of a task that must adapt to individual preferences or changing circumstances, like the aforementioned room cleaning or dishwashing tasks. That is, to coexist with humans, robots may have to evolve not to optimize particular functions, but to achieve “good enough.” Of course, the latter is much more difficult to achieve robustly in real-life situations where you need to manage so many conflicting objectives and qualitative constraints.

Actually, we do not know what we are doing

Try to recall the most recent meal you had before reading this. Can you remember what you had? Then, can you also remember the process of chewing and swallowing the food? Do you know what exactly your tongue was doing at that very moment? Our tongue does so many things for us. It helps us put food in our mouths, distribute the food between our teeth, swallow the finely chewed pieces, or even send large pieces back toward our teeth if needed. We can naturally do all of this, even while talking to a friend, with the tongue also in charge of pronunciation. How much do our conscious decisions contribute to the movements of our tongues that accomplish so many complex tasks simultaneously? It may seem like we are moving our tongues as we want, but in fact, there are more moments when the tongue is moving automatically, taking high-level commands from our consciousness. This is why we cannot remember the detailed movements of our tongues during a meal. We know little about their movement in the first place.

We may assume that our hands are the most consciously controllable organ, but many hand movements also happen automatically and unconsciously, or subconsciously at most. For those who disagree, try putting something like keys in your pocket and taking them back out. In that short moment, countless micromanipulations are instantly and seamlessly coordinated to complete the task. We often cannot perceive each action separately. We do not even know what units we should divide them into, so we collectively express them with abstract words such as organize, wash, apply, rub, and wipe. These verbs are qualitatively defined. They often refer to the aggregate of fine movements and manipulations, whose composition changes depending on the situation. Of course, it is easy even for children to understand and think about this concept, but from the perspective of algorithm development, these words are endlessly vague and abstract.

Let’s try to teach someone how to make a sandwich by spreading peanut butter on bread. We can show how this is done and explain it with a few simple words. Now let’s assume a slightly different situation. Say there is an alien who uses the same language as us but knows nothing about human civilization or culture. (I know this assumption is already contradictory, but please bear with me.) Can we explain over the phone how to make a peanut butter sandwich? We will probably get stuck trying to explain how to scoop peanut butter out of the jar. Even grasping the slice of bread is not so simple. We have to grasp the bread firmly enough to spread the peanut butter, but not so hard as to ruin the shape of the soft bread. At the same time, we should not drop the bread either. It is easy for us to think of how to grasp the bread, but it will not be easy to express this through speech or text, let alone in a function. Even if it is a human who is learning a task, can we learn a carpenter’s work over the phone? Can we precisely correct tennis or golf postures over the phone? It is difficult to discern to what extent the details we see are done either consciously or unconsciously. 

My point is that not everything we do with our hands and feet can directly be expressed with our language. Things that happen in between successive actions often automatically occur unconsciously, and thus we explain our actions in a much simpler way than how they actually take place. This is why our actions seem very simple, and why we forget how incredible they really are. The limitations of expression often lead to underestimation of actual complexity. We should recognize the fact that difficulty of language depiction can hinder research progress in fields where words are not well developed.

Until recently, AI has been practically applied in information services related to data processing. Some prominent examples today include voice recognition and facial recognition. Now, we are entering a new era of AI that can effectively perform physical services in our midst. That is, the time is coming in which automation of complex physical tasks becomes imperative.

Particularly, our increasingly aging society poses a huge challenge. Shortage of labor is no longer a vague social problem. It is urgent that we discuss how to develop technologies that augment humans’ capability, allowing us to focus on more valuable work and pursue lives uniquely human. This is why not only engineers but also members of society from various fields should improve their understanding of AI and unconscious cognitive biases. It is easy to misunderstand artificial intelligence, as noted above, because it is substantively unlike human intelligence.

Things that are very natural among humans may become cognitive biases when we turn to AI and robots. Without a clear understanding of our cognitive biases, we cannot set the appropriate directions for technology research, application, and policy. For productive development as a scientific community, we need to pay keen attention to our cognition and engage in deliberate debate as we promote appropriate development and applications of technology.

Sangbae Kim leads the Biomimetic Robotics Laboratory at MIT. The preceding is an adaptation of a blog post Kim published in June for Naver Labs.

Video Friday: Fluidic Fingers

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/robotics/robotics-hardware/video-friday-fluidic-fingers

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

Humanoids 2020 – July 19-21, 2021 – [Online Event]
RO-MAN 2021 – August 8-12, 2021 – [Online Event]
DARPA SubT Finals – September 21-23, 2021 – Louisville, KY, USA
WeRobot 2021 – September 23-25, 2021 – Coral Gables, FL, USA
IROS 2021 – September 27-October 1, 2021 – [Online Event]
ROSCon 2021 – October 21-23, 2021 – New Orleans, LA, USA

Let us know if you have suggestions for next week, and enjoy today’s videos.


Jump-start Your Electric Motor Designs with Ansys Motor-CAD

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/whitepaper/jumpstart-your-electric-motor-designs-with-ansys-motorcad


Today’s electric motor design requires multiphysics analysis across a wide torque and speed operating range to accommodate rapid development cycles and system integration. Ansys Motor-CAD is accelerating this work-in-progress. Try Ansys Motor-CAD for free for 30 days and let us show you how we can help lower product development costs and reduce time to market today!

Dextrous Robotics Wants To Move Boxes With Chopsticks

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/robotics/industrial-robots/dexterous-robotics-develops-chopstick-manipulation-for-boxes

Hype aside, there aren’t necessarily all that many areas where robots have the potential to step into an existing workflow and immediately provide a substantial amount of value. But one of the areas that we have seen several robotics companies jump into recently is box manipulation—specifically, using robots to unload boxes from the back of a truck, ideally significantly faster than a human. This is a good task for robots because it plays to their strengths: the environment is semi-structured and usually predictable; speed, power, and precision are all valued highly; and it’s not a job that humans are particularly interested in or designed for.

One of the more novel approaches to this task comes from Dextrous Robotics, a Memphis, Tennessee-based startup led by Evan Drumwright. Drumwright was a professor at GWU before spending a few years at the Toyota Research Institute and then co-founding Dextrous in 2019 with an ex-student of his, Sam Zapolsky. The approach that they’ve come up with is to do box manipulation without any sort of suction, or really any sort of grippers at all. Instead, they’re using what can best be described as a pair of moving arms, each gripping a robotic chopstick.

Community stories: Avye

Post Syndicated from Katie Gouskos original https://www.raspberrypi.org/blog/community-stories-avye-robotics-girls-tech/

We’re excited to share another incredible story from the community — the second in our new series of inspirational short films that celebrate young tech creators across the world.

Avye discovered robotics at her local CoderDojo and is on a mission to get more girls like her into tech.

These stories showcase some of the wonderful things that young people are empowered to do when they learn how to create with technology. We hope that they will inspire many more young people to get creative with technology too!

Meet Avye

This time, you will meet an accomplished, young community member who is on a quest to encourage more girls to join her and get into digital making.

Help us celebrate Avye by liking and sharing her story on Twitter, Linkedin, or Facebook!

For as long as she can remember, Avye (13) has enjoyed creating things. It was at her local CoderDojo that seven-year-old Avye was introduced to the world of robotics. Avye’s second-ever robot, the Raspberry Pi–powered Voice O’Tronik Bot, went on to win the Hardware category at our Coolest Projects UK event in 2018.

Avye showcased her Raspberry Pi–powered Voice O’Tronik Bot at Coolest Projects UK in 2018.

Coding and digital making have become an integral part of Avye’s life, and she wants to help other girls discover these skills too. She says, “I believe that it’s important for girls and women to see and be aware of ordinary girls and women doing cool things in the STEM world.” Avye started running her own workshops for girls in her community and in 2018 founded Girls Into Coding. She has now teamed up with her mum, Helene, who is committed to helping drive the Girls Into Coding mission forwards.

I want to get other girls like me interested in tech.

Avye

Avye has received multiple awards to celebrate her achievements, including the Princess Diana Award and Legacy Award in 2019. Most recently, in 2020, Avye won the TechWomen100 Award, the Women in Tech’s Aspiring Teen Award, and the FDM Everywoman in Tech Award!

We cannot wait to see what the future has in store for her. Help us celebrate Avye and inspire others by liking and sharing her story on Twitter, Linkedin, or Facebook!

The post Community stories: Avye appeared first on Raspberry Pi.

Video Friday: Walker X

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/robotics/robotics-hardware/video-friday-walker-x

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

RSS 2021 – July 12-16, 2021 – [Online Event]
Humanoids 2020 – July 19-21, 2021 – [Online Event]
RO-MAN 2021 – August 8-12, 2021 – [Online Event]
DARPA SubT Finals – September 21-23, 2021 – Louisville, KY, USA
WeRobot 2021 – September 23-25, 2021 – Coral Gables, FL, USA
IROS 2021 – September 27-October 1, 2021 – [Online Event]
ROSCon 2021 – October 21-23, 2021 – New Orleans, LA, USA

Let us know if you have suggestions for next week, and enjoy today’s videos.


Video Friday: Spot Meets BTS

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/robotics/robotics-hardware/video-friday-spot-meets-bts

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

RSS 2021 – July 12-16, 2021 – [Online Event]
Humanoids 2020 – July 19-21, 2021 – [Online Event]
RO-MAN 2021 – August 8-12, 2021 – [Online Event]
DARPA SubT Finals – September 21-23, 2021 – Louisville, KY, USA
WeRobot 2021 – September 23-25, 2021 – Coral Gables, FL, USA
IROS 2021 – September 27-October 1, 2021 – [Online Event]
ROSCon 2021 – October 21-23, 2021 – New Orleans, LA, USA

Let us know if you have suggestions for next week, and enjoy today’s videos.


Zebra Technologies To Acquire Fetch Robotics for $305 Million

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/robotics/industrial-robots/zebra-technologies-acquire-fetch-robotics

A company called Zebra Technologies announced this morning that it intends to acquire Fetch Robotics for an impressive $305 million.

Fetch is best known for its autonomous mobile robots (AMRs) for warehouses and boasts “the largest portfolio of AMRs in the industry,” and we’re particular fans of its one-armed mobile manipulator for research. Zebra, meanwhile, does stuff with barcodes (get it?), and has been actively investing in robotics companies with a goal of increasing its footprint in the intelligent industrial automation space.

According to the press release, the acquisition “will provide an innovative offering that drives greater efficiencies and higher ROI through better orchestration of technology and people.” We have no idea what that means, but fortunately, we’ve been able to speak with both Fetch and Zebra for details about the deal.

Fetch Robotics’ $305 million purchase price includes $290 million in cash to snap up the 95% of Fetch that Zebra doesn’t already own—Zebra had already invested in Fetch through Zebra Ventures, which also has Locus Robotics and Plus One Robotics in its portfolio. There are still some “customary closing conditions” and regulatory approvals that need to happen, so everything isn’t expected to get wrapped up for another month or so. And when it does, it will in some ways mark the end of a robotics story that we’ve been following for the better part of a decade.

Fetch Robotics was founded in early 2015 by the same team of robot experts who had founded Unbounded Robotics just two years before. Melonee Wise, Michael Ferguson, Derek King, and Eric Diehr all worked at Willow Garage, and Unbounded was a mobile manipulation-focused spin-off of Willow that didn’t pan out for reasons that are still not super clear. But in any case, Fetch was a fresh start that allowed Wise, Ferguson, King, and Diehr to fully develop their concept for an intelligent, robust, and efficient autonomous mobile robotic system.

Most of what Fetch Robotics does is warehouse logistics—moving stuff from one place to another so that humans don’t have to. Their autonomous mobile robots work outside of warehouses as well, most recently by providing disinfection services for places like airports. There are plenty of other companies in the larger AMR space, but from what we understand, what Fetch has been doing for the last five years has been consistently state of the art. 

This is why Fetch makes sense as an acquisition target, I think: they’ve got exceptional technology in an area (fulfillment, mostly) that has been undergoing a huge amount of growth and where robotics has an enormous opportunity. But what about Zebra Technologies? As far as I can make out, Zebra is one of those companies that you’ve probably never heard of but is actually enormous and everywhere. According to Fortune, as of 2020 they were the 581st biggest company in the world (just behind Levi Strauss) with a market value of $25 billion. While Zebra was founded in 1969, the Zebra-ness didn’t come into play until the early 1980s when they started making barcode printers and scanners. They got into RFID in the early 2000s, and then acquired Motorola’s enterprise unit in 2014, giving Zebra a huge mobile technology portfolio.

To find out where robots fit into all of this, and to learn more about what this means for Fetch, we spoke with Melonee Wise, CEO of Fetch, and Jim Lawton, Vice President and General Manager of Robotics Automation at Zebra.

IEEE Spectrum: Can you tell us about Zebra’s background and interest in robotics?

Jim Lawton: Zebra is a combination of companies that have come together over time. Historically, we were a printing company that made barcode labels, and then we acquired a mobile computing business from Motorola, and today we have a variety of devices that do sensing, analyzing, and acting—we’ve been getting increasingly involved in automation in general. 

A lot of our major customers are retailers, warehousing, transportation and logistics, or healthcare, and what we’ve heard a lot lately is that there’s increased pressure to figure out how to run a supply chain efficiently. Workflows have gotten much more complicated, and many of our customers don’t feel like they’re particularly well equipped to sort through those challenges. They understand that there’s an opportunity to do something significant with robots, but what does that look like? What are the right strategies? And they’re asking us for help.

There are lots of AMR companies out there doing things that superficially seem similar, but what do you feel is special about Fetch?

Jim Lawton: I was at Universal Robots for a while, and at Rethink Robotics for a number of years, and designing and building robots and bringing them to market is really, really hard. The only way to pull it off is with an amazing team, and Melonee has done an outstanding job of pulling together a world-class robotics team.

We had invested in Fetch Robotics a couple of years ago, so we’ve been working pretty closely together already. We invest in companies in part so that we can educate ourselves, but it’s also an opportunity to see whether we’re a good fit with each other. Zebra is a technology and engineering oriented company, and Fetch is as well. With the best team, and the best robots, we just think there’s an outstanding opportunity that we haven’t necessarily found with other AMR companies.

What about for Fetch? Why is Zebra a good fit?

Melonee Wise: Over the last couple of years we have been slowly expanding the devices and the software ecosystems that we want to connect to, and Zebra has provided a lot of that synergy. We’re constantly asked things like: can a robot do something when we scan a barcode, or can we press a button on a tablet and have a robot appear? Being able to deliver these kinds of end-to-end, fully encapsulated solutions that go beyond the robots and really solve the problems customers are looking to solve—Zebra helps us do that.

And there’s also an opportunity for us as a robotics startup to partner with a larger company to help us scale much more rapidly. That’s the other thing that’s really exciting for us—Zebra has a very strong business in warehousing and logistics. They’re an industry leader, and I think they can really help us get to the next level as a company. 

Does that represent a transition for AMRs from just moving things from one place to another to integrating with all kinds of other warehouse systems?

Melonee Wise: For a decade or more, people have been talking about Industry 4.0 and how it’s going to change the world and revolutionize manufacturing, but as a community we’ve struggled to execute on that goal for lots of reasons. We’ve had what people might call islands of automation: siloed pieces of automation that are doing their thing by themselves. But if they have to talk to each other, that’s a bridge too far.

But in many ways automation technology is now getting mature enough, thanks to things we’ve seen in software for a long time: APIs, interconnected services, and cloud platforms. Zebra has been working on that independently for a long time as part of their business, and building these bridges between islands of automation is why it made sense for our two businesses to come together at this point.
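
To make that concrete, here is a minimal sketch of the kind of integration being described: a barcode scan that triggers a robot task through a fleet-management web API. The endpoint, token, payload fields, and response schema below are hypothetical placeholders invented for illustration; they are not Fetch’s or Zebra’s actual interfaces.

```python
# Hypothetical sketch: create a robot transport task when a barcode is scanned.
# The URL, token, payload schema, and response field are invented for
# illustration; any real fleet manager defines its own API.
import requests

FLEET_URL = "https://fleet.example.com/api/v1/tasks"  # placeholder endpoint
API_TOKEN = "replace-me"                              # placeholder credential

def dispatch_transport_task(barcode: str, pickup: str, dropoff: str) -> str:
    """Ask the fleet manager to move whatever item the barcode identifies."""
    response = requests.post(
        FLEET_URL,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        json={
            "type": "transport",        # move an item between two locations
            "item_barcode": barcode,    # value read by a handheld scanner
            "pickup_location": pickup,
            "dropoff_location": dropoff,
        },
        timeout=10,
    )
    response.raise_for_status()
    return response.json()["task_id"]   # assumed response field

if __name__ == "__main__":
    print(dispatch_transport_task("0012345678905", "dock-3", "packing-7"))
```

The point is less the specific call than the pattern: once scanners, warehouse software, and robots expose interfaces like this, the islands of automation can start talking to each other.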

If you go back far enough, Fetch has its origins in Willow Garage and ROS, and I know that Fetch still makes substantial software contributions back to the ROS community. Is that something you’ll be able to continue?

Melonee Wise: Our participation in the open source community is still very important, and I think it’s going to continue to be important. A lot of robotics is really about getting great talent, and open source is one way that we connect to that talent, participate in the larger ecosystem, and draw value from it. There are also lots of great tools out there in the open source community that Fetch uses and contributes to. And I think we’ll definitely continue to participate in the projects that aren’t core to our IP but still give us value.
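
For readers who haven’t followed the Willow Garage lineage, the open-source tooling in question is typified by ROS and its navigation stack. The snippet below shows the standard ROS 1 pattern for sending a mobile robot to a goal pose through the move_base action server; it is generic community code rather than anything Fetch-specific, and the goal coordinates are arbitrary.

```python
#!/usr/bin/env python
# Generic ROS 1 example: send a navigation goal to the move_base action server.
# This illustrates standard open-source navigation-stack usage; it is not
# Fetch-specific code, and the goal coordinates are arbitrary.
import actionlib
import rospy
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

def send_goal(x, y, frame="map"):
    client = actionlib.SimpleActionClient("move_base", MoveBaseAction)
    client.wait_for_server()  # wait until the navigation stack is running

    goal = MoveBaseGoal()
    goal.target_pose.header.frame_id = frame
    goal.target_pose.header.stamp = rospy.Time.now()
    goal.target_pose.pose.position.x = x
    goal.target_pose.pose.position.y = y
    goal.target_pose.pose.orientation.w = 1.0  # face along the frame's +x axis

    client.send_goal(goal)
    client.wait_for_result()
    return client.get_state()

if __name__ == "__main__":
    rospy.init_node("send_nav_goal")
    print(send_goal(2.0, 1.0))
```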

What will happen to the Fetch mobile manipulator that I know a lot of labs are currently using for research?

Melonee Wise: We’re committed to continuing to support our existing customers and I think that there’s still a place for the research product going forward.

What do you think are the biggest challenges for AMRs right now?

Melonee Wise: One thing that I think is happening in the industry is that safety standards are now coming into play. In December of last year the first official autonomous mobile robot safety standards were released, and not everyone was ready for that, but Fetch has been at the front of this for a long time. It took about four years to develop the AMR safety standard, to get to an understanding of what “safe” actually means and how you implement those safety measures. It’s common for safety standards to lag behind technology, but customers have been asking more and more, “well, how do I know that your robots are safe?” And so I think what we’re going to see is that these safety standards are going to have differing effects on different companies, based on how thoughtful they’ve been about safety through the design and implementation of their technology.

What have you learned, or what has surprised you about your industry now that we’re a year and a half into the pandemic?

Melonee Wise: One of the more interesting things to me was how quickly the resistance to the cloud went away when you had to deploy things remotely during a pandemic. Originally customers weren’t that excited about the cloud and wanted to do everything on site, but once the pandemic hit they switched their point of view on the technology pretty quickly, which was nice to see.

Jim Lawton: The amount of interest that we’ve seen in robots and automation in general has skyrocketed over the last year. In particular we’re hearing from companies that are not well equipped to deal with their automation needs, and the pandemic has just made it so much more clear to them that they have to do something. I think we’re going to see a renaissance within some of these spaces because of their investment in robotic technologies.

Parrot Announces A Bug-Inspired 4G Drone

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/robotics/drones/parrot-announces-anafi-ai-a-buginspired-4g-drone

Parrot released the Anafi drone almost exactly three years ago. I’m still a fan of the little consumer drone—the design is elegant, it’s exceptionally portable, the camera is great, and it’s easy to fly. But the biggest problem with the Anafi (especially three years later) is that it’s very much not the cleverest of drones, without any of the onboard obstacle avoidance that’s now become standard. Today, Parrot is announcing the Anafi AI, a butt-forward redesign of the Anafi for pros that adds obstacle avoidance, an enormous camera, and 4G connectivity that allows the drone to be flown anywhere (and behind any object) where you can get a reliable 4G signal.

SoftBank Stops Making Pepper Robots, Will Cut 165 Robotics Jobs in France

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/robotics/home-robots/softbank-stops-making-pepper-robots-will-cut-165-robotics-jobs-in-france

Reuters is reporting that SoftBank stopped manufacturing Pepper robots at some point last year due to low demand, and by September, will cut about half of the 330 positions at SoftBank Robotics Europe in France. Most of the positions will be in Q&A, sales, and service, which hopefully leaves SoftBank Robotics’ research and development group mostly intact. But the cuts reflect poor long-term sales, with SoftBank Robotics Europe having lost over 100 million Euros in the past three years, according to French business news site JDN. Speaking with Nikkei, SoftBank said that this doesn’t actually mean a permanent end for Pepper, and that they “plan to resume production if demand recovers.” But things aren’t looking good.

Legged Robots Do Surprisingly Well in Low Gravity

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/robotics/space-robots/legged-robots-surprisingly-well-low-gravity

Here on Earth, we’re getting good enough at legged robots that we’re starting to see a transition from wheels to legs for challenging environments, especially environments with some uncertainty as to exactly what kind of terrain your robot might encounter. Beyond Earth, we’re still heavily reliant on wheeled vehicles, but even that might be starting to change. While wheels do pretty well on the Moon and on Mars, there are lots of other places to explore, like smaller moons and asteroids. And there, it’s not just terrain that’s a challenge: it’s gravity.

In low gravity environments, any robot moving over rough terrain risks entering a flight phase. Perhaps an extended flight phase, depending on how low the gravity is, which can be dangerous to robots that aren’t prepared for it. Researchers at the Robotic Systems Lab at ETH Zurich have been doing some experiments with the SpaceBok quadruped, and they’ve published a paper in IEEE T-RO showing that it’s possible to teach SpaceBok to effectively bok around in low gravity environments while using its legs to reorient itself during flight, exhibiting “cat-like jumping and landing” behaviors through vigorous leg-wiggling.
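
A rough way to see how this works: in free flight with no external torque, total angular momentum is conserved, so swinging the legs one way rotates the body the other way in proportion to the ratio of their moments of inertia. The toy numbers below are assumptions for illustration only, not SpaceBok’s actual parameters.

```python
# Toy illustration of zero-angular-momentum reorientation in flight.
# With no external torque, I_body * w_body + I_legs * w_legs = 0, so a leg
# sweep of angle d_legs yields a body rotation of -(I_legs / I_body) * d_legs.
# The inertia values and sweep angle are made-up numbers for illustration.
import math

I_BODY = 0.40   # kg*m^2, assumed body inertia about the pitch axis
I_LEGS = 0.08   # kg*m^2, assumed combined leg inertia about the same axis

def body_rotation(leg_sweep_rad: float) -> float:
    """Body rotation from a given leg sweep at zero total angular momentum."""
    return -(I_LEGS / I_BODY) * leg_sweep_rad

if __name__ == "__main__":
    sweep = math.radians(90.0)  # legs swing through 90 degrees
    print(f"Body pitches {math.degrees(body_rotation(sweep)):.1f} degrees")
    # Repeating asymmetric cycles (extend legs, sweep, tuck, return) lets the
    # robot accumulate larger net rotations, which is the cat-like trick.
```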

Also, while I’m fairly certain that “bok” is not a verb that means “to move dynamically in low gravity using legs,” I feel like that’s what it should mean.  Sort of like pronk, except in space. Let’s make it so!

Why Robots Can’t Be Counted On to Find Survivors in the Florida Building Collapse

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/robotics/robotics-hardware/why-robots-cant-help-find-survivors-in-the-florida-building-collapse

On Thursday, a portion of the 12-story Champlain Towers South condominium building in Surfside, Florida (just outside of Miami) suffered a catastrophic partial collapse. As of Saturday morning, according to the Miami Herald, 159 people are still missing, and rescuers are removing debris with careful urgency while using dogs and microphones to search for survivors still trapped within a massive pile of tangled rubble.

It seems like robots should be ready to help with something like this. But they aren’t.

Video Friday: Household Skills

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/robotics/robotics-hardware/video-friday-household-skills

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

RoboCup 2021 – June 22-28, 2021 – [Online Event]
RSS 2021 – July 12-16, 2021 – [Online Event]
Humanoids 2020 – July 19-21, 2021 – [Online Event]
RO-MAN 2021 – August 8-12, 2021 – [Online Event]
DARPA SubT Finals – September 21-23, 2021 – Louisville, KY, USA
WeRobot 2021 – September 23-25, 2021 – Coral Gables, FL, USA
IROS 2021 – September 27-October 1, 2021 – [Online Event]

Let us know if you have suggestions for next week, and enjoy today’s videos.


To Fly a Drone in the U.S., You Now Must Pass FAA’s TRUST Test

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/robotics/drones/all-recreational-drone-pilots-must-now-past-the-faas-trust-test

For years, the U.S. Federal Aviation Administration (FAA) has been shuffling its way towards some semblance of regulation of the enormous number of drones now in the hands of recreational pilots in the United States. The fact that anyone can run out and buy a cheap drone at a nearby store, charge the battery, and launch the thing has got to be stupendously annoying for the FAA. One of their jobs, after all, is to impress upon new drone owners that doing something like that is not always a sensible thing to do.

Perhaps coming to terms with its unfortunate (albeit quite necessary) role as a bit of a buzzkill, the FAA has been desperately trying to find ways of forcing recreational drone pilots to at least read the rules they’re supposed to be following, without resorting to a burdensome new regulatory infrastructure. Their strategy seems to be something like, “we’re going to require drone pilots to do a couple of things, but those things will be so painless that nobody can possibly object.” The first of those things is registering your drone if it weighs more than 0.55 pounds, and the second, just announced this week, is the TRUST test requirement for all recreational drone pilots.

Video Friday: Nanotube-Powered Insect Robots

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/robotics/robotics-hardware/video-friday-nanotubepowered-insect-robots

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

HRI 2021 – March 8-11, 2021 – [Online Conference]
RoboSoft 2021 – April 12-16, 2021 – [Online Conference]
ICRA 2021 – May 30-June 5, 2021 – Xi’an, China

Let us know if you have suggestions for next week, and enjoy today’s videos.