All posts by Evan Ackerman

Video Friday: Nanotube-Powered Insect Robots

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/robotics/robotics-hardware/video-friday-nanotubepowered-insect-robots

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

HRI 2021 – March 8-11, 2021 – [Online Conference]
RoboSoft 2021 – April 12-16, 2021 – [Online Conference]
ICRA 2021 – May 30–June 5, 2021 – Xi’an, China

Let us know if you have suggestions for next week, and enjoy today’s videos.


What Full Autonomy Means for the Waymo Driver

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/cars-that-think/transportation/self-driving/full-autonomy-waymo-driver

In January, Waymo posted a tweet breaking down what “autonomy” means for the Waymo Driver, which is how the company refers to its autonomous driving system. The video in the tweet points out that Level 1, 2, and 3 autonomy are not “fully autonomous” because a human driver might be needed. Sounds good. The Waymo Driver operates at Level 4 autonomy, meaning, Waymo says, that “no human driver is needed in our defined operational conditions.” This, Waymo continues, represents “fully autonomous driving technology,” with the Waymo Driver being “fully independent from a human driver.”

Using the term “full autonomy” in the context of autonomous vehicles can be tricky. Depending on your perspective, a vehicle with Level 4 autonomy fundamentally cannot be called “fully” autonomous, because it’s autonomous in some situations and not others, which is where the defined operational conditions bit comes in. The folks behind these levels of autonomy, SAE International, are comfortable calling vehicles with both Level 4 and Level 5 autonomy “fully autonomous,” but from a robotics perspective, the autonomy of the Waymo Driver is a little more nuanced.

While humans may not be directly in the loop with Waymo’s vehicles, there’s a team of them on remote standby to provide high-level guidance if a vehicle finds itself in a novel or ambiguous situation that it isn’t confident about handling on its own. These situations won’t require a human to take over the operation of the vehicle, but they can include things like construction zones, unexpected road closures, or a police officer directing traffic with hand signals—situations a human might be able to interpret at a glance, but that autonomous systems notoriously find difficult.

There’s nothing wrong with the approach of having humans available like this, except that it raises the question of whether a Level 4 autonomous system should really be called fully autonomous and fully independent from a human driver if it sometimes finds itself in situations where it may decide to ask a remote human for guidance. It may seem pedantic, but having a clear understanding of what autonomous systems can and cannot do is very important, especially when such topics are becoming more and more relevant to people who may not have much of a background in robotics or autonomy. This is what prompted Waymo’s tweet, and Waymo now has a whole public education initiative called Let’s Talk Autonomous Driving that’s intended to clearly communicate what autonomous driving is and how it works.

In this same spirit, I spoke with Nathaniel Fairfield, who leads the behavior team at Waymo, to get a more detailed understanding of what Waymo actually means when it calls the Waymo Driver fully autonomous.

IEEE Spectrum: Can you tell us a little bit about your background, and what your current role is at Waymo?

Nathaniel Fairfield: I’m currently a Distinguished Software Engineer at Waymo looking after what we call “behavior,” which is the decision-making part of the onboard software, including behavior prediction, planning, routing, fleet response, and control. I’ve been with the team since we were founded as the Google self-driving car project back in 2009, and my background is in robotics. Before Waymo I was at the Carnegie Mellon University Robotics Institute (where I received my Ph.D. and master’s degrees) working on robots that could map complex 3D environments (e.g., flooded cenotes in Mexico), and before that, I worked at a company called Bluefin Robotics building robots to map the ocean floor.

How does Waymo define full autonomy?

When we think about defining full autonomy at Waymo, the question is whether the system is designed to independently perform the entire dynamic driving task in all conditions in our operational design domain (ODD) without the need to rely on human intervention, or whether it requires a human to intervene and take control in such situations to keep things safe. The former would be full autonomy, and the latter would not be. The delta between the two is the difference between the L4 system we’re developing at Waymo (the Waymo Driver), which is responsible for executing the entire dynamic driving task, and L2 or L3 systems.

What are the specific operational conditions under which Waymo’s vehicles cannot operate autonomously?

Our current ODD in Phoenix, where we have our fully autonomous service Waymo One, is around 130 km² (larger than the city of San Francisco). This area is broad enough to cover everyday driving, which includes different roadway types, various maneuvers, speed ranges, all times of day, and so on. Our ODD is always evolving as our technology continues to advance.
 
Just like a competent human driver, the Waymo Driver is designed so that it will not operate outside of its approved ODD. The Waymo Driver is designed to automatically detect weather or road conditions that would affect safe driving within our ODD and return to base or come to a safe stop (i.e. achieve a “minimal risk condition”) until conditions improve.
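
EDITOR’S NOTE: To make the “approved ODD” idea concrete, here is a minimal sketch, in Python, of the kind of decision logic described above. All of the names, conditions, and thresholds are hypothetical illustrations for this article, not Waymo’s actual software or parameters.

    from dataclasses import dataclass
    from enum import Enum, auto

    class Action(Enum):
        CONTINUE = auto()            # keep driving normally
        RETURN_TO_BASE = auto()      # drive back to a depot
        MINIMAL_RISK_STOP = auto()   # pull over and stop safely

    @dataclass
    class Conditions:
        visibility_m: float          # estimated visibility, in meters
        heavy_precipitation: bool    # e.g. dense fog, heavy rain, or snow
        inside_service_area: bool    # within the geofenced service area

    MIN_VISIBILITY_M = 200.0         # hypothetical ODD limit, for illustration only

    def within_odd(c: Conditions) -> bool:
        """Return True if current conditions fall inside the approved ODD."""
        return (c.inside_service_area
                and not c.heavy_precipitation
                and c.visibility_m >= MIN_VISIBILITY_M)

    def choose_action(c: Conditions, can_reach_base_safely: bool) -> Action:
        """Keep driving inside the ODD; otherwise degrade gracefully."""
        if within_odd(c):
            return Action.CONTINUE
        if can_reach_base_safely:
            return Action.RETURN_TO_BASE
        return Action.MINIMAL_RISK_STOP

    # Example: a dust storm drops visibility inside the service area.
    print(choose_action(Conditions(80.0, True, True), can_reach_base_safely=False))
    # -> Action.MINIMAL_RISK_STOP

The point of the sketch is simply that “return to base or come to a safe stop” is itself an autonomous behavior: the vehicle, not a human, decides when conditions have left the ODD.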

If Waymo’s vehicles encounter a novel situation, they ask a remote human supervisor for assistance with decision making. Can you explain how that process works?

Imagine you’re out driving and you come up to a “road closed” sign ahead. You may pause for a bit as you look for a “Detour” sign to show you how to get around it or if you don’t see that, start preparing to turn around from that road and create your own detour or new route. The Waymo Driver does the same thing as it evaluates how to plot the best path forward. In a case like this where the road is fully blocked, it can call on our Fleet Response specialists to provide advice on what route might be better or more efficient and then take that input, combine it with the information it has from the onboard map and what it’s seeing in real time via the sensors, and choose the best way to proceed. 
 
This example shows a few basic properties of all our Fleet Response interactions:

  • The remote humans are not teleoperating the cars.
  • The Waymo Driver is not asking for help to perceive the surrounding environment; it can already do that. It’s asking for advice on more strategic planning questions based on what it’s already perceived.
  • The Waymo Driver is always responsible for being safe.
  • Human responses can be very helpful, but are not essential for safe driving.
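
EDITOR’S NOTE: The division of labor Fairfield describes can be sketched in a few lines of illustrative Python: the onboard planner handles the normal case on its own, may request a route suggestion from a remote specialist when the road is fully blocked, and only adopts that suggestion after re-validating it against its own map and sensor data. This is a hypothetical sketch of the interaction pattern, not Waymo’s code.

    from typing import Callable, List, Optional

    Route = List[str]  # a route as an ordered list of road segments

    def plan_with_fleet_response(
        preferred: Route,
        alternatives: List[Route],
        is_drivable: Callable[[Route], bool],           # onboard map + perception check
        ask_specialist: Callable[[], Optional[Route]],  # remote advice; may be unavailable
    ) -> Optional[Route]:
        """Choose a route; remote advice is optional and always re-checked onboard."""
        if is_drivable(preferred):
            return preferred                   # normal case: no human involved

        # Road fully blocked: ask Fleet Response for advice, but keep planning.
        suggestion = ask_specialist()
        if suggestion is not None and is_drivable(suggestion):
            return suggestion                  # advice adopted only after onboard checks

        for route in alternatives:             # advice unavailable or rejected:
            if is_drivable(route):
                return route                   # fall back to the vehicle's own detour

        return None                            # no viable route: come to a safe stop

In this pattern the remote specialist never steers or brakes anything; the suggestion is just another input that the onboard planner is free to accept or reject, which is consistent with the properties listed above.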

What are some examples of situations or decision points where the Waymo Driver may not be able to proceed without input from a human?

In addition to construction, another example would be interpreting hand gestures. While that’s something we’ve improved a lot on over the last few years, it’s a common scenario in which the Waymo Driver still calls on Fleet Response at times. The Waymo Driver can perceive that someone may be using hand signals, such as another road user waving their hands, and then it will call on Fleet Response to confirm what the gesture appears to be signaling and use that input to make a decision about when and how to proceed.

This is completely dynamic and depends on the specific scenario; the Waymo Driver won’t engage Fleet Response for all construction zones or all novel situations. There are some dead ends or construction zones, for example, where the Waymo Driver may not need to call on Fleet Response at all. Those are just examples of some common scenarios we see Fleet Response utilized for—cases where the Waymo Driver may call on Fleet Response, but does not have to.

So the “driving task” is sometimes separate from “strategic planning,” which may include navigating through situations where a human is giving directions through hand signals, busy construction zones, full road closures, and things like that. And remote humans may at times be required to assist the Waymo Driver with strategic planning decisions. Am I understanding this correctly?

Zooming out a bit (this may get a little philosophical): are tasks made up of layers of behaviors of increasing levels of sophistication (so as to be able to cover every eventuality), or is it possible to carve off certain domains where only certain behaviors are necessary, and call that the task? A simplistic example would be tying your shoelaces. Does it include what I do most every day: putting on the shoe, tying the knot? Or does it also include dealing with a nasty knot that my son put in the laces? Or include patching the laces if they break? Or replacing if the break is in a bad place? Or finding new laces if I have to replace the lace? Or going to the store if I need to buy a new lace?

If it’s the first case, even humans aren’t really individually autonomous, because we rely on other individuals for assistance (changing a tire), public works (installing a traffic light), and social decision-making (traffic management of a small-town July-4th parade). If it’s the second case, then there is an endless discussion to be had about exactly where to draw the lines. So in some sense, it’s arbitrary, and we can agree to disagree, but what is the fun of that? I would argue that there are certain “useful” distinctions to draw—where there are certain sets of capabilities that allow an agent to do something meaningful.

And to clarify—this isn’t just Waymo’s perspective, it’s actually how SAE makes these distinctions. SAE essentially defines the dynamic driving task (DDT, or what the Waymo Driver is responsible for) as involving the tactical and operational functions required to operate a vehicle, which are separate from strategic functions.

EDITOR’S NOTE: According to SAE, the dynamic driving task includes the operational (steering, braking, accelerating, monitoring the vehicle and roadway) and tactical (responding to events, determining when to change lanes, turn, use signals, etc.) aspects of the driving task, but not the strategic (determining destinations and waypoints) aspect of the driving task. SAE’s definition of Level 4 autonomy involves the driving mode-specific performance by an automated driving system of all aspects of the dynamic driving task, even if a human driver does not respond appropriately to a request to intervene.
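
To see that decomposition at a glance, it can help to think of it as a simple classification of driving functions. The toy Python below is only an illustration of the SAE taxonomy as summarized in the note above, not an SAE artifact.

    from enum import Enum

    class DrivingFunction(Enum):
        OPERATIONAL = "operational"  # steering, braking, accelerating, monitoring
        TACTICAL = "tactical"        # responding to events, lane changes, turns, signaling
        STRATEGIC = "strategic"      # choosing destinations and waypoints

    # The dynamic driving task (DDT) that a Level 4 system must perform covers
    # the first two categories; strategic decisions sit outside it.
    DDT = {DrivingFunction.OPERATIONAL, DrivingFunction.TACTICAL}

    def part_of_ddt(f: DrivingFunction) -> bool:
        return f in DDT

    print(part_of_ddt(DrivingFunction.STRATEGIC))  # False: outside the DDT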

What is the disengagement rate for Waymo’s vehicles when they are operating with passengers? What are the most frequent causes of disengagements?

“Disengagement” usually refers to when a vehicle operator in the car switches the mode from autonomous to manual. With our fully autonomous service, we don’t have vehicle operators in the car, so there are technically no “disengagements” the way the term is generally understood. We do have a roadside assistance team who can assist the vehicle (and switch it over to manual control, if appropriate) while on the road, but we don’t have metrics to share on those interactions.
 
But the idea that these disengagement rates should count against autonomy, and that anything that ever gets stuck isn’t “fully autonomous,” is flawed. Under that definition, human drivers aren’t “fully” autonomous! It would be sort of silly to say that “Nathaniel is 99.999% autonomous because he had to call a tow truck that one time.”

I agree that it would be silly to say that Nathaniel is only 99.999% autonomous because he had to call a tow truck, but that’s because most people don’t consider that to be part of the driving task—I think it might be less silly to say that Nathaniel is only 99.999% autonomous if he sometimes can’t drive through construction zones or can’t reroute himself if he encounters a road closure.

When you get into a taxi, you don’t ask yourself whether the driver has a particular license to drive on a particular road, or whether you’ll have to jump into the front seat to grab the steering wheel. You just assume that they can get you to your destination without any intervention. When you get in a vehicle driven by the Waymo Driver, you can safely make that assumption! It doesn’t mean that your human taxi driver can’t look for advice in some situations, nor does it mean that the Waymo Driver can’t do the same. 

Additionally, and as noted above, we think the SAE distinctions are helpful in determining what constitutes the dynamic driving task that an L4 automated driving system like the Waymo Driver must be able to perform, including that the DDT does not involve strategic functions. The examples you reference here are either functions the Waymo Driver can perform (such as driving through a clearly marked construction zone) or are examples of where the Waymo Driver receives information or clarification of some facts to facilitate its performance of the DDT. Human drivers (like the taxi driver!) receive information to inform their driving from radio reports, navigation devices, or even from asking adjacent motorists in stopped traffic what they see ahead, and in a confusing situation might ask a traffic officer how to get around a crash area.

So your perspective is that a system can be accurately described as “fully autonomous” if it sometimes relies on a human for strategic decision making?

Yes, the Waymo Driver is a fully autonomous driver in the Phoenix service area, and I think most roboticists would agree with me there! This is because for the purpose of driving our riders to their destinations, the Waymo Driver makes all the decisions related to the dynamic driving task.

What robotics research (besides your own, of course!) are you most excited about right now?

I’ll be honest, our research at Waymo into high-capability decision-making systems that promote safety and interact naturally with humans is about as cool (and challenging) as it gets! It involves reasoning about uncertainty (and our own limitations in sensing and interpretation), reasoning about the intentions of other agents, and how the actions of other agents will change depending on our actions, and using millions of miles of real world driving experience and cutting-edge machine learning to accelerate progress in behavior.
 
I’m also very impressed by both the mechanical engineering and sophisticated footstep planning shown by Boston Dynamics; they are doing some really elegant robotics. And a part of my heart belongs to exploration robotics too, be it under water, under ice, or on other planets (or in the case of Europa, all three). It’s the combination of rock-solid mechanisms, robust autonomous capability, and ground-breaking scientific discovery.


The need to have a human somewhere in the loop for strategic edge cases is a very robot-y thing, and perhaps that’s why it’s incorporated into the SAE’s autonomy levels. And technically, Waymo is absolutely correct to call its vehicle fully autonomous based on that definition. I think the risk, though, is that people may not intuitively understand that “full autonomy” only applies to the dynamic driving task, and not the strategic planning task, which (for humans) is an integral part of what we tend to think of as “driving.”

What I’d really like to know is what happens, some number of years from now, after Waymo has solved the strategic planning part of driving (which I’m sure they will). Because at that point, the Waymo Driver will be more autonomous than it was before, and they’ll have to communicate that somehow. Even fuller autonomy? Full autonomy plus? Overfull autonomy? I can’t wait to find out.

Review: DJI’s New FPV Drone is Effortless, Exhilarating Fun

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/robotics/drones/review-djis-new-fpv-drone-is-effortless-exhilarating-fun

In my experience, there are three different types of consumer drone pilots. You’ve got people for whom drones are a tool for taking pictures and video, where flying the drone is more or less just a necessary component of that. You’ve also got people who want a drone that can be used to take pictures or video of themselves, where they don’t want to be bothered flying the drone at all. Then you have people for whom flying the drone itself is the appealing part—people who like flying fast and creatively because it’s challenging and exciting and fun. And that typically means flying in First Person View, or FPV, where it feels like you’re a tiny little human sitting inside of a virtual cockpit in your drone.

For that last group of folks, the barrier to entry is high. Or rather, the barriers are high, because there are several. Not only is the equipment expensive, you often have to build your own system of drone, FPV goggles, and transmitter and receiver. And on top of that, it takes a lot of skill to fly an FPV drone well, and all of the inevitable crashes just add to the expense.

Today, DJI is announcing a new consumer first-person view drone system that includes everything you need to get started. You get an expertly designed and fully integrated high-speed FPV drone, a pair of FPV goggles with exceptional image quality and latency that’s some of the best we’ve ever seen, plus a physical controller to make it all work. Most importantly, though, there’s on-board obstacle avoidance plus piloting assistance that means even a complete novice can be zipping around with safety and confidence on day one.

Video Friday: A Blimp For Your Cat

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/robotics/robotics-hardware/video-friday-a-blimp-for-your-cat

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

HRI 2021 – March 8-11, 2021 – [Online Conference]
RoboSoft 2021 – April 12-16, 2021 – [Online Conference]
ICRA 2021 – May 30–June 5, 2021 – Xi’an, China

Let us know if you have suggestions for next week, and enjoy today’s videos.


When Robots Enter the World, Who Is Responsible for Them?

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/robotics/industrial-robots/when-robots-enter-the-world-who-is-responsible-for-them

Over the last half decade or so, the commercialization of autonomous robots that can operate outside of structured environments has dramatically increased. But this relatively new transition of robotic technologies from research projects to commercial products comes with its share of challenges, many of which relate to the rapidly increasing visibility that these robots have in society.

Whether it’s because of their appearance of agency, or because of their history in popular culture, robots frequently inspire people’s imagination. Sometimes this is a good thing, like when it leads to innovative new use cases. And sometimes this is a bad thing, like when it leads to use cases that could be classified as irresponsible or unethical. Can the people selling robots do anything about the latter? And even if they can, should they?

Onboard Video of the Perseverance Rover Landing is the Most Incredible Thing I’ve Ever Seen

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/robotics/space-robots/onboard-video-of-the-perseverance-rover-landing-is-the-most-incredible-thing-ive-ever-seen

At a press conference this afternoon, NASA released a new video showing, in real-time and full color, the entire descent and landing of the Perseverance Mars rover. The video begins with the deployment of the parachute, and ends with the Skycrane cutting the rover free and flying away. It’s the most mind-blowing three minutes of video I have ever seen. 

Soft Legged Robot Uses Pneumatic Circuitry to Walk Like a Turtle

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/robotics/robotics-hardware/soft-legged-robot-pneumatic-circuitry

Soft robots are inherently safe, highly resilient, and potentially very cheap, making them promising for a wide array of applications. But development on them has been a bit slow relative to other areas of robotics, at least partially because soft robots can’t directly benefit from the massive increase in computing power and sensor and actuator availability that we’ve seen over the last few decades. Instead, roboticists have had to get creative to find ways of achieving the functionality of conventional robotics components using soft materials and compatible power sources.

In the current issue of Science Robotics, researchers from UC San Diego demonstrate a soft walking robot with four legs that moves with a turtle-like gait controlled by a pneumatic circuit system made from tubes and valves. This air-powered nervous system can actuate multiple degrees of freedom in sequence from a single source of pressurized air, offering a huge reduction in complexity and bringing a very basic form of decision making onto the robot itself.

Video Friday: Perseverance Lands on Mars

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/robotics/robotics-hardware/video-friday-perseverance-lands-on-mars

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

HRI 2021 – March 8-11, 2021 – [Online Conference]
RoboSoft 2021 – April 12-16, 2021 – [Online Conference]
ICRA 2021 – May 30–June 5, 2021 – Xi’an, China

Let us know if you have suggestions for next week, and enjoy today’s videos.


How NASA Designed a Helicopter That Could Fly Autonomously on Mars

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/aerospace/robotic-exploration/nasa-designed-perseverance-helicopter-rover-fly-autonomously-mars

Tucked under the belly of the Perseverance rover that will be landing on Mars in just a few days is a little helicopter called Ingenuity. Its body is the size of a box of tissues, slung underneath a pair of 1.2-meter carbon-fiber rotors on top of four spindly legs. It weighs just 1.8 kg, but the importance of its mission is massive. If everything goes according to plan, Ingenuity will become the first aircraft to fly on Mars.

In order for this to work, Ingenuity has to survive frigid temperatures, manage merciless power constraints, and attempt a series of 90-second flights while separated from Earth by 10 light-minutes, which means that real-time communication or control is impossible. To understand how NASA is making this happen, below is our conversation with Tim Canham, Mars Helicopter Operations Lead at NASA’s Jet Propulsion Laboratory (JPL).

Modified Laser Cutter Fabricates a Ready to Fly Drone

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/robotics/industrial-robots/modified-laser-cutter-fabricates-a-ready-to-fly-drone

It’s been very cool to watch 3D printers and laser cutters evolve into fairly common tools over the last decade-ish, finding useful niches across research, industry, and even with hobbyists at home. Capable as these fabricators are, they tend to be good at just one specific thing: making shapes out of polymer. Which is great! But we have all kinds of other techniques for making things that are even more useful, like by adding computers and actuators and stuff like that. You just can’t do that with your 3D printer or laser cutter, because it just does its one thing—which is too bad.

At CHI this year, researchers from MIT CSAIL are presenting LaserFactory, an integrated fabrication system that turns any laser cutter into a device that can (with just a little bit of assistance) build you an entire drone at a level of finish that means when the fabricator is done, the drone can actually fly itself off of the print bed. Sign me up.

Everything You Need to Know About NASA’s Perseverance Rover Landing on Mars

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/aerospace/robotic-exploration/nasa-perseverance-rover-landing-on-mars-overview

Just before 4PM ET on February 18 (this Thursday), NASA’s Perseverance rover will attempt to land on Mars. Like its predecessor Curiosity, which has been exploring Mars since 2012, Perseverance is a semi-autonomous mobile science platform the size of a small car. It’s designed to spend years roving the red planet, looking for (among other things) any evidence of microbial life that may have thrived on Mars in the past.

This mission to Mars is arguably the most ambitious one ever launched, combining technically complex science objectives with borderline craziness that includes the launching of a small helicopter. Over the next two days, we’ll be taking an in-depth look at both that helicopter and how Perseverance will be leveraging autonomy to explore farther and faster than ever before, but for now, we’ll quickly go through all the basics about the Perseverance mission to bring you up to speed on everything that will happen later this week.

Video Friday: Digit Takes a Hike

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/robotics/robotics-hardware/video-friday-digit-takes-a-hike

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

HRI 2021 – March 8-11, 2021 – [Online Conference]
RoboSoft 2021 – April 12-16, 2021 – [Online Conference]
ICRA 2021 – May 30–June 5, 2021 – Xi’an, China

Let us know if you have suggestions for next week, and enjoy today’s videos.


Hyundai Motor Group Introduces Two New Robots

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/robotics/industrial-robots/hyundai-boston-dynamics-new-robots

Over the past few weeks, we’ve seen a couple of new robots from Hyundai Motor Group. This is a couple more robots than I think I’ve seen from Hyundai Motor Group, like, ever. We’re particularly interested in them right now mostly because Hyundai Motor Group is the new owner of Boston Dynamics, and so far, these robots represent one of the most explicit indications we’ve got about exactly what Hyundai Motor Group wants its robots to be doing.

We know it would be a mistake to read too much into these new announcements, but we can’t help reading something into them, right? So let’s take a look at what Hyundai Motor Group has been up to recently. This first robot is DAL-e, what HMG is calling an “Advanced Humanoid Robot.”

According to Hyundai, DAL-e is “designed to pioneer the future of automated customer services,” and is equipped with “state-of-the-art artificial intelligence technology for facial recognition as well as an automatic communication system based on a language-comprehension platform.” You’ll find it in car showrooms, but only in Seoul, for now.

We don’t normally write about robots like these because they tend not to represent much that’s especially new or interesting in terms of robotic technology, capabilities, or commercial potential. There’s certainly nothing wrong with DAL-e—it’s moderately cute and appears to be moderately functional. We’ve seen other platforms (like Pepper) take on similar roles, and our impression is that the long-term cost effectiveness of these greeter robots tends to be somewhat limited. And unless there’s some hidden functionality that we’re not aware of, this robot doesn’t really seem to be pushing the envelope, but we’d love to be wrong about that.

The other new robot, announced yesterday, is TIGER (Transforming Intelligent Ground Excursion Robot). It’s a bit more interesting, although you’ll have to skip ahead about 1:30 in the video to get to it.

We’ve talked about how adding wheels can make legged robots faster and more efficient, but I’m honestly not sure that it works all that well going the other way (adding legs to wheeled robots) because rather than adding a little complexity to get a multi-modal system that you can use much of the time, you’re instead adding a lot of complexity to get a multi-modal system that you’re going to use sometimes.

You could argue, as perhaps Hyundai would, that the multi-modal system is critical to get TIGER to do what they want it to do, which seems to be primarily remote delivery. They mention operating in urban areas as well, where TIGER could use its legs to climb stairs, but I think it would be beaten by more traditional wheeled platforms, or even whegged platforms, that are almost as capable while being much simpler and cheaper. For remote delivery, though, legs might be a necessary feature.

That is, if you assume that using a ground-based system is really the best way to go.

The TIGER concept can be integrated with a drone to transport it from place to place, so why not just use the drone to make the remote delivery instead? I guess maybe if you’re dealing with a thick tree canopy, the drone could drop TIGER off in a clearing and the robot could drive to its destination, but now we’re talking about developing a very complex system for a very specific use case. Even though Hyundai has said that they’re going to attempt to commercialize TIGER over the next five years, I think it’ll be tricky for them to successfully do so.

The best part about these robots from Hyundai is that between the two of them, they suggest that the company is serious about developing commercial robots as well as willing to invest in something that seems a little crazy. And you know who else is both of those things? Boston Dynamics. To be clear, it’s almost certain that both of Hyundai’s robots were developed well before the company was even thinking about acquiring Boston Dynamics, so the real question is: Where do these two companies go from here?

New Drone Software Handles Motor Failures Even Without GPS

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/robotics/drones/new-drone-software-handles-motor-failures-even-without-gps

Good as some drones are becoming at obstacle avoidance, accidents do still happen. And as far as robots go, drones are very much on the fragile side of things.  Any sort of significant contact between a drone and almost anything else usually results in a catastrophic, out-of-control spin followed by a death plunge to the ground. Bad times. Bad, expensive times.

A few years ago, we saw some interesting research into software that can keep the most common drone form factor, the quadrotor, aloft and controllable even after the failure of one motor. The big caveat to that software was that it relied on GPS for state estimation, meaning that without a GPS signal, the drone is unable to get the information it needs to keep itself under control. In a paper recently accepted to RA-L, researchers at the University of Zurich report that they have developed a vision-based system that brings state estimation completely on-board. The upshot: potentially any drone with some software and a camera can keep itself safe even under the most challenging conditions.

Video Friday: New Entertainment Robot Sings and Does Impressions

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/robotics/robotics-hardware/video-friday-engineered-arts-mesmer-robot-cleo

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

HRI 2021 – March 8-11, 2021 – [Online Conference]
RoboSoft 2021 – April 12-16, 2021 – [Online Conference]
ICRA 2021 – May 30–June 5, 2021 – Xi’an, China

Let us know if you have suggestions for next week, and enjoy today’s videos.


Boston Dynamics’ Spot Robot Is Now Armed

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/robotics/industrial-robots/boston-dynamics-spot-robot-arm

Boston Dynamics has been working on an arm for its Spot quadruped for at least five years now. There have been plenty of teasers along the way, including this 45-second clip from early 2018 of Spot using its arm to open a door, which at 85 million views seems to be Boston Dynamics’ most popular video ever by a huge margin. Obviously, there’s a substantial amount of interest in turning Spot from a highly dynamic but mostly passive sensor platform into a mobile manipulator that can interact with its environment. 

As anyone who’s done mobile manipulation will tell you, actually building an arm is just the first step—the really tricky part is getting that arm to do exactly what you want it to do. In particular, Spot’s arm needs to be able to interact with the world with some amount of autonomy in order to be commercially useful, because you can’t expect a human (remote or otherwise) to spend all their time positioning individual joints or whatever to pick something up. So the real question about this arm is whether Boston Dynamics has managed to get it to a point where it’s autonomous enough that users with relatively little robotics experience will be able to get it to do useful tasks without driving themselves nuts.

Today, Boston Dynamics is announcing commercial availability of the Spot arm, along with some improved software called Scout plus a self-charging dock that’ll give the robot even more independence. And to figure out exactly what Spot’s new arm can do, we spoke with Zachary Jackowski, Spot Chief Engineer at Boston Dynamics.

Video Friday: These Robots Have Made 1 Million Autonomous Deliveries

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/robotics/robotics-software/video-friday-starship-robots-1-million-deliveries

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

HRI 2021 – March 8-11, 2021 – [Online Conference]
RoboSoft 2021 – April 12-16, 2021 – [Online Conference]

Let us know if you have suggestions for next week, and enjoy today’s videos.


Smellicopter Drone Uses Live Moth Antenna to Track Scents

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/robotics/drones/smellicopter-drone-live-moth-antenna

Research into robotic sensing has, understandably I guess, been very human-centric. Most of us navigate and experience the world visually and in 3D, so robots tend to get covered with things like cameras and lidar. Touch is important to us, as is sound, so robots are getting pretty good with understanding tactile and auditory information, too. Smell, though? In most cases, smell doesn’t convey nearly as much information for us, so while it hasn’t exactly been ignored in robotics, it certainly isn’t the sensing modality of choice in most cases.

Part of the problem with smell sensing is that we just don’t have a good way of doing it, from a technical perspective. This has been a challenge for a long time, and it’s why we either bribe or trick animals like dogs, rats, and vultures into being our sensing systems for airborne chemicals. If only they’d do exactly what we wanted them to do all the time, this would be fine, but they don’t, so it’s not.

Until we get better at making chemical sensors, leveraging biology is the best we can do, and what would be ideal would be some sort of robot-animal hybrid cyborg thing. We’ve seen some attempts at remote controlled insects, but as it turns out, you can simplify things if you don’t use the entire insect, but instead just find a way to use its sensing system. Enter the Smellicopter.

Video Friday: Record-Breaking Drone Show Depicts Life of Van Gogh

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/robotics/robotics-hardware/video-friday-drone-show-van-gogh

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

HRI 2021 – March 8-11, 2021 – [Online]
RoboSoft 2021 – April 12-16, 2021 – [Online]

Let us know if you have suggestions for next week, and enjoy today’s videos.


New FAA Drone Rules: What Recreational and Commercial Pilots Need to Know

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/robotics/drones/faa-drone-rules-what-recreational-and-commercial-pilots-need-to-know

The United States Federal Aviation Administration has been desperately trying to keep up with the proliferation of recreational and commercial drones. They haven’t been as successful as all of us might have wanted, but some progress is certainly being made, most recently with some new rules about flying drones at night and over people and vehicles, as well as the requirement for a remote-identification system for all drones.

Over the next few years, the FAA’s drone rules are going to affect you even if you just fly a drone for fun in your backyard, so we’ll take a detailed look at what changes are coming and how you can prepare.