All posts by Evan Ackerman

Throwable Robot Car Always Lands on Four Wheels

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/robotics/military-robots/throwable-robot-car-always-lands-on-four-wheels

Throwable or droppable robots seem like a great idea for a bunch of applications, including exploration and search and rescue. But such robots do come with some constraints—namely, if you’re going to throw or drop a robot, you should be prepared for that robot to not land the way you want it to land. While we’ve seen some creative approaches to this problem, or more straightforward self-righting devices, usually you’re in for significant trade-offs in complexity, mobility, and mass. 

What would be ideal is a robot that can be relied upon to just always land the right way up. A robotic cat, of sorts. And while we’ve seen this with a tail, for wheeled vehicles, it turns out that a tail isn’t necessary: All it takes is some wheel spin.
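
For a back-of-the-envelope sense of why wheel spin alone can work, consider conservation of angular momentum: torquing the wheels in midair makes the chassis counter-rotate. Here's a minimal sketch of that idea; all the numbers are placeholder assumptions, not measurements from the actual robot.

```python
# Midair attitude control from wheel spin alone, via conservation of
# angular momentum. Inertia values below are made-up placeholders.
import math

I_BODY = 0.06     # chassis pitch inertia, kg*m^2 (assumed)
I_WHEELS = 0.002  # combined inertia of all four wheels, kg*m^2 (assumed)

def body_rate(wheel_rate):
    """Starting from rest: I_BODY * w_body + I_WHEELS * w_wheel = 0."""
    return -(I_WHEELS / I_BODY) * wheel_rate

wheel_rate = 2000 * 2 * math.pi / 60    # spin the wheels up to 2,000 rpm
w_body = body_rate(wheel_rate)          # chassis counter-rotates
drop_time = math.sqrt(2 * 1.5 / 9.81)   # ~0.55 s of free fall from 1.5 m
print(f"pitch rate: {math.degrees(w_body):.0f} deg/s")
print(f"pitch change before landing: {math.degrees(w_body) * drop_time:.0f} deg")
```

Even with these toy numbers, a brief burst of wheel acceleration buys a substantial attitude change during a short drop, which is the core of the trick.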

Video Friday: Agility Robotics Raises $20 Million to Accelerate Robot Production

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/robotics/robotics-hardware/video-friday-agility-robotics-robot-production

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

IROS 2020 – October 25-29, 2020 – [Online]
ROS World 2020 – November 12, 2020 – [Online]
CYBATHLON 2020 – November 13-14, 2020 – [Online]
ICSR 2020 – November 14-16, 2020 – Golden, Colo., USA

Let us know if you have suggestions for next week, and enjoy today’s videos.


How Intel’s OpenBot Wants to Make Robots Out of Smartphones

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/robotics/diy/intel-openbot-robots-smartphones

You could make a pretty persuasive argument that the smartphone represents the single fastest area of technological progress we’re going to experience for the foreseeable future. Every six months or so, there’s something with better sensors, more computing power, and faster connectivity. Many different areas of robotics are benefiting from this on a component level, but over at Intel Labs, they’re taking a more direct approach with a project called OpenBot that turns US $50 worth of hardware and your phone into a mobile robot that can support “advanced robotics workloads such as person following and real-time autonomous navigation in unstructured environments.” 
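
The announcement doesn't spell out the wire protocol between phone and chassis, but the division of labor is easy to sketch: the phone does all the perception and planning, and streams simple wheel commands to a cheap microcontroller. Here's an illustrative (and entirely hypothetical) host-side loop using pyserial; the port name and CSV message format are assumptions, not OpenBot's actual interface.

```python
# Illustrative phone-as-brain control loop (hypothetical, not OpenBot's
# actual protocol): vision runs on the smartphone, and the $50 chassis
# only needs to parse simple wheel commands arriving over serial.
import time
import serial  # pyserial

link = serial.Serial("/dev/ttyUSB0", 115200, timeout=0.1)  # assumed port

def drive(left, right):
    """Send normalized wheel speeds in [-1, 1] as one CSV line."""
    link.write(f"{left:.2f},{right:.2f}\n".encode())

def follow_step(offset, base_speed=0.4, gain=0.8):
    """Steer toward a person-detection box whose horizontal offset from
    image center is in [-1, 1] (positive = target to the right)."""
    turn = max(-1.0, min(1.0, gain * offset))
    drive(base_speed + turn, base_speed - turn)

while True:
    offset = 0.0   # placeholder: would come from the phone's detector
    follow_step(offset)
    time.sleep(0.05)  # ~20 Hz command rate
```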

Video Friday: Poimo Is a Portable Inflatable E-Bike

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/robotics/robotics-hardware/video-friday-poimo-inflatable-ebike

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

IROS 2020 – October 25-29, 2020 – [Online]
ROS World 2020 – November 12, 2020 – [Online]
CYBATHLON 2020 – November 13-14, 2020 – [Online]
ICSR 2020 – November 14-16, 2020 – Golden, Colo., USA

Let us know if you have suggestions for next week, and enjoy today’s videos.


AI-Powered Drone Learns Extreme Acrobatics

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/robotics/drones/ai-powered-drone-extreme-acrobatics

Quadrotors are among the most agile and dynamic machines ever created. In the hands of a skilled human pilot, they can perform astonishing sequences of maneuvers. And while autonomous flying robots have been getting better at flying dynamically in real-world environments, they still haven’t demonstrated the same level of agility as manually piloted ones.

Now researchers from the Robotics and Perception Group at the University of Zurich and ETH Zurich, in collaboration with Intel, have developed a neural network training method that “enables an autonomous quadrotor to fly extreme acrobatic maneuvers with only onboard sensing and computation.” Extreme.

There are two notable things here: First, the quadrotor can do these extreme acrobatics outdoors without any kind of external camera or motion-tracking system to help it out (all sensing and computing is onboard). Second, all of the AI training is done in simulation, without the need for an additional simulation-to-real-world (what researchers call “sim-to-real”) transfer step. Usually, a sim-to-real transfer step means putting your quadrotor into one of those aforementioned external tracking systems, so that it doesn’t completely bork itself while trying to reconcile the differences between the simulated world and the real world, where, as the researchers wrote in a paper describing their system, “even tiny mistakes can result in catastrophic outcomes.”

To enable “zero-shot” sim-to-real transfer, the neural net training in simulation uses an expert controller that knows exactly what’s going on to teach a “student controller” that has much less perfect knowledge. That is, the simulated sensory input that the student ends up using as it learns to follow the expert has been abstracted to present the kind of imperfect, imprecise data it’s going to encounter in the real world. This can involve things like abstracting away the image part of the simulation until you’d have no way of telling the difference between abstracted simulation and abstracted reality, which is what allows the system to make that sim-to-real leap.
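
In code, that privileged-expert arrangement is a straightforward imitation loop. The sketch below is a schematic (PyTorch-style), not the authors' implementation; the simulator hooks `sim_step`, `expert_action`, and `abstract` are hypothetical stand-ins.

```python
# Schematic of privileged-expert -> student distillation in simulation.
import torch
import torch.nn as nn

class Student(nn.Module):
    """Maps abstracted observations to control commands."""
    def __init__(self, obs_dim, act_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, act_dim))

    def forward(self, obs):
        return self.net(obs)

def distill(sim_step, expert_action, abstract, student, opt, steps=10_000):
    """The expert sees privileged ground-truth state; the student only
    ever sees the abstracted sensor stream."""
    for _ in range(steps):
        state, raw_obs = sim_step()        # privileged state + raw sensors
        target = expert_action(state)      # expert label from full state
        pred = student(abstract(raw_obs))  # student acts on abstraction
        loss = nn.functional.mse_loss(pred, target)
        opt.zero_grad()
        loss.backward()
        opt.step()
```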

The simulation environment that the researchers used was Gazebo, slightly modified to better simulate quadrotor physics. Meanwhile, over in reality, a custom 1.5-kilogram quadrotor with a 4:1 thrust-to-weight ratio performed the physical experiments, using only an Nvidia Jetson TX2 computing board and an Intel RealSense T265, a dual fisheye camera module optimized for V-SLAM. To challenge the learning system, it was trained to perform three acrobatic maneuvers (the Power Loop, the Barrel Roll, and the Matty Flip) plus a combination of all three.

All of these maneuvers require high accelerations of up to 3 g’s and careful control, and the Matty Flip is particularly challenging, at least for humans, because the whole thing is done while the drone is flying backwards. Still, after just a few hours of training in simulation, the drone was totally real-world competent at these tricks, and could even extrapolate a little bit to perform maneuvers that it was not explicitly trained on, like doing multiple loops in a row. Where humans still have the advantage over drones (as you might expect, since we’re talking about robots) is in quickly reacting to novel or unexpected situations. And when you’re doing this sort of thing outdoors, novel and unexpected situations are everywhere, from a gust of wind to a jealous bird.

For more details, we spoke with Antonio Loquercio from the University of Zurich’s Robotics and Perception Group.

IEEE Spectrum: Can you explain how the abstraction layer interfaces with the simulated sensors to enable effective sim-to-real transfer?

Antonio Loquercio: The abstraction layer applies a specific function to the raw sensor information. Exactly the same function is applied to the real and simulated sensors. The result of the function, which is “abstracted sensor measurements,” makes simulated and real observations of the same scene similar. For example, suppose we have a sequence of simulated and real images. We can very easily tell the real ones apart from the simulated ones, given the difference in rendering. But if we apply the abstraction function of “feature tracks,” which are point correspondences in time, it becomes very difficult to tell which are the simulated and which are the real feature tracks, since point correspondences are independent of the rendering. This applies to humans as well as to neural networks: Training policies on raw images gives low sim-to-real transfer (since images are too different between domains), while training on the abstracted images has high transfer abilities.
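
As a concrete illustration of what such an abstraction function can look like (a generic sketch using standard OpenCV calls, not necessarily the paper's implementation), the same few lines run unchanged on simulated and real frames, and only the point correspondences survive:

```python
# Generic "feature tracks" abstraction: detect corners in one frame and
# track them into the next with Lucas-Kanade optical flow. Only point
# correspondences come out, so rendering differences wash away.
import cv2
import numpy as np

def feature_tracks(prev_gray, cur_gray, max_corners=64):
    """Return an (N, 2, 2) array of point correspondences between frames."""
    pts = cv2.goodFeaturesToTrack(prev_gray, max_corners,
                                  qualityLevel=0.01, minDistance=10)
    if pts is None:
        return np.empty((0, 2, 2), np.float32)
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, cur_gray, pts, None)
    ok = status.ravel() == 1
    return np.stack([pts[ok].reshape(-1, 2),
                     nxt[ok].reshape(-1, 2)], axis=1)
```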

How useful is visual input from a camera like the Intel RealSense T265 for state estimation during such aggressive maneuvers? Would using an event camera substantially improve state estimation?

Our end-to-end controller does not require a state estimation module. It shares, however, some components with traditional state estimation pipelines, specifically the feature extractor and the inertial measurement unit (IMU) pre-processing and integration function. The inputs of the neural network are feature tracks and integrated IMU measurements. When looking at images with few features (for example, when the camera points to the sky), the neural net will mainly rely on the IMU. When more features are available, the network uses them to correct the accumulated drift from the IMU. Overall, we noticed that for very short maneuvers, IMU measurements were sufficient for the task. However, for longer ones, visual information was necessary to successfully address the IMU drift and complete the maneuver. Indeed, visual information reduces the odds of a crash by up to 30 percent in the longest maneuvers. We definitely think that event cameras could improve the current approach even more, since they could provide valuable visual information during high-speed flight.
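
The integrated IMU measurements Loquercio mentions can be produced by a simple pre-integration step between camera frames. The sketch below is a bare-bones illustration of that idea (bias estimation and gravity handling omitted), not the actual pipeline:

```python
# Summarize high-rate IMU samples between two camera frames so the
# network receives a compact motion summary instead of raw ~1 kHz data.
import numpy as np

def integrate_imu(gyro, accel, dt):
    """gyro, accel: (N, 3) samples between two frames; dt: sample period.
    Returns (delta_angle, delta_velocity) as two length-3 vectors."""
    delta_angle = gyro.sum(axis=0) * dt      # rad, small-angle assumption
    delta_velocity = accel.sum(axis=0) * dt  # m/s, in the body frame
    return delta_angle, delta_velocity
```

The drift inherent in this kind of integration is exactly what the feature tracks correct, which matches the behavior described above: IMU alone suffices for short maneuvers, while longer ones need vision.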

You describe being able to train on “maneuvers that stretch the abilities of even expert human pilots.” What are some examples of acrobatics that your drones might be able to do that most human pilots would not be capable of?

The Matty Flip is probably one of the maneuvers that our approach can do very well, but that human pilots find very challenging. It basically entails doing a high-speed power loop while always looking backward. It is super challenging for humans, since they don’t see where they’re going and have trouble estimating their speed. For our approach the maneuver is no problem at all, since we can estimate forward velocities as well as backward velocities.

What are the limits to the performance of this system?

At the moment the main limitation is the maneuver duration. We never trained a controller that could perform maneuvers longer than 20 seconds. In the future, we plan to address this limitation and train general controllers which can fly in that agile way for significantly longer with relatively small drift. In this way, we could start being competitive against human pilots in drone racing competitions.

Can you talk about how the techniques developed here could be applied beyond drone acrobatics?

The current approach allows us to do acrobatics and agile flight in free space. We are now working to perform agile flight in cluttered environments, which requires a higher degree of understanding of the surroundings than this project did. Drone acrobatics is of course only an example application. We selected it because it serves as a stress test of controller performance. However, several other applications that require fast and agile flight can benefit from our approach. Examples are delivery (we want our Amazon packages always faster, don’t we?), search and rescue, or inspection. Going faster allows us to cover more space in less time, saving battery costs. Indeed, for an autonomous drone, agile flight consumes about as much battery as slow hovering.


“Deep Drone Acrobatics,” by Elia Kaufmann, Antonio Loquercio, René Ranftl, Matthias Müller, Vladlen Koltun, and Davide Scaramuzza from the Robotics and Perception Group at the University of Zurich and ETH Zurich, and Intel’s Intelligent Systems Lab, was presented at RSS 2020.

Video Friday: An In-Depth Look at Mesmer Humanoid Robot

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/robotics/robotics-hardware/video-friday-mesmer-humanoid-robot

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

AUVSI EXPONENTIAL 2020 – October 5-8, 2020 – [Online]
IROS 2020 – October 25-29, 2020 – [Online]
ROS World 2020 – November 12, 2020 – [Online]
CYBATHLON 2020 – November 13-14, 2020 – [Online]
ICSR 2020 – November 14-16, 2020 – Golden, Colo., USA

Let us know if you have suggestions for next week, and enjoy today’s videos.


How Toyota Research Envisions the Future of Robots

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/robotics/home-robots/toyota-research-future-of-robots

Yesterday, the Toyota Research Institute (TRI) showed off some of the projects that it’s been working on recently, including a ceiling-mounted robot that could one day help us with household chores. That system is just one example of how TRI envisions the future of robotics and artificial intelligence. As TRI CEO Gill Pratt told us, the company is focusing on robotics and AI technology for “amplifying, rather than replacing, human beings.” In other words, Toyota wants to develop robots not for convenience or to do our jobs for us, but rather to allow people to continue to live and work independently even as we age.

To better understand Toyota’s vision of robotics 15 to 20 years from now, it’s worth watching the 20-minute video below, which depicts various scenarios “where the application of robotic capabilities is enabling members of an aging society to live full and independent lives in spite of the challenges that getting older brings.” It’s a long video, but it helps explain TRI’s perspective on how robots will collaborate with humans in our daily lives over the next couple of decades.

Those are some interesting conceptual telepresence-controlled bipeds they’ve got running around in that video, right?

For more details, we sent TRI some questions on how it plans to go from concepts like the ones shown in the video to real products that can be deployed in human environments. Below are answers from TRI CEO Gill Pratt, who is also chief scientist for Toyota Motor Corp.; Steffi Paepcke, senior UX designer at TRI; and Max Bajracharya, VP of robotics at TRI.

IEEE Spectrum: TRI seems to have a more explicit focus on eventual commercialization than most of the robotics research that we cover. At what point does TRI start to think about things like reliability and cost?

Gill Pratt: It’s a really interesting question, because the normal way to think about this would be to say, well, both reliability and cost are product development tasks. But actually, we need to think about them at the earliest possible stage with research as well. With the hardware that we use in the laboratory for doing experiments, we don’t worry about cost there, or not nearly as much as you’d worry about for a product. However, in terms of what research we do, we very much have to think about whether it’s possible (if the research is successful) for it to end up in a product that has a reasonable cost. Because if a customer can’t afford what we come up with, maybe it has some academic value but it’s not actually going to make a difference in their quality of life in the real world. So we think about cost very much from the beginning.

The same is true with reliability. Right now, we’re working very hard to make our control techniques robust to wide variations in the environment. For instance, in work that Russ Tedrake is doing with manipulating dishes in a sink and a dishwasher, both in physical testing and in simulation, we’re doing thousands and now millions of different experiments to make sure that we can handle the edge cases and it works over a very wide range of conditions.

A tremendous amount of the work that we do is trying to bring robotics out of the age of doing demonstrations. There’s been a history of robotics where, for some time, things have not been reliable, so we’d catch the robot succeeding just once and then show that video to the world, and people would get the misimpression that it worked all of the time. Some researchers have been very good about showing the blooper reel too, to show that some of the time, robots don’t work.

In the spirit of sharing things that didn’t work, can you tell us a bit about some of the robots that TRI has had under development that didn’t make it into the demo yesterday because they were abandoned along the way?

Steffi Paepcke: We’re really looking at how we can connect people; it can be hard to stay in touch and see our loved ones as much as we would like to. There have been a few prototypes that we’ve worked on that had to be put on the shelf, at least for the time being. We were exploring how to use light so that people could be ambiently aware of one another across distances. I was very excited about that—the internal name was “glowing orb.” For a variety of reasons, it didn’t work out, but it was really fascinating to investigate different modalities for keeping in touch.

Another prototype we worked on—we found through our research that grocery shopping is obviously an important part of life, and for a lot of older adults, it’s not necessarily the right answer to always have groceries delivered. Getting up and getting out of the house keeps you physically active, and a lot of people prefer to continue doing it themselves. But it can be challenging, especially if you’re purchasing heavy items that you need to transport. We had a prototype that assisted with grocery shopping, but when we pivoted our focus to Japan, we found that the inside of a Japanese home really needs to stay inside, and the outside needs to stay outside, so a robot that traverses both domains is probably not the right fit for a Japanese audience, and those were some really valuable lessons for us.

I love that TRI is exploring things like the gantry robot both in terms of near-term research and as part of its long-term vision, but is a robot like this actually worth pursuing? Or more generally, what’s the right way to compromise between making an environment robot friendly, and asking humans to make changes to their homes?

Max Bajracharya: We think a lot about the problems that we’re trying to address in a holistic way. We don’t want to just give people a robot, and assume that they’re not going to change anything about their lifestyle. We have a lot of evidence from people who use automated vacuum cleaners that people will adapt to the tools you give them, and they’ll change their lifestyle. So we want to think about what is that trade between changing the environment, and giving people robotic assistance and tools.

We certainly think that there are ways to make the gantry system plausible. The one you saw today is obviously a prototype and does require significant infrastructure. If we’re going to retrofit a home, that isn’t going to be the way to do it. But we still feel like we’re very much in the prototype phase, where we’re trying to understand whether this is worth it to be able to bypass navigation challenges, and coming up with the pros and cons of the gantry system. We’re evaluating whether we think this is the right approach to solving the problem.

To what extent do you think humans should be either directly or indirectly in the loop with home and service robots?

Bajracharya: Our goal is to amplify people, so achieving this is going to require robots to be in a loop with people in some form. One thing we have learned is that using people in a slow loop with robots, such as teaching them or helping them when they make mistakes, gives a robot an important advantage over one that has to do everything perfectly 100 percent of the time. In unstructured human environments, robots are going to encounter corner cases, and are going to need to learn to adapt. People will likely play an important role in helping the robots learn.

Toyota Research Demonstrates Ceiling-Mounted Home Robot

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/robotics/home-robots/toyota-research-ceiling-mounted-home-robot

Over the last several years, Toyota has been putting more muscle into forward-looking robotics research than just about anyone. In addition to the Toyota Research Institute (TRI), there’s that massive 175-acre robot-powered city of the future that Toyota still plans to build next to Mount Fuji. Even Toyota itself acknowledges that it might be crazy, but that’s just how they roll—as TRI CEO Gill Pratt told me a while back, when Toyota decides to do something, they really do go all-in on it.

TRI has been focusing heavily on home robots, which is reflective of the long-term nature of what TRI is trying to do, because the home is both where we’ll need robots the most and where it’s going to be hardest to deploy them. The unpredictable nature of homes, and the fact that homes tend to have squishy fragile people in them, are robot-unfriendly characteristics, but as the population continues to age (an increasingly acute problem in Japan), homes offer an enormous amount of potential for helping us maintain our independence.

Today, Toyota is showing off some of the research that it’s been working on recently, in the form of a virtual reality presentation in lieu of an in-person press event. For journalists, TRI pre-loaded the recording onto a VR headset, which was FedExed to my house. You can watch the entire 40-minute presentation in 360 video on YouTube (or in VR if you have a headset of your own), but if you don’t watch the whole thing, you should at least check out the full-on GLaDOS (with arms) that TRI thinks belongs in your home.

17 Teams to Take Part in DARPA’s SubT Cave Circuit Competition

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/robotics/robotics-software/darpa-subt-cave-circuit-competition

Among all of the other in-person events that have been totally wrecked by COVID-19 is the Cave Circuit of the DARPA Subterranean Challenge. DARPA has already hosted the in-person events for the Tunnel and Urban SubT circuits (see our previous coverage here), and the plan had always been for a trio of events representing three uniquely different underground environments in advance of the SubT Finals, which will somehow combine everything into one bonkers course.

While the SubT Urban Circuit event snuck in just under the lockdown wire in late February, DARPA made the difficult (but prudent) decision to cancel the in-person Cave Circuit event. What this means is that there will be no Systems Track Cave competition, which is a serious disappointment—we were very much looking forward to watching teams of robots navigating through an entirely unpredictable natural environment with a lot of verticality. Fortunately, DARPA is still running a Virtual Cave Circuit, and 17 teams will be taking part in this competition featuring a simulated cave environment that’s as dynamic and detailed as DARPA can make it.

Video Friday: Researchers 3D Print Liquid Crystal Elastomer for Soft Robots

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/robotics/robotics-software/video-friday-3d-printed-liquid-crystal-elastomer

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

ICRES 2020 – September 28-29, 2020 – Taipei, Taiwan
AUVSI EXPONENTIAL 2020 – October 5-8, 2020 – [Online]
IROS 2020 – October 25-29, 2020 – [Online]
ROS World 2020 – November 12, 2020 – [Online]
CYBATHLON 2020 – November 13-14, 2020 – [Online]
ICSR 2020 – November 14-16, 2020 – Golden, Colo., USA

Let us know if you have suggestions for next week, and enjoy today’s videos.


Why You Should Be Very Skeptical of Ring’s Indoor Security Drone

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/robotics/drones/ring-indoor-security-drone

Yesterday, Ring, the smart home company owned by Amazon, announced the Always Home Cam, a “next-level indoor security” system in the form of a small autonomous drone. It costs US $250 and is designed to closely integrate with the rest of Ring’s home security hardware and software. Technologically, it’s impressive. But you almost certainly don’t want one.

Why Boston Dynamics Is Putting Legged Robots in Hospitals

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/robotics/medical-robots/boston-dynamics-legged-robots-hospitals

For the past eight months, Boston Dynamics has been trying to find ways in which their friendly yellow quadruped, Spot, can provide some kind of useful response to COVID-19. The company has been working with researchers from MIT and Brigham and Women’s Hospital in Massachusetts to use Spot as a telepresence-based extension for healthcare workers in suitable contexts, with the goal of minimizing exposure and preserving supplies of PPE.

For triaging sick patients, it’s necessary to collect a variety of vital data, including body temperature, respiration rate, pulse rate, and oxygen saturation. Boston Dynamics has helped to develop “a set of contactless monitoring systems for measuring vital signs and a tablet computer to enable face-to-face medical interviewing,” all of which fits neatly on Spot’s back. This system was recently tested in a medical tent for COVID-19 triage, which appeared to be a well-constrained and very flat environment that left us wondering whether a legged platform like Spot was really necessary in this particular application. What makes Spot unique (and relatively complex and expensive) is its ability to navigate around complex environments in an agile manner. But in a tent in a hospital parking lot, are you really getting your US $75,000 worth out of those legs, or would a wheeled platform do almost as well while being significantly simpler and more affordable?
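
Boston Dynamics hasn't published the details of those contactless monitoring systems, but as a toy illustration of the general idea, a periodic vital sign like respiration can be recovered from a camera-derived motion trace by finding its dominant frequency. Everything below, including the synthetic signal, is hypothetical:

```python
# Toy contactless-vitals sketch: estimate respiration rate from a chest
# motion trace via the dominant FFT peak. The trace here is synthetic.
import numpy as np

fs = 10.0                      # samples per second
t = np.arange(0, 60, 1 / fs)   # one minute of data
trace = np.sin(2 * np.pi * 0.25 * t) + 0.1 * np.random.randn(t.size)

spectrum = np.abs(np.fft.rfft(trace - trace.mean()))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
band = (freqs > 0.1) & (freqs < 0.7)          # 6-42 breaths/min window
rate_hz = freqs[band][np.argmax(spectrum[band])]
print(f"respiration: {rate_hz * 60:.0f} breaths/min")  # ~15 expected
```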

As it turns out, we weren’t the only ones who wondered whether Spot is really the best platform for this application. “We had the same response when we started getting pitched these opportunities in Feb / March,” Michael Perry, Boston Dynamics’ VP of business development commented on Twitter. “As triage tents started popping up in late March, though, there wasn’t confidence wheeled robots would be able to handle arbitrary triage environments (parking lots, lawns, etc).”

To better understand Spot’s value in this role, we sent Boston Dynamics a few questions about their approach to healthcare robots.

iRobot Remembers That Robots Are Expensive, Gives Us a Break With More Affordable Roomba i3

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/robotics/home-robots/irobot-roomba-i3

iRobot has released several new robots over the last few years, including the i7 and s9 vacuums. Both of these models are very fancy and very capable, packed with innovative and useful features that we’ve been impressed by. They’re both also quite expensive—with dirt docks included, you’re looking at US $800 for the i7+, and a whopping $1,100 for the s9+. You can knock a couple hundred bucks off of those prices if you don’t want the docks, but still, these vacuums are absolutely luxury items.

If you just want something that’ll do some vacuuming so that you don’t have to, iRobot has recently announced a new Roomba option. The Roomba i3 is iRobot’s new low to midrange vacuum, starting at $400. It’s not nearly as smart as the i7 or the s9, but it can navigate (sort of) and make maps (sort of) and do some basic smart home integration. If that sounds like all you need, the i3 could be the robot vacuum for you.

Video Friday: Bittle Is a Palm-Sized Robot Dog Now on Kickstarter

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/robotics/robotics-hardware/video-friday-bittle-robot-dog

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

ICRES 2020 – September 28-29, 2020 – Taipei, Taiwan
AUVSI EXPONENTIAL 2020 – October 5-8, 2020 – [Online]
IROS 2020 – October 25-29, 2020 – [Online]
CYBATHLON 2020 – November 13-14, 2020 – [Online]
ICSR 2020 – November 14-16, 2020 – Golden, Colo., USA

Let us know if you have suggestions for next week, and enjoy today’s videos.


Rongzhong Li, who is responsible for the adorable robotic cat Nybble, has an updated and even more adorable quadruped that’s more robust and agile but only costs around US $200 in kit form on Kickstarter.

Looks like the early bird options are sold out, but a full kit is a $225 pledge, for delivery in December.

[ Kickstarter ]

Thanks Rz!


I still maintain that Stickybot was one of the most elegantly designed robots ever.

[ Stanford ]


With the unpredictable health crisis of COVID-19 continuing to place high demands on hospitals, PAL Robotics have successfully completed testing of their delivery robots in Barcelona hospitals this summer. The TIAGo Delivery and TIAGo Conveyor robots were deployed in Hospital Municipal of Badalona and Hospital Clínic Barcelona following a winning proposal submitted to the European DIH-Hero project. Accerion sensors were integrated onto the TIAGo Delivery Robot and TIAGo Conveyor Robot for use in this project.

[ PAL Robotics ]


Energy Robotics, a leading developer of software solutions for mobile robots used in industrial applications, announced that its remote sensing and inspection solution for Boston Dynamics’s agile mobile robot Spot was successfully deployed at Merck’s thermal exhaust treatment plant at its headquarters in Darmstadt, Germany. Energy Robotics equipped Spot with sensor technology and remote supervision functions to support the inspection mission.

Combining Boston Dynamics’ intuitive controls, robotic intelligence and open interface with Energy Robotics’ control and autonomy software, user interface and encrypted cloud connection, Spot can be taught to autonomously perform a specific inspection round while being supervised remotely from anywhere with internet connectivity. Multiple cameras and industrial sensors enable the robot to find its way around while recording and transmitting information about the facility’s onsite equipment operations.

Spot reads the displays of gauges in its immediate vicinity and can also zoom in on distant objects using an externally-mounted optical zoom lens. In the thermal exhaust treatment facility, for instance, it monitors cooling water levels and notes whether condensation water has accumulated. Outside the facility, Spot monitors pipe bridges for anomalies.

Among the robot’s many abilities, it can detect defects of wires or the temperature of pump components using thermal imaging. The robot was put through its paces on a comprehensive course that tested its ability to handle special challenges such as climbing stairs, scaling embankments and walking over grating.

[ Energy Robotics ]

Thanks Stefan!


Boston Dynamics really should give Dr. Guero an Atlas just to see what he can do with it.

[ DrGuero ]


World’s First Socially Distanced Birthday Party: Located in London, the robotic arm was piloted in real time to light the candles on the cake by the founder of Extend Robotics, Chang Liu, who sat 50 miles away in Reading. Other team members in Manchester and Reading were also able to join in the celebration as the robot was used to accurately light the candles on the birthday cake.

[ Extend Robotics ]


The Robocon in-person competition was canceled this year, but check out Tokyo University’s robots in action:

[ Robocon ]


Sphero has managed to pack an entire Sphero into a much smaller sphere.

[ Sphero ]


Squishy Robotics, a small business funded by the National Science Foundation (NSF), is developing mobile sensor robots for use in disaster rescue, remote monitoring, and space exploration. The shape-shifting mobile sensor robots from UC Berkeley spin-off Squishy Robotics can be dropped from airplanes or drones and can provide first responders with ground-based situational awareness during fires, hazardous materials (HazMat) releases, and natural and man-made disasters.

[ Squishy Robotics ]


Meet Jasper, the small girl with big dreams to FLY. Created by UTS Animal Logic Academy in partnership with the Royal Australian Air Force to encourage girls to soar above the clouds. Jasper was created using a hybrid of traditional animation techniques and technology such as robotics and 3D printing. A KUKA QUANTEC robot was used during the filmmaking to help the Royal Australian Air Force tell their story in a unique way. UTS adapted their high-accuracy robot to film consistent paths, creating a video with physical sets and digital characters.

[ AU AF ]


Impressive what the Ghost Robotics V60 can do without any vision sensors on it.

[ Ghost Robotics ]


Is your job moving tiny amounts of liquid around? Would you rather be doing something else? ABB’s YuMi got you.

[ YuMi ]


For his PhD work at the Media Lab, biomechatronics researcher Roman Stolyarov developed a terrain-adaptive control system for robotic leg prostheses, as a way to help people with amputations feel as able-bodied and mobile as possible by allowing them to walk seamlessly regardless of the ground terrain.

[ MIT ]


This robot collects data on each cow when she enters to be milked. Milk samples and 3D photos can be taken to monitor the cow’s health status. The Ontario Dairy Research Centre in Elora, Ontario, is leading dairy innovation through education and collaboration. It is a state-of-the-art 175,000 square foot facility for discovery, learning and outreach. This centre is a partnership between the Agricultural Research Institute of Ontario, OMAFRA, the University of Guelph and the Ontario dairy industry.

[ University of Guelph ]


Australia has one of these now. Should the rest of us panic?

[ Boeing ]


Daimler and Torc are developing Level 4 automated trucks for the real world. Here is a glimpse into our closed-course testing, routes on public highways in Virginia, and self-driving capabilities development. Our year of collaborating on the future of transportation culminated in the announcement of our new truck testing center in New Mexico.

[ Torc Robotics ]


GITAI Sending Autonomous Robot to Space Station

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/robotics/space-robots/gitai-autonomous-robot-iss

We’ve been keeping a close watch on GITAI since early last year—what caught our interest initially is the history of the company, which includes a bunch of folks who started in the JSK Lab at the University of Tokyo, won the DARPA Robotics Challenge Trials as SCHAFT, got swallowed by Google, narrowly avoided being swallowed by SoftBank, and are now designing robots that can work in space.

The GITAI YouTube channel has kept us more or less up to date on their progress so far, and GITAI has recently announced the next step in this effort: the deployment of one of their robots on board the International Space Station in 2021.

NASA Study Proposes Airships, Cloud Cities for Venus Exploration

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/aerospace/space-flight/nasa-study-proposes-airships-cloud-cities-for-venus-exploration

Editor’s note: It’s been 35 years since a pair of robotic balloons explored the clouds of Venus. The Soviet Vega 1 and Vega 2 probes didn’t manage to find any Venusians, but that may have been because we didn’t know exactly what to look for. On 14 September 2020, a study published in Nature suggested that traces of phosphine in Venus’s atmosphere could be an indication of a biological process: that is, of microbial alien life. If confirmed, such a finding could completely change the way we think about the universe, which has us taking a serious look at what it would take to get human explorers to Venus in the near future. This article was originally published on 16 December 2014.

It has been accepted for decades that Mars is the next logical place for humans to explore. Mars certainly seems to offer the most Earth-like environment of any other place in the solar system, and it’s closer to Earth than just about anyplace else, except Venus. But exploration of Venus has always been an enormous challenge: Venus’s surface is hellish, with 92 atmospheres of pressure and temperatures of nearly 500 °C.

The surface of Venus isn’t going to work for humans, but what if we ignore the surface and stick to the clouds? Dale Arney and Chris Jones, from the Space Mission Analysis Branch of NASA’s Systems Analysis and Concepts Directorate at Langley Research Center, in Virginia, have been exploring that idea. Perhaps humans could ride through the upper atmosphere of Venus in a solar-powered airship. Arney and Jones propose that it may make sense to go to Venus before we ever send humans to Mars.

To put NASA’s High Altitude Venus Operational Concept (HAVOC) mission in context, it helps to start thinking about exploring the atmosphere of Venus instead of exploring the surface. “The vast majority of people, when they hear the idea of going to Venus and exploring, think of the surface, where it’s hot enough to melt lead and the pressure is the same as if you were almost a mile underneath the ocean,” Jones says. “I think that not many people have gone and looked at the relatively much more hospitable atmosphere and how you might tackle operating there for a while.”

At 50 kilometers above its surface, Venus offers one atmosphere of pressure and only slightly lower gravity than Earth. Mars, in comparison, has a “sea level” atmospheric pressure of less than a hundredth of Earth’s, and gravity just over a third Earth normal. The temperature at 50 km on Venus is around 75 °C, which is a mere 17 degrees hotter than the highest temperature recorded on Earth. It averages -63 °C on Mars, and while neither extreme would be pleasant for an unprotected human, both are manageable.

What’s more important, especially relative to Mars, is the amount of solar power available on Venus and the amount of protection that Venus has from radiation. The amount of radiation an astronaut would be exposed to in Venus’s atmosphere would be “about the same as if you were in Canada,” says Arney. On Mars, unshielded astronauts would be exposed to about 0.67 millisieverts per day, which is 40 times as much as on Earth, and they’d likely need to bury their habitats several meters beneath the surface to minimize exposure. As for solar power, proximity to the sun gets Venus 40 percent more than we get here on Earth, and 240 percent more than we’d see on Mars. Put all of these numbers together and as long as you don’t worry about having something under your feet, Jones points out, the upper atmosphere of Venus is “probably the most Earth-like environment that’s out there.”


It’s also important to note that Venus is often significantly closer to Earth than Mars is. Because of how the orbits of Venus and Earth align over time, a crewed mission to Venus would take a total of 440 days using existing or very near-term propulsion technology: 110 days out, a 30-day stay, and then 300 days back—with the option to abort and begin the trip back to Earth immediately after arrival. That sounds like a long time to spend in space, and it absolutely is. But getting to Mars and back using the same propulsive technology would involve more than 500 days in space at a minimum. A more realistic Mars mission would probably last anywhere from 650 to 900 days (or longer) due to the need to wait for a favorable orbital alignment for the return journey, which means that there’s no option to abort the mission and come home earlier: If anything went wrong, astronauts would have to just wait around on Mars until their return window opened.
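
The round-trip arithmetic behind those figures, spelled out:

```python
# Mission-leg totals quoted in the paragraph above.
outbound, stay, inbound = 110, 30, 300      # days
print(outbound + stay + inbound)            # 440 days to Venus and back
# versus 500+ days minimum for Mars with comparable propulsion, and
# 650-900 days for a realistic Mars mission with its return-window wait.
```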

HAVOC comprises a series of missions that would begin by sending a robot into the atmosphere of Venus to check things out. That would be followed up by a crewed mission to Venus orbit with a stay of 30 days, and then a mission that includes a 30-day atmospheric stay. Later missions would have a crew of two spend a year in the atmosphere, and eventually there would be a permanent human presence there in a floating cloud city.

The defining feature of these missions is the vehicle that will be doing the atmospheric exploring: a helium-filled, solar-powered airship. The robotic version would be 31 meters long (about half the size of the Goodyear blimp), while the crewed version would be nearly 130 meters long, or twice the size of a Boeing 747. The top of the airship would be covered with more than 1,000 square meters of solar panels, with a gondola slung underneath for instruments and, in the crewed version, a small habitat and the ascent vehicle that the astronauts would use to return to Venus’s orbit, and home.

Getting an airship to Venus is not a trivial task, and getting an airship to Venus with humans inside it is even more difficult. The crewed mission would involve a Venus orbit rendezvous, where the airship itself (folded up inside a spacecraft) would be sent to Venus ahead of time. Humans would follow in a transit vehicle (based on NASA’s Deep Space Habitat), linking up with the airship in Venus orbit.

Since there’s no surface to land on, the “landing” would be extreme, to say the least. “Traditionally, say if you’re going to Mars, you talk about ‘entry, descent, and landing,’ or EDL,” explains Arney. “Obviously, in our case, ‘landing’ would represent a significant failure of the mission, so instead we have ‘entry, descent, and inflation,’ or EDI.” The airship would enter the Venusian atmosphere inside an aeroshell at 7,200 meters per second. Over the next seven minutes, the aeroshell would decelerate to 450 m/s, and it would deploy a parachute to slow itself down further. At this point, things get crazy. The aeroshell would drop away, and the airship would begin to unfurl and inflate itself, while still dropping through the atmosphere at 100 m/s. As the airship got larger, its lift and drag would both increase to the point where the parachute became redundant. The parachute would be jettisoned, the airship would fully inflate, and (if everything has gone as it’s supposed to) it would gently float to a stop at 50 km above Venus’s surface.
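
For a sense of scale, the numbers in that sequence imply an average deceleration of roughly 1.6 g during the aeroshell phase (the peak load would be considerably higher):

```python
# Average deceleration implied by the entry figures quoted above.
G0 = 9.81
decel = (7200 - 450) / (7 * 60)   # m/s^2, over roughly seven minutes
print(f"{decel:.0f} m/s^2, about {decel / G0:.1f} g average")
```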

Near the equator of Venus (where the atmosphere is most stable), winds move at about 100 meters per second, circling the planet in just 110 hours. Venus itself barely rotates, and one Venusian day takes longer than a Venusian year does. The slow day doesn’t really matter, however, because for all practical purposes the 110-hour wind circumnavigation becomes the length of one day/night cycle. The winds also veer north, so to stay on course, the airship would push south during the day, when solar energy is plentiful, and drift north when it needs to conserve power at night.
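
That 110-hour figure checks out against simple geometry:

```python
# Sanity check on the circumnavigation time: Venus's equatorial radius
# is about 6,052 km, plus the 50 km flight altitude.
import math
circumference_km = 2 * math.pi * (6052 + 50)
hours = circumference_km / (100 * 3.6)   # 100 m/s = 360 km/h
print(f"{hours:.0f} hours")              # ~107, consistent with the ~110 quoted
```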

Meanwhile, the humans would be busy doing science from inside a small (21-cubic-meter) habitat, based on NASA’s existing Space Exploration Vehicle concept. There’s not much reason to perform extravehicular activities, so that won’t even be an option, potentially making things much simpler and safer (if a bit less exciting) than a trip to Mars.

The airship has a payload capacity of 70,000 kilograms. Of that, nearly 60,000 kg will be taken up by the ascent vehicle, a winged two-stage rocket slung below the airship. (If this looks familiar, it’s because it’s based on the much smaller Pegasus rocket, which is used to launch satellites into Earth orbit from beneath a carrier aircraft.) When it’s time to head home, the astronauts would get into a tiny capsule on the front of the rocket, drop from the airship, and then blast back into orbit. There, they’ll meet up with their transit vehicle and take it back to Earth orbit. The final stage is to rendezvous in Earth orbit with one final capsule (likely Orion), which the crew will use to make the return to Earth’s surface.

The HAVOC team believes that its concept offers a realistic target for crewed exploration in the near future, pending moderate technological advancements and support from NASA. Little about HAVOC is dependent on technology that isn’t near-term. The primary restriction that a crewed version of HAVOC would face is that in its current incarnation it depends on the massive Block IIB configuration of the Space Launch System, which may not be ready to fly until the late 2020s. Several proof-of-concept studies have already been completed. These include testing Teflon coating that can protect solar cells (and other materials) from the droplets of concentrated sulfuric acid that are found throughout Venus’s atmosphere and verifying that an airship with solar panels can be packed into an aeroshell and successfully inflated out of it, at least at 1/50 scale.

Many of the reasons that we’d want to go to Venus are identical to the reasons that we’d want to go to Mars, or anywhere else in the solar system, beginning with the desire to learn and explore. With the notable exception of the European Space Agency’s Venus Express orbiter, the second planet from the sun has been largely ignored since the 1980s, despite its proximity and potential for scientific discovery. HAVOC, Jones says, “would be characterizing the environment not only for eventual human missions but also to understand the planet and how it’s evolved and the runaway greenhouse effect and everything else that makes Venus so interesting.” If the airships bring small robotic landers with them, HAVOC would complete many if not most of the science objectives that NASA’s own Venus Exploration Analysis Group has been promoting for the past two decades.

“Venus has value as a destination in and of itself for exploration and colonization,” says Jones. “But it’s also complementary to current Mars plans.…There are things that you would need to do for a Mars mission, but we see a little easier path through Venus.” For example, in order to get to Mars, or anywhere else outside of the Earth-moon system, we’ll need experience with long-duration habitats, aerobraking and aerocapture, and carbon dioxide processing, among many other things. Arney continues: “If you did Venus first, you could get a leg up on advancing those technologies and those capabilities ahead of doing a human-scale Mars mission. It’s a chance to do a practice run, if you will, of going to Mars.”

It would take a substantial policy shift at NASA to put a crewed mission to Venus ahead of one to Mars, no matter how much sense it might make to take a serious look at HAVOC. But that in no way invalidates the overall concept for the mission, the importance of a crewed mission to Venus, or the vision of an eventual long-term human presence there in cities in the clouds. “If one does see humanity’s future as expanding beyond just Earth, in all likelihood, Venus is probably no worse than the second planet you might go to behind Mars,” says Arney. “Given that Venus’s upper atmosphere is a fairly hospitable destination, we think it can play a role in humanity’s future in space.”

Zipline Partners With Walmart on Commercial Drone Delivery

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/robotics/drones/zipline-walmart-drone-delivery

Today, Walmart and Zipline are announcing preliminary plans “to bring first-of-its kind drone delivery service to the United States.” What makes this drone-delivery service the first of its kind is that Zipline uses fixed-wing drones rather than rotorcraft, giving them a relatively large payload capacity and very long range at the cost of a significantly more complicated launch, landing, and delivery process. Zipline has made this work very well in Rwanda, and more recently in North Carolina. But expanding into commercial delivery to individual households is a much different challenge. 

Along with a press release that doesn’t say much, Walmart and Zipline have released a short video of how they see the delivery operation happening, and it’s a little bit more, uh, optimistic than we’re entirely comfortable with.

Video Friday: Drone Helps Explore World’s Deepest Ice Caves

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/robotics/robotics-hardware/video-friday-flyability-drone-greenland

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

ICRES 2020 – September 28-29, 2020 – Taipei, Taiwan
AUVSI EXPONENTIAL 2020 – October 5-8, 2020 – [Online Conference]
IROS 2020 – October 25-29, 2020 – Las Vegas, Nev., USA
CYBATHLON 2020 – November 13-14, 2020 – [Online Event]
ICSR 2020 – November 14-16, 2020 – Golden, Colo., USA

Let us know if you have suggestions for next week, and enjoy today’s videos.


Video Friday: Even Robots Know That You Need a Mask

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/robotics/robotics-hardware/video-friday-9420

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

CLAWAR 2020 – August 24-26, 2020 – [Online Conference]
Other Than Human – September 3-10, 2020 – Stockholm, Sweden
ICRES 2020 – September 28-29, 2020 – Taipei, Taiwan
AUVSI EXPONENTIAL 2020 – October 5-8, 2020 – [Online Conference]
IROS 2020 – October 25-29, 2020 – Las Vegas, Nev., USA
CYBATHLON 2020 – November 13-14, 2020 – [Online Event]
ICSR 2020 – November 14-16, 2020 – Golden, Colo., USA

Let us know if you have suggestions for next week, and enjoy today’s videos.


Video Friday: This Robot Will Restock Shelves at Japanese Convenience Stores

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/robotics/robotics-hardware/video-friday-telexistence-model-t-robot

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

CLAWAR 2020 – August 24-26, 2020 – [Online Conference]
ICUAS 2020 – September 1-4, 2020 – Athens, Greece
ICRES 2020 – September 28-29, 2020 – Taipei, Taiwan
AUVSI EXPONENTIAL 2020 – October 5-8, 2020 – [Online Conference]
IROS 2020 – October 25-29, 2020 – Las Vegas, Nev., USA
CYBATHLON 2020 – November 13-14, 2020 – [Online Event]
ICSR 2020 – November 14-16, 2020 – Golden, Colo., USA

Let us know if you have suggestions for next week, and enjoy today’s videos.