Tag Archives: robotics/humanoids

Video Friday: Japan’s Gundam Robot Takes BIG Step Forward

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/robotics/humanoids/video-friday-japan-gundam-robot-big-step

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

ACRA 2020 – December 8-10, 2020 – [Online]

Let us know if you have suggestions for next week, and enjoy today’s videos.


Why We Need a Robot Registry


Post Syndicated from Stacey Higginbotham original https://spectrum.ieee.org/robotics/humanoids/why-we-need-a-robot-registry

I have a confession to make: A robot haunts my nightmares. For me, Boston Dynamics’ Spot robot is 32.5 kilograms (71.7 pounds) of pure terror. It can climb stairs. It can open doors. Seeing it in a video cannot prepare you for the moment you cross paths on a trade-show floor. Now that companies can buy a Spot robot for US $74,500, you might encounter Spot anywhere.

Spot robots now patrol public parks in Singapore to enforce social distancing during the pandemic. They meet with COVID-19 patients at Boston’s Brigham and Women’s Hospital so that doctors can conduct remote consultations. Imagine coming across Spot while walking in the park or returning to your car in a parking garage. Wouldn’t you want to know why this hunk of metal is there and who’s operating it? Or at least whom to call to report a malfunction?

Robots are becoming more prominent in daily life, which is why I think governments need to create national registries of robots. Such a registry would let citizens and law enforcement look up the owner of any roaming robot, as well as learn that robot’s purpose. It’s not a far-fetched idea: The U.S. Federal Aviation Administration already has a registry for drones.

Governments could create national databases that require any companies operating robots in public spaces to report the robot make and model, its purpose, and whom to contact if the robot breaks down or causes problems. To allow anyone to use the database, all public robots would have an easily identifiable marker or model number on their bodies. Think of it as a license plate or pet microchip, but for bots.
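What would such a registry actually hold? A minimal sketch makes the idea concrete. Every field name and the sample entry below are invented for illustration; this is not modeled on the FAA drone registry or any existing database.

```python
from dataclasses import dataclass

@dataclass
class RobotRegistration:
    """One hypothetical entry in a national public-robot registry."""
    registration_id: str  # the "license plate" marked on the robot's body
    make_and_model: str
    operator: str         # company responsible for the robot
    purpose: str          # why the robot is in a public space
    contact: str          # whom to call about malfunctions or complaints

registry = {
    "PR-00042": RobotRegistration(
        registration_id="PR-00042",
        make_and_model="Boston Dynamics Spot",
        operator="Example Facilities Co.",  # invented operator
        purpose="After-hours patrol of a parking garage",
        contact="+1-555-0100",              # fictional number
    ),
}

def look_up(plate: str) -> str:
    """What a citizen or police officer sees after reading the robot's marker."""
    r = registry.get(plate)
    if r is None:
        return f"{plate}: no registration found; report this robot."
    return (f"{r.make_and_model}, operated by {r.operator}. "
            f"Purpose: {r.purpose}. Contact: {r.contact}.")

print(look_up("PR-00042"))
```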

There are some smaller-scale registries today. San Jose’s Department of Transportation (SJDOT), for example, is working with Kiwibot, a delivery robot manufacturer, to get real-time data from the robots as they roam the city’s streets. The Kiwibots report their location to SJDOT using the open-source Mobility Data Specification, which was originally developed by Los Angeles to track Bird scooters.

Real-time location reporting makes sense for Kiwibots and Spots wandering the streets, but it’s probably overkill for bots confined to cleaning floors or patrolling parking lots. That said, any robots that come in contact with the general public should clearly provide basic credentials and a way to hold their operators accountable. Given that many robots use cameras, people may also be interested in looking up who’s collecting and using that data.

I started thinking about robot registries after Spot became available in June for anyone to purchase. The idea gained specificity after listening to Andra Keay, founder and managing director at Silicon Valley Robotics, discuss her five rules of ethical robotics at an Arm event in October. I had already been thinking that we needed some way to track robots, but her suggestion to tie robot license plates to a formal registry made me realize that people also need a way to clearly identify individual robots.

Keay pointed out that in addition to sating public curiosity and keeping an eye on robots that could cause harm, a registry could also track robots that have been hacked. For example, robots at risk of being hacked and running amok could be required to report their movements to a database, even if they’re typically restricted to a grocery store or warehouse. While we’re at it, Spot robots should be required to have sirens, because there’s no way I want one of those sneaking up on me.

This article appears in the December 2020 print issue as “Who’s Behind That Robot?”

AI-Directed Robotic Hand Learns How to Grasp

Post Syndicated from Michelle Hampson original https://spectrum.ieee.org/automaton/robotics/humanoids/robotic-hand-uses-artificial-neural-network-to-learn-how-to-grasp-different-objects

Reaching for a nearby object seems like a mindless task, but the action requires a sophisticated neural network that took humans millions of years to evolve. Now, robots are acquiring that same ability using artificial neural networks. In a recent study, a robotic hand “learns” to pick up objects of different shapes and hardness using three different grasping motions.  

The key to this development is something called a spiking neuron. Like real neurons in the brain, artificial neurons in a spiking neural network (SNN) communicate through discrete spikes whose timing encodes and processes temporal information. Researchers study SNNs because this approach may yield insights into how biological neural networks function, including our own.
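For a flavor of the building block involved: the most common spiking model, the leaky integrate-and-fire neuron, accumulates input over time and emits a discrete spike when its membrane potential crosses a threshold. The minimal sketch below is generic SNN machinery with arbitrary parameters, not the network from the study.

```python
def lif_neuron(inputs, threshold=1.0, leak=0.9):
    """Leaky integrate-and-fire neuron: the membrane potential decays
    each step, integrates the input current, and spikes (then resets)
    when it crosses the threshold."""
    potential = 0.0
    spikes = []
    for current in inputs:
        potential = leak * potential + current  # leak, then integrate
        if potential >= threshold:
            spikes.append(1)   # emit a spike
            potential = 0.0    # reset after firing
        else:
            spikes.append(0)
    return spikes

# The same total input delivered in a burst vs. spread out yields different
# spike timing, which is how SNNs carry temporal information.
print(lif_neuron([0.6, 0.6, 0.0, 0.0, 0.3, 0.3, 0.3, 0.3]))
```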

“The programming of humanoid or bio-inspired robots is complex,” says Juan Camilo Vasquez Tieck, a research scientist at FZI Forschungszentrum Informatik in Karlsruhe, Germany. “And classical robotics programming methods are not always suitable to take advantage of their capabilities.”

Conventional robotic systems must perform extensive calculations, Tieck says, to track trajectories and grasp objects. But a robotic system like Tieck’s, which relies on an SNN, first trains its neural net to better model system and object motions, after which it can grasp items more autonomously, adapting to the motion in real time.

The new robotic system by Tieck and his colleagues uses an existing robotic hand, called a Schunk SVH 5-finger hand, which has the same number of fingers and joints as a human hand.

The researchers incorporated an SNN into their system, which is divided into several sub-networks. One sub-network controls each finger individually, either flexing or extending the finger. Another handles the type of grasping movement, determining whether the robotic hand needs to perform a pinching, spherical, or cylindrical motion.

For each finger, a neural circuit detects contact with an object using the currents of the motors and the velocity of the joints. When contact with an object is detected, a controller is activated to regulate how much force the finger exerts.

“This way, the movements of generic grasping motions are adapted to objects with different shapes, stiffness and sizes,” says Tieck. The system can also adapt its grasping motion quickly if the object moves or deforms.
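Putting the two preceding paragraphs together: contact is inferred from motor current (a proxy for torque) rising while the joint stalls, and a force regulator takes over once contact is detected. The sketch below restates that logic in plain, non-spiking code; the thresholds and gain are invented, and the actual system implements this with spiking sub-networks rather than if-statements.

```python
def finger_step(motor_current_a: float, joint_velocity_rad_s: float,
                target_force: float, measured_force: float) -> float:
    """One control step for one finger. Returns a motor command.
    Thresholds and gain are illustrative only."""
    CURRENT_THRESHOLD = 0.8    # A: high current means the motor is pushing on something
    VELOCITY_THRESHOLD = 0.05  # rad/s: low velocity means the joint has stalled
    K_FORCE = 0.5              # proportional force-regulation gain

    in_contact = (motor_current_a > CURRENT_THRESHOLD
                  and abs(joint_velocity_rad_s) < VELOCITY_THRESHOLD)
    if not in_contact:
        return 1.0  # keep flexing at the nominal command until contact
    # In contact: regulate force so soft or shifting objects aren't crushed.
    return K_FORCE * (target_force - measured_force)

print(finger_step(0.3, 0.40, target_force=2.0, measured_force=0.0))  # free motion
print(finger_step(1.1, 0.01, target_force=2.0, measured_force=1.5))  # force regulation
```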

The robotic grasping system is described in a study published October 24 in IEEE Robotics and Automation Letters. The researchers’ robotic hand used its three different grasping motions on objects without knowing their properties. Target objects included a plastic bottle, a soft ball, a tennis ball, a sponge, a rubber duck, different balloons, a pen, and a tissue pack. The researchers found, for one, that pinching motions required more precision than cylindrical or spherical grasping motions.

“For this approach, the next step is to incorporate visual information from event-based cameras and integrate arm motion with SNNs,” says Tieck. “Additionally, we would like to extend the hand with haptic sensors.”

The long-term goal, he says, is to develop “a system that can perform grasping similar to humans, without intensive planning for contact points or intense stability analysis, and [that is] able to adapt to different objects using visual and haptic feedback.”
 

Disney Research Makes Robotic Gaze Interaction Eerily Lifelike

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/robotics/humanoids/disney-research-robotic-gaze

While it’s not totally clear to what extent human-like robots are better than conventional robots for most applications, one area where I’m personally comfortable with them is entertainment. The folks over at Disney Research, who are all about entertainment, have been working on this sort of thing for a very long time, and some of their animatronic attractions are actually quite impressive.

The next step for Disney is to get its animatronic figures, which currently feature scripted behaviors, to interact with visitors. The challenge is that this is where you start to get into potential Uncanny Valley territory, the unease that can arise when you try to create “the illusion of life,” which is exactly what Disney (they explicitly say) is trying to do.

In a paper presented at IROS this month, a team from Disney Research, Caltech, University of Illinois at Urbana-Champaign, and Walt Disney Imagineering is trying to nail that illusion of life with a single, and perhaps most important, social cue: eye gaze.
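The paper’s full architecture isn’t reproduced here, but robotic gaze systems of this general kind typically score candidate targets by salience and habituate to whatever they are currently fixating, so attention eventually shifts, which reads as lifelike rather than stare-y. A generic sketch of that pattern, with all names and numbers invented:

```python
def choose_gaze_target(targets, current, habituation):
    """Pick the most salient target, discounting the currently fixated one
    by accumulated habituation. A generic attention pattern, not Disney's
    published system."""
    def score(name, salience):
        return salience - (habituation if name == current else 0.0)
    return max(targets, key=lambda t: score(*t))[0]

# (name, salience) pairs: after staring at visitor_A long enough,
# habituation builds and the gaze shifts to visitor_B.
targets = [("visitor_A", 0.9), ("visitor_B", 0.7), ("ambient_motion", 0.2)]
print(choose_gaze_target(targets, current="visitor_A", habituation=0.0))  # visitor_A
print(choose_gaze_target(targets, current="visitor_A", habituation=0.5))  # visitor_B
```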

Video Friday: Robot vs. Human Workout Challenge

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/robotics/humanoids/video-friday-digit-robot-personal-trainer

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

ICRA 2020 – May 31, 2020 – [Virtual Conference]  * * * STARTS SUNDAY!  REGISTER NOW! * * *
RSS 2020 – July 12-16, 2020 – [Virtual Conference]
CLAWAR 2020 – August 24-26, 2020 – Moscow, Russia
ICUAS 2020 – September 1-4, 2020 – Athens, Greece
ICRES 2020 – September 28-29, 2020 – Taipei, Taiwan
ICSR 2020 – November 14-16, 2020 – Golden, Colorado

Let us know if you have suggestions for next week, and enjoy today’s videos.


The Short, Strange Life of the First Friendly Robot

Post Syndicated from Yulia Frumer original https://spectrum.ieee.org/robotics/humanoids/the-short-strange-life-of-the-first-friendly-robot

In 1923, a play featuring artificial humans opened in Tokyo. Rossum’s Universal Robots—or R.U.R., as it had become known—had premiered two years earlier in Prague and had already become a worldwide sensation. The play, written by Karel Čapek, describes the creation of enslaved synthetic humans, or robots—a term derived from robota, the Czech word for “forced labor.” Čapek’s robots, originally made to serve their human masters, gained consciousness and rebelled, soon killing all humans on Earth. In the play’s final scene, the robots reveal that they possess emotions just like we do, and the audience is left wondering whether they would also achieve the ability to reproduce—the only thing still separating robots from humans.

The play was deeply disturbing for Makoto Nishimura, a 40-year-old professor of marine biology at the Hokkaido Imperial University, in the northern Japanese city of Sapporo. As Nishimura would later explain in a newspaper article, he was troubled because the play convincingly portrayed “the emergence of a perverse world in which humans become subordinate to artificial humans.” A machine modeled on a human being but designed to work as a slave implies that the model itself (that is, we humans) is a slave, too, he argued. More concerning for Nishimura, a struggle between humans and artificial humans was an aberration, something that went against nature.

Perhaps he wouldn’t have been so upset by a work of fiction if he hadn’t witnessed how this fantasy was already becoming a reality in his world. He was familiar with early European and Japanese automata—mechanical figures devised to exhibit autonomous behaviors. While some of these mechanical wonders displayed noble or creative abilities, like playing musical instruments, drawing, writing calligraphy, or shooting arrows, others carried out mindless tasks. The latter were the ones that troubled Nishimura.

By the end of the 19th century, the world had also seen a variety of “steam men”—walking humanoids powered by internal steam engines (later models were powered by electric motors or gasoline engines) that appeared to pull carriages or floats. Many of these were developed in the United States. In the early 20th century, mechanical nurses, beauty queens, and policemen were introduced, and in his writings Nishimura also mentions mechanical receptionists, boat operators, and traffic cops. He may have seen some of these machines while he was living in New York City from 1916 to 1919 and pursuing a doctorate at Columbia University.

From our perspective, those machines seem more like curiosities than working devices. But for Nishimura, their potential to perform useful labor was as convincing as today’s new AI and robots are to us. As a scientist he had witnessed attempts to create artificial cells in the laboratory, which only reinforced his belief that artificial human beings would one day populate Earth. The question was, what kind of artificial humans would those be, and what kind of relationship would they have with biological humans?

In Nishimura’s opinion, the character of artificial humans—or any technology, for that matter—was determined by the intentions of their creators. And the intent behind carriage-pulling, steam-powered humanoids was to create workers enslaved by their human masters. That, in Nishimura’s mind, would inevitably lead to the formation of an R.U.R.-style exploited underclass that was bound to rebel.

Fearing a scenario that he described as humanity “destroyed by the pinnacle of its creation,” Nishimura decided to intervene, hoping to change the course of history. His solution was to create a different kind of artificial human, one that would celebrate nature and manifest humanity’s loftiest ideals—a robot that was not a slave but a friend, and even an inspirational model, to people. In 1926, he resigned his professorship, moved to Osaka, and started building his ideal artificial human.

Nishimura’s creation was a direct response to one particular machine: Westinghouse’s Televox, which debuted in 1927. Televox was a clunky, vaguely primate-shaped being designed to connect telephone calls. It exemplified all that Nishimura had come to fear. The creation of such a slavelike artificial human was not just shortsighted but also contrary to what Nishimura, the scientist, conceived of as the laws of nature. It was an abomination.

A curious thing about Nishimura’s decision to build a robot was that he wasn’t an engineer and possessed no expertise in mechanical or electrical systems. He was a marine biologist with a Ph.D. in botany. When he first encountered Čapek’s R.U.R., he was finishing an article on the cytology of marimo—aquatic moss balls endemic to the cold Lake Akan in northeast Hokkaido.

Yet it was precisely his background in biology that motivated Nishimura. An avid supporter of evolutionary theory, he was nevertheless skeptical of the idea of “survival of the fittest” and loathed the rhetoric of social Darwinism, which pitted humans against one another. Instead he favored “mutual aid” as the key driver of evolutionary change. He claimed that the core natural dynamic was collaboration, and that success by one individual (or one species) could be broadly beneficial.

“Today, human advances are framed in terms of a ‘conquest of nature,’ ” Nishimura wrote in Earth’s Belly (Daichi no harawata), a 1931 book detailing his philosophy of nature. “Rather than reflecting awe of the natural world, such triumph is more about kindling the struggle between humans.” But look at human society, he urged: “We cannot ignore the fact that humans achieved civilization through collaborative effort.”

Nishimura’s take on evolution and natural hierarchies had a profound effect on his views of artificial humans. This set him apart from European writers like Samuel Butler, H.G. Wells, and Čapek, who favored the “survival of the fittest” model and postulated that the success of humanoid machines would spell doom for humankind. Nishimura insisted that flesh-and-blood humans could benefit from the evolution of artificial humans—if the artificial humans were designed to be inspirational models rather than slaves.

The artificial human Nishimura created was indeed meant to awe. Picture a giant seated on a gilded pedestal with its eyes closed, seemingly lost in thought. In its left hand, it holds an electric light with a crystal-shaped bulb, which it slowly lifts into the air. At the exact moment the lightbulb illuminates, the giant opens its eyes, as if having a realization. Seemingly pleased with its eureka moment, it smiles. Soon, it shifts its attention to a blank piece of paper lying in front of it and begins to write, eagerly recording its newfound insights.

Nishimura refused to call his creation a “robot.” Instead, he bestowed upon it the name Gakutensoku, which means “learning from the rules of nature.” He considered his creation the first member of a new species, whose raison d’être was to inspire biological humans and facilitate human evolution by expanding our intellectual horizons. Nishimura rendered the word gakutensoku in katakana, a Japanese script also used for the scientific names of biological organisms. Nishimura envisioned future gakutensoku continuously evolving and becoming more complex.

Exactly how Gakutensoku operated has long intrigued historians and roboticists. Just a few years after its completion, the machine was lost under somewhat mysterious circumstances (more on that later). A few photos of the original design exist. But our only glimpse into its inner workings comes from an article Nishimura wrote in 1931. A talented writer, he often sacrificed technical detail for the sake of captivating prose and poetic expression.

Gakutensoku’s chief mechanism was set in motion by an air compressor. Presumably the compressor was powered by electricity. The airflow was controlled by a rotating drum affixed with pegs. When the mechanism was activated, the pegs opened and closed the valves on numerous rubber pipes that sent air to particular parts of Gakutensoku’s body and caused them to move. Similar to the mechanisms of classic automata, the arrangement of the pegs on the drum allowed for a rudimentary programming of the sequence of motions. Unlike in traditional automata, only the pegs and the drum were mechanical, while the rest of the mechanism was pneumatic.
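Mechanically, the drum-and-peg arrangement is a step sequencer: the drum’s angular position determines which pegs engage, and each engaged peg holds a valve open for that step. The toy model below captures only that idea; the peg layout and body parts are invented, since Nishimura’s actual drum programming is not recorded.

```python
# Toy model of Gakutensoku's peg-drum sequencer: pegs on a rotating drum
# hold valves open, routing compressed air to body parts step by step.
# The peg layout below is invented for illustration.
DRUM_STEPS = 8  # discrete drum positions per revolution (assumed)

pegs = {  # drum position -> valves held open at that position
    0: {"eyelids"},
    1: {"left_arm"},
    2: {"left_arm", "mouth_smile"},
    3: {"eyelids", "mouth_smile"},
    4: {"right_arm"},
    5: {"right_arm", "head_tilt"},
    6: set(),
    7: {"mouth_smile"},
}

def run(revolutions: int = 1) -> None:
    """Rotate the drum and report which body parts receive air at each step."""
    for step in range(revolutions * DRUM_STEPS):
        position = step % DRUM_STEPS
        actuated = ", ".join(sorted(pegs[position])) or "(none)"
        print(f"drum position {position}: air -> {actuated}")

run()
```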

Nishimura strove to create “naturalness” in his machine. As he described in his article, he sought to “transcend the mechanical look” and the noisiness and clumsiness of “tin-can” bodies by eliminating as much metal as possible from Gakutensoku’s frame. Only the robot’s “skeleton” was made of metal. For his creature’s soft tissue, he used rubber, the elasticity of which “made the movement much more natural, smooth, and without any sense of forced action.” Additionally, “unlike the American robots” that relied on steam, Gakutensoku was set in motion by compressed air, which Nishimura considered to be a more “natural” power. He claimed that he arrived at the idea of using pneumatics after playing the shakuhachi (traditional Japanese bamboo flute) and experimenting with differences in the airflow. By varying the airflow and using different kinds of rubber with different elasticities, Nishimura was able to achieve complex, layered movement, which he described “as if within a big wave there were a medium-size wave and inside it another tiny wave as well.”
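Nishimura’s image of a wave within a wave within a wave maps naturally onto a superposition of slow and fast oscillations. As a loose illustration only (the frequencies and weights are invented), a valve-opening signal layered that way might look like this:

```python
import math

def layered_airflow(t: float) -> float:
    """Valve opening in [0, 1] built from three nested 'waves': a big slow
    wave, a medium wave, and a tiny fast ripple. All values illustrative."""
    big = 0.6 * math.sin(2 * math.pi * 0.2 * t)     # big wave
    medium = 0.3 * math.sin(2 * math.pi * 1.0 * t)  # medium wave inside it
    tiny = 0.1 * math.sin(2 * math.pi * 5.0 * t)    # tiny wave inside that
    return 0.5 + 0.5 * (big + medium + tiny)        # shift into [0, 1]

for i in range(5):
    t = i * 0.25
    print(f"t={t:.2f}s  valve opening={layered_airflow(t):.2f}")
```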

For Nishimura, the most important feature of Gakutensoku was its ability to express human affect. Again, this was the result of carefully modulating the compressed air flowing through the rubber tubes. Prolonged pressure put on the outer bottom corners of the eyes and to the side of the mouth resulted in a smile. Slight airflow applied to one side of the neck produced a contemplative head tilt. Pleased with the results, Nishimura wrote that “compared to the American artificial humans, only ours has the ability to be expressive.”

But when the flow of air to Gakutensoku’s face was turned off, its facial expression suddenly and disturbingly collapsed. Nishimura and his team had to invent a device to allow the gradual release of air pressure, which they described as “multiple wart-shaped convex parts aligned on one rotating axis…. Only when this modification was put in place did Gakutensoku finally stop looking like a mad man.”

Nishimura emphasized the similarities between the structure of his artificial human and the anatomy of real human bodies. He claimed, for instance, that the compressed air that circulated inside Gakutensoku had a function similar to that of blood. Humans acquire energy by consuming food, and they distribute this energy through their circulatory system. Artificial humans, according to Nishimura, did something similar: They acquired electrical energy, then distributed this energy to different parts of their bodies by means of compressed air flowing in rubber tubes.

Nishimura’s deep interest in biologically inspired design sometimes pushed him into philosophical semantics. “Humans naturally get energy from their mother’s womb, while artificial humans get it from humans,” he wrote in Earth’s Belly. “So if humans are Nature’s children, artificial humans are born by the power of the human hand and thus might be referred to as Nature’s grandchildren.”

Gakutensoku debuted in September 1928 at an exhibition in Kyoto celebrating the recently crowned Emperor Shōwa (known in the West as Hirohito). Reflecting on the exhibition several years later in Earth’s Belly, Nishimura reported that Gakutensoku awed the crowds. Though the robot stood over 3 meters tall, observers said that it “looked more human than many expressionless humans.” The following year, Gakutensoku was taken on tour and exhibited in Tokyo, Osaka, and Hiroshima, as well as in Korea and China, where the artificial human “work[ed] from 6 a.m. to 8 p.m.” greeting spectators. Newspapers in Japan, China, and Korea reported on the exhibition and published photographs of the gentle giant, so that even those who could not see it in person had an idea of what it looked like.

Then Gakutensoku disappeared.

Nishimura himself never explained what happened. In an interview published in 1991, the scientist’s son Kō Nishimura said the automaton vanished en route to Germany during the 1930s. But Kō was only a child at the time of the disappearance. While I corroborated that Gakutensoku indeed traveled to Korea and China, I have found no record of its being sent to Germany. Even if the claim is true, we don’t know where exactly it vanished or who took it.

Despite its mysterious disappearance, Gakutensoku left a legacy that continues to reverberate in Japanese pop culture and robotics. During World War II, Japanese animators produced propaganda-heavy cartoons in which robots were depicted as inspirational heroes that used their supreme powers to assist humans. In the 1950s, Osamu Tezuka’s Astro Boy (Tetsuwan Atomu, or “Mighty Atom”) solidified the image of robots as emotionally sophisticated saviors driven by their empathy for other beings. I haven’t seen any concrete evidence that Tezuka knew of Gakutensoku, but he grew up in the same suburb of Osaka where Nishimura lived and worked as a schoolteacher during the war.

Gakutensoku itself was featured as a character in a 1988 science-fiction movie titled Teito Monogatari (sometimes translated as Tokyo: The Last Megalopolis), in which the robot helps defeat the magical powers of a demonic villain. Nishimura, who died in 1956 at age 72, makes an appearance, played by his son Kō, who by then was one of Japan’s best-known actors. The movie in turn inspired several books and TV shows exploring the origins of Japanese robots. In 1995 an asteroid spotted by Japanese astronomers was named 9786 Gakutensoku.

Then there’s Gakutensoku’s impact on Japanese robotics. Above all, many robot makers are guided by the underlying philosophy that machines are not in opposition to nature but rather part of it. Robots built in Japan from the 1970s on have a number of design characteristics reminiscent of those outlined by Nishimura—a preference for silent motion and the use of pneumatics; an emphasis on the textures of the robot’s “skin” and “face”; and most important, an attentiveness to how people perceive and respond to a robot’s humanoid features.

Since the ’90s Japanese roboticists have channeled increasing efforts into cognitive robotics, exploring how humans think and behave in attempts to create likeable robot designs. Their aim is to build robots that not only perform tasks but also elicit positive emotional reactions in their users—robots that are friendly, in other words. It would be a stretch to say that Nishimura single-handedly shaped Japan’s view of robotics, but the fact remains that few Japanese today fear R.U.R.-like scenarios of humanity being annihilated by their robotic overlords. As if fulfilling Nishimura’s vision for a desirable relationship between humans and artificial humans, the common adage in Japan nowadays is that “robots are friends.”

About the Author

Yulia Frumer is an associate professor of East Asian science in the department of history of science and technology at Johns Hopkins University, in Baltimore. She is currently researching the development of Japanese humanoid robotics, focusing on the historical roots of emotional responses to robots.

SoftBank’s Pepper Goes to School to Train Next-Gen Roboticists

Post Syndicated from Erico Guizzo original https://spectrum.ieee.org/automaton/robotics/humanoids/softbank-pepper-next-gen-roboticists

When the group of high schoolers arrived for the coding camp, the idea of spending the day staring at a computer screen didn’t seem too exciting to them. But then Pepper rolled into the room.

“All of a sudden everyone wanted to become a robot coder,” says Kass Dawson, head of marketing and business strategy at SoftBank Robotics America, in San Francisco. He saw the same thing happen in other classrooms, where the friendly humanoid was an instant hit with students.

Iran Unveils Its Most Advanced Humanoid Robot Yet

Post Syndicated from Erico Guizzo original https://spectrum.ieee.org/automaton/robotics/humanoids/iran-surena-iv-humanoid-robot

A little over a decade ago, researchers at the University of Tehran introduced a rudimentary humanoid robot called Surena. An improved model capable of walking, Surena II, was announced not long after, followed by the more capable Surena III in 2015.

Now the Iranian roboticists have unveiled Surena IV. The new robot is a major improvement over previous designs. A video highlighting its capabilities shows the robot mimicking a person’s pose, grasping a water bottle, and writing its name on a whiteboard.

Surena is also shown taking a group selfie with its human pals.

Video Friday: UBTECH’s Walker Robot Serves Drinks, Does Yoga

Post Syndicated from Erico Guizzo original https://spectrum.ieee.org/automaton/robotics/humanoids/video-friday-ubtech-walker-robot

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

DARPA SubT Urban Circuit – February 18-27, 2020 – Olympia, Wash., USA
HRI 2020 – March 23-26, 2020 – Cambridge, U.K.
ICARSC 2020 – April 15-17, 2020 – Ponta Delgada, Azores
ICRA 2020 – May 31-June 4, 2020 – Paris, France
ICUAS 2020 – June 9-12, 2020 – Athens, Greece
CLAWAR 2020 – August 24-26, 2020 – Moscow, Russia

Let us know if you have suggestions for next week, and enjoy today’s videos.

Japan Is Building a Giant Gundam Robot That Can Walk

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/robotics/humanoids/japan-building-giant-gundam-robot

Japan has had a robust robot culture for decades, thanks (at least in part) to the success of the Gundam series, which features bipedal humanoid robots controlled by a human who rides inside of them. I would tell you how many different TV series and video games and manga there are about Gundam, but I’m certain I can’t count that high—there’s like seriously a lot of Gundam stuff out there. One of the most visible bits of Gundam stuff is a real-life full-scale Gundam statue in Tokyo, but who really wants a statue, right? C’mon, Japan! Bring us the real thing!

Video Friday: This Japanese Robot Can Conduct a Human Orchestra and Sing Opera

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/robotics/humanoids/video-friday-alter-3-robot-conductor

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

Robotic Arena – January 25, 2020 – Wrocław, Poland
DARPA SubT Urban Circuit – February 18-27, 2020 – Olympia, Wash., USA
HRI 2020 – March 23-26, 2020 – Cambridge, U.K.
ICARSC 2020 – April 15-17, 2020 – Ponta Delgada, Azores
ICRA 2020 – May 31-June 4, 2020 – Paris, France

Let us know if you have suggestions for next week, and enjoy today’s videos.


Hiro-chan Is a Faceless Robot Baby

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/robotics/humanoids/faceless-robot-baby

This robot is Hiro-chan. It’s made by Vstone, a Japanese robotics company known for producing a variety of totally normal educational and hobby robotics kits and parts. Hiro-chan is not what we would call totally normal, since it very obviously does not have a face. Vstone calls Hiro-chan a “healing communication device,” and while the whole faceless aspect is definitely weird, there is a reason for it, which unsurprisingly involves Hiroshi Ishiguro and his ATR Lab.

How Boston Dynamics Is Redefining Robot Agility

Post Syndicated from Erico Guizzo original https://spectrum.ieee.org/robotics/humanoids/how-boston-dynamics-is-redefining-robot-agility

With their jaw-dropping agility and animal-like reflexes, Boston Dynamics’ bioinspired robots have always seemed to have no equal. But that preeminence hasn’t stopped the company from pushing its technology to new heights, sometimes literally. Its latest crop of legged machines can trudge up and down hills, clamber over obstacles, and even leap into the air like a gymnast. There’s no denying their appeal: Every time Boston Dynamics uploads a new video to YouTube, it quickly racks up millions of views. These are probably the first robots you could call Internet stars.

Boston Dynamics, once owned by Google’s parent company, Alphabet, and now by the Japanese conglomerate SoftBank, has long been secretive about its designs. Few publications have been granted access to its Waltham, Mass., headquarters, near Boston. But one morning this past August, IEEE Spectrum got in. We were given permission to do a unique kind of photo shoot that day. We set out to capture the company’s robots in action—running, climbing, jumping—by using high-speed cameras coupled with powerful strobes. The results you see on this page: freeze-frames of pure robotic agility.

We also used the photos to create interactive views, which you can explore online on our Robots Guide. These interactives let you spin the robots 360 degrees, or make them walk and jump on your screen.

Boston Dynamics has amassed a minizoo of robotic beasts over the years, with names like BigDog, SandFlea, and WildCat. When we visited, we focused on the two most advanced machines the company has ever built: Spot, a nimble quadruped, and Atlas, an adult-size humanoid.

Spot can navigate almost any kind of terrain while sensing its environment. Boston Dynamics recently made it available for lease, with plans to manufacture something like a thousand units per year. It envisions Spot, or even packs of them, inspecting industrial sites, carrying out hazmat missions, and delivering packages. And its YouTube fame has not gone unnoticed: Even entertainment is a possibility, with Cirque du Soleil auditioning Spot as a potential new troupe member.

“It’s really a milestone for us going from robots that work in the lab to these that are hardened for work out in the field,” Boston Dynamics CEO Marc Raibert says in an interview.

Our other photographic subject, Atlas, is Boston Dynamics’ biggest celebrity. This 150-centimeter-tall (4-foot-11-inch-tall) humanoid is capable of impressive athletic feats. Its actuators are driven by a compact yet powerful hydraulic system that the company engineered from scratch. The unique system gives the 80-kilogram (176-pound) robot the explosive strength needed to perform acrobatic leaps and flips that don’t seem possible for such a large humanoid to do. Atlas has inspired a string of parody videos on YouTube and more than a few jokes about a robot takeover.

While Boston Dynamics excels at making robots, it has yet to prove that it can sell them. Ever since its founding in 1992 as a spin-off from MIT, the company has been an R&D-centric operation, with most of its early funding coming from U.S. military programs. The emphasis on commercialization seems to have intensified after the acquisition by SoftBank, in 2017. SoftBank’s founder and CEO, Masayoshi Son, is known to love robots—and profits.

The launch of Spot is a significant step for Boston Dynamics as it seeks to “productize” its creations. Still, Raibert says his long-term goals have remained the same: He wants to build machines that interact with the world dynamically, just as animals and humans do. Has anything changed at all? Yes, one thing, he adds with a grin. In his early career as a roboticist, he used to write papers and count his citations. Now he counts YouTube views.

  • In the Spotlight

    Boston Dynamics designed Spot as a versatile mobile machine suitable for a variety of applications. The company has not announced how much Spot will cost, saying only that it is being made available to select customers, who will be able to lease the robot. A payload bay lets you add up to 14 kilograms of extra hardware to the robot’s back. One of the accessories that Boston Dynamics plans to offer is a 6-degrees-of-freedom arm, which will allow Spot to grasp objects and open doors.

  • Super Senses

    Spot’s hardware is almost entirely custom-designed. It includes powerful processing boards for control as well as sensor modules for perception. The sensors are located on the front, rear, and sides of the robot’s body. Each module consists of a pair of stereo cameras, a wide-angle camera, and a texture projector, which enhances 3D sensing in low light. The sensors allow the robot to use the navigation method known as SLAM, or simultaneous localization and mapping, to get around autonomously.

  • Stepping Up

    In addition to its autonomous behaviors, Spot can also be steered by a remote operator with a game-style controller. But even when in manual mode, the robot still exhibits a high degree of autonomy. If there’s an obstacle ahead, Spot will go around it. If there are stairs, Spot will climb them. The robot goes into these operating modes and then performs the related actions completely on its own, without any input from the operator. To go down a flight of stairs, Spot walks backward, an approach Boston Dynamics says provides greater stability.

  • Funky Feet

    Spot’s legs are powered by 12 custom DC motors, each geared down to provide high torque. The robot can walk forward, sideways, and backward, and trot at a top speed of 1.6 meters per second. It can also turn in place. Other gaits include crawling and pacing. In one wildly popular YouTube video, Spot shows off its fancy footwork by dancing to the pop hit “Uptown Funk.” (A toy sketch of trot-gait timing appears after this list.)

  • Robot Blood

    Atlas is powered by a hydraulic system consisting of 28 actuators. These actuators are basically cylinders filled with pressurized fluid that can drive a piston with great force. Their high performance is due in part to custom servo valves that are significantly smaller and lighter than the aerospace models that Boston Dynamics had been using in earlier designs. Though not visible from the outside, the innards of an Atlas are filled with these hydraulic actuators as well as the lines of fluid that connect them. When one of those lines ruptures, Atlas bleeds the hydraulic fluid, which happens to be red. (A back-of-the-envelope actuator force calculation appears after this list.)

  • Next Generation

    The current version of Atlas is a thorough upgrade of the original model, which was built for the DARPA Robotics Challenge in 2015. The newest robot is lighter and more agile. Boston Dynamics used industrial-grade 3D printers to make key structural parts, giving the robot a greater strength-to-weight ratio than earlier designs. The next-gen Atlas can also do something that its predecessor, famously, could not: It can get up after a fall.

  • Walk This Way

    To control Atlas, an operator provides general steering via a manual controller while the robot uses its stereo cameras and lidar to adjust to changes in the environment. Atlas can also perform certain tasks autonomously. For example, if you add special bar-code-type tags to cardboard boxes, Atlas can pick them up and stack them or place them on shelves.

  • Biologically Inspired

    Atlas’s control software doesn’t explicitly tell the robot how to move its joints, but rather it employs mathematical models of the underlying physics of the robot’s body and how it interacts with the environment. Atlas relies on its whole body to balance and move. When jumping over an obstacle or doing acrobatic stunts, the robot uses not only its legs but also its upper body, swinging its arms to propel itself just as an athlete would. (A toy balance-model sketch appears after this list.)
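To make the “Funky Feet” item concrete: in a trot, diagonal leg pairs move half a gait cycle out of phase. The sketch below shows that timing pattern only; the cycle time and duty factor are invented, and this generic phase generator is not Boston Dynamics’ controller.

```python
# Trot gait timing: diagonal leg pairs are half a cycle out of phase.
# Cycle time and duty factor are illustrative values, not Spot's.
PHASE_OFFSETS = {"front_left": 0.0, "rear_right": 0.0,
                 "front_right": 0.5, "rear_left": 0.5}
CYCLE_TIME = 0.5   # seconds per full gait cycle (invented)
DUTY_FACTOR = 0.6  # fraction of the cycle each foot spends on the ground (invented)

def leg_state(leg: str, t: float) -> str:
    """Return 'stance' or 'swing' for one leg at time t."""
    phase = (t / CYCLE_TIME + PHASE_OFFSETS[leg]) % 1.0
    return "stance" if phase < DUTY_FACTOR else "swing"

for t in (0.0, 0.125, 0.25, 0.375):
    print(f"t={t:.3f}s", {leg: leg_state(leg, t) for leg in PHASE_OFFSETS})
```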
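The “Robot Blood” item comes down to basic hydraulics: a cylinder’s output force is fluid pressure times piston area, which is why compact actuators can be so strong. A quick worked example follows; the pressure and bore are invented, since Boston Dynamics doesn’t publish Atlas’s operating figures.

```python
import math

# F = P * A: hydraulic cylinder force is pressure times piston area.
pressure_pa = 21e6         # 21 MPa (~3,000 psi), a common hydraulic pressure (assumed)
piston_diameter_m = 0.025  # 25 mm bore, invented for the example
area_m2 = math.pi * (piston_diameter_m / 2) ** 2

force_n = pressure_pa * area_m2
print(f"piston area: {area_m2 * 1e4:.2f} cm^2")
print(f"output force: {force_n / 1000:.1f} kN (~{force_n / 9.81:.0f} kgf)")
# Even this small cylinder pushes with roughly a tonne of force.
```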
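For the “Biologically Inspired” item, the standard textbook abstraction for a balancing biped is the linear inverted pendulum: the center of mass tends to fall away from the support foot, and the controller applies accelerations to push it back. The sketch below illustrates only that abstraction, with invented gains; Atlas’s actual whole-body controller is far more sophisticated.

```python
# Linear-inverted-pendulum balance sketch: drive the center of mass (CoM)
# back over the support foot. All dimensions and gains are invented.
G = 9.81            # gravity, m/s^2
Z_COM = 0.9         # CoM height, m (illustrative)
KP, KD = 20.0, 6.0  # feedback gains (illustrative)
DT = 0.01           # control period, s

x, v = 0.05, 0.0    # CoM starts 5 cm off the support point, at rest

for step in range(301):
    # Pendulum dynamics xdd = (g/z) * x make the CoM fall away from the foot;
    # the proportional-derivative feedback term pushes it back.
    xdd = (G / Z_COM) * x - KP * x - KD * v
    v += xdd * DT
    x += v * DT
    if step % 75 == 0:
        print(f"t={step * DT:.2f}s  CoM offset={x * 100:6.2f} cm")
```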

This article appears in the December 2019 print issue as “By Leaps and Bounds.”

What Is the Uncanny Valley?

Post Syndicated from Rina Diane Caballar original https://spectrum.ieee.org/automaton/robotics/humanoids/what-is-the-uncanny-valley

Have you ever encountered a lifelike humanoid robot or a realistic computer-generated face that seems a bit off or unsettling, though you can’t quite explain why?

Take for instance AVA, one of the “digital humans” created by New Zealand tech startup Soul Machines as an on-screen avatar for Autodesk. Watching a lifelike digital being such as AVA can be both fascinating and disconcerting. AVA expresses empathy through her demeanor and movements: slightly raised brows, a tilt of the head, a nod.

By meticulously rendering every lash and line in its avatars, Soul Machines aimed to create a digital human that is virtually indistinguishable from a real one. But to many, rather than looking natural, AVA actually looks creepy. There’s something about it being almost human but not quite that can make people uneasy.

Like AVA, many other ultra-realistic avatars, androids, and animated characters appear stuck in a disturbing in-between world: They are so lifelike and yet they are not “right.” This unsettling zone of strangeness is known as the uncanny valley.

This MIT Robot Wants to Use Your Reflexes to Walk and Balance

Post Syndicated from Erico Guizzo original https://spectrum.ieee.org/automaton/robotics/humanoids/mit-little-hermes

MIT researchers have demonstrated a new kind of teleoperation system that allows a two-legged robot to “borrow” a human operator’s physical skills to move with greater agility. The system works a bit like those haptic suits from the Spielberg movie “Ready Player One.” But while the suits in the film were used to connect humans to their VR avatars, the MIT suit connects the operator to a real robot.
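The article doesn’t spell out the control law, but bilateral teleoperation of this kind can be caricatured in a few lines: measure how the operator shifts their weight, command the robot to shift proportionally, and push back on the operator so they can feel, and reflexively correct, the robot’s balance. Everything in this sketch (names, scale factor, gain) is hypothetical, not the MIT team’s published design.

```python
from dataclasses import dataclass

@dataclass
class BalanceState:
    com_offset_m: float  # center-of-mass offset from the support center, meters

HUMAN_TO_ROBOT_SCALE = 0.4     # robot is smaller than the operator (invented)
FEEDBACK_GAIN_N_PER_M = 400.0  # force on the operator per meter of tracking error (invented)

def teleop_step(human: BalanceState, robot: BalanceState) -> tuple[float, float]:
    """One cycle of the bilateral loop: returns (robot CoM command, force on operator)."""
    robot_command = HUMAN_TO_ROBOT_SCALE * human.com_offset_m
    error = robot.com_offset_m - robot_command
    operator_force = -FEEDBACK_GAIN_N_PER_M * error  # operator feels the robot lag or tip
    return robot_command, operator_force

cmd, force = teleop_step(BalanceState(0.10), BalanceState(0.02))
print(f"robot CoM command: {cmd:.3f} m, force on operator: {force:.1f} N")
```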

Agility Robotics Unveils Upgraded Digit Walking Robot

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/robotics/humanoids/agility-robotics-digit-v2-biped-robot

Last time we saw Agility Robotics’ Digit biped, it was picking up a box from a Ford delivery van and autonomously dropping it off on a porch, while at the same time managing to not trip over stairs, grass, or small children. As a demo, it was pretty impressive, but of course there’s an enormous gap between making a video of a robot doing a successful autonomous delivery and letting that robot out into the semi-structured world and expecting it to reliably do a good job.

Agility Robotics is aware of this, of course, and over the last six months they’ve been making substantial improvements to Digit to make it more capable and robust. A new video posted today shows what’s new with the latest version of Digit—Digit v2.