Tag Archives: robotics

Bipedal Robot Cassie Cal Learns to Juggle

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/robotics/robotics-hardware/uc-berkeley-cassie-cal-robot-juggle

There’s no particular reason why knowing how to juggle would be a useful skill for a robot. Despite this, robots are frequently taught how to juggle things. Blind robots can juggle, humanoid robots can juggle, and even drones can juggle. Why? Because juggling is hard, man! You have to think about a bunch of different things at once, and also do a bunch of different things at once, which this particular human at least finds to be overly stressful. While juggling may not stress robots out, it does require carefully coordinated sensing and computing and actuation, which means that it’s as good a task as any (and a more entertaining task than most) for testing the capabilities of your system.

UC Berkeley’s Cassie Cal robot, which consists of two legs and what could be called a torso if you were feeling charitable, has just learned to juggle by bouncing a ball on what would be her head if she had one of those. The idea is that if Cassie can juggle while balancing at the same time, she’ll be better able to do other things that require dynamic multitasking, too. And if that doesn’t work out, she’ll still be able to join the circus.

Why People Demanded Privacy to Confide in the World’s First Chatbot

Post Syndicated from Oscar Schwartz original https://spectrum.ieee.org/tech-talk/robotics/artificial-intelligence/why-people-demanded-privacy-to-confide-in-the-worlds-first-chatbot

This is part four of a six-part series on the history of natural language processing.

Between 1964 and 1966, Joseph Weizenbaum, a German American computer scientist at MIT’s artificial intelligence lab, developed the first-ever chatbot.

While there were already some rudimentary digital language generators in existence—programs that could spit out somewhat coherent lines of text—Weizenbaum’s program was the first designed explicitly for interactions with humans. The user could type in some statement or set of statements in their normal language, press enter, and receive a response from the machine. As Weizenbaum explained, his program made “certain kinds of natural-language conversation between man and computer possible.”

He named the program Eliza after Eliza Doolittle, the working-class hero of George Bernard Shaw’s Pygmalion who learns how to talk with an upper-class accent. The new Eliza was written for the 36-bit IBM 7094, an early transistorized mainframe computer, in a programming language that Weizenbaum developed called MAD-SLIP.  

Because computer time was a valuable resource, Eliza could only be run via a time-sharing system; the user interacted with the program remotely via an electric typewriter and printer. When the user typed in a sentence and pressed enter, a message was sent to the mainframe computer. Eliza scanned the message for the presence of a keyword and used it in a new sentence to form a response that was sent back, printed out, and read by the user.
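Weizenbaum implemented this keyword-and-template scheme as decomposition and reassembly rules in MAD-SLIP; the snippet below is only a rough modern sketch of the same idea in Python, with the keywords and canned responses invented for illustration (real ELIZA also ranked keywords and swapped pronouns such as “my” for “your”).

```python
import random
import re

# Toy keyword rules in the spirit of (but much simpler than) Weizenbaum's
# DOCTOR script: each pattern captures the rest of the sentence, which is
# slotted into a response template.
RULES = [
    (r"\bi am (.*)",    ["How long have you been {0}?", "Why do you say you are {0}?"]),
    (r"\bi feel (.*)",  ["Tell me more about feeling {0}."]),
    (r"\bbecause (.*)", ["Is that the real reason?"]),
]
DEFAULT_RESPONSES = ["Please go on.", "What does that suggest to you?"]

def respond(sentence: str) -> str:
    text = sentence.lower().rstrip(".!?")
    for pattern, templates in RULES:
        match = re.search(pattern, text)
        if match:  # keyword found: reuse the user's own words in the reply
            return random.choice(templates).format(match.group(1))
    return random.choice(DEFAULT_RESPONSES)  # no keyword: fall back to a stock line

print(respond("I am worried about my exams."))
```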

Video Friday: Invasion of the Mini Cheetah Robots

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/robotics/robotics-hardware/video-friday-mit-mini-cheetah-robots

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

DARPA SubT Urban Circuit – February 18-27, 2020 – Olympia, Wash., USA

Let us know if you have suggestions for next week, and enjoy today’s videos.


Quadruped Robots Can Climb Ladders Now

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/robotics/robotics-hardware/quadruped-robots-can-climb-ladders-now

When we look at quadruped robots, it’s impossible not to compare them to quadruped animals like dogs and cats. Over the last several years, such robots have begun to approach the capabilities of their biological counterparts in a few very specific situations, like walking without falling over. Biology provides a gold standard that robots are striving to reach, and it’s going to take us a very long time to make quadrupeds that can do everything that animals can.

The cool thing about robots, though, is that they don’t have to be constrained by biology, meaning that there’s always the potential for them to learn new behaviors that animals simply aren’t designed for. At IROS 2019 last week, we saw one such example, with a quadruped robot that’s able to climb vertical ladders.

Andrey Markov & Claude Shannon Counted Letters to Build the First Language-Generation Models

Post Syndicated from Oscar Schwartz original https://spectrum.ieee.org/tech-talk/robotics/artificial-intelligence/andrey-markov-and-claude-shannon-built-the-first-language-generation-models

This is part three of a six-part series on the history of natural language processing.

In 1913, the Russian mathematician Andrey Andreyevich Markov sat down in his study in St. Petersburg with a copy of Alexander Pushkin’s 19th century verse novel, Eugene Onegin, a literary classic of the time. Markov, however, did not start reading Pushkin’s famous text. Rather, he took a pen and a piece of drafting paper, and wrote out the first 20,000 letters of the book in one long string of letters, eliminating all punctuation and spaces. Then he arranged these letters in 200 grids (10-by-10 characters each) and began counting the vowels in every row and column, tallying the results.
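In effect, Markov was estimating how often a vowel follows a vowel versus a consonant, about the simplest possible statistical model of letter sequences. A rough modern sketch of that tally, run here on a placeholder English sentence rather than the 20,000 Russian letters he copied out by hand, might look like this:

```python
from collections import Counter

# Markov's tally in miniature: reduce a text to letters only, then count how
# often a vowel (V) or consonant (C) follows a vowel or consonant.
text = ("markov copied out twenty thousand letters of eugene onegin and "
        "tallied the vowels by hand")
letters = [c for c in text.lower() if c.isalpha()]
vowels = set("aeiou")

pairs = Counter()
for prev, curr in zip(letters, letters[1:]):
    pairs[("V" if prev in vowels else "C", "V" if curr in vowels else "C")] += 1

# Turn the raw counts into conditional probabilities, e.g. P(V | C).
for (prev, curr), count in sorted(pairs.items()):
    total = sum(n for (p, _), n in pairs.items() if p == prev)
    print(f"P({curr} | {prev}) = {count / total:.2f}")
```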

The Blogger Behind “AI Weirdness” Thinks Today’s AI Is Dumb and Dangerous

Post Syndicated from Eliza Strickland original https://spectrum.ieee.org/tech-talk/robotics/artificial-intelligence/blogger-behind-ai-weirdness-thinks-todays-ai-is-dumb-and-dangerous

Sure, artificial intelligence is transforming the world’s societies and economies—but can an AI come up with plausible ideas for a Halloween costume? 

Janelle Shane has been asking such probing questions since she started her AI Weirdness blog in 2016. She specializes in training neural networks (which underpin most of today’s machine learning techniques) on quirky data sets such as compilations of knitting instructions, ice cream flavors, and names of paint colors. Then she asks the neural net to generate its own contributions to these categories—and hilarity ensues. AI is not likely to disrupt the paint industry with names like “Ronching Blue,” “Dorkwood,” and “Turdly.” 

Shane’s antics have a serious purpose. She aims to illustrate the serious limitations of today’s AI, and to counteract the prevailing narrative that describes AI as well on its way to superintelligence and complete human domination. “The danger of AI is not that it’s too smart,” Shane writes in her new book, “but that it’s not smart enough.” 

The book, which came out on Tuesday, is called You Look Like a Thing and I Love You. It takes its odd title from a list of AI-generated pick-up lines, all of which would at least get a person’s attention if shouted, preferably by a robot, in a crowded bar. Shane’s book is shot through with her trademark absurdist humor, but it also contains real explanations of machine learning concepts and techniques. It’s a painless way to take AI 101. 

She spoke with IEEE Spectrum about the perils of placing too much trust in AI systems, the strange AI phenomenon of “giraffing,” and her next potential Halloween costume. 

Janelle Shane on . . .


  1. The un-delicious origin of her blog
  2. “The narrower the problem, the smarter the AI will seem”
  3. Why overestimating AI is dangerous
  4. Giraffing!
  5. Machine and human creativity
    The un-delicious origin of her blog

    IEEE Spectrum: You studied electrical engineering as an undergrad, then got a master’s degree in physics. How did that lead to you becoming the comedian of AI? 

    Janelle Shane: I’ve been interested in machine learning since freshman year of college. During orientation at Michigan State, a professor who worked on evolutionary algorithms gave a talk about his work. It was full of the most interesting anecdotes—some of which I’ve used in my book. He told an anecdote about people setting up a machine learning algorithm to do lens design, and the algorithm did end up designing an optical system that works… except one of the lenses was 50 feet thick, because they didn’t specify that it couldn’t do that.  

    I started working in his lab on optics, doing ultra-short laser pulse work. I ended up doing a lot more optics than machine learning, but I always found it interesting. One day I came across a list of recipes that someone had generated using a neural net, and I thought it was hilarious and remembered why I thought machine learning was so cool. That was in 2016, ages ago in machine learning land.

    Spectrum: So you decided to “establish weirdness as your goal” for your blog. What was the first weird experiment that you blogged about? 

    Shane: It was generating cookbook recipes. The neural net came up with ingredients like: “Take ¼ pounds of bones or fresh bread.” That recipe started out: “Brown the salmon in oil, add creamed meat to the mixture.” It was making mistakes that showed the thing had no memory at all. 

    Spectrum: You say in the book that you can learn a lot about AI by giving it a task and watching it flail. What do you learn?

    Shane: One thing you learn is how much it relies on surface appearances rather than deep understanding. With the recipes, for example: It got the structure of title, category, ingredients, instructions, yield at the end. But when you look more closely, it has instructions like “Fold the water and roll it into cubes.” So clearly this thing does not understand water, let alone the other things. It’s recognizing certain phrases that tend to occur, but it doesn’t have a concept that these recipes are describing something real. You start to realize how very narrow the algorithms in this world are. They only know exactly what we tell them in our data set. 


    “The narrower the problem, the smarter the AI will seem”

    Spectrum: That makes me think of DeepMind’s AlphaGo, which was universally hailed as a triumph for AI. It can play the game of Go better than any human, but it doesn’t know what Go is. It doesn’t know that it’s playing a game. 

    Shane: It doesn’t know what a human is, or if it’s playing against a human or another program. That’s also a nice illustration of how well these algorithms do when they have a really narrow and well-defined problem. 

    The narrower the problem, the smarter the AI will seem. If it’s not just doing something repeatedly but instead has to understand something, coherence goes down. For example, take an algorithm that can generate images of objects. If the algorithm is restricted to birds, it could do a recognizable bird. But if that same algorithm is asked to generate images of any animal, a task that broad, the bird it generates becomes an unrecognizable brown feathered smear against a green background.

    Spectrum: That sounds… disturbing. 

    Shane: It’s disturbing in a weird amusing way. What’s really disturbing is the humans it generates. It hasn’t seen them enough times to have a good representation, so you end up with an amorphous, usually pale-faced thing with way too many orifices. If you asked it to generate an image of a person eating pizza, you’ll have blocks of pizza texture floating around. But if you give that image to an image-recognition algorithm that was trained on that same data set, it will say, “Oh yes, that’s a person eating pizza.”


    Why overestimating AI is dangerous

    Spectrum: Do you see it as your role to puncture the AI hype? 

    Shane: I do see it that way. Not a lot of people are bringing out this side of AI. When I first started posting my results, I’d get people saying, “I don’t understand, this is AI, shouldn’t it be better than this? Why doesn’t it understand?” Many of the impressive examples of AI have a really narrow task, or they’ve been set up to hide how little understanding they have. There’s a motivation, especially among people selling products based on AI, to represent the AI as more competent and understanding than it actually is. 

    Spectrum: If people overestimate the abilities of AI, what risk does that pose? 

    Shane: I worry when I see people trusting AI with decisions it can’t handle, like hiring decisions or decisions about moderating content. These are really tough tasks for AI to do well on. There are going to be a lot of glitches. I see people saying, “The computer decided this so it must be unbiased, it must be objective.” 

    That’s another thing I find myself highlighting in the work I’m doing. If the data includes bias, the algorithm will copy that bias. You can’t tell it not to be biased, because it doesn’t understand what bias is. I think that message is an important one for people to understand. 

    If there’s bias to be found, the algorithm is going to go after it. It’s like, “Thank goodness, finally a signal that’s reliable.” But for a tough problem like: Look at these resumes and decide who’s best for the job. If its task is to replicate human hiring decisions, it’s going to glom onto gender bias and race bias. There’s an example in the book of a hiring algorithm that Amazon was developing that discriminated against women, because the historical data it was trained on had that gender bias. 

    Spectrum: What are the other downsides of using AI systems that don’t really understand their tasks? 

    Shane: There is a risk in putting too much trust in AI and not examining its decisions. Another issue is that it can solve the wrong problems, without anyone realizing it. There have been a couple of cases in medicine. For example, there was an algorithm that was trained to recognize things like skin cancer. But instead of recognizing the actual skin condition, it latched onto signals like the markings a surgeon makes on the skin, or a ruler placed there for scale. It was treating those things as a sign of skin cancer. It’s another indication that these algorithms don’t understand what they’re looking at and what the goal really is. 


    Giraffing

    Spectrum: In your blog, you often have neural nets generate names for things—such as ice cream flavors, paint colors, cats, mushrooms, and types of apples. How do you decide on topics?

    Shane: Quite often it’s because someone has written in with an idea or a data set. They’ll say something like, “I’m the MIT librarian and I have a whole list of MIT thesis titles.” That one was delightful. Or they’ll say, “We are a high school robotics team, and we know where there’s a list of robotics team names.” It’s fun to peek into a different world. I have to be careful that I’m not making fun of the naming conventions in the field. But there’s a lot of humor simply in the neural net’s complete failure to understand. Puns in particular—it really struggles with puns. 

    Spectrum: Your blog is quite absurd, but it strikes me that machine learning is often absurd in itself. Can you explain the concept of giraffing?

    Shane: This concept was originally introduced by [internet security expert] Melissa Elliott. She proposed this phrase as a way to describe the algorithms’ tendency to see giraffes way more often than would be likely in the real world. She posted a whole bunch of examples, like a photo of an empty field in which an image-recognition algorithm has confidently reported that there are giraffes. Why does it think giraffes are present so often when they’re actually really rare? Because they’re trained on data sets from online. People tend to say, “Hey look, a giraffe!” And then take a photo and share it. They don’t do that so often when they see an empty field with rocks. 

    There’s also a chatbot that has a delightful quirk. If you show it some photo and ask it how many giraffes are in the picture, it will always answer with some nonzero number. This quirk comes from the way the training data was generated: These were questions asked and answered by humans online. People tended not to ask the question “How many giraffes are there?” when the answer was zero. So you can show it a picture of someone holding a Wii remote. If you ask it how many giraffes are in the picture, it will say two. 


    Machine and human creativity

    Spectrum: AI can be absurd, and maybe also creative. But you make the point that AI art projects are really human-AI collaborations: Collecting the data set, training the algorithm, and curating the output are all artistic acts on the part of the human. Do you see your work as a human-AI art project?

    Shane: Yes, I think there is artistic intent in my work; you could call it literary or visual. It’s not so interesting to just take a pre-trained algorithm that’s been trained on utilitarian data, and tell it to generate a bunch of stuff. Even if the algorithm isn’t one that I’ve trained myself, I think about, what is it doing that’s interesting, what kind of story can I tell around it, and what do I want to show people. 

    Spectrum: For the past three years you’ve been getting neural nets to generate ideas for Halloween costumes. As language models have gotten dramatically better over that time, are the costume suggestions getting less absurd? 

    Shane: Yes. Before I would get a lot more nonsense words. This time I got phrases that were related to real things in the data set. I don’t believe the training data had the words Flying Dutchman or barnacle. But it was able to draw on its knowledge of which words are related to suggest things like sexy barnacle and sexy Flying Dutchman. 

    Spectrum: This year, I saw on Twitter that someone made the gothy giraffe costume happen. Would you ever dress up for Halloween in a costume that the neural net suggested? 

    Shane: I think that would be fun. But there would be some challenges. I would love to go as the sexy Flying Dutchman. But my ambition may constrict me to do something more like a list of leg parts. 


What Is the Uncanny Valley?

Post Syndicated from Rina Diane Caballar original https://spectrum.ieee.org/automaton/robotics/humanoids/what-is-the-uncanny-valley

Have you ever encountered a lifelike humanoid robot or a realistic computer-generated face that seems a bit off or unsettling, though you can’t quite explain why?

Take for instance AVA, one of the “digital humans” created by New Zealand tech startup Soul Machines as an on-screen avatar for Autodesk. Watching a lifelike digital being such as AVA can be both fascinating and disconcerting. AVA expresses empathy through her demeanor and movements: slightly raised brows, a tilt of the head, a nod.

By meticulously rendering every lash and line in its avatars, Soul Machines aimed to create a digital human that is virtually indistinguishable from a real one. But to many, rather than looking natural, AVA actually looks creepy. There’s something about it being almost human but not quite that can make people uneasy.

Like AVA, many other ultra-realistic avatars, androids, and animated characters appear stuck in a disturbing in-between world: They are so lifelike and yet they are not “right.” This void of strangeness is known as the uncanny valley.

Harvard’s UrchinBot Is One of the Weirdest Looking Robots We’ve Ever Seen

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/robotics/robotics-hardware/harvard-amphibious-urchinbot

On the spectrum of weird stuff that can be found in the ocean, sea urchins are probably somewhere in the middle. They’re an interesting combination of rigid and flexible, with shells covered in hard movable spines as well as soft tubular appendages that work like a combination of legs and sticky feet. The mobility strategy of sea urchins leverages both of these appendages, and while they may not be speedy, they can get themselves into all kinds of potentially useful nooks and crannies, which seems like a capability that could be valuable in a robot.

At IROS 2019 this week, roboticists from Harvard presented a bioinspired robot that “incorporates anatomical features unique to sea urchins,” actuated by pneumatics or hydraulics and operating without a tether. It may be based on a real animal, but even so, UrchinBot is definitely one of the weirdest looking robots we’ve ever seen.

As it turns out, adult sea urchins are complicated critters, and making a robotic version of one of them was asking a bit much. Juvenile sea urchins incorporate the same basic features in a much simpler body, and while they’re only 0.5 millimeters in size, a scaled-up version (with a body 230 mm in diameter) was much more feasible.

Just like the adults, sea urchin babies have two mobility appendages: movable spines, and sticky tube feet. The physical resemblance is striking, but it’s much more than just aesthetics, as the researchers emphasize that “particular attention was paid to accurately replicating the geometry and ranges of motion of the anatomical features on which our design was based.”

UrchinBot’s spines (which the real animal uses for protection, mobility, and to jam itself into crevices) reflect the two different kinds of spines that you see on juvenile urchins. Nobody’s quite sure why the babies have fancier spines than the adults, but UrchinBot replicates that detail too. Each spine is connected to the body with a ball joint, and a triangle of three pneumatic domes around the joint can inflate to push the spine in different directions. All of the domes are interconnected inside the robot, which means both that the spines can’t be actuated separately and that you get a satisfyingly symmetric rotational motion whenever the spines move. As they rotate against a surface that UrchinBot is resting on, the robot slowly turns itself in the opposite direction.

The tube feet are a little more complicated, because real urchins excrete sticky stuff that they use to glue themselves to surfaces, and then excrete an enzyme that dissolves the glue when they want to move. UrchinBot instead uses extendable and retractable toe magnets, which work perfectly well as long as the robot is moving on a ferrous surface. As the tube feet inflate, they move outward and angle their tips down, and with enough pressure, the toe magnets pop out and adhere. UrchinBot then reverses its hydraulics to suck the tube foot back in, pulling itself towards the adhesion point, and causing the magnet to pop off again once it gets there.

The rest of UrchinBot’s body is taken up with pumps, valves, and electronics that allow it to operate completely untethered, both on land and underwater. Here it is in action:

It turns out that UrchinBot’s spines exhibit a range of motion similar to that of an actual urchin, which is neat. The tube feet can achieve an extension ratio of 6:1, which is reasonably close to a juvenile urchin’s 10:1 ratio, but much less than an adult urchin, which can extend its tube feet out to a 50:1 ratio. UrchinBot is not as fast as the real thing, which is to be expected with most bioinspired robots. Top speed is 6 mm/s, or 0.027 body-lengths per second, quite a bit slower than a juvenile urchin (which can hit 10 body-lengths per second going flat out) but only half as fast as an adult urchin.

UrchinBot may not be the speediest robot under the sea, but the researchers say that it could be useful for underwater cleaning and inspection applications, especially in situations where heavy fouling would be a challenge for more conventional robots. The priority for UrchinBot upgrades is to stuff it with as many extra actuators as it’ll hold, with the goal of making the spines actuate individually and giving the tube feet extra degrees of freedom. While UrchinBot may not find near-term applications, it serves as a testbed to help researchers identify physical features and control techniques that could result in new types of more versatile and effective underwater robots.

“Design, Fabrication, and Characterization of an Untethered Amphibious Sea Urchin-Inspired Robot,” by Thibaut Paschal, Michael A. Bell, Jakob Sperry, Satchel Sieniewicz, Robert J. Wood, and James C. Weaver from Harvard’s Wyss Institute, was presented this week at IROS 2019 in Macau.

Trump CTO Addresses AI, Facial Recognition, Immigration, Tech Infrastructure, and More

Post Syndicated from Tekla S. Perry original https://spectrum.ieee.org/view-from-the-valley/robotics/artificial-intelligence/trump-cto-addresses-ai-facial-recognition-immigration-tech-infrastructure-and-more

Michael Kratsios, the Chief Technology Officer of the United States, took the stage at Stanford University last week to field questions from Stanford’s Eileen Donahoe and attendees at the 2019 Fall Conference of the Institute for Human-Centered Artificial Intelligence (HAI).

Kratsios, the fourth to hold the U.S. CTO position since its creation by President Barack Obama in 2009, was confirmed in August as President Donald Trump’s first CTO. Before joining the Trump administration, he was chief of staff at investment firm Thiel Capital and chief financial officer of hedge fund Clarium Capital. Donahoe is Executive Director of Stanford’s Global Digital Policy Incubator and served as the first U.S. Ambassador to the United Nations Human Rights Council during the Obama Administration.

The conversation jumped around, hitting on both accomplishments and controversies. Kratsios touted the administration’s success in fixing policy around the use of drones, its memorandum on STEM education, and an increase in funding for basic research in AI—though the magnitude of that increase wasn’t specified. He pointed out that the Trump administration’s AI policy has been a continuation of the policies of the Obama administration, and will continue to build on that foundation. As proof of this, he pointed to Trump’s signing of the American AI Initiative earlier this year. That executive order, Kratsios said, was intended to bring various government agencies together to coordinate their AI efforts and to push the idea that AI is a tool for the American worker. The AI Initiative, he noted, also took into consideration that AI will cause job displacement, and asked private companies to pledge to retrain workers.

The administration, he said, is also looking to remove barriers to AI innovation. In service of that goal, the government will, in the next month or so, release a regulatory guidance memo instructing government agencies about “how they should think about AI technologies,” said Kratsios.

U.S. vs China in AI

A few of the exchanges between Kratsios and Donahoe hit on current hot topics, starting with the tension between the U.S. and China.

Donahoe:

“You talk a lot about the unique U.S. ecosystem. In which aspect of AI is the U.S. dominant, and where is China challenging us in dominance?”

Kratsios:

“They are challenging us on machine vision. They have more data to work with, given that they have surveillance data.”

Donahoe:

“To what extent would you say the quantity of data collected and available will be a determining factor in AI dominance?”

Kratsios:

“It makes a big difference in the short term. But we do research on how we get over these data humps. There is a future where you don’t need as much data, a lot of federal grants are going to [research in] how you can train models using less data.”

Donahoe turned the conversation to a different tension—that between innovation and values.

Donahoe:

“A lot of conversation yesterday was about the tension between innovation and values, and how do you hold those things together and lead in both realms.”

Kratsios:

“We recognized that the U.S. hadn’t signed on to principles around developing AI. In May, we signed [the Organization for Economic Cooperation and Development Principles on Artificial Intelligence], coming together with other Western democracies to say that these are values that we hold dear.

[Meanwhile,] we have adversaries around the world using AI to surveil people, to suppress human rights. That is why American leadership is so critical: We want to come out with the next great product. And we want our values to underpin the use cases.”

A member of the audience pushed further:

“Maintaining U.S. leadership in AI might have costs in terms of individuals and society. What costs should individuals and society bear to maintain leadership?”

Kratsios:

“I don’t view the world that way. Our companies big and small do not hesitate to talk about the values that underpin their technology. [That is] markedly different from the way our adversaries think. The alternatives are so dire [that we] need to push efforts to bake the values that we hold dear into this technology.”

Facial recognition

And then the conversation turned to the use of AI for facial recognition, an application which (at least for police and other government agencies) was recently banned in San Francisco.

Donahoe:

“Some private sector companies have called for government regulation of facial recognition, and there already are some instances of local governments regulating it. Do you expect federal regulation of facial recognition anytime soon? If not, what ought the parameters be?”

Kratsios:

“A patchwork of regulation of technology is not beneficial for the country. We want to avoid that. Facial recognition has important roles—for example, finding lost or displaced children. There are use cases, but they need to be underpinned by values.”

A member of the audience followed up on that topic, referring to some data presented earlier at the HAI conference on bias in AI:

“Frequently the example of finding missing children is given as the example of why we should not restrict use of facial recognition. But we saw Joy Buolamwini’s  presentation on bias in data. I would like to hear your thoughts about how government thinks we should use facial recognition, knowing about this bias.”

Kratsios:

“Fairness, accountability, and robustness are things we want to bake into any technology—not just facial recognition—as we build rules governing use cases.”

Immigration and innovation

A member of the audience brought up the issue of immigration:

“One major pillar of innovation is immigration. Does your office advocate for it?”

Kratsios:

“Our office pushes for the best and brightest people from around the world to come to work here and study here. There are a few efforts we have made to move towards a more merit-based immigration system, without congressional action. [For example, in] the H-1B visa system, you go through two lotteries. We switched the order of them in order to get more people with advanced degrees through.”

The government’s tech infrastructure

Donahoe brought the conversation around to the tech infrastructure of the government itself:

 “We talk about the shiny object, AI, but the 80 percent is the unsexy stuff, at federal and state levels. We don’t have a modern digital infrastructure to enable all the services—like a research cloud. How do we create this digital infrastructure?”

Kratsios:

“I couldn’t agree more; the least partisan issue in Washington is about modernizing IT infrastructure. We spend like $85 billion a year on IT at the federal level, we can certainly do a better job of using those dollars.”

Microsoft’s AI Research Draws Controversy Over Possible Disinformation Use

Post Syndicated from Jeremy Hsu original https://spectrum.ieee.org/tech-talk/robotics/artificial-intelligence/microsofts-ai-research-draws-controversy-over-possible-disinformation-use

AI capable of automatically posting relevant comments on news articles has raised concerns that the technology could empower online disinformation campaigns designed to influence public opinion and national elections. The AI research in question, conducted by Microsoft Research Asia and Beihang University in China, became the subject of controversy even prior to the paper’s scheduled presentation at a major AI conference this week.

We’re at IROS 2019 to Bring You the Most Exciting Robotics Research From Around the World

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/robotics/robotics-hardware/were-at-iros-2019-in-macau

The 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) is taking place in Macau this week, featuring well over a thousand presentations on the newest and most amazing robotics research from around the world. There are also posters, workshops, tutorials, an exhibit hall, and plenty of social events where roboticists have the chance to get a little tipsy and talk about all the really interesting stuff.

As always, our plan is to bring you all of the coolest, weirdest, and most interesting things that we find at the show, and here are just a few of the things we’re looking forward to this week:

  • Flying robots with wings, tails, and… arms?
  • Spherical robot turtles
  • An update on that crazy jet-powered iCub
  • Agile and tiny robot insects
  • Metallic self-healing robot bones
  • How to train robots by messing with them
  • A weird robot sea urchin

And all that is happening just on Tuesday!

Our IROS coverage will continue beyond this week, so keep checking back for more of the best new robotics from Macau.

[ IROS 2019 ]

In the 17th Century, Leibniz Dreamed of a Machine That Could Calculate Ideas

Post Syndicated from Oscar Schwartz original https://spectrum.ieee.org/tech-talk/robotics/artificial-intelligence/in-the-17th-century-leibniz-dreamed-of-a-machine-that-could-calculate-ideas

This is part two of a six-part series on the history of natural language processing.

In 1666, the German polymath Gottfried Wilhelm Leibniz published an enigmatic dissertation entitled On the Combinatorial Art. Only 20 years old but already an ambitious thinker, Leibniz outlined a theory for automating knowledge production via the rule-based combination of symbols.

Leibniz’s central argument was that all human thoughts, no matter how complex, are combinations of basic and fundamental concepts, in much the same way that sentences are combinations of words, and words combinations of letters. He believed that if he could find a way to symbolically represent these fundamental concepts and develop a method by which to combine them logically, then he would be able to generate new thoughts on demand.

The idea came to Leibniz through his study of Ramon Llull, a 13th century Majorcan mystic who devoted himself to devising a system of theological reasoning that would prove the “universal truth” of Christianity to non-believers.

Llull himself was inspired by Jewish Kabbalists’ letter combinatorics (see part one of this series), which they used to produce generative texts that supposedly revealed prophetic wisdom. Taking the idea a step further, Llull invented what he called a volvelle, a circular paper mechanism with increasingly small concentric circles on which were written symbols representing the attributes of God. Llull believed that by spinning the volvelle in various ways, bringing the symbols into novel combinations with one another, he could reveal all the aspects of his deity.

Leibniz was much impressed by Llull’s paper machine, and he embarked on a project to create his own method of idea generation through symbolic combination. He wanted to use his machine not for theological debate, but for philosophical reasoning. He proposed that such a system would require three things: an “alphabet of human thoughts”; a list of logical rules for their valid combination and re-combination; and a mechanism that could carry out the logical operations on the symbols quickly and accurately—a fully mechanized update of Llull’s paper volvelle. 
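The combinatorial core of that scheme is easy to sketch in modern terms. In the toy example below, the handful of “primitive concepts” and the single pairing rule are invented for illustration and are not Leibniz’s own alphabet or logic.

```python
from itertools import combinations

# A toy rendering of Leibniz's three-part scheme: a tiny, invented "alphabet
# of human thoughts" stands in for his primitive concepts, and one mechanical
# rule (pair every concept with every other) stands in for his logic of
# combination.
alphabet_of_thoughts = ["being", "mind", "body", "motion", "number"]

for a, b in combinations(alphabet_of_thoughts, 2):
    print(f"{a} combined with {b}")
```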

He imagined that this machine, which he called “the great instrument of reason,” would be able to answer all questions and resolve all intellectual debate. “When there are disputes among persons,” he wrote, “we can simply say, ‘Let us calculate,’ and without further ado, see who is right.”

Video Friday: DJI’s Mavic Mini Is a $400 Palm-Sized Foldable Drone

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/robotics/drones/video-friday-dji-mavic-mini-palm-sized-foldable-drone

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

IROS 2019 – November 4-8, 2019 – Macau

Let us know if you have suggestions for next week, and enjoy today’s videos.


This MIT Robot Wants to Use Your Reflexes to Walk and Balance

Post Syndicated from Erico Guizzo original https://spectrum.ieee.org/automaton/robotics/humanoids/mit-little-hermes

MIT researchers have demonstrated a new kind of teleoperation system that allows a two-legged robot to “borrow” a human operator’s physical skills to move with greater agility. The system works a bit like those haptic suits from the Spielberg movie “Ready Player One.” But while the suits in the film were used to connect humans to their VR avatars, the MIT suit connects the operator to a real robot.

Blue Frog Robotics Answers (Some of) Our Questions About Its Delayed Social Robot Buddy

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/robotics/home-robots/interview-blue-frog-robotics-ceo

In September of 2015, Buddy the social home robot closed its Indiegogo crowdfunding campaign more than 600 percent over its funding goal. A thousand people pledged for a robot originally scheduled to be delivered in December of 2016. But nearly three years later, the future of Buddy is still unclear. Last May, Blue Frog Robotics asked for forgiveness from its backers and announced the launch of an “equity crowdfunding campaign” to try to raise the additional funding necessary to deliver the robot in April of 2020.

By the time the crowdfunding campaign launched in August, the delivery date had slipped again, to September 2020, even as Blue Frog attempted to draw investors by estimating that sales of Buddy would “increase from 2000 robots in 2020 to 20,000 in 2023.” Blue Frog’s most recent communication with backers, in September, mentions a new CTO and a North American office, but does little to reassure backers of Buddy that they’ll ever be receiving their robot. 

Backers of the robot are understandably concerned about the future of Buddy, so we sent a series of questions to the founder and CEO of Blue Frog Robotics, Rodolphe Hasselvander.

Natural Language Processing Dates Back to Kabbalist Mystics

Post Syndicated from Oscar Schwartz original https://spectrum.ieee.org/tech-talk/robotics/artificial-intelligence/natural-language-processing-dates-back-to-kabbalist-mystics

This is part one of a six-part series on the history of natural language processing.

We’re in the middle of a boom time for natural language processing (NLP), the field of computer science that focuses on linguistic interactions between humans and machines. Thanks to advances in machine learning over the past decade, we’ve seen vast improvements in speech recognition and machine translation software. Language generators are now good enough to write coherent news articles, and virtual agents like Siri and Alexa are becoming part of our daily lives.

Most trace the origins of this field back to the beginning of the computer age, when Alan Turing, writing in 1950, imagined a smart machine that could interact fluently with a human via typed text on a screen. For this reason, machine-generated language is mostly understood as a digital phenomenon—and a central goal of artificial intelligence (AI) research.

This six-part series will challenge that common understanding of NLP. In fact, attempts to design formal rules and machines that can analyze, process, and generate language go back hundreds of years.

While specific technologies have changed over time, the basic idea of treating language as a material that can be artificially manipulated by rule-based systems has been pursued by many people in many cultures and for many different reasons. These historical experiments reveal the promise and perils of attempting to simulate human language in non-human ways—and they hold lessons for today’s practitioners of cutting-edge NLP techniques. 

The story begins in medieval Spain. In the late 1200s, a Jewish mystic by the name of Abraham Abulafia sat down at a table in his small house in Barcelona, picked up a quill, dipped it in ink, and began combining the letters of the Hebrew alphabet in strange and seemingly random ways. Aleph with Bet, Bet with Gimmel, Gimmel with Aleph and Bet, and so on.

Video Friday: Kuka’s Robutt Is a Robot Designed to Assess New Car Seats

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/robotics/robotics-hardware/video-friday-kuka-robutt-robot-new-car-seats

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

ARSO 2019 – October 31-November 1, 2019 – Beijing, China
ROSCon 2019 – October 31-November 1, 2019 – Macau
IROS 2019 – November 4-8, 2019 – Macau

Let us know if you have suggestions for next week, and enjoy today’s videos.


Let’s Build Robots That Are as Smart as Babies

Post Syndicated from Stacey Higginbotham original https://spectrum.ieee.org/robotics/artificial-intelligence/lets-build-robots-that-are-as-smart-as-babies

Let’s face it: Robots are dumb. At best they are idiot savants, capable of doing one thing really well. In general, even those robots require specialized environments in which to do their one thing really well. This is why autonomous cars or robots for home health care are so difficult to build. They’ll need to react to an uncountable number of situations, and they’ll need a generalized understanding of the world in order to navigate them all.

Babies as young as two months already understand that an unsupported object will fall, while five-month-old babies know materials like sand and water will pour from a container rather than plop out as a single chunk. Robots lack these understandings, which hinders them as they try to navigate the world without a prescribed task and movement.

But we could see robots with a generalized understanding of the world (and the processing power required to wield it) thanks to the video-game industry. Researchers are bringing physics engines—the software that provides real-time physical interactions in complex video-game worlds—to robotics. The goal is to develop robots that understand the world and learn about it the same way babies do.

Giving robots a baby’s sense of physics helps them navigate the real world and can even save on computing power, according to Lochlainn Wilson, the CEO of SE4, a Japanese company building robots that could operate on Mars. SE4 plans to avoid the problems of latency caused by distance from Earth to Mars by building robots that can operate independently for a few hours before receiving more instructions from Earth.

Wilson says that his company uses simple physics engines such as PhysX to help build more-independent robots. He adds that if you can tie a physics engine to a coprocessor on the robot, the real-time basic physics intuitions won’t take compute cycles away from the robot’s primary processor, which will often be focused on a more complicated task.

Wilson’s firm occasionally still turns to a traditional graphics engine, such as Unity or the Unreal Engine, to handle the demands of a robot’s movement. In certain cases, however, such as a robot accounting for friction or understanding force, you really need a robust physics engine, Wilson says, not a graphics engine that simply simulates a virtual environment. For his projects, he often turns to the open-source Bullet Physics engine built by Erwin Coumans, who is now an employee at Google.
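As a rough illustration of that kind of “baby physics” intuition, the sketch below uses pybullet, the Python binding for the Bullet engine mentioned above, to ask whether an unsupported cube will fall; the falling-cube scenario is invented for illustration and isn’t taken from SE4’s or Nvidia’s actual systems.

```python
import pybullet as p
import pybullet_data

# Headless physics: no GUI, just predictions.
p.connect(p.DIRECT)
p.setAdditionalSearchPath(pybullet_data.getDataPath())
p.setGravity(0, 0, -9.81)
p.loadURDF("plane.urdf")  # the ground

# A 10-cm cube hovering 1 m above the ground, i.e. unsupported.
cube = p.createMultiBody(
    baseMass=1.0,
    baseCollisionShapeIndex=p.createCollisionShape(p.GEOM_BOX, halfExtents=[0.05] * 3),
    basePosition=[0, 0, 1.0],
)

# Step the simulation for one second (default time step is 1/240 s).
for _ in range(240):
    p.stepSimulation()

height = p.getBasePositionAndOrientation(cube)[0][2]
print(f"cube height after 1 s: {height:.2f} m")  # about 0.05 m: the cube fell
p.disconnect()
```

Because a check like this runs headless, it could in principle live on a coprocessor, along the lines Wilson suggests, leaving the robot’s primary processor free for the harder task at hand.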

Bullet is a popular physics-engine option, but it isn’t the only one out there. Nvidia Corp., for example, has realized that its gaming and physics engines are well-placed to handle the computing demands required by robots. In a lab in Seattle, Nvidia is working with teams from the University of Washington to build kitchen robots, fully articulated robot hands and more, all equipped with Nvidia’s tech.

When I visited the lab, I watched a robot arm move boxes of food from counters to cabinets. That’s fairly straightforward, but that same robot arm could avoid my body if I got in its way, and it could adapt if I moved a box of food or dropped it onto the floor.

The robot could also understand that less pressure is needed to grasp something like a cardboard box of Cheez-It crackers versus something more durable like an aluminum can of tomato soup.

Nvidia’s silicon has already helped advance the fields of artificial intelligence and computer vision by making it possible to process multiple decisions in parallel. It’s possible that the company’s new focus on virtual worlds will help advance the field of robotics and teach robots to think like babies.

This article appears in the November 2019 print issue as “Robots as Smart as Babies.”

Robot Teaches Kids Hand Washing Skills in Rural India

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/robotics/robotics-hardware/robot-teaches-kids-hand-washing-skills-in-rural-india

Last Tuesday, October 15th, was Global Handwashing Day, which I’m sure you celebrated by washing your hands every time you were supposed to. Hand washing is a skill that can be difficult for kids to learn, because there usually aren’t immediate negative consequences to not washing your hands. The non-immediate consequences can be severe, though—hand washing with soap can prevent 40 percent of diarrhea and respiratory infections, which kill about 1,300 young children around the world each day. Kids that don’t die can still get very sick, and about 443 million school days are lost globally each year due to water- and sanitation-related diseases in developing countries.

In rural India, only about 18 percent of people wash their hands with soap, and the way to change this is to start teaching hand washing as a skill at a young age. It’s especially important to do this in schools, which are both places for teaching things and places where diseases get transmitted (although hopefully more the first thing than the second thing). But we don’t want teachers spending their time standing by the bathroom sink nagging their pupils, and we don’t have to, because robots can help with this.