All posts by Eliza Strickland

The Blogger Behind “AI Weirdness” Thinks Today’s AI Is Dumb and Dangerous

Post Syndicated from Eliza Strickland original https://spectrum.ieee.org/tech-talk/robotics/artificial-intelligence/blogger-behind-ai-weirdness-thinks-todays-ai-is-dumb-and-dangerous

Sure, artificial intelligence is transforming the world’s societies and economies—but can an AI come up with plausible ideas for a Halloween costume? 

Janelle Shane has been asking such probing questions since she started her AI Weirdness blog in 2016. She specializes in training neural networks (which underpin most of today’s machine learning techniques) on quirky data sets such as compilations of knitting instructions, ice cream flavors, and names of paint colors. Then she asks the neural net to generate its own contributions to these categories—and hilarity ensues. AI is not likely to disrupt the paint industry with names like “Ronching Blue,” “Dorkwood,” and “Turdly.” 
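
Experiments like these typically rely on a character-level neural language model: a network that reads example names one character at a time and learns to predict the next character, then samples new names from those learned probabilities. Below is a minimal sketch of that general technique in PyTorch. It is an illustration only: the three-name dataset and the network sizes are placeholders, and this is not Shane's actual code.

    # A toy character-level LSTM for name generation. Everything here is
    # illustrative: the three "names" stand in for a real data set, and
    # the network sizes are arbitrary.
    import torch
    import torch.nn as nn

    names = ["Ronching Blue", "Dorkwood", "Turdly"]
    chars = sorted(set("".join(names))) + ["\n"]  # "\n" marks end-of-name
    idx = {c: i for i, c in enumerate(chars)}

    class CharLSTM(nn.Module):
        def __init__(self, vocab, hidden=64):
            super().__init__()
            self.embed = nn.Embedding(vocab, 16)
            self.lstm = nn.LSTM(16, hidden, batch_first=True)
            self.out = nn.Linear(hidden, vocab)

        def forward(self, x, state=None):
            h, state = self.lstm(self.embed(x), state)
            return self.out(h), state

    model = CharLSTM(len(chars))
    opt = torch.optim.Adam(model.parameters(), lr=0.01)
    loss_fn = nn.CrossEntropyLoss()

    for epoch in range(300):  # learn to predict each next character
        for name in names:
            seq = [idx[c] for c in name + "\n"]
            x = torch.tensor([seq[:-1]])
            y = torch.tensor(seq[1:])
            logits, _ = model(x)
            loss = loss_fn(logits[0], y)
            opt.zero_grad()
            loss.backward()
            opt.step()

    # Sample a new "name" one character at a time.
    x = torch.tensor([[idx["D"]]])  # seed character
    state, generated = None, ["D"]
    for _ in range(30):
        logits, state = model(x, state)
        probs = torch.softmax(logits[0, -1], dim=0)
        c = chars[torch.multinomial(probs, 1).item()]
        if c == "\n":
            break
        generated.append(c)
        x = torch.tensor([[idx[c]]])
    print("".join(generated))

Trained on thousands of real examples instead of three, the same loop produces the almost-but-not-quite-right names the blog is known for.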

Shane’s antics have a serious purpose. She aims to illustrate the serious limitations of today’s AI, and to counteract the prevailing narrative that describes AI as well on its way to superintelligence and complete human domination. “The danger of AI is not that it’s too smart,” Shane writes in her new book, “but that it’s not smart enough.” 

The book, which came out on Tuesday, is called You Look Like a Thing and I Love You. It takes its odd title from a list of AI-generated pick-up lines, all of which would at least get a person’s attention if shouted, preferably by a robot, in a crowded bar. Shane’s book is shot through with her trademark absurdist humor, but it also contains real explanations of machine learning concepts and techniques. It’s a painless way to take AI 101. 

She spoke with IEEE Spectrum about the perils of placing too much trust in AI systems, the strange AI phenomenon of “giraffing,” and her next potential Halloween costume. 

Janelle Shane on . . .


  1. The un-delicious origin of her blog
  2. “The narrower the problem, the smarter the AI will seem”
  3. Why overestimating AI is dangerous
  4. Giraffing!
  5. Machine and human creativity
    The un-delicious origin of her blog

    IEEE Spectrum: You studied electrical engineering as an undergrad, then got a master’s degree in physics. How did that lead to you becoming the comedian of AI? 

    Janelle Shane: I’ve been interested in machine learning since freshman year of college. During orientation at Michigan State, a professor who worked on evolutionary algorithms gave a talk about his work. It was full of the most interesting anecdotes, some of which I’ve used in my book. He told an anecdote about people setting up a machine learning algorithm to do lens design, and the algorithm did end up designing an optical system that worked… except one of the lenses was 50 feet thick, because they didn’t specify that it couldn’t do that.

    I started working in his lab on optics, doing ultra-short laser pulse work. I ended up doing a lot more optics than machine learning, but I always found it interesting. One day I came across a list of recipes that someone had generated using a neural net, and I thought it was hilarious and remembered why I thought machine learning was so cool. That was in 2016, ages ago in machine learning land.

    Spectrum: So you decided to “establish weirdness as your goal” for your blog. What was the first weird experiment that you blogged about? 

    Shane: It was generating cookbook recipes. The neural net came up with ingredients like “Take ¼ pounds of bones or fresh bread.” That recipe started out: “Brown the salmon in oil, add creamed meat to the mixture.” It was making mistakes that showed the thing had no memory at all.

    Spectrum: You say in the book that you can learn a lot about AI by giving it a task and watching it flail. What do you learn?

    Shane: One thing you learn is how much it relies on surface appearances rather than deep understanding. With the recipes, for example, it got the structure right: title, category, ingredients, instructions, yield at the end. But when you look more closely, it has instructions like “Fold the water and roll it into cubes.” So clearly this thing does not understand water, let alone the other ingredients. It’s recognizing certain phrases that tend to occur, but it doesn’t have a concept that these recipes are describing something real. You start to realize how very narrow the algorithms in this world are. They only know exactly what we tell them in our data set.


    “The narrower the problem, the smarter the AI will seem”

    Spectrum: That makes me think of DeepMind’s AlphaGo, which was universally hailed as a triumph for AI. It can play the game of Go better than any human, but it doesn’t know what Go is. It doesn’t know that it’s playing a game. 

    Shane: It doesn’t know what a human is, or if it’s playing against a human or another program. That’s also a nice illustration of how well these algorithms do when they have a really narrow and well-defined problem. 

    The narrower the problem, the smarter the AI will seem. If it’s not just doing something repeatedly but instead has to understand something, coherence goes down. Take an algorithm that generates images of objects: if it is restricted to birds, it can produce a recognizable bird. But ask that same algorithm to generate images of any animal, and the task becomes so broad that the bird turns into an unrecognizable brown, feathered smear against a green background.

    Spectrum: That sounds… disturbing. 

    Shane: It’s disturbing in a weirdly amusing way. What’s really disturbing is the humans it generates. It hasn’t seen them enough times to have a good representation, so you end up with an amorphous, usually pale-faced thing with way too many orifices. If you ask it to generate an image of a person eating pizza, you’ll get blocks of pizza texture floating around. But if you give that image to an image-recognition algorithm that was trained on the same data set, it will say, “Oh yes, that’s a person eating pizza.”


    Why overestimating AI is dangerous

    Spectrum: Do you see it as your role to puncture the AI hype? 

    Shane: I do see it that way. Not a lot of people are bringing out this side of AI. When I first started posting my results, I’d get people saying, “I don’t understand, this is AI, shouldn’t it be better than this? Why doesn’t it understand?” Many of the impressive examples of AI have a really narrow task, or they’ve been set up to hide how little understanding they have. There’s a motivation, especially among people selling products based on AI, to represent the AI as more competent and understanding than it actually is.

    Spectrum: If people overestimate the abilities of AI, what risk does that pose? 

    Shane: I worry when I see people trusting AI with decisions it can’t handle, like hiring decisions or decisions about moderating content. These are really tough tasks for AI to do well on. There are going to be a lot of glitches. I see people saying, “The computer decided this so it must be unbiased, it must be objective.” 

    That’s another thing I find myself highlighting in the work I’m doing. If the data includes bias, the algorithm will copy that bias. You can’t tell it not to be biased, because it doesn’t understand what bias is. I think that message is an important one for people to understand. 

    If there’s bias to be found, the algorithm is going to go after it. It’s like, “Thank goodness, finally a signal that’s reliable.” But take a tough problem like looking at résumés and deciding who’s best for the job: if the algorithm’s task is to replicate human hiring decisions, it’s going to glom onto gender bias and race bias. There’s an example in the book of a hiring algorithm that Amazon was developing that discriminated against women, because the historical data it was trained on had that gender bias.
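
    The mechanism Shane describes is easy to reproduce on synthetic data. In this hypothetical sketch, the “historical” hiring labels penalize one group directly, and an off-the-shelf classifier trained to imitate those decisions dutifully learns the penalty; all the numbers are invented for illustration.

        # Toy demonstration: a model trained to replicate biased human
        # decisions learns the bias. All data here is synthetic.
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        n = 5000
        skill = rng.normal(size=n)          # true qualification
        group = rng.integers(0, 2, size=n)  # demographic flag, independent of skill
        # Historical decisions: based on skill, minus a penalty for group == 1.
        hired = skill - 1.0 * group + rng.normal(scale=0.5, size=n) > 0

        model = LogisticRegression().fit(np.column_stack([skill, group]), hired)
        print("learned weights [skill, group]:", model.coef_[0])
        # The group weight comes out strongly negative: the model has
        # "gone after" the bias in the labels, exactly as described above.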

    Spectrum: What are the other downsides of using AI systems that don’t really understand their tasks? 

    Shane: There is a risk in putting too much trust in AI and not examining its decisions. Another issue is that it can solve the wrong problems, without anyone realizing it. There have been a couple of cases in medicine. For example, there was an algorithm that was trained to recognize things like skin cancer. But instead of recognizing the actual skin condition, it latched onto signals like the markings a surgeon makes on the skin, or a ruler placed there for scale. It was treating those things as a sign of skin cancer. It’s another indication that these algorithms don’t understand what they’re looking at and what the goal really is. 


    Giraffing

    Spectrum: In your blog, you often have neural nets generate names for things—such as ice cream flavors, paint colors, cats, mushrooms, and types of apples. How do you decide on topics?

    Shane: Quite often it’s because someone has written in with an idea or a data set. They’ll say something like, “I’m the MIT librarian and I have a whole list of MIT thesis titles.” That one was delightful. Or they’ll say, “We are a high school robotics team, and we know where there’s a list of robotics team names.” It’s fun to peek into a different world. I have to be careful that I’m not making fun of the naming conventions in the field. But there’s a lot of humor simply in the neural net’s complete failure to understand. Puns in particular—it really struggles with puns. 

    Spectrum: Your blog is quite absurd, but it strikes me that machine learning is often absurd in itself. Can you explain the concept of giraffing?

    Shane: This concept was originally introduced by [internet security expert] Melissa Elliott. She proposed this phrase as a way to describe the algorithms’ tendency to see giraffes way more often than would be likely in the real world. She posted a whole bunch of examples, like a photo of an empty field in which an image-recognition algorithm has confidently reported that there are giraffes. Why does it think giraffes are present so often when they’re actually really rare? Because the algorithms are trained on data sets gathered online. People tend to say, “Hey look, a giraffe!” and then take a photo and share it. They don’t do that so often when they see an empty field with rocks.

    There’s also a chatbot that has a delightful quirk. If you show it some photo and ask it how many giraffes are in the picture, it will always answer with some nonzero number. This quirk comes from the way the training data was generated: these were questions asked and answered by humans online. People tended not to ask “How many giraffes are there?” when the answer was zero. So you can show it a picture of someone holding a Wii remote, and if you ask it how many giraffes are in the picture, it will say two.
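
    That answer-prior effect can be mimicked in a few lines. The sketch below is a deliberately dumb stand-in model, with invented data and no vision at all, that just returns the most common training answer, which is all an ungrounded model can fall back on:

        # If "zero" never appears among the training answers, a model
        # leaning on the answer prior can never say zero. Data is made up.
        from collections import Counter

        training_answers = ["1", "2", "2", "3", "1", "2", "4", "2"]
        prior = Counter(training_answers)

        def answer_like_a_giraffing_model(question, image=None):
            # No visual grounding: fall back on the modal training answer.
            return prior.most_common(1)[0][0]

        print(answer_like_a_giraffing_model("How many giraffes are there?"))
        # -> "2", even for a photo with no giraffes at all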


    Machine and human creativity

    Spectrum: AI can be absurd, and maybe also creative. But you make the point that AI art projects are really human-AI collaborations: Collecting the data set, training the algorithm, and curating the output are all artistic acts on the part of the human. Do you see your work as a human-AI art project?

    Shane: Yes, I think there is artistic intent in my work; you could call it literary or visual. It’s not so interesting to just take a pre-trained algorithm that’s been trained on utilitarian data and tell it to generate a bunch of stuff. Even if the algorithm isn’t one that I’ve trained myself, I think about what it’s doing that’s interesting, what kind of story I can tell around it, and what I want to show people.

    Spectrum: For the past three years you’ve been getting neural nets to generate ideas for Halloween costumes. As language models have gotten dramatically better over the past three years, are the costume suggestions getting less absurd? 

    Shane: Yes. Before, I would get a lot more nonsense words. This time I got phrases that were related to real things in the data set. I don’t believe the training data had the words Flying Dutchman or barnacle. But it was able to draw on its knowledge of which words are related to suggest things like sexy barnacle and sexy Flying Dutchman.

    Spectrum: This year, I saw on Twitter that someone made the gothy giraffe costume happen. Would you ever dress up for Halloween in a costume that the neural net suggested? 

    Shane: I think that would be fun. But there would be some challenges. I would love to go as the sexy Flying Dutchman. But my ambition may constrict me to do something more like a list of leg parts. 


Racial Bias Found in Algorithms That Determine Health Care for Millions of Patients

Post Syndicated from Eliza Strickland original https://spectrum.ieee.org/the-human-os/biomedical/ethics/racial-bias-found-in-algorithms-that-determine-health-care-for-millions-of-patients

An algorithm that a major medical center used to identify patients for extra care has been shown to be racially biased. 

The algorithm screened patients for enrollment in an intensive care management program, which gave them access to a dedicated hotline for a nurse practitioner, help refilling prescriptions, and so forth. The screening was meant to identify those patients who would most benefit from the program. But the white patients flagged for enrollment had fewer chronic health conditions than the black patients who were flagged.

In other words, black patients had to reach a higher threshold of illness before they were considered for enrollment. Care was not actually going to those people who needed it most.

Alarmingly, the algorithm was performing its task correctly. The problem was with how the task was defined: the tool ranked patients by their predicted future health costs, using cost as a proxy for health needs. Because less money is typically spent on black patients with the same level of need, the algorithm systematically underestimated how sick they were.

The findings, described in a paper that was just published in Science, point to a system-wide problem, says coauthor Ziad Obermeyer, a physician and researcher at the UC Berkeley School of Public Health. Similar screening tools are used throughout the country; according to industry estimates, these types of algorithms are making health decisions for 200 million people per year. 
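
The shape of that failure, optimizing a proxy instead of the real goal, is easy to see in a stylized simulation. Everything below is invented (the distributions, the 0.7 spending factor, the 97th-percentile cutoff) and is meant only to show why ranking by cost produces a higher sickness bar for one group:

    # Stylized proxy-bias simulation; all numbers are assumptions.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 100_000
    group = rng.integers(0, 2, size=n)      # 1 = disadvantaged group (toy)
    sickness = rng.gamma(2.0, 1.0, size=n)  # same sickness in both groups
    # Assumption: equal sickness, but less money gets spent on group 1.
    cost = sickness * np.where(group == 1, 0.7, 1.0)

    cutoff = np.quantile(cost, 0.97)        # top 3% by cost get extra care
    for g in (0, 1):
        chosen = (group == g) & (cost >= cutoff)
        print(f"group {g}: mean sickness of those flagged = "
              f"{sickness[chosen].mean():.2f}")
    # Group 1 patients must be sicker to clear the same cost cutoff,
    # which is the higher threshold of illness described above.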

Baseball’s Engineer: Ben Hansen Says Biometrics Can Save Pitchers’ Elbows

Post Syndicated from Eliza Strickland original https://spectrum.ieee.org/the-human-os/biomedical/devices/baseballs-engineer-ben-hansen-says-biometrics-can-save-pitchers-elbows

When Benjamin Hansen was playing baseball in high school, around 2006, technologies to monitor athletes’ bodies and performance weren’t yet commonplace. Yet Hansen wanted to collect data any way he could. “I would sit on the bench with a calculator and a stopwatch, timing the pitchers,” he says. He clicked the stopwatch when the pitcher released the baseball and again when the ball popped into the catcher’s mitt, then factored in the pitcher’s height in calculating the pitch velocity.
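
In modern terms, that bench calculation is just distance over flight time, with the pitcher’s height used to estimate how far in front of the rubber the ball is released. Here is a rough reconstruction; the 60.5-foot mound-to-plate distance is standard, but the extension factor is purely an assumption, since Hansen’s exact formula isn’t given:

    # Back-of-the-envelope pitch speed from stopwatch timing.
    MOUND_TO_PLATE_FT = 60.5
    FPS_TO_MPH = 3600 / 5280  # feet per second to miles per hour

    def pitch_speed_mph(flight_time_s, pitcher_height_ft):
        extension_ft = 0.9 * pitcher_height_ft  # assumed release extension
        distance_ft = MOUND_TO_PLATE_FT - extension_ft
        return distance_ft / flight_time_s * FPS_TO_MPH

    print(round(pitch_speed_mph(0.45, 6.0), 1))  # about 83.5 mph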

Hansen’s coach, however, was not impressed. “My coach should have embraced it,” he says, wistfully. “But instead he made me run laps.”

Hansen kept playing baseball through college, pitching for his team at the Milwaukee School of Engineering. But he was plagued by injuries. He well remembers a practice game in which he logged 15 straight outs—then felt a sharp pain in his elbow. He had partially torn his ulnar collateral ligament (UCL) and had to sit out the rest of the season. “I always asked the question: Why is this happening?” he says.

Today, Hansen is the vice president of biomechanics and innovation for Motus Global, in St. Petersburg, Fla., a startup that produces wearable sports technology. For IEEE Spectrum’s October issue, he describes Motus’s product for baseball pitchers, a compression sleeve with sensors to measure workload and muscle fatigue. From Little League to Major League Baseball, pitchers are using Motus gear to understand their bodies, improve performance, and prevent injuries.

Traditional wisdom holds that pitcher injuries result from faulty form. But data from Motus’s wearable indicates that it’s the accumulated workload on a player’s muscles and ligaments that causes injuries like UCL tears, which have become far too common in baseball. By displaying measurements of fatigue and suggesting training regimens, rehab workouts, and in-game strategies, the wearable can help prevent players from pushing themselves past their limits. It’s a goal that even Hansen’s old coach would probably endorse.

This article appears in the October 2019 print issue as “Throwing Data Around.”

The Ultimate Optimization Problem: How to Best Use Every Square Meter of the Earth’s Surface

Post Syndicated from Eliza Strickland original https://spectrum.ieee.org/tech-talk/energy/environment/the-ultimate-optimization-problem-how-to-best-use-every-square-meter-of-the-earths-surface

Lucas Joppa thinks big. Even while gazing down into his cup of tea in his modest office on Microsoft’s campus in Redmond, Washington, he seems to see the entire planet bobbing in there like a spherical tea bag. 

As Microsoft’s first chief environmental officer, Joppa came up with the company’s AI for Earth program, a five-year effort that’s spending US $50 million on AI-powered solutions to global environmental challenges.

The program is not just about specific deliverables, though. It’s also about mindset, Joppa told IEEE Spectrum in an interview in July. “It’s a plea for people to think about the Earth in the same way they think about the technologies they’re developing,” he says. “You start with an objective. So what’s our objective function for Earth?” (In computer science, an objective function describes the parameter or parameters you are trying to maximize or minimize for optimal results.)

AI for Earth launched in December 2017, and Joppa’s team has since given grants to more than 400 organizations around the world. In addition to receiving funding, some grantees get help from Microsoft’s data scientists and access to the company’s computing resources. 

In a wide-ranging interview about the program, Joppa described his vision of the “ultimate optimization problem”—figuring out which parts of the planet should be used for farming, cities, wilderness reserves, energy production, and so on. 

As Joppa describes it, every square meter of land and water on Earth has an infinite number of possible utility functions. It’s the job of Homo sapiens to describe our overall objective for the Earth; then it’s the job of computers to produce optimization results that are aligned with that human-defined objective.

“I don’t think we’re close at all to being able to do this,” he says. “I think we’re closer from a technology perspective—being able to run the model—than we are from a social perspective—being able to make decisions about what the objective should be. What do we want to do with the Earth’s surface?”
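
To make “objective function” concrete, here is a toy version of that allocation problem written as a linear program. The utilities and the wilderness floor are invented value judgments, which is precisely Joppa’s point: the optimizer is the easy part.

    # Toy land-use objective function; all coefficients are made up.
    from scipy.optimize import linprog

    utility = [3.0, 5.0, 2.0]  # per-unit utility: farming, city, wilderness
    c = [-u for u in utility]  # linprog minimizes, so negate to maximize

    A_eq, b_eq = [[1, 1, 1]], [1.0]    # fractions cover the whole region
    A_ub, b_ub = [[0, 0, -1]], [-0.3]  # human-chosen rule: >= 30% wilderness
    bounds = [(0, 1)] * 3

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    print(dict(zip(["farm", "city", "wild"], res.x.round(2))))
    # The solver maximizes whatever objective we hand it; deciding what
    # the objective should be is the hard, human part.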

AI Agents Startle Researchers With Unexpected Hide-and-Seek Strategies

Post Syndicated from Eliza Strickland original https://spectrum.ieee.org/tech-talk/robotics/artificial-intelligence/ai-agents-startle-researchers-with-unexpected-strategies-in-hideandseek

After 25 million games, the AI agents playing hide-and-seek with each other had mastered four basic game strategies. The researchers expected that part.

After a total of 380 million games, the AI players developed strategies that the researchers didn’t know were possible in the game environment—which the researchers had themselves created. That was the part that surprised the team at OpenAI, a research company based in San Francisco.

The AI players learned everything via a machine learning technique known as reinforcement learning. In this learning method, AI agents start out by taking random actions. Sometimes those random actions produce desired results, which earn them rewards. Via trial and error on a massive scale, they can learn sophisticated strategies.
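
The core of that loop fits in a few lines. The sketch below is a multi-armed bandit with an epsilon-greedy rule, vastly simpler than the hide-and-seek agents, but it shows the same mechanism: random exploration, occasional rewards, and a slow drift toward whatever works. The actions and payoff odds are invented.

    # Minimal trial-and-error learner (epsilon-greedy bandit).
    import random

    true_payoffs = [0.2, 0.5, 0.8]  # hidden reward probability per action
    estimates = [0.0] * 3
    counts = [0] * 3
    epsilon = 0.1                   # fraction of purely random actions

    for step in range(10_000):
        if random.random() < epsilon:
            action = random.randrange(3)                        # explore
        else:
            action = max(range(3), key=lambda a: estimates[a])  # exploit
        reward = 1.0 if random.random() < true_payoffs[action] else 0.0
        counts[action] += 1
        estimates[action] += (reward - estimates[action]) / counts[action]

    print([round(e, 2) for e in estimates])  # approaches the true payoffs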

In the context of games, this process can be abetted by having the AI play against another version of itself, ensuring that the opponents will be evenly matched. It also locks the AI into a process of one-upmanship, where any new strategy that emerges forces the opponent to search for a countermeasure. Over time, this “self-play” amounted to what the researchers call an “auto-curriculum.” 
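
A miniature of that one-upmanship dynamic: in the sketch below, two rock-paper-scissors players each repeatedly best-respond to the other’s observed habits (a classic scheme known as fictitious play, far simpler than the deep reinforcement learning OpenAI used). Every habit one side develops invites a counter, and play drifts toward the balanced mix.

    # Tiny self-play loop: each side counters the other's empirical play.
    import numpy as np

    # payoff[a][b] = reward to the row player; 0=rock, 1=paper, 2=scissors
    payoff = np.array([[0, -1, 1], [1, 0, -1], [-1, 1, 0]])
    counts = [np.ones(3), np.ones(3)]  # each side's observed action counts

    for round_ in range(20_000):
        for me, other in ((0, 1), (1, 0)):
            opponent_mix = counts[other] / counts[other].sum()
            best_response = int(np.argmax(payoff @ opponent_mix))
            counts[me][best_response] += 1

    print(counts[0] / counts[0].sum())  # drifts toward [1/3, 1/3, 1/3]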

According to OpenAI researcher Igor Mordatch, this experiment shows that self-play “is enough for the agents to learn surprising behaviors on their own—it’s like children playing with each other.”

Three Steps to a Moon Base

Post Syndicated from Eliza Strickland original https://spectrum.ieee.org/aerospace/space-flight/three-steps-to-a-moon-base

Space agencies and private companies are working on rockets, landers, and other tech for lunar settlement


In 1968, NASA astronaut Jim Lovell gazed out of a porthole from lunar orbit and remarked on the “vast loneliness” of the moon. It may not be a lonely place for much longer. Today, a new rush of enthusiasm for lunar exploration has swept up government space agencies, commercial space companies funded by billionaires, and startups that want in on the action. Here’s the tech they’re building that may enable humanity’s return to the moon, and the construction of the first permanent moon base.

A Wearable That Helps Women Get / Not Get Pregnant

Post Syndicated from Eliza Strickland original https://spectrum.ieee.org/the-human-os/biomedical/devices/a-wearable-that-helps-women-get-not-get-pregnant

The in-ear sensor from Yono Labs will soon predict a woman’s fertile days


Women’s bodies can be mysterious things—even to the women who inhabit them. But a wearable gadget called the Yono aims to replace mystery with knowledge derived from statistics, big data, and machine learning.

A woman who is trying to get pregnant may spend months tracking her ovulation cycle, often making a daily log of biological signals to determine her few days of fertility. While a plethora of apps promise to help, several studies have questioned these apps’ accuracy and efficacy.

Meanwhile, a woman who is trying to avoid pregnancy by the “fertility awareness method” may well not avoid it, since the method is only 75 percent effective.

IEEE Spectrum Introduces Drones and 360 Video to Its Reporting

Post Syndicated from Eliza Strickland original https://spectrum.ieee.org/tech-talk/robotics/drones/ieee-spectrum-introduces-drones-and-360-video-to-its-reporting

To produce 360 videos, reporters navigated technical challenges and red tape


Reporters used to head out into the field with nothing but a notebook. But when IEEE Spectrum associate editor Michael Koziol and contributing editor Evan Ackerman traveled to East Africa last October, they had plenty more gear to schlep. In addition to their laptops and digital recorders, they brought two DJI drones and an assortment of 360-degree video cameras.

The trip was part of an experiment in immersive storytelling. Koziol and Ackerman journeyed through Rwanda and Tanzania to report on pioneering companies that are using small consumer drones for such tasks as delivering medical supplies and surveying roads. The two gathered material for several articles, including a feature story about a company called Zipline that’s delivering blood to Rwanda’s rural hospitals.

They also sent their own drones aloft bearing their cameras, capturing footage for two videos included in our special report on drones in East Africa. One video brings viewers inside Zipline’s operations in Rwanda; the other, which will be published on Thursday, takes to the air with Tanzania’s drone startups. In these 360-degree videos the viewer can rotate the point of view in a complete circle, to see everything on all sides of the camera. That maneuvering is particularly fun when the camera is flying high over the landscape.

The reporters had to overcome various logistical and technical challenges to get their shots. First they had to navigate the maze of regulations that govern the flying of consumer drones, rules that differ from country to country. Because the technology is so new, the rules and procedures are a work in progress. And filming with the 360-degree cameras presented novel technical issues, says Ackerman, as he and Koziol had to consider everything that the viewer might see in the shot. There were two questions they often asked themselves, Ackerman says: “How do we make the whole scene interesting? And how do we hide from the camera?”

The technology and the travel were funded by a generous grant from the IEEE Foundation. We thank the foundation for supporting Spectrum’s mission: covering emerging technology that can change the world, and using cutting-edge technology to do so.  

A version of this post appears in the May 2019 print magazine as “Flying Into the Unknown.”

With “Leapfrog” Technologies, Africa Aims to Skip the Present and Go Straight to the Future

Post Syndicated from Eliza Strickland original https://spectrum.ieee.org/robotics/drones/with-leapfrog-technologies-africa-aims-to-skip-the-present-and-go-straight-to-the-future

IEEE is promoting engineering education and access to the latest technologies through its Africa initiative

By 2022, forecasters estimate that sub-Saharan Africa will have nearly 1 billion mobile phones—enough for the vast majority of the projected 1.2 billion people who will live there. What it won’t have are the endless streams of telephone poles and wires that cascade across other continents. Development experts call this an example of a “leapfrog technology.” By going straight to mobile, many African nations will be able to skip the step of building extensive and expensive landline infrastructure.

In fact, “some places will go straight to 5G,” says Vincent Kaabunga, chair of the IEEE Ad Hoc Committee on Africa, which has helped craft IEEE’s strategy to increase engineering capacity on the continent. With this kind of leapfrogging, African nations can take the lead in certain types of technological development and deployment, he says. Just look at mobile money: Companies such as M-Pesa sprang up to solve a local problem—people’s lack of access to brick-and-mortar banks—and became a way for people not only to make payments, but also to get loans and insurance. “We’ve refined the concept of mobile money over the last 10 or 15 years,” says Kaabunga, “while other parts of the world are just now coming around to embracing it.”

IEEE and its members in Africa are facilitating the application of new technologies by promoting education and access, says Kaabunga, who also works for a technology consulting firm in Kampala, Uganda. The IEEE in Africa Strategy, launched in 2017, calls for IEEE to support engineering education at every level and to advise government policymakers, efforts that Kaabunga and his colleagues in the region have already begun. For example, they’re currently working with the Smart Africa alliance, an initiative that aims to create standard policies for information and communications technology to enable a single digital marketplace across the continent.

Some governments are particularly aware of the need for more in-country engineering expertise. IEEE is currently concentrating its efforts in five nations—Ghana, Kenya, Rwanda, Uganda, and Zambia—all of which have long-term development plans emphasizing science and technology. These plans call for building human capital in the form of well-trained scientists and engineers, in addition to building fiber-optic broadband networks and clean power plants.

Rwanda is the setting of IEEE Spectrum’s behind-the-scenes report on a drone delivery company. The Rwandan government is trying to modernize its highway infrastructure, but it will be a long process; currently, narrow roads wind through the hilly terrain. Rwandan hospitals needing emergency supplies find that truck deliveries are often too slow for patients who require urgent care.

To address this need, a drone company called Zipline is providing air deliveries of blood to hospitals all across the country, and will soon begin delivering other lightweight medical supplies as well. While Zipline hails from California, it’s dedicated to building up local engineering capacity, and offers extensive training and professional development to its employees in Rwanda. And Zipline is not an anomaly. A vibrant drone startup scene has arisen in East Africa, with local companies finding applications in agriculture, road surveying, mining, and other areas.

Zipline is now expanding its drone delivery services to Ghana. It’s not yet clear how successful the company will be in its attempt to scale up its operations. Zipline’s financials are opaque, and its business model may not be viable without government subsidies. But the company is now working on a demonstration project in North Carolina. Whether or not medical drone delivery works out in the long run, one thing is certain: Africa is trying it first.

This article appears in the May 2019 print issue as “Engineering Change in Africa.”