All posts by Eliza Strickland

AI Can Help Hospitals Triage COVID-19 Patients

Post Syndicated from Eliza Strickland original https://spectrum.ieee.org/the-human-os/artificial-intelligence/medical-ai/ai-can-help-hospitals-triage-covid19-patients

As the coronavirus pandemic brings floods of people to hospital emergency rooms around the world, physicians are struggling to triage patients, trying to determine which ones will need intensive care. Volunteer doctors and nurses with no special pulmonary training must assess the condition of patients’ lungs. In Italy, at the peak of that country’s crisis, doctors faced terrible decisions about who should receive help and resources. 

An Official WHO Coronavirus App Will Be a “Waze for COVID-19”

Post Syndicated from Eliza Strickland original https://spectrum.ieee.org/the-human-os/biomedical/devices/who-official-coronavirus-app-waze-covid19

There’s no shortage of information about the coronavirus pandemic: News sites cover every development, and sites like the Johns Hopkins map constantly update the number of global cases (246,276 as of this writing).

But for the most urgent questions, there seem to be no answers. Is it possible that I caught the virus when I went out today? Did I cross paths with someone who’s infected? How prevalent is the coronavirus in my local community? And if I’m feeling sick, where can I go to get tested or find treatment?

A group of doctors and engineers has come together to create an app that will answer such questions. Daniel Kraft, the U.S.-based physician who’s leading the charge, says his group has “gotten the green light” from the World Health Organization (WHO) to build the open-source app, and that it will be an official WHO app to help people around the world cope with COVID-19, the official name of the illness caused by the new coronavirus.

Biotech Pioneer Is Making a Home Test Kit for Coronavirus

Post Syndicated from Eliza Strickland original https://spectrum.ieee.org/the-human-os/biomedical/diagnostics/biotech-pioneer-home-test-kit-coronavirus

Here’s something that sounds decidedly useful right now: A home test kit that anyone could use to see if they have the coronavirus. A kit that’s nearly as easy to use as a pregnancy test, and that would give results in half an hour. 

It doesn’t exist yet, but serial biotech entrepreneur Jonathan Rothberg is working on it. 

We’ve heard from Rothberg before. Over the past two decades, he has founded several genetic sequencing companies that introduced breakthrough technologies, making genome sequencing faster and cheaper. (With one company he made news by sequencing the first individual human genome, that of Jim Watson; he sold another company for US $375 million.)  In 2014, Rothberg launched a startup accelerator called 4Catalyzer dedicated to the invention of smart medical devices. The first company to emerge from the accelerator, Butterfly Network, now sells a handheld ultrasound tool that provides readouts on a smartphone screen. 

Rothberg announced his interest in a coronavirus test kit in a Twitter thread on 7 March. He initially described it as a “thought experiment,” but it quickly became a real project and a driving mission.

DARPA Races To Create a “Firebreak” Treatment for the Coronavirus

Post Syndicated from Eliza Strickland original https://spectrum.ieee.org/the-human-os/biomedical/bionics/darpas-firebreak-treatment-for-the-coronavirus

When DARPA launched its Pandemic Preparedness Platform (P3) program two years ago, the pandemic was theoretical. It seemed like a prudent idea to develop a quick response to emerging infectious diseases. Researchers working under the program sought ways to confer instant (but short-term) protection from a dangerous virus or bacterium.

Today, as the novel coronavirus causes a skyrocketing number of COVID-19 cases around the world, the researchers are racing to apply their experimental techniques to a true pandemic playing out in real time. “Right now, they have one shot on goal,” says DARPA program manager Amy Jenkins. “We’re really hoping it works.”

The P3 program’s plan was to start with a new pathogen and to “develop technology to deliver medical countermeasures in under 60 days—which was crazy, unheard of,” says Jenkins. The teams have proven they can meet this ambitious timeline in previous trials using the influenza and Zika viruses. Now they’re being asked to pull off the same feat with the new coronavirus, which more formally goes by the name SARS-CoV-2 and causes the illness known as COVID-19.

James Crowe, director of the Vanderbilt Vaccine Center at the Vanderbilt University Medical Center, leads one of the four P3 teams. He spoke with IEEE Spectrum from Italy, where he and his wife are currently traveling—and recovering from a serious illness. “We both got pretty sick,” Crowe says. “We’re pretty sure we already had the coronavirus.” For his wife, he says, it was the worst illness she’s had in three decades. They’re trying to get tested for the virus.

Ironically, Crowe is quite likely harboring the raw material that his team and others need to do their work. The DARPA approach called for employing antibodies, the proteins that our bodies naturally produce to fight infectious diseases and that remain in our bodies after an infection.

In the P3 program, the 60-day clock begins when a blood sample is taken from a person who has fully recovered from the disease of interest. Then the researchers screen that sample to find all the protective antibodies the person’s body has made to fight off the virus or bacteria. They use modeling and bioinformatics to choose the antibody that seems most effective at neutralizing the pathogen, and then determine the genetic sequence that codes for the creation of that particular antibody. That snippet of genetic code can then be manufactured quickly and at scale, and injected into people.

Jenkins says this approach is much faster than manufacturing the antibodies themselves. Once the genetic snippets are delivered by an injection, “your body becomes the bioreactor” that creates the antibodies, she says. The P3 program’s goal is to have protective levels of the antibodies circulating within 6 to 24 hours.

DARPA calls this a “firebreak” technology, because it can provide immediate immunity to medical personnel, first responders, and other vulnerable people. However, it wouldn’t create the permanent protection that vaccines provide. (Vaccines work by presenting the body with a safe form of the pathogen, thus giving the body a low-stakes opportunity to learn how to respond, which it remembers on future exposures.)

Robert Carnahan, who works with Crowe at the Vanderbilt Vaccine Center, explains that their method offers only temporary protection because the snippets of genetic code are messenger RNA, molecules that carry instructions for protein production. When the team’s specially designed mRNA is injected into the body, it’s taken up by cells (likely those in the liver) that churn out the needed antibodies. But eventually that RNA degrades, as do the antibodies that circulate through the bloodstream.

“We haven’t taught the body how to make the antibody,” Carnahan says, so the protection isn’t permanent. To put it in terms of a folksy aphorism: “We haven’t taught the body to fish, we’ve just given it a fish.”

Carnahan says the team has been scrambling for weeks to build the tools that enable them to screen for the protective antibodies; since very little is known about this new coronavirus, “the toolkit is being built on the fly.” They were also waiting for a blood sample from a fully recovered U.S. patient. Now they have their first sample, and are hoping for more in the coming weeks, so the real work is beginning. “We are doggedly pursuing neutralizing antibodies,” he says. “Right now we have active pursuit going on.”

Jenkins says that all of the P3 groups (the others are Greg Sempowski’s lab at Duke University, a small Vancouver company called AbCellera, and the big pharma company AstraZeneca) have made great strides in technologies that rapidly identify promising antibodies. In their earlier trials, the longer part of the process was manufacturing the mRNA and preparing for safety studies in animals. If the mRNA is intended for human use, the manufacturing and testing processes will be much slower because there will be many more regulatory hoops to jump through.

Crowe and Carnahan’s team has seen another project through to human trials; last year the lab worked with the biotech company Moderna to make mRNA that coded for antibodies that protected against the Chikungunya virus. “We showed it can be done,” Crowe says.

Moderna was involved in a related DARPA program known as ADEPT that has since ended. The company’s work on mRNA-based therapies has led it in another interesting direction—last week, the company made news with its announcement that it was testing an mRNA-based vaccine for the coronavirus. That vaccine works by delivering mRNA that instructs the body to make the “spike” protein that’s present on the surface of the coronavirus, thus provoking an immune response that the body will remember if it encounters the whole virus. 

While Crowe’s team isn’t pursuing a vaccine, it is hedging its bets by considering both the manufacture of mRNA that codes for the protective antibodies and the manufacture of the antibodies themselves. Crowe says that the latter process is typically slower (perhaps 18 to 24 months), but he’s talking with companies that are working on ways to speed it up. The direct injection of antibodies is a standard type of immunotherapy.

Crowe’s convalescence in Italy isn’t particularly relaxing. He’s working long days, trying to facilitate the shipment of samples to his lab from Asia, Europe, and the United States, and he’s talking to pharmaceutical companies that could manufacture whatever his lab comes up with. “We have to figure out a cooperative agreement for something we haven’t made yet,” he says, “but I expect that this week we’ll have agreements in place.”

Manufacturing agreements aren’t the end of the story, however. Even if everything goes perfectly for Crowe’s team and they have a potent antibody or mRNA ready for manufacture by the end of April, they’d have to get approval from the U.S. Food and Drug Administration. To get a therapy approved for human use typically takes years of studies on toxicity, stability, and efficacy. Crowe says that one possible shortcut is the FDA’s compassionate use program, which allows people to use unapproved drugs in certain life-threatening situations. 

Gregory Poland, director of the Mayo Clinic Vaccine Research Group, says that mRNA-based prophylactics and vaccines are an important and interesting idea. But he notes that all talk of these possibilities should be considered “very forward-looking statements.” He has seen too many promising candidates fail after years of clinical trials to get excited about a new bright idea, he says. 

Poland also says the compassionate use shortcut would likely only be relevant “if we’re facing a situation where something like Wuhan is happening in a major city in the U.S., and we have reason to believe that a new therapy would be efficacious and safe. Then that’s a possibility, but we’re not there yet,” he says. “We’d be looking for an unknown benefit and accepting an unknown risk.” 

Yet the looming regulatory hurdles haven’t stopped the P3 researchers from sprinting. AbCellera, the Vancouver-based biotech company, tells IEEE Spectrum that its researchers have spent the last month looking at antibodies against the related SARS virus that emerged in 2002, and that they’re now beginning to study the coronavirus directly. “Our main discovery effort from a COVID-19 convalescent donor is about to be underway, and we will unleash our full discovery capabilities there,” a representative wrote in an email. 

At Crowe’s lab, Carnahan notes that the present outbreak is the first one to benefit from new technologies that enable a rapid response. Carnahan points to the lab’s earlier work on the Zika virus: “In 78 days we went from a sample to a validated antibody that was highly protective. Will we be able to hit 78 days with the coronavirus? I don’t know, but that’s what we’re gunning for,” he says. “It’s possible in weeks and months, not years.”

Facebook AI Launches Its Deepfake Detection Challenge

Post Syndicated from Eliza Strickland original https://spectrum.ieee.org/tech-talk/robotics/artificial-intelligence/facebook-ai-launches-its-deepfake-detection-challenge

In September, Facebook sent out a strange casting call: We need all types of people to look into a webcam or phone camera and say very mundane things. The actors stood in bedrooms, hallways, and backyards, and they talked about topics such as the perils of junk food and the importance of arts education. It was a quick and easy gig—with an odd caveat. Facebook researchers would be altering the videos, extracting each person’s face and fusing it onto another person’s head. In other words, the participants had to agree to become deepfake characters. 

Facebook’s artificial intelligence (AI) division put out this casting call so it could ethically produce deepfakes—a term that originally referred to videos that had been modified using a certain face-swapping technique but is now a catchall for manipulated video. The Facebook videos are part of a training data set that the company assembled for a global competition called the Deepfake Detection Challenge. In this competition—produced in cooperation with Amazon, Microsoft, the nonprofit Partnership on AI, and academics from eight universities—researchers around the world are vying to create automated tools that can spot fraudulent media.

The competition launched today, with an announcement at the AI conference NeurIPS, and will accept entries through March 2020. Facebook has dedicated more than US $10 million for awards and grants.

Cristian Canton Ferrer helped organize the challenge as research manager for Facebook’s AI Red Team, which analyzes the threats that AI poses to the social media giant. He says deepfakes are a growing danger not just to Facebook but to democratic societies. Manipulated videos that make politicians appear to do and say outrageous things could go viral before fact-checkers have a chance to step in.

While such a full-blown synthetic scandal has yet to occur, the Italian public recently got a taste of the possibilities. In September, a satirical news show aired a deepfake video featuring a former Italian prime minister apparently lavishing insults on other politicians. Most viewers realized it was a parody, but a few did not.

The U.S. presidential elections in 2020 are an added incentive to get ahead of the problem, says Canton Ferrer. He believes that media manipulation will become much more common over the coming year, and that the deepfakes will get much more sophisticated and believable. “We’re thinking about what will be happening a year from now,” he says. “It’s a cat-and-mouse approach.” Canton Ferrer’s team aims to give the cat a head start, so it will be ready to pounce.

The growing threat of deepfakes

Just how easy is it to make deepfakes? A recent audit of online resources for altering videos found that the available open-source software still requires a good amount of technical expertise. However, the audit also turned up apps and services that are making it easier for almost anyone to get in on the action. In China, a deepfake app called Zao took the country by storm in September when it offered people a simple way to superimpose their own faces onto those of actors like Leonardo DiCaprio and Marilyn Monroe.

It may seem odd that the data set compiled for Facebook’s competition is filled with unknown people doing unremarkable things. But a deepfake detector that works on those mundane videos should work equally well for videos featuring politicians. To make the Facebook challenge as realistic as possible, Canton Ferrer says his team used the most common open-source techniques to alter the videos—but he won’t name the methods, to avoid tipping off contestants. “In real life, they will not be able to ask the bad actors, ‘Can you tell me what method you used to make this deepfake?’” he says.

In the current competition, detectors will be scanning for signs of facial manipulation. However, the Facebook team is keeping an eye on new and emerging attack methods, such as full-body swaps that change the appearance and actions of a person from head to toe. “There are some of those out there, but they’re pretty obvious now,” Canton Ferrer says. “As they get better, we’ll add them to the data set.” Even after the detection challenge concludes in March, he says, the Facebook team will keep working on the problem of deepfakes.

As for how the winning detection methods will be used and whether they’ll be integrated into Facebook’s operations, Canton Ferrer says those decisions aren’t up to him. The Partnership on AI’s steering committee on AI and media integrity, which is overseeing the competition, will decide on the next steps, he says. Claire Leibowicz, who leads that steering committee, says the group will consider “coordinated efforts” to fight back against the global challenge of synthetic and manipulated media.

DARPA’s efforts on deepfake detection

The Facebook challenge is far from the only effort to counter deepfakes. DARPA’s Media Forensics program launched in 2016, a year before the first deepfake videos surfaced on Reddit. Program manager Matt Turek says that as the technology took off, the researchers working under the program developed a number of detection technologies, generally looking for “digital integrity, physical integrity, or semantic integrity.”

Digital integrity is defined by the patterns in an image’s pixels that are invisible to the human eye. These patterns can arise from cameras and video processing software, and any inconsistencies that appear are a tip-off that a video has been altered. Physical integrity refers to the consistency in lighting, shadows, and other physical attributes in an image. Semantic integrity considers the broader context. If a video shows an outdoor scene, for example, a deepfake detector might check the time stamp and location to look up the weather report from that time and place. The best automated detector, Turek says, would “use all those techniques to produce a single integrity score that captures everything we know about a digital asset.”
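
The fusion step Turek describes can be pictured as a weighted combination of detector outputs. The sketch below is purely illustrative; the weights, scores, and function names are invented for this example, not taken from DARPA's system.

```python
# Illustrative fusion of the three detector families into one integrity score.
# All numbers and weights here are invented; this is not DARPA's code.

def fuse_integrity(digital, physical, semantic, weights=(0.4, 0.3, 0.3)):
    """Each input is a 0-1 suspicion score from one family of detectors."""
    w_d, w_p, w_s = weights
    suspicion = w_d * digital + w_p * physical + w_s * semantic
    return 1.0 - suspicion          # high value = the asset looks authentic

# Example: pixel-level artifacts look suspicious, the lighting is plausible,
# and the weather at the claimed time and place doesn't match the footage.
score = fuse_integrity(digital=0.8, physical=0.2, semantic=0.9)
print(round(score, 2))              # 0.35 -> low integrity, likely manipulated
```

A production system would presumably learn the fusion weights from labeled examples rather than fixing them by hand.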

Turek says his team has created a prototype Web portal (restricted to its government partners) to demonstrate a sampling of the detectors developed during the program. When the user uploads a piece of media via the Web portal, more than 20 detectors employ a range of different approaches to try to determine whether an image or video has been manipulated. Turek says his team continues to add detectors to the system, which is already better than humans at spotting fakes.

A successor to the Media Forensics program will launch in mid-2020: the Semantic Forensics program. This broader effort will cover all types of media—text, images, videos, and audio—and will go beyond simply detecting manipulation. It will also seek methods to understand the importance of the manipulations, which could help organizations decide which content requires human review. “If you manipulate a vacation photo by adding a beach ball, it really doesn’t matter,” Turek says. “But if you manipulate an image about a protest and add an object like a flag, that could change people’s understanding of who was involved.”

The Semantic Forensics program will also try to develop tools to determine if a piece of media really comes from the source it claims. Eventually, Turek says, he’d like to see the tech community embrace a system of watermarking, in which a digital signature would be embedded in the media itself to help with the authentication process. One big challenge of this idea is that every software tool that interacts with the image, video, or other piece of media would have to “respect that watermark, or add its own,” Turek says. “It would take a long time for the ecosystem to support that.”

A deepfake detection tool for consumers

In the meantime, the AI Foundation has a plan. This nonprofit is building a tool called Reality Defender that’s due to launch in early 2020. “It will become your personal AI guardian who’s watching out for you,” says Rob Meadows, president and chief technology officer for the foundation.

Reality Defender is a plug-in for Web browsers and an app for mobile phones. It scans everything on the screen using a suite of automatic detectors, then alerts the user about altered media. Detection alone won’t make for a useful tool, since Photoshop and other editing tools are widely used in fashion, advertising, and entertainment. If Reality Defender draws attention to every altered piece of content, Meadows notes, “it will flood consumers to the point where they say, ‘We don’t care anymore, we have to tune it out.’”

To avoid that problem, users will be able to dial the tool’s sensitivity up or down, depending on how many alerts they want. Meadows says beta testers are currently training the system, giving it feedback on which types of manipulations they care about. Once Reality Defender launches, users will be able to personalize their AI guardian by giving it a thumbs-up or thumbs-down on alerts, until it learns their preferences. “A user can say, ‘For my level of paranoia, this is what works for me,’ ” Meadows says.
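
As a rough guess at how a sensitivity dial plus thumbs-up/thumbs-down feedback could fit together (Reality Defender's internal design isn't public, so every name and number below is hypothetical):

```python
# Hypothetical sketch of an adjustable alert filter with user feedback.
# Not Reality Defender's actual code; the threshold logic is an assumption.

class AlertFilter:
    def __init__(self, threshold=0.7):
        self.threshold = threshold            # higher = fewer, more confident alerts

    def should_alert(self, manipulation_score):
        return manipulation_score >= self.threshold

    def feedback(self, manipulation_score, useful):
        # Nudge the threshold toward the kinds of alerts the user finds useful.
        step = 0.02
        if useful and manipulation_score < self.threshold:
            self.threshold -= step            # user wanted this; alert more often
        elif not useful and manipulation_score >= self.threshold:
            self.threshold += step            # user didn't care; alert less often

f = AlertFilter()
print(f.should_alert(0.75))       # True at the default sensitivity
f.feedback(0.75, useful=False)    # thumbs-down: "edits like this don't matter to me"
print(round(f.threshold, 2))      # 0.72 -> the filter quiets down slightly
```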

He sees the software as a useful stopgap solution, but ultimately he hopes that his group’s technologies will be integrated into platforms such as Facebook, YouTube, and Twitter. He notes that Biz Stone, cofounder of Twitter, is a member of the AI Foundation’s board. To truly protect society from fake media, Meadows says, we need tools that prevent falsehoods from getting hosted on platforms and spread via social media. Debunking them after they’ve already spread is too late.

The researchers at Jigsaw, a unit of Alphabet that works on technology solutions for global challenges, would tend to agree. Technical research manager Andrew Gully says his team identified synthetic media as a societal threat some years back. To contribute to the fight, Jigsaw teamed up with sister company Google AI to produce a deepfake data set of its own in late 2018, which they contributed to the FaceForensics data set hosted by the Technical University of Munich.

Gully notes that while we haven’t yet seen a political crisis triggered by a deepfake, these videos are also used for bullying and “revenge porn,” in which a targeted woman’s face is pasted onto the body of an actor in a pornographic video. (While pornographic deepfakes could in theory target men, a recent audit of deepfake content found that 100 percent of the pornographic videos focused on women.) What’s more, Gully says people are more likely to be credulous of videos featuring unknown individuals than famous politicians.

But it’s the threat to free and fair elections that feels most crucial in this U.S. election year. Gully says systems that detect deepfakes must take a careful approach in communicating the results to users. “We know already how difficult it is to convince people in the face of their own biases,” Gully says. “Detecting a deepfake video is hard enough, but that’s easy compared to how difficult it is to convince people of things they don’t want to believe.”

Yoshua Bengio, Revered Architect of AI, Has Some Ideas About What to Build Next

Post Syndicated from Eliza Strickland original https://spectrum.ieee.org/tech-talk/robotics/artificial-intelligence/yoshua-bengio-revered-architect-of-ai-has-some-ideas-about-what-to-build-next

Yoshua Bengio is known as one of the “three musketeers” of deep learning, the type of artificial intelligence (AI) that dominates the field today. 

Bengio, a professor at the University of Montreal, is credited with making key breakthroughs in the use of neural networks, and, just as importantly, with persevering with the work through the long cold AI winter of the late 1980s and the 1990s, when most people thought that neural networks were a dead end.

He was rewarded for his perseverance in 2018, when he and his fellow musketeers (Geoffrey Hinton and Yann LeCun) won the Turing Award, which is often called the Nobel Prize of computing.

Today, there’s increasing discussion about the shortcomings of deep learning. In that context, IEEE Spectrum spoke to Bengio about where the field should go from here. He’ll speak on a similar subject tomorrow at NeurIPS, the biggest and buzziest AI conference in the world; his talk is titled “From System 1 Deep Learning to System 2 Deep Learning.”

Yoshua Bengio on . . .

  1. Deep learning and its discontents
  2. The dawn of brain-inspired computation
  3. Learning to learn
  4. “This is not ready for industry”
  5. Physics, language, and common sense

    Deep learning and its discontents

    IEEE Spectrum: What do you think about all the discussion of deep learning’s limitations?

    Yoshua Bengio: Too many public-facing venues don’t understand a central thing about the way we do research, in AI and other disciplines: We try to understand the limitations of the theories and methods we currently have, in order to extend the reach of our intellectual tools. So deep learning researchers are looking to find the places where it’s not working as well as we’d like, so we can figure out what needs to be added and what needs to be explored.

    This is picked up by people like Gary Marcus, who put out the message: “Look, deep learning doesn’t work.” But really, what researchers like me are doing is expanding its reach. When I talk about things like the need for AI systems to understand causality, I’m not saying that this will replace deep learning. I’m trying to add something to the toolbox.

    What matters to me as a scientist is what needs to be explored in order to solve the problems. Not who’s right, who’s wrong, or who’s praying at which chapel.

    Spectrum: How do you assess the current state of deep learning?

    Bengio: In terms of how much progress we’ve made in this work over the last two decades: I don’t think we’re anywhere close today to the level of intelligence of a two-year-old child. But maybe we have algorithms that are equivalent to lower animals, for perception. And we’re gradually climbing this ladder in terms of tools that allow an entity to explore its environment.

    One of the big debates these days is: What are the elements of higher-level cognition? Causality is one element of it, and there’s also reasoning and planning, imagination, and credit assignment (“what should I have done?”). In classical AI, they tried to obtain these things with logic and symbols. Some people say we can do it with classic AI, maybe with improvements.

    Then there are people like me, who think that we should take the tools we’ve built in the last few years to create these functionalities in a way that’s similar to the way humans do reasoning, which is actually quite different from the way a purely logical system based on search does it.


    The dawn of brain-inspired computation

    Spectrum: How can we create functions similar to human reasoning?

    Bengio: Attention mechanisms allow us to learn how to focus our computation on a few elements, a set of computations. Humans do that—it’s a particularly important part of conscious processing. When you’re conscious of something, you’re focusing on a few elements, maybe a certain thought, then you move on to another thought. This is very different from standard neural networks, which are instead parallel processing on a big scale. We’ve had big breakthroughs on computer vision, translation, and memory thanks to these attention mechanisms, but I believe it’s just the beginning of a different style of brain-inspired computation.

    It’s not that we have solved the problem, but I think we have a lot of the tools to get started. And I’m not saying it’s going to be easy. I wrote a paper in 2017 called “The Consciousness Prior” that laid out the issue. I have several students working on this and I know it is a long-term endeavor.

    Spectrum: What other aspects of human intelligence would you like to replicate in AI?

    Bengio: We also talk about the ability of neural nets to imagine: Reasoning, memory, and imagination are three aspects of the same thing going on in your mind. You project yourself into the past or the future, and when you move along these projections, you’re doing reasoning. If you anticipate something bad happening in the future, you change course—that’s how you do planning. And you’re using memory too, because you go back to things you know in order to make judgments. You select things from the present and things from the past that are relevant.

    Attention is the crucial building block here. Let’s say I’m translating a book into another language. For every word, I have to carefully look at a very small part of the book. Attention allows you to abstract out a lot of irrelevant details and focus on what matters. Being able to pick out the relevant elements: that’s what attention does.

    Spectrum: How does that translate to machine learning?

    Bengio: You don’t have to tell the neural net what to pay attention to—that’s the beauty of it. It learns it on its own. The neural net learns how much attention, or weight, it should give to each element in a set of possible elements to consider.
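
    To make that concrete, here is a minimal scaled dot-product attention sketch in Python. It is illustrative only: in a real network the query, key, and value vectors come from learned layers, not random numbers, but the weights it computes are exactly the learned “how much attention to give each element” quantities Bengio describes.

```python
import numpy as np

# Minimal attention sketch: a query is compared against a set of elements
# (keys), the similarities are turned into weights that sum to 1, and the
# output is the weight-blended values. Toy vectors stand in for learned ones.

def attention(query, keys, values):
    scores = keys @ query / np.sqrt(query.shape[0])   # similarity to each element
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                          # softmax: weights sum to 1
    return weights, weights @ values                  # weighted blend of the values

rng = np.random.default_rng(0)
keys = rng.normal(size=(5, 8))                 # five elements the model could attend to
values = rng.normal(size=(5, 8))
query = keys[2] + 0.1 * rng.normal(size=8)     # a query that resembles element 2

weights, output = attention(query, keys, values)
print(np.round(weights, 3))   # most of the weight lands on element 2, the one the query resembles
```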


    Learning to learn

    Spectrum: How is your recent work on causality related to these ideas?

    Bengio: The kind of high-level concepts that you reason with tend to be variables that are cause and/or effect. You don’t reason based on pixels. You reason based on concepts like door or knob or open or closed. Causality is very important for the next steps of progress of machine learning.

    And it’s related to another topic that is much on the minds of people in deep learning. Systematic generalization is the ability humans have to generalize the concepts we know, so they can be combined in new ways that are unlike anything else we’ve seen. Today’s machine learning doesn’t know how to do that. So you often have problems relating to training on a particular data set. Say you train in one country, and then deploy in another country. You need generalization and transfer learning. How do you train a neural net so that if you transfer it into a new environment, it continues to work well or adapts quickly?

    Spectrum: What’s the key to that kind of adaptability?

    Bengio: Meta-learning is a very hot topic these days: Learning to learn. I wrote an early paper on this in 1991, but only recently did we get the computational power to implement this kind of thing. It’s computationally expensive. The idea: In order to generalize to a new environment, you have to practice generalizing to a new environment. It’s so simple when you think about it. Children do it all the time. When they move from one room to another room, the environment is not static, it keeps changing. Children train themselves to be good at adaptation. To do that efficiently, they have to use the pieces of knowledge they’ve acquired in the past. We’re starting to understand this ability, and to build tools to replicate it.

    One critique of deep learning is that it requires a huge amount of data. That’s true if you just train it on one task. But children have the ability to learn based on very little data. They capitalize on the things they’ve learned before. But more importantly, they’re capitalizing on their ability to adapt and generalize.
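
    A toy illustration of that practice-generalizing loop is sketched below, using a Reptile-style update across a family of sine-wave tasks. It is a sketch of the general idea only, not Bengio’s 1991 method or any particular lab’s code.

```python
import numpy as np

# "Learning to learn" sketch: the outer loop repeatedly practices adapting to
# new tasks, so that adaptation to a brand-new task becomes fast. Invented toy
# setup: each task is a sine wave; the model is linear in fixed random features.

rng = np.random.default_rng(0)
W, b = rng.normal(size=(1, 32)), rng.normal(size=32)

def features(x):
    return np.tanh(x[:, None] @ W + b)          # (n, 32) feature matrix

def sample_task():
    amp, phase = rng.uniform(0.5, 2.0), rng.uniform(0, np.pi)
    return lambda x: amp * np.sin(x + phase)

def adapt(theta, task, steps=10, lr=0.05, n=20):
    """A few steps of plain gradient descent on one task."""
    for _ in range(steps):
        x = rng.uniform(-np.pi, np.pi, n)
        phi, y = features(x), task(x)
        theta = theta - lr * phi.T @ (phi @ theta - y) / n
    return theta

theta = np.zeros(32)                            # meta-parameters: the shared starting point
for _ in range(3000):                           # outer loop: practice adapting to new tasks
    theta += 0.1 * (adapt(theta, sample_task()) - theta)   # Reptile-style update

task = sample_task()                            # a task never seen during meta-training
x_test = np.linspace(-np.pi, np.pi, 200)
before = np.mean((features(x_test) @ theta - task(x_test)) ** 2)
after = np.mean((features(x_test) @ adapt(theta, task) - task(x_test)) ** 2)
print(round(before, 3), "->", round(after, 3))  # error drops after only a few steps
```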


    “This is not ready for industry”

    Spectrum: Will any of these ideas be used in the real world anytime soon?

    Bengio: No. This is all very basic research using toy problems. That’s fine, that’s where we’re at. We can debug these ideas, move on to new hypotheses. This is not ready for industry tomorrow morning.

    But there are two practical limitations that industry cares about, and that this research may help. One is building systems that are more robust to changes in the environment. Two: How do we build natural language processing systems, dialogue systems, virtual assistants? The problem with the current state-of-the-art systems that use deep learning is that they’re trained on huge quantities of data, but they don’t really understand well what they’re talking about. People like Gary Marcus pick up on this and say, “That’s proof that deep learning doesn’t work.” People like me say, “That’s interesting, let’s tackle the challenge.”


    Physics, language, and common sense

    Spectrum: How could chatbots do better?

    Bengio: There’s an idea called grounded language learning, which has been attracting new attention recently. The idea is, an AI system should not learn only from text. It should learn at the same time how the world works, and how to describe the world with language. Ask yourself: Could a child understand the world if they were only interacting with the world via text? I suspect they would have a hard time.

    This has to do with conscious versus unconscious knowledge, the things we know but can’t name. A good example of that is intuitive physics. A two-year-old understands intuitive physics. They don’t know Newton’s equations, but they understand concepts like gravity in a concrete sense. Some people are now trying to build systems that interact with their environment and discover the basic laws of physics.

    Spectrum: Why would a basic grasp of physics help with conversation?

    Bengio: The issue with language is that often the system doesn’t really understand the complexity of what the words are referring to. For example, the statements used in the Winograd schema; in order to make sense of them, you have to capture physical knowledge. There are sentences like: “Jim wanted to put the lamp into his luggage, but it was too large.” You know that if this object is too large for putting in the luggage, it must be the “it,” the subject of the second phrase. You can communicate that kind of knowledge in words, but it’s not the kind of thing we go around saying: “The typical size of a piece of luggage is x by x.”

    We need language understanding systems that also understand the world. Currently, AI researchers are looking for shortcuts. But they won’t be enough. AI systems also need to acquire a model of how the world works.


The Blogger Behind “AI Weirdness” Thinks Today’s AI Is Dumb and Dangerous

Post Syndicated from Eliza Strickland original https://spectrum.ieee.org/tech-talk/robotics/artificial-intelligence/blogger-behind-ai-weirdness-thinks-todays-ai-is-dumb-and-dangerous

Sure, artificial intelligence is transforming the world’s societies and economies—but can an AI come up with plausible ideas for a Halloween costume? 

Janelle Shane has been asking such probing questions since she started her AI Weirdness blog in 2016. She specializes in training neural networks (which underpin most of today’s machine learning techniques) on quirky data sets such as compilations of knitting instructions, ice cream flavors, and names of paint colors. Then she asks the neural net to generate its own contributions to these categories—and hilarity ensues. AI is not likely to disrupt the paint industry with names like “Ronching Blue,” “Dorkwood,” and “Turdly.” 

Shane’s antics have a serious purpose. She aims to illustrate the serious limitations of today’s AI, and to counteract the prevailing narrative that describes AI as well on its way to superintelligence and complete human domination. “The danger of AI is not that it’s too smart,” Shane writes in her new book, “but that it’s not smart enough.” 

The book, which came out on Tuesday, is called You Look Like a Thing and I Love You. It takes its odd title from a list of AI-generated pick-up lines, all of which would at least get a person’s attention if shouted, preferably by a robot, in a crowded bar. Shane’s book is shot through with her trademark absurdist humor, but it also contains real explanations of machine learning concepts and techniques. It’s a painless way to take AI 101. 

She spoke with IEEE Spectrum about the perils of placing too much trust in AI systems, the strange AI phenomenon of “giraffing,” and her next potential Halloween costume. 

Janelle Shane on . . .


  1. The un-delicious origin of her blog
  2. “The narrower the problem, the smarter the AI will seem”
  3. Why overestimating AI is dangerous
  4. Giraffing!
  5. Machine and human creativity

    The un-delicious origin of her blog

    IEEE Spectrum: You studied electrical engineering as an undergrad, then got a master’s degree in physics. How did that lead to you becoming the comedian of AI? 

    Janelle Shane: I’ve been interested in machine learning since freshman year of college. During orientation at Michigan State, a professor who worked on evolutionary algorithms gave a talk about his work. It was full of the most interesting anecdotes–some of which I’ve used in my book. He told an anecdote about people setting up a machine learning algorithm to do lens design, and the algorithm did end up designing an optical system that works… except one of the lenses was 50 feet thick, because they didn’t specify that it couldn’t do that.  

    I started working in his lab on optics, doing ultra-short laser pulse work. I ended up doing a lot more optics than machine learning, but I always found it interesting. One day I came across a list of recipes that someone had generated using a neural net, and I thought it was hilarious and remembered why I thought machine learning was so cool. That was in 2016, ages ago in machine learning land.

    Spectrum: So you decided to “establish weirdness as your goal” for your blog. What was the first weird experiment that you blogged about? 

    Shane: It was generating cookbook recipes. The neural net came up with ingredients like: “Take ¼ pounds of bones or fresh bread.” That recipe started out: “Brown the salmon in oil, add creamed meat to the mixture.” It was making mistakes that showed the thing had no memory at all. 

    Spectrum: You say in the book that you can learn a lot about AI by giving it a task and watching it flail. What do you learn?

    Shane: One thing you learn is how much it relies on surface appearances rather than deep understanding. With the recipes, for example: It got the structure of title, category, ingredients, instructions, yield at the end. But when you look more closely, it has instructions like “Fold the water and roll it into cubes.” So clearly this thing does not understand water, let alone the other things. It’s recognizing certain phrases that tend to occur, but it doesn’t have a concept that these recipes are describing something real. You start to realize how very narrow the algorithms in this world are. They only know exactly what we tell them in our data set. 
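
    The surface-statistics point can be demonstrated with something even simpler than the recurrent networks Shane uses. The sketch below trains a character-level Markov chain on a stand-in scrap of recipe text (her experiments used thousands of real recipes); it reproduces local spelling patterns without any notion of what a recipe, or water, actually is.

```python
import random
from collections import defaultdict

# Character-level Markov chain: predict the next character from the last few.
# The corpus here is a tiny stand-in; the point is that the model only ever
# learns surface patterns of characters, never what the words refer to.

corpus = (
    "brown the salmon in oil. add the onion and cook until soft. "
    "stir in the flour and milk. season with salt and pepper. "
    "bake for 30 minutes and serve warm. "
)

ORDER = 3
model = defaultdict(list)
for i in range(len(corpus) - ORDER):
    model[corpus[i:i + ORDER]].append(corpus[i + ORDER])

state = corpus[:ORDER]
out = state
for _ in range(120):
    out += random.choice(model.get(state, [" "]))
    state = out[-ORDER:]
print(out)   # locally plausible spelling, globally meaningless instructions
```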


    “The narrower the problem, the smarter the AI will seem”

    Spectrum: That makes me think of DeepMind’s AlphaGo, which was universally hailed as a triumph for AI. It can play the game of Go better than any human, but it doesn’t know what Go is. It doesn’t know that it’s playing a game. 

    Shane: It doesn’t know what a human is, or if it’s playing against a human or another program. That’s also a nice illustration of how well these algorithms do when they have a really narrow and well-defined problem. 

    The narrower the problem, the smarter the AI will seem. If it’s not just doing something repeatedly but instead has to understand something, coherence goes down. For example, take an algorithm that can generate images of objects. If the algorithm is restricted to birds, it could do a recognizable bird. If this same algorithm is asked to generate images of any animal, if its task is that broad, the bird it generates becomes an unrecognizable brown feathered smear against a green background.

    Spectrum: That sounds… disturbing. 

    Shane: It’s disturbing in a weird amusing way. What’s really disturbing is the humans it generates. It hasn’t seen them enough times to have a good representation, so you end up with an amorphous, usually pale-faced thing with way too many orifices. If you asked it to generate an image of a person eating pizza, you’ll have blocks of pizza texture floating around. But if you give that image to an image-recognition algorithm that was trained on that same data set, it will say, “Oh yes, that’s a person eating pizza.”


    Why overestimating AI is dangerous

    Spectrum: Do you see it as your role to puncture the AI hype? 

    Shane: I do see it that way. Not a lot of people are bringing out this side of AI. When I first started posting my results, I’d get people saying, “I don’t understand, this is AI, shouldn’t it be better than this? Why doesn’t it understand?” Many of the impressive examples of AI have a really narrow task, or they’ve been set up to hide how little understanding it has. There’s a motivation, especially among people selling products based on AI, to represent the AI as more competent and understanding than it actually is. 

    Spectrum: If people overestimate the abilities of AI, what risk does that pose? 

    Shane: I worry when I see people trusting AI with decisions it can’t handle, like hiring decisions or decisions about moderating content. These are really tough tasks for AI to do well on. There are going to be a lot of glitches. I see people saying, “The computer decided this so it must be unbiased, it must be objective.” 

    That’s another thing I find myself highlighting in the work I’m doing. If the data includes bias, the algorithm will copy that bias. You can’t tell it not to be biased, because it doesn’t understand what bias is. I think that message is an important one for people to understand. 

    If there’s bias to be found, the algorithm is going to go after it. It’s like, “Thank goodness, finally a signal that’s reliable.” But for a tough problem like: Look at these resumes and decide who’s best for the job. If its task is to replicate human hiring decisions, it’s going to glom onto gender bias and race bias. There’s an example in the book of a hiring algorithm that Amazon was developing that discriminated against women, because the historical data it was trained on had that gender bias. 
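
    The dynamic Shane describes is easy to reproduce with synthetic data. In the sketch below (invented numbers, not Amazon’s system), a simple model fit to historically biased hiring decisions hands a lower score to an equally qualified candidate from the disadvantaged group.

```python
import numpy as np

# Toy demonstration: if past hiring decisions penalized one group, a model
# trained to replicate those decisions learns the same penalty. All data here
# is synthetic, and the "model" is an ordinary least-squares fit.

rng = np.random.default_rng(0)
n = 5000
skill = rng.normal(size=n)                    # true qualification
group = rng.integers(0, 2, size=n)            # group 1 = historically disadvantaged
hired = (skill - 0.8 * group + rng.normal(scale=0.5, size=n)) > 0   # biased labels

X = np.column_stack([skill, group, np.ones(n)])
w, *_ = np.linalg.lstsq(X, hired.astype(float), rcond=None)

candidates = np.array([[1.0, 0, 1.0],          # same skill, group 0
                       [1.0, 1, 1.0]])         # same skill, group 1
print(np.round(candidates @ w, 3))             # group 1 gets the lower score
```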

    Spectrum: What are the other downsides of using AI systems that don’t really understand their tasks? 

    Shane: There is a risk in putting too much trust in AI and not examining its decisions. Another issue is that it can solve the wrong problems, without anyone realizing it. There have been a couple of cases in medicine. For example, there was an algorithm that was trained to recognize things like skin cancer. But instead of recognizing the actual skin condition, it latched onto signals like the markings a surgeon makes on the skin, or a ruler placed there for scale. It was treating those things as a sign of skin cancer. It’s another indication that these algorithms don’t understand what they’re looking at and what the goal really is. 


    Giraffing

    Spectrum: In your blog, you often have neural nets generate names for things—such as ice cream flavors, paint colors, cats, mushrooms, and types of apples. How do you decide on topics?

    Shane: Quite often it’s because someone has written in with an idea or a data set. They’ll say something like, “I’m the MIT librarian and I have a whole list of MIT thesis titles.” That one was delightful. Or they’ll say, “We are a high school robotics team, and we know where there’s a list of robotics team names.” It’s fun to peek into a different world. I have to be careful that I’m not making fun of the naming conventions in the field. But there’s a lot of humor simply in the neural net’s complete failure to understand. Puns in particular—it really struggles with puns. 

    Spectrum: Your blog is quite absurd, but it strikes me that machine learning is often absurd in itself. Can you explain the concept of giraffing?

    Shane: This concept was originally introduced by [internet security expert] Melissa Elliott. She proposed this phrase as a way to describe the algorithms’ tendency to see giraffes way more often than would be likely in the real world. She posted a whole bunch of examples, like a photo of an empty field in which an image-recognition algorithm has confidently reported that there are giraffes. Why does it think giraffes are present so often when they’re actually really rare? Because they’re trained on data sets from online. People tend to say, “Hey look, a giraffe!” And then take a photo and share it. They don’t do that so often when they see an empty field with rocks. 

    There’s also a chatbot that has a delightful quirk. If you show it some photo and ask it how many giraffes are in the picture, it will always answer with some nonzero number. This quirk comes from the way the training data was generated: These were questions asked and answered by humans online. People tended not to ask the question “How many giraffes are there?” when the answer was zero. So you can show it a picture of someone holding a Wii remote. If you ask it how many giraffes are in the picture, it will say two.


    Machine and human creativity

    Spectrum: AI can be absurd, and maybe also creative. But you make the point that AI art projects are really human-AI collaborations: Collecting the data set, training the algorithm, and curating the output are all artistic acts on the part of the human. Do you see your work as a human-AI art project?

    Shane: Yes, I think there is artistic intent in my work; you could call it literary or visual. It’s not so interesting to just take a pre-trained algorithm that’s been trained on utilitarian data, and tell it to generate a bunch of stuff. Even if the algorithm isn’t one that I’ve trained myself, I think about, what is it doing that’s interesting, what kind of story can I tell around it, and what do I want to show people. 

    Spectrum: For the past three years you’ve been getting neural nets to generate ideas for Halloween costumes. As language models have gotten dramatically better over the past three years, are the costume suggestions getting less absurd? 

    Shane: Yes. Before I would get a lot more nonsense words. This time I got phrases that were related to real things in the data set. I don’t believe the training data had the words Flying Dutchman or barnacle. But it was able to draw on its knowledge of which words are related to suggest things like sexy barnacle and sexy Flying Dutchman. 

    Spectrum: This year, I saw on Twitter that someone made the gothy giraffe costume happen. Would you ever dress up for Halloween in a costume that the neural net suggested? 

    Shane: I think that would be fun. But there would be some challenges. I would love to go as the sexy Flying Dutchman. But my ambition may constrict me to do something more like a list of leg parts. 


Racial Bias Found in Algorithms That Determine Health Care for Millions of Patients

Post Syndicated from Eliza Strickland original https://spectrum.ieee.org/the-human-os/biomedical/ethics/racial-bias-found-in-algorithms-that-determine-health-care-for-millions-of-patients

An algorithm that a major medical center used to identify patients for extra care has been shown to be racially biased. 

The algorithm screened patients for enrollment in an intensive care management program, which gave them access to a dedicated hotline for a nurse practitioner, help refilling prescriptions, and so forth. The screening was meant to identify those patients who would most benefit from the program. But the white patients flagged for enrollment had fewer chronic health conditions than the black patients who were flagged.

In other words, black patients had to reach a higher threshold of illness before they were considered for enrollment. Care was not actually going to those people who needed it most.

Alarmingly, the algorithm was performing its task correctly. The problem was with how the task was defined.

The findings, described in a paper that was just published in Science, point to a system-wide problem, says coauthor Ziad Obermeyer, a physician and researcher at the UC Berkeley School of Public Health. Similar screening tools are used throughout the country; according to industry estimates, these types of algorithms are making health decisions for 200 million people per year. 
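
The pattern the researchers found can be illustrated with a toy simulation. The numbers below are synthetic, not data from the Science study; the sketch simply shows that when a screening score understates need for one group, patients from that group must be sicker before they clear the same enrollment cutoff.

```python
import numpy as np

# Synthetic illustration of a biased screening cutoff. The score is assumed
# to understate need for one group; nothing here reflects the actual system.

rng = np.random.default_rng(1)
n = 100_000
group = rng.integers(0, 2, size=n)             # 0 and 1 stand in for the two groups
chronic = rng.poisson(3.0, size=n)             # number of chronic conditions
score = chronic + rng.normal(scale=1.0, size=n) - 1.5 * group   # mis-specified score

cutoff = np.quantile(score, 0.97)              # top few percent flagged for the program
flagged = score > cutoff
for g in (0, 1):
    sicker = chronic[flagged & (group == g)].mean()
    print(f"group {g}: flagged patients average {sicker:.2f} chronic conditions")
# Group 1's flagged patients are sicker: they needed more illness to be selected.
```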

Baseball’s Engineer: Ben Hansen Says Biometrics Can Save Pitchers’ Elbows

Post Syndicated from Eliza Strickland original https://spectrum.ieee.org/the-human-os/biomedical/devices/baseballs-engineer-ben-hansen-says-biometrics-can-save-pitchers-elbows

When Benjamin Hansen was playing baseball in high school, around 2006, technologies to monitor athletes’ bodies and performance weren’t yet commonplace. Yet Hansen wanted to collect data any way he could. “I would sit on the bench with a calculator and a stopwatch, timing the pitchers,” he says. He clicked the stopwatch when the pitcher released the baseball and again when the ball popped into the catcher’s mitt, then factored in the pitcher’s height in calculating the pitch velocity.
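
The stopwatch arithmetic is simple: average speed is the distance the ball travels divided by the elapsed time, with the pitcher’s height standing in for how far in front of the rubber the ball is released. The exact correction Hansen used isn’t described, so the release adjustment below is an assumption for illustration.

```python
# Back-of-the-envelope pitch speed from a stopwatch time. The release-distance
# correction (subtracting roughly the pitcher's height) is an assumption.

MOUND_TO_PLATE_FT = 60.5                        # regulation pitching distance

def pitch_speed_mph(stopwatch_seconds, pitcher_height_ft):
    travel_ft = MOUND_TO_PLATE_FT - pitcher_height_ft   # ball released out in front
    ft_per_sec = travel_ft / stopwatch_seconds
    return ft_per_sec * 3600 / 5280             # feet per second -> miles per hour

print(round(pitch_speed_mph(0.45, 6.0), 1))     # about 82.6 mph
```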

Hansen’s coach, however, was not impressed. “My coach should have embraced it,” he says, wistfully. “But instead he made me run laps.”

Hansen kept playing baseball through college, pitching for his team at the Milwaukee School of Engineering. But he was plagued by injuries. He well remembers a practice game in which he logged 15 straight outs—then felt a sharp pain in his elbow. He had partially torn his ulnar collateral ligament (UCL) and had to sit out the rest of the season. “I always asked the question: Why is this happening?” he says.

Today, Hansen is the vice president of biomechanics and innovation for Motus Global, in St. Petersburg, Fla., a startup that produces wearable sports technology. For IEEE Spectrum’s October issue, he describes Motus’s product for baseball pitchers, a compression sleeve with sensors to measure workload and muscle fatigue. From Little League to Major League Baseball, pitchers are using Motus gear to understand their bodies, improve performance, and prevent injuries.

Traditional wisdom holds that pitcher injuries result from faulty form. But data from Motus’s wearable indicates that it’s the accumulated workload on a player’s muscles and ligaments that causes injuries like UCL tears, which have become far too common in baseball. By displaying measurements of fatigue and suggesting training regimens, rehab workouts, and in-game strategies, the wearable can help prevent players from pushing themselves past their limits. It’s a goal that even Hansen’s old coach would probably endorse.

This article appears in the October 2019 print issue as “Throwing Data Around.”

The Ultimate Optimization Problem: How to Best Use Every Square Meter of the Earth’s Surface

Post Syndicated from Eliza Strickland original https://spectrum.ieee.org/tech-talk/energy/environment/the-ultimate-optimization-problem-how-to-best-use-every-square-meter-of-the-earths-surface

Lucas Joppa thinks big. Even while gazing down into his cup of tea in his modest office on Microsoft’s campus in Redmond, Washington, he seems to see the entire planet bobbing in there like a spherical tea bag. 

As Microsoft’s first chief environmental officer, Joppa came up with the company’s AI for Earth program, a five-year effort that’s spending US $50 million on AI-powered solutions to global environmental challenges.

The program is not just about specific deliverables, though. It’s also about mindset, Joppa told IEEE Spectrum in an interview in July. “It’s a plea for people to think about the Earth in the same way they think about the technologies they’re developing,” he says. “You start with an objective. So what’s our objective function for Earth?” (In computer science, an objective function describes the parameter or parameters you are trying to maximize or minimize for optimal results.)

AI for Earth launched in December 2017, and Joppa’s team has since given grants to more than 400 organizations around the world. In addition to receiving funding, some grantees get help from Microsoft’s data scientists and access to the company’s computing resources. 

In a wide-ranging interview about the program, Joppa described his vision of the “ultimate optimization problem”—figuring out which parts of the planet should be used for farming, cities, wilderness reserves, energy production, and so on. 

Every square meter of land and water on Earth has an infinite number of possible utility functions. It’s the job of Homo sapiens to describe our overall objective for the Earth. Then it’s the job of computers to produce optimization results that are aligned with the human-defined objective.

I don’t think we’re close at all to being able to do this. I think we’re closer from a technology perspective—being able to run the model—than we are from a social perspective—being able to make decisions about what the objective should be. What do we want to do with the Earth’s surface?
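
In code, the division of labor Joppa describes looks something like the toy example below: humans choose the objective function (the weights and constraints, all invented here for illustration), and the machine searches for the land-use allocation that maximizes it.

```python
import numpy as np

# Toy "objective function for Earth": maximize a human-chosen weighted value
# of land-use shares, subject to minimum shares. Every number is invented.

uses = ["farmland", "cities", "wilderness", "energy"]
value_per_unit = np.array([0.9, 0.6, 1.0, 0.7])   # the human-chosen values (the hard part)
min_share = np.array([0.20, 0.05, 0.10, 0.05])    # minimum share of area for each use

rng = np.random.default_rng(0)
best, best_score = None, -np.inf
for _ in range(100_000):                          # crude random search over allocations
    alloc = rng.dirichlet(np.ones(4))             # shares of total area, summing to 1
    if np.all(alloc >= min_share):
        score = value_per_unit @ alloc            # the objective function
        if score > best_score:
            best, best_score = alloc, score

print(dict(zip(uses, np.round(best, 3))), round(best_score, 3))
```

The optimization itself is trivial here; as Joppa says, the hard part is agreeing on the weights.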

AI Agents Startle Researchers With Unexpected Hide-and-Seek Strategies

Post Syndicated from Eliza Strickland original https://spectrum.ieee.org/tech-talk/robotics/artificial-intelligence/ai-agents-startle-researchers-with-unexpected-strategies-in-hideandseek

After 25 million games, the AI agents playing hide-and-seek with each other had mastered four basic game strategies. The researchers expected that part.

After a total of 380 million games, the AI players developed strategies that the researchers didn’t know were possible in the game environment—which the researchers had themselves created. That was the part that surprised the team at OpenAI, a research company based in San Francisco.

The AI players learned everything via a machine learning technique known as reinforcement learning. In this learning method, AI agents start out by taking random actions. Sometimes those random actions produce desired results, which earn them rewards. Via trial-and-error on a massive scale, they can learn sophisticated strategies.

In the context of games, this process can be abetted by having the AI play against another version of itself, ensuring that the opponents will be evenly matched. It also locks the AI into a process of one-upmanship, where any new strategy that emerges forces the opponent to search for a countermeasure. Over time, this “self-play” amounted to what the researchers call an “auto-curriculum.” 

According to OpenAI researcher Igor Mordatch, this experiment shows that self-play “is enough for the agents to learn surprising behaviors on their own—it’s like children playing with each other.”
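
A miniature version of that recipe can be written in a few dozen lines. The sketch below is illustrative only: it plays tic-tac-toe rather than hide-and-seek and uses a tabular value function rather than OpenAI’s neural networks, but the loop of random actions, rewards, and self-play is the same basic idea.

```python
import random
from collections import defaultdict

WINS = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
        (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in WINS:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return "draw" if " " not in board else None

def moves(board):
    return [i for i, cell in enumerate(board) if cell == " "]

Q = defaultdict(float)            # value table: (board state, move) -> estimated value
ALPHA, EPSILON = 0.1, 0.1         # learning rate and exploration rate

def choose(board):
    if random.random() < EPSILON:                 # sometimes take a random action
        return random.choice(moves(board))
    return max(moves(board), key=lambda m: Q[(tuple(board), m)])

def play_one_game():
    board, history, player = [" "] * 9, [], "X"
    while True:
        m = choose(board)
        history.append((tuple(board), m, player))
        board[m] = player
        result = winner(board)
        if result is not None:
            for state, move, who in history:      # credit every move in the game
                r = 0.0 if result == "draw" else (1.0 if who == result else -1.0)
                Q[(state, move)] += ALPHA * (r - Q[(state, move)])
            return result
        player = "O" if player == "X" else "X"

results = [play_one_game() for _ in range(20000)]
last = results[-1000:]
print({r: last.count(r) for r in ("X", "O", "draw")})   # outcome mix shifts as both sides improve
```

Because both sides share and improve the same value table, every strategy one side discovers immediately becomes the problem the other side has to solve, which is the one-upmanship that drives the auto-curriculum.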

Three Steps to a Moon Base

Post Syndicated from Eliza Strickland original https://spectrum.ieee.org/aerospace/space-flight/three-steps-to-a-moon-base

Space agencies and private companies are working on rockets, landers, and other tech for lunar settlement


In 1968, NASA astronaut Jim Lovell gazed out of a porthole from lunar orbit and remarked on the “vast loneliness” of the moon. It may not be a lonely place for much longer. Today, a new rush of enthusiasm for lunar exploration has swept up government space agencies, commercial space companies funded by billionaires, and startups that want in on the action. Here’s the tech they’re building that may enable humanity’s return to the moon, and the building of the first permanent moon base.

A Wearable That Helps Women Get / Not Get Pregnant

Post Syndicated from Eliza Strickland original https://spectrum.ieee.org/the-human-os/biomedical/devices/a-wearable-that-helps-women-get-not-get-pregnant

The in-ear sensor from Yono Labs will soon predict a woman’s fertile days


Women’s bodies can be mysterious things—even to the women who inhabit them. But a wearable gadget called the Yono aims to replace mystery with knowledge derived from statistics, big data, and machine learning.

A woman who is trying to get pregnant may spend months tracking her ovulation cycle, often making a daily log of biological signals to determine her few days of fertility. While a plethora of apps promise to help, several studies have questioned these apps’ accuracy and efficacy.

Meanwhile, a woman who is trying to avoid pregnancy by the “fertility awareness method” may well not avoid it, since the method is only 75 percent effective.

IEEE Spectrum Introduces Drones and 360 Video to Its Reporting

Post Syndicated from Eliza Strickland original https://spectrum.ieee.org/tech-talk/robotics/drones/ieee-spectrum-introduces-drones-and-360-video-to-its-reporting

To produce 360 videos, reporters navigated technical challenges and red tape


Reporters used to head out into the field with nothing but a notebook. But when IEEE Spectrum associate editor Michael Koziol and contributing editor Evan Ackerman traveled to East Africa last October, they had plenty more gear to schlep. In addition to their laptops and digital recorders, they brought two DJI drones and an assortment of 360-degree video cameras.

The trip was part of an experiment in immersive storytelling. Koziol and Ackerman journeyed through Rwanda and Tanzania to report on pioneering companies that are using small consumer drones for such tasks as delivering medical supplies and surveying roads. The two gathered material for several articles, including a feature story about a company called Zipline that’s delivering blood to Rwanda’s rural hospitals.

They also sent their own drones aloft bearing their cameras, capturing footage for two videos included in our special report on drones in East Africa. One video brings viewers inside Zipline’s operations in Rwanda; the other, which will be published on Thursday, takes to the air with Tanzania’s drone startups. In these 360-degree videos the viewer can rotate the point of view in a complete circle, to see everything on all sides of the camera. That maneuvering is particularly fun when the camera is flying high over the landscape.

The reporters had to overcome various logistical and technical challenges to get their shots. First they had to navigate the maze of regulations that govern the flying of consumer drones, rules that differ from country to country. Because the technology is so new, the rules and procedures are a work in progress. And filming with the 360-degree cameras presented novel technical issues, says Ackerman, as he and Koziol had to consider everything that the viewer might see in the shot. There were two questions they often asked themselves, Ackerman says: “How do we make the whole scene interesting? And how do we hide from the camera?”

The technology and the travel were funded by a generous grant from the IEEE Foundation. We thank the foundation for supporting Spectrum’s mission: covering emerging technology that can change the world, and using cutting-edge technology to do so.  

A version of this post appears in the May 2019 print magazine as “Flying Into the Unknown.”

With “Leapfrog” Technologies, Africa Aims to Skip the Present and Go Straight to the Future

Post Syndicated from Eliza Strickland original https://spectrum.ieee.org/robotics/drones/with-leapfrog-technologies-africa-aims-to-skip-the-present-and-go-straight-to-the-future

IEEE is promoting engineering education and access to the latest technologies through its Africa initiative

By 2022, forecasters estimate that sub-Saharan Africa will have nearly 1 billion mobile phones—enough for the vast majority of the projected 1.2 billion people who will live there. What it won’t have are the endless streams of telephone poles and wires that cascade across other continents. Development experts call this an example of a “leapfrog technology.” By going straight to mobile, many African nations will be able to skip the step of building extensive and expensive landline infrastructure.

In fact, “some places will go straight to 5G,” says Vincent Kaabunga, chair of the IEEE Ad Hoc Committee on Africa, which has helped craft IEEE’s strategy to increase engineering capacity on the continent. With this kind of leapfrogging, African nations can take the lead in certain types of technological development and deployment, he says. Just look at mobile money: Companies such as M-Pesa sprang up to solve a local problem—people’s lack of access to brick-and-mortar banks—and became a way for people not only to make payments, but also to get loans and insurance. “We’ve refined the concept of mobile money over the last 10 or 15 years,” says Kaabunga, “while other parts of the world are just now coming around to embracing it.”

IEEE and its members in Africa are facilitating the application of new technologies by promoting education and access, says Kaabunga, who also works for a technology consulting firm in Kampala, Uganda. The IEEE in Africa Strategy, launched in 2017, calls for IEEE to support engineering education at every level and to advise government policymakers, efforts that Kaabunga and his colleagues in the region have already begun. For example, they’re currently working with the Smart Africa alliance, an initiative that aims to create standard policies for information and communications technology to enable a single digital marketplace across the continent.

Some governments are particularly aware of the need for more in-country engineering expertise. IEEE is currently concentrating its efforts in five nations—Ghana, Kenya, Rwanda, Uganda, and Zambia—all of which have long-term development plans emphasizing science and technology. These plans call for building human capital in the form of well-trained scientists and engineers, in addition to building fiber-optic broadband networks and clean power plants.

Rwanda is the setting of IEEE Spectrum’s behind-the-scenes report on a drone delivery company. The Rwandan government is trying to modernize its highway infrastructure, but it will be a long process; currently, narrow roads wind through the hilly terrain. Rwandan hospitals needing emergency supplies find that truck deliveries are often too slow for patients who require urgent care.

To address this need, a drone company called Zipline is providing air deliveries of blood to hospitals all across the country, and will soon begin delivering other lightweight medical supplies as well. While Zipline hails from California, it’s dedicated to building up local engineering capacity, and offers extensive training and professional development to its employees in Rwanda. And Zipline is not an anomaly. A vibrant drone startup scene has arisen in East Africa, with local companies finding applications in agriculture, road surveying, mining, and other areas.

Zipline is now expanding its drone delivery services to Ghana. It’s not yet clear how successful the company will be in its attempt to scale up its operations. Zipline’s financials are opaque, and its business model may not be viable without government subsidies. But the company is now working on a demonstration project in North Carolina. Whether or not medical drone delivery works out in the long run, one thing is certain: Africa is trying it first.

This article appears in the May 2019 print issue as “Engineering Change in Africa.”