Last Wednesday, Todd Goldstein was working on other projects. Then physicians in the New York-based hospital system where he works, hard hit by a surge in COVID-19 cases, told him they were worried about running out of supplies.
Specifically, they needed more nasal test swabs. A nasopharyngeal swab for COVID-19 is no ordinary Q-tip. These specialty swabs cannot be made of cotton, nor can they have wooden handles. They must be long and skinny enough to fit up behind the nose into the upper part of the throat.
Goldstein, director of 3D Design and Innovation at Northwell Health, a network of 23 hospitals and 800 outpatient facilities, thought, “Well, we can make that.” He quickly organized a collaboration with Summer Decker and Jonathan Ford of the University of South Florida, and 3D-printing manufacturer Formlabs. In one week, the group designed, made, and tested 3D-printed COVID-19 test swabs, which they are now distributing.
France and wine—what an iconic link, and for centuries, how immutable! Wine was introduced by Greeks before the Romans conquered Gaul. Production greatly expanded during the Middle Ages, and since then the very names of the regions—Bordeaux, Bourgogne, Champagne—have become a symbol of quality everywhere. Thus has the French culture of wine long been a key signifier of national identity.
Statistics for French wine consumption begin in 1850 with a high mean of 121 liters per capita per year, which works out to more than two glasses per day. By 1890, a Phylloxera infestation had cut the country’s grape harvest by nearly 70 percent from its 1875 peak, and French vineyards had to be reconstituted by grafting on resistant rootstocks from the United States. Although annual consumption of wine did fluctuate, rising imports prevented any steep decline in the total supply. Vineyard recovery brought the per capita consumption to a pre-World War I peak of 125 L in 1909, equaled again only in 1924. The all-time record of 136 L was set in 1926, after which the rate fell only slightly to 124 liters per capita in 1950.
Postwar, the French standard of living remained surprisingly low: According to the 1954 census, only 25 percent of homes had an indoor toilet. But rapidly rising incomes during the 1960s brought dietary shifts, notably a decline in wine drinking per capita. It fell to about 95 L in 1980, to 71 L in 1990, and then to 58 L in 2000—about half what it had been a century before. The latest available data shows the mean at just 40 L.
France’s wine consumption survey of 2015 shows deep gender and generational divides that explain the falling trend. Forty years ago, more than half of French adults drank wine nearly every day; now just 16 percent do: 23 percent of men and only 11 percent of women. Among people over 65, the rate is 38 percent; for people 25 to 34 years of age, it is 5 percent; and for 15- to 24-year-olds, it’s only 1 percent. The same divides apply to all alcoholic drinks: beer, liquors, and cider have also seen gradual declines in consumption. Meanwhile, the beverages with the highest per capita gains include mineral and spring water, consumption of which has roughly doubled since 1990, as well as fruit juices and carbonated soft drinks.
Alcoholic beverages are thus fast disappearing from French culture. And although no other traditional wine-drinking country has seen greater declines in absolute or relative terms, Italy comes close, and wine consumption has also decreased in Spain and Greece.
Only one upward trend persists: French exports of wine set a new record, at about €9.7 billion, in 2018. Premium prices and exports to the United States and China are the key factors. American drinkers have been the largest importers of French wines, and demand by newly rich Chinese has also claimed a growing share of sales. But in the country that gave the world countless vins ordinaires as well as exorbitantly priced Grand Crus Classés, the clinking of stemmed glasses and wishes of santé have become an endangered habit.
This article appears in the April 2020 print issue as “(Not) Drinking Wine.”
More than 100 COVID-19 patients at a hospital in Beijing are receiving injections of mesenchymal stem cells to help them fend off the disease. The experimental treatment is part of an ongoing clinical trial, which coordinators say has shown early promise in alleviating COVID-19 symptoms.
However, other experts criticize the trial’s design and caution that there’s not sufficient evidence to show that the treatment works for COVID-19. They say other treatments have far greater potential than stem cells in aiding patients during the pandemic.
Researchers have so far reported results from only seven patients treated with stem cells at Beijing You’an Hospital. Each patient suffered from COVID-19 symptoms including fevers and difficulty breathing. They each received a single infusion of mesenchymal stem cells sometime between 23 January and 16 February. A few days later, investigators say, all symptoms disappeared in all seven patients. They reported no side effects.
Jahar Bhattacharya, a professor of physiology and cellular biophysics and medicine at Columbia University, who was not involved in the work, says injecting mesenchymal stem cells into a patient’s bloodstream remains an unproven treatment for COVID-19 patients and could cause harmful side effects.
“You are injecting large numbers of cells in a patient’s veins,” Bhattacharya says. “If those cells go and clog the lungs, and cause damage because of the clogging—well, that’s not good at all.”
He adds that the study’s sample size is much too small to draw any meaningful conclusions about the treatment’s efficacy at this stage. “Folks do all kinds of things and they’ll say—we got a result,” Bhattacharya says. “It’s very risky to go by any of those.”
Kunlin Jin, a lead author of the trial and a professor of pharmacology and neuroscience at the University of North Texas Health Science Center, says his group now has unpublished data from 31 additional COVID-19 patients who received the treatment. In every case, he claims, their symptoms improved after treatment. “I think the results are very promising,” he says.
According to Jin, 120 COVID-19 patients are now receiving mesenchymal stem cell injections in Beijing for the trial.
Jin’s team isn’t alone in considering the use of stem cells to treat COVID-19 patients. Another mesenchymal stem cell trial registered to clinicaltrials.gov aims to enroll 20 COVID-19 patients across four hospitals in China. The Australia-based firm Mesoblast says it’s evaluating its stem cell therapy for use against COVID-19. And in the United States, the Biomedical Advanced Research and Development Authority recently contacted the company Athersys to request information about its stem cell treatment called MultiStem for its potential as a COVID-19 therapy.
Mesenchymal stem cells (a term some experts criticize as too broad) can be isolated from different kinds of tissue and, once injected into a patient, can differentiate into a wide variety of cell types. They have not been approved for COVID-19 therapeutic use by the U.S. Food and Drug Administration.
The new coronavirus invades the body by means of a spike protein that studs the surface of each virus particle. The S protein, as it’s called, binds to a receptor called angiotensin-converting enzyme 2 (ACE2) on a healthy cell’s surface. Once attached, the virus fuses with the cell and is able to infect it.
ACE2 receptors are present on cells in many places throughout the body, and especially in the lungs. Cells in the lungs are also some of the first to encounter the virus, since the primary form of transmission is thought to be breathing in droplets after an infected person has coughed or sneezed.
However, cells from other parts of the body—including the tissues that produce mesenchymal stem cells—lack ACE2 receptors, which makes them resistant to the virus.
In many COVID-19 cases, a patient’s immune system responds to the virus so strongly that it harms healthy cells in the process. Jin explains that, once mesenchymal stem cells are injected into the blood, these cells can travel to the lungs and secrete growth factors and other cytokines—anti-inflammatory substances that modulate the immune system so it doesn’t go into overdrive.
But Lawrence Goldstein, director of UC San Diego’s stem cell program, says it’s not clear from the trial how many of the injected cells actually made it to the lungs, or how long they stayed there. He criticized the classification of patients in the study as “common,” “severe,” or “critically severe,” saying those categories weren’t well defined (Jin says these labels are defined by the National Health Commission of China). And Goldstein noted the lack of information about the properties of the stem cells used in the trial.
“It’s pretty weak,” Goldstein says of the trial design.
Steven Peckman, deputy director of UCLA’s Broad Stem Cell Research Center, adds: “Researchers and clinicians should use a critical eye when reviewing such reports and avoid the ‘therapeutic misconception,’ namely, a willingness to view experimental interventions as both safe and effective without the support of compelling scientific evidence.”
Jin himself doesn’t think most COVID-19 patients should receive stem cell infusions. “I think for the moderate patients, maybe don’t need the stem cell treatment,” he says. “For life-threatening cases, I think it’s essential to use mesenchymal stem cell treatment if no other drug is available.”
Goldstein says other potential treatments for COVID-19—such as drugs that modulate the body’s immune system—appear much more promising than stem cells. Many such drugs have been shown to be safe and effective at regulating the immune system and are already approved by regulatory authorities. It’s also easier to use drugs to treat a large number of patients compared with stem cell infusions.
“When you’ve got a hundred things you want to try, it’s not obvious that this one is on the short list,” Goldstein says of stem cell trials for COVID-19. “It’s a higher priority to test well-known immune modulators than to test these cells.”
Imagine this scene: It’s nearly dinnertime, and little Jimmy is in the kitchen. His mom is rushing to get dinner on the table, and she puts all the silverware in a pile on the counter. Jimmy, who’s on the autism spectrum, wants the silverware to be more orderly, and while his mom is at the stove he carefully begins to put each fork, knife, and spoon back in its slot in the silverware drawer. Suddenly Jimmy hears shouting. His mom is loud; her face looks different. He continues what he’s doing.
Now imagine that Jimmy is wearing a special kind of Google Glass, the augmented-reality headset that Google introduced in 2013. When he looks up at his mom, the head-up display lights up with a green box, which alerts Jimmy that he’s “found a face.” As he focuses on her face, an emoji pops up, which tells Jimmy, “You found an angry face.” He thinks about why his mom might be annoyed. Maybe he should stop what he’s doing with the silverware and ask her.
Our team has been working for six years on this assistive technology for children with autism, which the kids themselves named Superpower Glass. Our system provides behavioral therapy to the children in their homes, where social skills are first learned. It uses the glasses’ outward-facing camera to record the children’s interactions with family members; then our software detects the faces in those videos and interprets their expressions of emotion. Through an app, caregivers can review auto-curated videos of social interactions.
Over the years we’ve refined our prototype and run clinical trials to prove its beneficial effects: We’ve found that its use increases kids’ eye contact and social engagement and also improves their recognition of emotions. Our team at Stanford University has worked with coauthor Dennis Wall’s spinoff company, Cognoa, to earn a “breakthrough therapy” designation for Superpower Glass, which puts the technology on a fast track toward approval by the U.S. Food and Drug Administration (FDA). We aim to get health insurance plans to cover the costs of the technology as an augmented-reality therapy.
When Google Glass first came out as a consumer device, many people didn’t see a need for it. Faced with lackluster reviews and sales, Google stopped making the consumer version in 2015. But when the company returned to the market in 2017 with a second iteration of the device, Glass Enterprise Edition, a variety of industries began to see its potential. Here we’ll tell the story of how we used the technology to give kids with autism a new way to look at the world.
Vivaan’s mother, Deepali Kulkarni, and his father, V.R. Ferose, use the Superpower Glass system to play games with their son. Photo: Gabriela Hasbun
The system is built around the second version of the Google Glass system, the Glass Enterprise Edition. Photo: Gabriela Hasbun
The glasses’ head-up display gives Vivaan information about the emotions on his parents’ faces. Photo: Gabriela Hasbun
The system includes games such as “Capture the Smile” and “Guess the Emotion” that encourage Vivaan to interact with his parents and experiment with facial expressions. Photo: Gabriela Hasbun
Many families with autistic children struggle to get the behavioral therapy their kids need. The Superpower Glass system enables them to take the therapy into their own hands. Photo: Gabriela Hasbun
Vivaan, who is nonverbal, uses laminated cards to indicate which emotion he has identified. Most autistic kids who use the Superpower Glass system are verbal and don’t use such cards. Photo: Gabriela Hasbun
When Jimmy puts on the glasses, he quickly gets accustomed to the head-up display (a prism) in the periphery of his field of view. When Jimmy begins to interact with family members, the glasses send the video data to his caregiver’s smartphone. Our app, enabled by the latest artificial-intelligence (AI) techniques, detects faces and emotions and sends the information back to the glasses. The boundary of the head-up display lights up green whenever a face is detected, and the display then identifies the facial expression via an emoticon, emoji, or written word. The users can also choose to have an audio cue—a voice identifying the emotion—from the bone-conducting speaker within the glasses, which sends sound waves through the skull to the inner ear. The system recognizes seven facial expressions—happiness, anger, surprise, sadness, fear, disgust, and contempt, which we labeled “meh” to be more child friendly. It also recognizes a neutral baseline expression.
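To make that feedback loop concrete, here is a minimal Python sketch of the cue-rendering step, assuming the phone has already detected a face and classified its expression. All names here (EMOJI_CUES, render_cue) are our own illustration, not the actual Superpower Glass code.

```python
# The seven expressions the system recognizes, plus neutral.
# "Contempt" is relabeled "meh" for the child-friendly display.
EMOJI_CUES = {
    "happiness": "😀",
    "anger": "😠",
    "surprise": "😮",
    "sadness": "😢",
    "fear": "😨",
    "disgust": "🤢",
    "contempt": "😒",  # shown to the child as "meh"
    "neutral": "😐",
}

def render_cue(face_detected, expression, audio=False):
    """Return the visual (and optional audio) cue for one video frame."""
    if not face_detected:
        return None  # no green box, no cue
    label = "meh" if expression == "contempt" else expression
    cue = {"green_box": True, "emoji": EMOJI_CUES[expression], "label": label}
    if audio:
        cue["speak"] = label  # routed to the bone-conduction speaker
    return cue
```

Note how “contempt” is renamed before anything is displayed or spoken, matching the child-friendly wording described above, and how an undetected face yields no cue at all, so the green box doubles as a face-found indicator.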
To encourage children to wear Superpower Glass, the app currently offers two games: “Capture the Smile,” in which the child tries to elicit happiness or another emotion in others, and “Guess the Emotion,” in which people act out emotions for the child to name. The app also logs all activity within a session and tags moments of social engagement. That gives Jimmy and his mom the ability to watch together the video of their conflict in the kitchen, which could prompt a discussion of what happened and what they can do differently next time.
The three elements of our Superpower Glass system—face detection, emotion recognition, and in-app review—help autistic children learn as they go. The kids are motivated to seek out social interactions, they learn that faces are interesting, and they realize they can gather valuable information from the expressions on those faces. But the glasses are not meant to be a permanent prosthesis. The kids do 20-minute sessions a few times a week in their own homes, and the entire intervention currently lasts for six weeks. Children are expected to quickly learn how to detect the emotions of their social partners and then, after they’ve gained social confidence, stop using the glasses.
Our system is intended to ameliorate a serious problem: limited access to intensive behavioral therapy. Although there’s some evidence that such therapy can diminish or even eliminate core symptoms associated with autism, kids must start receiving it before the age of 8 to see real benefits. Currently the average age of diagnosis is between 4 and 5, and waitlists for therapy can stretch over 18 months. Part of the reason for the shortage is the shocking 600 percent rise since 1990 in diagnoses of autism in the United States, where about one in 40 kids is now affected; less dramatic surges have occurred in some parts of Asia and Europe.
Because of the increasing imbalance between the number of children requiring care and the number of specialists able to provide therapy, we believe that clinicians must look to solutions that can scale up in a decentralized fashion. Rather than relying on the experts for everything, we think that data capture, monitoring, and therapy—the tools needed to help all these children—must be placed in the hands of the patients and their parents.
Efforts to provide in situ learning aids for autistic children date back to the 1990s, when Rosalind Picard, a professor at MIT, designed a system with a headset and minicomputer that displayed emotional cues. However, the wearable technology of the day was clunky and obtrusive, and the emotion-recognition software was primitive. Today, we have discreet wearables, such as Google Glass, and powerful AI tools that leverage massive amounts of publicly available data about facial expressions and social interactions.
The design of Google Glass was an impressive feat, as the company’s engineers essentially packed a smartphone into a lightweight frame resembling a pair of eyeglasses. But with that form factor comes an interesting challenge for developers: We had to make trade-offs among battery life, video streaming performance, and heat. For example, on-device processing can generate too much heat and automatically trigger a cutback in operations. When we tried running our computer-vision algorithms on the device, that automatic system often reduced the frame rate of the video being captured, which seriously compromised our ability to quickly identify emotions and provide feedback.
Our solution was to pair Glass with a smartphone via Wi-Fi. The glasses capture video, stream the frames to the phone, and deliver feedback to the wearer. The phone does the heavy computer-vision work of face detection and tracking, feature extraction, and facial-expression recognition, and also stores the video data.
But the Glass-to-phone streaming posed its own problem: While the glasses capture video at a decent resolution, we could stream it only at low resolution. We therefore wrote a protocol to make the glasses zoom in on each newly detected face so that the video stream is detailed enough for our vision algorithms.
Our computer-vision system originally used off-the-shelf tools. The software pipeline was composed of a face detector, a face tracker, and a facial-feature extractor; it fed data into an emotion classifier trained on both standard data sets and our own data sets. When we started developing our pipeline, it wasn’t yet feasible to run deep-learning algorithms that can handle real-time classification tasks on mobile devices. But the past few years have brought remarkable advances, and we’re now working on an updated version of Superpower Glass with deep-learning tools that can simultaneously track faces and classify emotions.
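The staged pipeline described above can be sketched as plain function composition. Every stage below is a stub standing in for a trained model; the names are our own illustration, not the project’s actual code.

```python
# Schematic pipeline: face detector -> feature extractor -> classifier.

def detect_faces(frame):
    """Stub face detector: returns bounding boxes (x, y, w, h)."""
    return frame.get("faces", [])

def extract_features(frame, box):
    """Stub feature extractor (e.g., facial-landmark positions)."""
    return {"box": box}

def classify_emotion(features):
    """Stub classifier over the seven expressions plus neutral."""
    return "neutral"

def process_frame(frame):
    """Run one video frame through the full pipeline."""
    return [(box, classify_emotion(extract_features(frame, box)))
            for box in detect_faces(frame)]
```

In the real system, each stub would be replaced by a trained model, and the newer deep-learning version mentioned above collapses the tracking and classification stages into a single network.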
This update isn’t a simple task. Emotion-recognition software is primarily used in the advertising industry to gauge consumers’ emotional responses to ads. Our software differs in a few key ways. First, it will run not on desktop computers but on wearables and mobile devices, so we have to keep its memory and processing requirements to a minimum. The wearable form factor also means that video will be captured not by stable webcams but by moving cameras worn by kids. We’ve added image stabilizers to cope with the jumpy video, and the face detector also reinitializes frequently to find faces that suddenly shift position within the scene.
Failure modes are also a serious concern. A commercial emotion-recognition system might claim, for example, a 98 percent accuracy rate; such statistics usually mean that the system works well on most people but consistently fails to recognize the expressions of a small handful of individuals. That situation might be fine for studying the aggregate sentiments of people watching an ad. But in the case of Superpower Glass, the software must interpret a child’s interactions with the same people on a regular basis. If the system consistently fails on two people who happen to be the child’s parents, that child is out of luck.
We’ve developed a number of customizations to address these problems. In our “neutral subtraction” method, the system first keeps a record of a particular person’s neutral-expression face. Then the software classifies that person’s expressions based on the differences it detects between the face he or she currently displays and the recorded neutral estimate. For example, the system might come to learn that just because Grandpa has a furrowed brow, it doesn’t mean he’s always angry. And we’re going further: We’re working on machine-learning techniques that will rapidly personalize the software for each user. Making a human–AI interaction system that adapts robustly, without too much frustration for the user, is a considerable challenge. We’re experimenting with several ways to gamify the calibration process, because we think the Superpower Glass system must have adaptive abilities to be commercially successful.
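Here is a minimal sketch of the neutral-subtraction idea, assuming expressions are encoded as numeric feature vectors (for example, facial-landmark measurements). The class and method names are hypothetical, not the system’s actual API.

```python
class NeutralSubtractor:
    """Classify a person's expression relative to their own neutral face."""

    def __init__(self):
        self.neutral = {}  # person_id -> estimate of neutral-face features

    def record_neutral(self, person_id, features):
        """Store (or smoothly update) this person's neutral estimate."""
        prev = self.neutral.get(person_id)
        if prev is None:
            self.neutral[person_id] = list(features)
        else:
            # A running average smooths out single-frame noise.
            self.neutral[person_id] = [
                0.9 * p + 0.1 * f for p, f in zip(prev, features)]

    def delta(self, person_id, features):
        """Features with the person's neutral baseline subtracted out."""
        base = self.neutral.get(person_id, [0.0] * len(features))
        return [f - b for f, b in zip(features, base)]
```

In this scheme, Grandpa’s perpetually furrowed brow becomes part of his stored baseline, so only deviations from that baseline register as a change in expression.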
We realized from the start that the system would be imperfect, and we’ve designed feedback to reflect that reality. The green box face-detection feature was originally intended to mitigate frustration: If the system isn’t tracking a friend’s face, at least the user knows that and isn’t waiting for feedback that will never come. Over time, however, we came to think of the green box as an intervention in itself, as it provides feedback whenever the wearer looks at a face, a behavior that can be noticeably different for children on the autism spectrum.
To evaluate Superpower Glass, we conducted three studies over the past six years. The first one took place in our lab with a very rudimentary prototype, which we used to test how children on the autism spectrum would respond to wearing Google Glass and receiving emotional cues. Next, we built a proper prototype and ran a design trial in which families with autistic kids took the devices home for several weeks. We interacted with these families regularly and made changes to the prototype based on their feedback.
With a refined prototype in hand, we then set out to test the device’s efficacy in a rigorous way. We ran a randomized control trial in which one group of children received typical at-home behavioral therapy, while a second group received that therapy plus a regimen with Superpower Glass. We used four tests that are commonly deployed in autism research to look for improvement in emotion recognition and broader social skills. As we described in our 2019 paper in JAMA Pediatrics, the intervention group showed significant gains over the control group in one test (the socialization portion of the Vineland Adaptive Behavior Scales [PDF]).
We also asked parents to tell us what they had noticed. Their observations helped us refine the prototype’s design, as they commented on technical functionality, user frustrations, and new features they’d like to see. One email from the beginning of our at-home design trial stands out. The parent reported an immediate and dramatic improvement: “[Participant] is actually looking at us when he talks through google glasses during a conversation…it’s almost like a switch was turned.… Thank you!!! My son is looking into my face.”
This email was extremely encouraging, but it sounded almost too good to be true. Yet comments about increased eye contact continued throughout our studies, and we documented this anecdotal feedback in a publication about that design study. To this day, we continue to hear similar stories from a small group of “light switch” participants.
We’re confident that the Superpower Glass system works, but to be honest, we don’t really know why. We haven’t been able to determine the primary mechanism of action that leads to increased eye contact, social engagement, and emotion recognition. This unknown informs our current research. Is it the emotion-recognition feedback that most helps the children? Or is our device mainly helping by drawing attention to faces with its green box? Or are we simply providing a platform for increased social interaction within the family? Is the system helping all the kids in the same way, or does it meet the needs of various parts of the population differently? If we can answer such questions, we can design interventions in a more pointed and personalized way.
The startup Cognoa, founded by coauthor Dennis Wall, is now working to turn our Superpower Glass prototype into a clinical therapy that doctors can prescribe. The FDA breakthrough therapy designation for the technology, which we earned in February 2019, will speed the journey toward regulatory approval and acceptance by health insurance companies. Cognoa’s augmented-reality therapy will work with most types of smartphones, and it will be compatible not only with Google Glass but also with new brands of smart glasses that are beginning to hit the market. In a separate project, the company is working on a digital tool that physicians can use to diagnose autism in children as young as 18 months, which could prepare these young kids to receive treatment during a crucial window of brain development.
Ultimately, we feel that our treatment approach can be used for childhood concerns beyond autism. We can design games and feedback for kids who struggle with speech and language, for example, or who have been diagnosed with attention deficit hyperactivity disorder. We’re imagining all sorts of ubiquitous AI-powered devices that deliver treatment to users, and which feed into a virtuous cycle of technological improvement; while acting as learning aids, these devices can also capture data that helps us understand how to better personalize the treatment. Maybe we’ll even gain new scientific insights into the disorders in the process. Most important of all, these devices will empower families to take control of their own therapies and family dynamics. Through Superpower Glass and other wearables, they’ll see the way forward.
This article appears in the April 2020 print issue as “Making Emotions Transparent.”
About the Authors
When Stanford professor Dennis Wall met Catalin Voss and Nick Haber in 2013, it felt like a “serendipitous alignment of the stars,” Wall says. He was investigating new therapies for autism, Voss was experimenting with the Google Glass wearable, and Haber was working on machine learning and computer vision. Together, the three embarked on the Superpower Glass project to encourage autistic kids to interact socially and help them recognize emotions.
As COVID-19 sweeps through the planet, a number of researchers have advocated the use of digital contact tracing to reduce the spread of the disease. The controversial technique can be effective, but can have disastrous consequences if not implemented with proper privacy checks and encryption.
In a number of critical COVID-19 cases, patients develop a hyper-vigilant immune response that, if untreated, can itself prove fatal. Fortunately, some pharmaceutical treatments are available—although it’s not yet fully understood how well they might work against the new coronavirus.
Meanwhile, a new blood-filtration technology has successfully treated other, similar hyper-vigilant immune syndromes in heart surgery patients and the critically ill. That track record could make it a possibly effective therapy (albeit one that is not yet FDA-approved) for some severe COVID-19 cases.
Inflammation, says Phillip Chan—an M.D./PhD and CEO of the New Jersey-based company CytoSorbents—is the body’s way of dealing with infection and injury. It’s why burns and sprained ankles turn red and swell up. “That’s the body’s way of bringing oxygen and nutrients to heal,” he said.
I get to meet plenty of smart people through my work as a mentor to entrepreneurs, but few have proven as talented as Mobin Nomvar, cofounder of Sydney startup Automated Process Synthesis Co., which develops machine-learning software for chemical-process engineering. When he first started his company, Nomvar knew a lot about coding, more about chemistry, but little about the biological processes going on within his own body.
You see, as he settled into the sedentary role of full-time managing director, his waistline expanded. So he studied the problem and planned to burn through the fat he’d accumulated by starving his body of carbohydrates, forcing it to metabolize fat instead of sugar for energy. But having never dieted before, he had no idea what to expect. As an engineer, he wanted a number he could measure to gauge how well he was doing. Blood-glucose level would be just the ticket, he thought.
Many millions of diabetics make that measurement every day, sometimes several times a day, by pricking a finger and swiping a drop of blood onto a test strip. Nomvar asked himself: Could there be a less painful way?
He suspected he might find one with a measurement of his body’s bioimpedance. For 20 years, researchers had published promising findings that showed it was possible to measure blood-glucose levels this way, but never with the accuracy of blood-based measurements.
Could a bioimpedance-based blood-glucose sensor be improved and perhaps commercialized? Nomvar built a prototype, strapped it on, and ate a bowl of carbohydrate-packed rice. Then he watched the measurements tick upward—validation enough to keep going. When he reached the limit of his understanding, he assembled a team to help: an expert in bioimpedance, a chemical engineer, and a software designer—people with the sorts of complementary talents needed to develop such a gizmo.
Together they reviewed the science and read the relevant literature, and only then went back to Nomvar’s prototype, seeking more signal from its sensor. Finding the needed signal turned out to be easier than they’d expected, though they had to borrow a US $50,000 commercial bioimpedance unit to confirm the results from the crude prototype, constructed at a thousandth the cost.
Sensitive bioimpedance measurements turned out to be necessary, but not sufficient. Many different body processes affect bioimpedance, and isolating the effects of changing glucose levels appeared nearly impossible. Indeed, trying to clean up the signal using machine learning produced only more noise. Every team member had a go at a solution, drawing from their expertise to tune the algorithm. A week-long burst of activity from one teammate got them over the line, resulting in a device that outperforms every other noninvasive technique that’s ever been used to measure blood glucose.
In just over three months, they perfected their ring-shaped sensor. Slipped over a finger and paired with the proper algorithm, it can measure blood glucose continuously and inexpensively. As the number of diabetics globally passes half a billion, such a device could have quite a large market—if it successfully makes it through the clinical testing needed to certify it as both accurate and safe. Many medical wonders fail to overcome that hurdle, and Nomvar’s work so far is just the first step in what could be a very long race.
It’s a step Nomvar admits he couldn’t have taken alone. But his team—diverse, experienced, and capable—had the strengths needed to succeed.
This article appears in the April 2020 print issue as “Don’t Go It Alone.”
Halfway to the moon and bleeding oxygen into space, the Apollo 13 spacecraft and its occupants seemed in dire straits. But the astronauts modified their CO2 scrubbers with a duct-tape-and-plastic-bag solution cooked up by NASA engineers and made famous in the 1995 movie. Now, in support of medical workers facing hardware shortages due to the coronavirus pandemic, several networks of volunteers are developing similarly MacGyver’d respiratory equipment using easy-to-find or printable parts.
Several such groups have taken on the open source mantle, and their stories illustrate some of the strengths and weaknesses of the wider open source movement.
One fast-moving team managed to use a 3D printer to produce 100 replacement valves for an Italian hospital’s intensive care unit, but was concerned that it might face legal threats from the original equipment manufacturer.
Another group is targeting a long list of supplies and devices, such as homemade hand sanitizer, 3D-printed face shields, nasal cannulas, and ventilator machines. One company is prototyping an open-source oxygen concentrator. Some efforts are much lower-tech: one Indiana hospital asked volunteers to help sew face masks following CDC guidelines.
The core idea is nothing new: anesthetist John Dingley and colleagues published free instructions for a low-cost emergency ventilator in 2010. But it may feel more urgent now that people are reading headlines about equipment shortages at hospitals in even the richest countries in the world.
One reason there often aren’t many manufacturers of a given medical device is the cost of getting the devices tested and approved for medical use. Even if the individual units don’t cost much, getting a medical device to market the usual way costs anywhere from $31 million to around $94 million, depending on the complexity and application, according to a 2010 estimate.
There are also issues with whether 3D-printed parts can be cleaned properly. Many ad hoc fixes won’t be as durable as products produced with an eye toward longer-term cost-effectiveness.
Still, moonshot-gone-wrong solutions may obtain expedited review from medical regulators for narrow uses, as is the case with the Open Source Ventilator Ireland group, which told Forbes it is getting a sped-up examination from Ireland’s regulator.
Michigan Technological University engineering professor Joshua M. Pearce, one of the editors of a forthcoming special issue of the journal HardwareX focusing on open source COVID-19 medical hardware, predicts that the U.S. Food and Drug Administration (FDA) will likely also waive some licensing requirements in the event of massive shortages.
“In the end I think it comes down to the Golden Rule: Do unto others as you would have them do unto you,” Pearce says. “I know I would be happy to have the option of even a partially tested open source ventilator if I had COVID-19, needed it, and all the hospital systems were used.”
If volunteer medical device makers get past legal hurdles, they will also need to get in sync with patients and medical staff about what really works.
The final users’ needs have often been “a minor part of the decision-making process” in commercial device development, wrote University of Pisa bioengineer Carmelo De Maria and colleagues in a chapter on open-source medical devices in the Clinical Engineering Handbook.
“Sometimes those people don’t have any competence in medical devices and they risk creating confusion,” De Maria says.
Already, some members of the Open Source COVID19 Medical Supplies group on Facebook have weighed in with that kind of criticism. One wrote: “None of the mask designs I’ve seen people printing here will do anything to stop the virus.” Another group member, a healthcare worker, pooh-poohed a thread devoted to an automatic bag valve, writing: “There is no real-life scenario an automated Ambubag would be useful. Everyone designing these can turn their skills elsewhere.”
That feedback, visible to any potential contributors, might help steer the group toward more viable solutions. One recent post, for example, suggested concentrating amateur efforts on lower-tech devices aimed at less critical patients, to free up first-line hardware for the most critical patients.
A different issue is coordinating all the digital Good Samaritans. One recently formed group, called Helpful Engineering, reported having over 3,000 registered volunteers as of 19 March, and over 11,000 people on Slack, the messaging platform. (And you, newly remote worker, thought your office Slack was getting noisy.)
The speed with which people can discuss, and even design, something online may be tantalizing, but it might not reflect how quickly the output can spread in the real world. In the Clinical Engineering Handbook, De Maria and colleagues write that the growing ease with which people can make their own medical hardware makes it even more important to create accompanying rules and methods for validating do-it-yourself devices.
De Maria helped build Ubora, a platform where makers can document the work they have done to show their device’s efficacy.
“Open Source can create a reliable prototype but [when] you want to go to the next level you need another type of approach that has to take your brilliant idea, do an experiment together with experts before going to the patients,” De Maria says.
Even with the speed of open source and the goodwill and skills of thousands of volunteers, generating widely affordable, easily buildable devices that withstand rigorous testing and are legal to distribute may not happen quickly enough to suppress this pandemic.
That doesn’t make the effort a waste. Think of all the engineers who were inspired by the story of Apollo 13’s improvised scrubbers, and the institutional knowledge NASA gained for future missions. If the lessons of the hardware push in response to today’s COVID-19 outbreak stay in the open, they will be useful in the longer term.
With that in mind, De Maria and colleagues are challenging open source hardware makers with a competition calling for European-compliant medical designs that will be well-documented using Ubora. The first deadline is 30 April and awards won’t be presented until June.
“We created the competition looking for a solution but in perspective,” De Maria says. Creating and validating systematic solutions will take months, not weeks.
While some smaller open source components have already received government approval for so-called “compassionate use” and spare parts such as those valves are welcome, it may be too late for them to make much of a difference in places still on the wrong side of the COVID-19 growth curve.
The real reward is saving lives in future pandemics.
Says Pearce: “I am operating under the assumption that… anything we do now will help for the next pandemic.”
IEEE Spectrum updated this story with quotes from De Maria.
There’s no shortage of information about the coronavirus pandemic: News sites cover every development, and sites like the Johns Hopkins map constantly update the number of global cases (246,276 as of this writing).
But for the most urgent questions, there seem to be no answers. Is it possible that I caught the virus when I went out today? Did I cross paths with someone who’s infected? How prevalent is the coronavirus in my local community? And if I’m feeling sick, where can I go to get tested or find treatment?
A group of doctors and engineers have come together to create an app that will answer such questions. Daniel Kraft, the U.S.-based physician who’s leading the charge, says his group has “gotten the green light” from the World Health Organization (WHO) to build the open-source app, and that it will be an official WHO app to help people around the world cope with COVID-19, the official name of the illness caused by the new coronavirus.
Supercomputer-aided development of possible antiviral therapies, rapid lab testing of other prospective treatments, and grants to develop new vaccine technologies: Coronavirus responses such as these may still be long shots when it comes to quickly containing the pandemic. But these long shots represent one possible hope for ultimately turning the tide and stopping the virus’s rapid spread. (Another possible hope emerges from recent case reports out of China suggesting that many of the worst COVID-19 cases feature a hyperactive immune response called a “cytokine storm”—a phenomenon for which reliable tests and some effective therapies are currently available.)
As part of the direct attack on SARS-CoV-2—the virus that causes COVID-19—virtual and real-world tests of potential therapies represent a technology-focused angle for drug and vaccine research. And unlike traditional pharmaceutical R&D, where clinical trial timetables mean that drugs can take as long as 10 years to reach the marketplace, these accelerated efforts could yield results within about a year or so.
For instance, new supercomputer simulations have revealed a list of 77 potential so-called repurposed drugs targeted at this strain of coronavirus.
Researchers say the results are preliminary, and that they expect most initial findings will ultimately not work in lab tests. But they’re reaching the lab testing stage now because supercomputers can sort through thousands of candidate therapies in days, using simulations that previously took weeks or months.
Jeremy Smith, professor of biochemistry and cellular and molecular biology at the University of Tennessee, Knoxville, says the genetic sequencing of the new coronavirus provided the clue that he and fellow researcher Micholas Dean Smith (no relation) used to move ahead in their research.
“They found it was related to the SARS virus,” Smith says. “It probably evolved from it. It’s a close cousin. It’s like a younger brother or sister.”
And since the proteins that SARS makes have been well studied, researchers had a veritable recipe book of very likely SARS-CoV-2 proteins that might be targets for potential drugs to destroy or disable.
“We know what makes up all the proteins now,” says Smith. “So you try and find drugs to stop the proteins from doing what the virus wants them to do.”
Smith said working on a timetable of months, not years, means limiting the therapies available to be tested. There are, for starters, thousands of molecules that occur naturally in plants and microbes and have been part of human diets for many years. Meanwhile, other molecules have already been developed by pharmaceutical companies as drugs for other conditions.
“Many of them are already approved by the regulatory agencies, such as the FDA in the U.S., which means their safety has already been tested—for another disease,” Smith says. “It should be much quicker to get the approval to use it on lots of people. That’s the first stage. If that doesn’t work, then we’d have to go and design a new one. Then you’d have your 10-15 years and $1 billion dollar of investment on average. Hopefully, that’d be shortened in this case. But you don’t know.”
According to Smith, SARS-CoV-2 is called a coronavirus because the protein spikes on the outside of the virus make it look like the sun’s corona. Here is where the supercomputing came in.
Using Oak Ridge National Laboratory’s Summit supercomputer (currently ranked the world’s fastest at 0.2 peak exaflops), the two co-authors ran detailed simulations of the spikes in the presence of 9000 different compounds that could potentially be repurposed as COVID-19 drugs.
They ultimately ranked 8000 of those compounds from best to worst in terms of gumming up the coronavirus’s spikes—which would, ideally, stop it from infecting other cells.
Running this molecular dynamics sequence on standard computers might take a month. But on Summit, the computation took a day, Smith recalls.
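The screen-then-rank workflow described above can be sketched in miniature. This is an illustrative toy, not the authors' actual pipeline: it assumes docking-style scores (where a more negative binding energy conventionally means tighter predicted binding) and hypothetical compound names.

```python
def rank_compounds(scores):
    """Return compound names sorted from strongest to weakest
    predicted binding, given a dict of name -> docking score.
    More-negative scores indicate tighter predicted binding.
    """
    return sorted(scores, key=lambda name: scores[name])

# Hypothetical binding energies (kcal/mol) for three made-up compounds.
docking_scores = {
    "compound_a": -7.2,
    "compound_b": -9.8,
    "compound_c": -5.1,
}

# Strongest predicted binder first.
print(rank_compounds(docking_scores))
```

In the real study, each of the 9,000 scores came from a detailed molecular dynamics simulation rather than a lookup table; the ranking step itself is the trivial part.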
He said they’re now working with the University of Tennessee Health Sciences Center in Memphis, as well as possibly other wet-lab partners, to test some of the top-performing compounds from their simulations on real-world SARS-CoV-2 virus particles.
And because the supercomputer simulation can run so quickly, Smith said they’re considering taking the next step of the process.
“There’s some communication [with the wet lab]: ‘This compound works, this one doesn’t,’” he said. “‘We have a partial response from this one, a good response from that.’ And you would even use things like artificial intelligence at some point to correlate properties of compounds that are having effects.”
How soon before the scientists could take this next step?
“It could be next week, next month or next year,” he said, “Depending on the results.”
On another front, the Bill and Melinda Gates Foundation has underwritten both an “accelerator” fund for promising new COVID-19 treatments and an injector device for a SARS-CoV-2 vaccine scheduled to begin clinical trials next month. (The company developing the vaccine—Inovio Pharmaceuticals in Plymouth Meeting, Penn.—has announced an ambitious timetable in which it expects to see “one million doses” by the end of this year.)
One of the first announced funded projects by the “accelerator” (which has been co-funded by Wellcome in the U.K. and Mastercard) is a wet-lab test conducted by the Rega Institute for Medical Research in Leuven, Belgium.
Unlike the Summit supercomputer research, the Rega effort involves rapid chemical testing of 15,000 antiviral compounds (from other approved antiviral therapies) on the SARS-CoV-2 virus.
An official from the Gates Foundation, contacted by IEEE Spectrum, said they could not currently provide any further information about the research or the accelerator program; we were referred instead to a blog post by the Foundation’s CEO Mark Suzman.
“We’re optimistic about the progress that will be made with this new approach,” Suzman wrote. “Because we’ve seen what can come of similar co-operation and coordination in other parts of our work to combat epidemics.”
Here’s something that sounds decidedly useful right now: A home test kit that anyone could use to see if they have the coronavirus. A kit that’s nearly as easy to use as a pregnancy test, and that would give results in half an hour.
It doesn’t exist yet, but serial biotech entrepreneur Jonathan Rothberg is working on it.
At Brigham and Women’s Hospital in Boston, a potential COVID-19 patient can now drive in to the ambulance bay, roll down their window, and ask staff to swab their nose and throat.
Those swabs will be sent to a state lab for a real-time PCR test, which amplifies any viral genetic material so it can be compared to the new coronavirus, SARS-CoV-2. But this standard test must be carried out in a certified laboratory with trained technicians, takes 3 to 4 days to deliver results, and produces some false negatives.
In late January, as the novel coronavirus began spreading through China, computer scientists modeling the outbreak ranked Taiwan the region with the second highest risk of importation of the virus. The island sits just 130 km off the coast of mainland China and shuttles thousands of passengers to and from the mainland daily.
But so far Taiwan reports that it has largely mitigated the spread of the pathogen. Fewer than 50 cases of the coronavirus, which causes the disease COVID-19, had been confirmed on the island as of March 11. South Korea, by contrast, had confirmed nearly 8,000 cases.
Taiwan owes its success largely to the emergency implementation of big data analytics and new technologies, according to a recent report in the Journal of the American Medical Association (JAMA) authored by individuals in California and Taipei.
Taiwan officials from the beginning of the viral outbreak “did a very detailed mapping of who got it from whom,” and were able to stop a lot of transmission early, says Chih-Hung Jason Wang, director of the Center for Policy, Outcomes and Prevention at Stanford University, who co-authored the opinion article.
Notably, officials integrated Taiwan’s national health insurance database with its immigration and customs database. This enabled the government to track the 14-day travel histories and symptoms of its citizens, nearly all of whom have an identifying national health insurance (NHI) card. All hospitals, clinics and pharmacies were given access to this information for each patient.
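The kind of cross-referencing described above can be illustrated with a minimal sketch: flag a patient at check-in if immigration records show entry from an affected region within the past 14 days. All record formats, field names, and region lists here are hypothetical; the real NHI and immigration integration is far more involved.

```python
from datetime import date, timedelta

# Hypothetical list of regions triggering an alert.
AFFECTED_REGIONS = {"Hubei", "Guangdong"}

def recent_travel_alert(travel_records, patient_id, today, window_days=14):
    """Return True if the patient entered from an affected region
    within the last `window_days` days, per the travel records."""
    cutoff = today - timedelta(days=window_days)
    return any(
        rec["patient_id"] == patient_id
        and rec["origin"] in AFFECTED_REGIONS
        and rec["entry_date"] >= cutoff
        for rec in travel_records
    )

# Hypothetical immigration records.
records = [
    {"patient_id": "A123", "origin": "Hubei", "entry_date": date(2020, 3, 1)},
    {"patient_id": "B456", "origin": "Tokyo", "entry_date": date(2020, 3, 5)},
]

print(recent_travel_alert(records, "A123", today=date(2020, 3, 10)))  # True
```

The point of the integration was that this lookup happened automatically at every hospital, clinic, and pharmacy visit, rather than relying on patients to volunteer their travel history.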
Taiwan restricted entry for foreign travelers from the most affected regions, and for those allowed entry, officials tracked them with mobile technologies. Foreign visitors are asked to scan a QR code that takes them to an online health declaration form where they provide contact information and symptoms. People placed under quarantine are given government-issued mobile phones and monitored with calls and visits.
“They incentivized people to be truthful” on their health declaration forms, says Wang. “If you are placed in the high risk group, the government will help you get care. If you get sick by yourself, you’ll have to wander around the hospital trying to get help.”
Taiwan also relied on old-fashioned face-to-face check-ins. Households were grouped into wards, or sections, and a chief was named for each ward. “So [authorities] will say to the chief, ‘There’s a person under quarantine in your ward, why don’t you go check on them and bring them some food,’” says Wang. “In an epidemic, you have to be nice to people, otherwise they’ll hide their symptoms.”
To manage resources, Taiwanese officials used IT to estimate the region’s supply of masks, negative pressure isolation rooms, and other health provisions. They set price limits on masks and rationed them using individuals’ NHI cards and an online ordering mechanism. Soldiers were sent to work at mask factories to ramp up production.
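Rationing masks against NHI cards amounts to enforcing a per-card purchase limit over a rolling window. The sketch below is a hypothetical illustration of that logic (the limit, window, and interface are assumptions, not Taiwan's actual system):

```python
from datetime import date, timedelta

class MaskRation:
    """Toy per-card rationing: at most `limit` purchases per card
    within any rolling `window_days`-day window."""

    def __init__(self, limit=2, window_days=7):
        self.limit = limit
        self.window = timedelta(days=window_days)
        self.purchases = {}  # card_id -> list of purchase dates

    def try_purchase(self, card_id, today):
        # Keep only purchases still inside the rolling window.
        recent = [d for d in self.purchases.get(card_id, [])
                  if today - d < self.window]
        if len(recent) >= self.limit:
            return False  # quota exhausted for this window
        recent.append(today)
        self.purchases[card_id] = recent
        return True

ration = MaskRation()
print(ration.try_purchase("NHI-0001", date(2020, 3, 1)))  # True
print(ration.try_purchase("NHI-0001", date(2020, 3, 3)))  # True
print(ration.try_purchase("NHI-0001", date(2020, 3, 5)))  # False
```

The online ordering mechanism and price caps sat on top of the same idea: tie every sale to a card identity so hoarding becomes visible and refusable at the point of sale.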
Overseeing all the action is the National Health Command Center. “They set that up in a compound on the seventh floor of Taiwan’s Centers for Disease Control,” says Wang. “There are data analysts in there and reporters; it can host up to a hundred people 24/7.”
These actions are part of Taiwan’s emergency epidemic response plan, which it devised after the 2003 SARS outbreak in China. Under Taiwan’s Communicable Disease Control Act, in the event of a crisis, officials can activate the plan, giving the government powers it wouldn’t normally have.
Taiwanese officials activated the emergency plan January 20 and since then have implemented over 124 action items, according to the JAMA report.
Penalties for noncompliance with the temporary orders are steep. Profiteering off prevention products like masks, or spreading false information about COVID-19, can bring a penalty of years in jail and fines of over a hundred thousand US dollars. One couple was fined US $10,000 for breaking a 14-day quarantine rule. Three Hong Kong visitors who “disappeared for a week” were tracked down, fined US $2,350 each, and transferred to designated quarters for medical isolation, according to the JAMA report.
Taiwan’s heavy-handed government actions might not go over well in a country such as the United States. But Wang says the measures, so far, have been well received in Taiwan, in part because they were planned ahead and implemented on a temporary basis.
He and his coauthors write that it is unclear “whether the intensive nature of these policies can be maintained until the end of the epidemic and continue to be well-received by the public.”
Taiwan’s emergency measures have probably not halted community-based transmission of COVID-19. As in the rest of the world, the number of officially confirmed cases in Taiwan is likely far lower than the true number on the ground, since some people who have the disease don’t know it, or have such mild symptoms that they don’t seek care or get tested. “It’s impossible not to have more cases,” says Wang.
As the deadly new coronavirus permeates the planet, scientists are using genetic sequencing and an open-source software tool to track its transmission.
The software tool, called Nextstrain, can’t predict where the virus is going next. But it can tell us where new cases of the virus are coming from. That’s crucial information for health officials globally, who are trying to determine whether new cases are arriving in their countries through international travel, or being transmitted locally.
This type of analysis, called genomic epidemiology, “is extremely valuable to public health,” says James Hadfield, a computational scientist working on Nextstrain. “The sooner we can turn around this data, the better the response can be.”
The novel coronavirus, which causes the respiratory disease COVID-19, first emerged in December in China, where it has infected over 80,000 people. It has since spread to more than 85 countries [PDF], with the largest concentrations of cases so far in South Korea, Iran, and Italy. More than 250 cases had been confirmed in the United States at press time.
When DARPA launched its Pandemic Preparedness Platform (P3) program two years ago, the pandemic was theoretical. It seemed like a prudent idea to develop a quick response to emerging infectious diseases. Researchers working under the program sought ways to confer instant (but short-term) protection from a dangerous virus or bacteria.
Today, as the novel coronavirus causes a skyrocketing number of COVID-19 cases around the world, the researchers are racing to apply their experimental techniques to a true pandemic playing out in real time. “Right now, they have one shot on goal,” says DARPA program manager Amy Jenkins. “We’re really hoping it works.”
The P3 program’s plan was to start with a new pathogen and to “develop technology to deliver medical countermeasures in under 60 days—which was crazy, unheard of,” says Jenkins. The teams have proven they can meet this ambitious timeline in previous trials using the influenza and Zika viruses. Now they’re being asked to pull off the same feat with the new coronavirus, which more formally goes by the name SARS-CoV-2 and causes the illness known as COVID-19.
James Crowe, director of the Vanderbilt Vaccine Center at the Vanderbilt University Medical Center, leads one of the four P3 teams. He spoke with IEEE Spectrum from Italy, where he and his wife are traveling—and currently recovering from a serious illness. “We both got pretty sick,” Crowe says. “We’re pretty sure we already had the coronavirus.” For his wife, he says, it was the worst illness she’s had in three decades. They’re trying to get tested for the virus.
Ironically, Crowe is quite likely harboring the raw material that his team and others need to do their work. The DARPA approach called for employing antibodies, the proteins that our bodies naturally use to fight infectious diseases and that remain in our bodies after an infection.
In the P3 program, the 60-day clock begins when a blood sample is taken from a person who has fully recovered from the disease of interest. Then the researchers screen that sample to find all the protective antibodies the person’s body has made to fight off the virus or bacteria. They use modeling and bioinformatics to choose the antibody that seems most effective at neutralizing the pathogen, and then determine the genetic sequence that codes for the creation of that particular antibody. That snippet of genetic code can then be manufactured quickly and at scale, and injected into people.
Jenkins says this approach is much faster than manufacturing the antibodies themselves. Once the genetic snippets are delivered by an injection, “your body becomes the bioreactor” that creates the antibodies, she says. The P3 program’s goal is to have protective levels of the antibodies circulating within 6 to 24 hours.
DARPA calls this a “firebreak” technology, because it can provide immediate immunity to medical personnel, first responders, and other vulnerable people. However, it wouldn’t create the permanent protection that vaccines provide. (Vaccines work by presenting the body with a safe form of the pathogen, thus giving the body a low-stakes opportunity to learn how to respond, which it remembers on future exposures.)
Robert Carnahan, who works with Crowe at the Vanderbilt Vaccine Center, explains that their method offers only temporary protection because the snippets of genetic code are messenger RNA, molecules that carry instructions for protein production. When the team’s specially designed mRNA is injected into the body, it’s taken up by cells (likely those in the liver) that churn out the needed antibodies. But eventually that RNA degrades, as do the antibodies that circulate through the blood stream.
“We haven’t taught the body how to make the antibody,” Carnahan says, so the protection isn’t permanent. To put it in terms of a folksy aphorism: “We haven’t taught the body to fish, we’ve just given it a fish.”
Carnahan says the team has been scrambling for weeks to build the tools that enable them to screen for the protective antibodies; since very little is known about this new coronavirus, “the toolkit is being built on the fly.” They were also waiting for a blood sample from a fully recovered U.S. patient. Now they have their first sample, and are hoping for more in the coming weeks, so the real work is beginning. “We are doggedly pursuing neutralizing antibodies,” he says. “Right now we have active pursuit going on.”
Jenkins says that all of the P3 groups (the others are Gregory Sempowski’s lab at Duke University, a small Vancouver company called AbCellera, and the big pharma company AstraZeneca) have made great strides in technologies that rapidly identify promising antibodies. In their earlier trials, the longer part of the process was manufacturing the mRNA and preparing for safety studies in animals. If the mRNA is intended for human use, the manufacturing and testing processes will be much slower because there will be many more regulatory hoops to jump through.
Crowe and Carnahan’s team has seen another project through to human trials; last year the lab worked with the biotech company Moderna to make mRNA that coded for antibodies that protected against the Chikungunya virus. “We showed it can be done,” Crowe says.
Moderna was involved in a related DARPA program known as ADEPT that has since ended. The company’s work on mRNA-based therapies has led it in another interesting direction—last week, the company made news with its announcement that it was testing an mRNA-based vaccine for the coronavirus. That vaccine works by delivering mRNA that instructs the body to make the “spike” protein that’s present on the surface of the coronavirus, thus provoking an immune response that the body will remember if it encounters the whole virus.
While Crowe’s team isn’t pursuing a vaccine, they are hedging their bets by considering both the manufacture of mRNA to code for the protective antibodies, and manufacturing the antibodies themselves. Crowe says that the latter process is typically slower (perhaps 18 to 24 months), but he’s talking with companies that are working on ways to speed it up. The direct injection of antibodies is a standard type of immunotherapy.
Crowe’s convalescence in Italy isn’t particularly relaxing. He’s working long days, trying to facilitate the shipment of samples to his lab from Asia, Europe, and the United States, and he’s talking to pharmaceutical companies that could manufacture whatever his lab comes up with. “We have to figure out a cooperative agreement for something we haven’t made yet,” he says, “but I expect that this week we’ll have agreements in place.”
Manufacturing agreements aren’t the end of the story, however. Even if everything goes perfectly for Crowe’s team and they have a potent antibody or mRNA ready for manufacture by the end of April, they’d have to get approval from the U.S. Food and Drug Administration. To get a therapy approved for human use typically takes years of studies on toxicity, stability, and efficacy. Crowe says that one possible shortcut is the FDA’s compassionate use program, which allows people to use unapproved drugs in certain life-threatening situations.
Gregory Poland, director of the Mayo Clinic Vaccine Research Group, says that mRNA-based prophylactics and vaccines are an important and interesting idea. But he notes that all talk of these possibilities should be considered “very forward-looking statements.” He has seen too many promising candidates fail after years of clinical trials to get excited about a new bright idea, he says.
Poland also says the compassionate use shortcut would likely only be relevant “if we’re facing a situation where something like Wuhan is happening in a major city in the U.S., and we have reason to believe that a new therapy would be efficacious and safe. Then that’s a possibility, but we’re not there yet,” he says. “We’d be looking for an unknown benefit and accepting an unknown risk.”
Yet the looming regulatory hurdles haven’t stopped the P3 researchers from sprinting. AbCellera, the Vancouver-based biotech company, tells IEEE Spectrum that its researchers have spent the last month looking at antibodies against the related SARS virus that emerged in 2002, and that they’re now beginning to study the coronavirus directly. “Our main discovery effort from a COVID-19 convalescent donor is about to be underway, and we will unleash our full discovery capabilities there,” a representative wrote in an email.
At Crowe’s lab, Carnahan notes that the present outbreak is the first one to benefit from new technologies that enable a rapid response. Carnahan points to the lab’s earlier work on the Zika virus: “In 78 days we went from a sample to a validated antibody that was highly protective. Will we be able to hit 78 days with the coronavirus? I don’t know, but that’s what we’re gunning for,” he says. “It’s possible in weeks and months, not years.”
Computer scientists tracking the deadly coronavirus epidemic have been working diligently to predict the virus’ next moves. The novel virus, which causes a respiratory illness dubbed COVID-19, has taken the lives of more than 2,100 people. It first emerged in December in the Chinese city of Wuhan, and has since infected more than 75,000 people, mostly in China. The numbers of new cases have begun to drop in China, but concern is growing over expanding outbreaks of COVID-19 in Singapore, Japan, South Korea, Hong Kong, and Thailand.
Alessandro Vespignani, a computer scientist at Northeastern University in Boston who has developed predictive models of the epidemic, spoke with IEEE Spectrum about computational efforts to thwart a global pandemic. His team has developed a tool, called EpiRisk, that estimates the probability that infected individuals will spread the disease to other areas of the world via travel. The tool also tracks the effectiveness of travel bans.
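The core intuition behind importation-risk tools like EpiRisk can be illustrated in a few lines: the relative chance that a destination receives an infected traveler scales with the number of cases at each origin multiplied by the passenger flux from that origin. This is a deliberately simplified sketch with made-up numbers; EpiRisk's actual model is far more detailed.

```python
def relative_importation_risk(cases_by_origin, daily_passengers):
    """Return each destination's share of total expected importations.

    cases_by_origin: origin city -> current case count
    daily_passengers: (origin, destination) -> daily passenger count
    """
    expected = {}
    for (origin, dest), passengers in daily_passengers.items():
        expected[dest] = (expected.get(dest, 0.0)
                          + cases_by_origin[origin] * passengers)
    total = sum(expected.values())
    return {dest: val / total for dest, val in expected.items()}

# Hypothetical case counts and passenger fluxes.
cases = {"Wuhan": 50000, "Singapore": 80}
flights = {
    ("Wuhan", "Bangkok"): 1200,
    ("Wuhan", "Tokyo"): 800,
    ("Singapore", "Tokyo"): 1500,
}

print(relative_importation_risk(cases, flights))
```

A travel ban can be modeled in this framework simply by zeroing out the corresponding passenger fluxes and recomputing the shares, which is roughly how such tools gauge a ban's effect on importation risk.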
Publicly available skin cancer detection apps, such as SkinVision, use AI-based analysis to determine if a new or changing mole is a source of concern or nothing to worry about. Yet according to a new analysis of the scientific evidence behind those apps, there’s a lot to worry about.
Getting screened for sleep apnea often means spending a night in a special clinic hooked up to sensors that measure your brain activity, eye movement, and blood oxygen levels. But for long-term, more convenient monitoring of sleep apnea, a team of researchers has developed a wearable device that tracks a user’s breathing. The device, described in a study published 20 January in the IEEE Journal of Biomedical and Health Informatics, uses a unique combination of bioimpedance (a measurement of electrical signals passing through the body) and machine learning algorithms.
Scientists have just announced the completion of a lofty DARPA challenge to integrate 10 human organs-on-chips in an automated system to study how drugs work in the body. The technology provides an alternative to testing drugs on humans or other animals.
Referred to as the “Interrogator” by its developers, the system links up to 10 different human organ chips—miniaturized microfluidic devices containing living cells that mimic the function of the organs they represent, such as intestine, heart, or lung—and maintains their function for up to three weeks. In two experiments, the system successfully predicted how a human body metabolizes specific drugs.
“This is a wonderful technology for the field of organ-on-a-chip,” says Yu Shrike Zhang, a bioengineer at Harvard University Medical School and Brigham and Women’s Hospital in Boston, who was not involved in the research. A platform that automates the culturing, linking, and maintenance of multiple human organ-on-chips, all while inside a sterile incubator, “represents a great technological advancement,” says Zhang, who last year wrote about the promises and challenges of organ-on-a-chip systems for IEEE Spectrum.
In 2010, Donald Ingber, founding director of Harvard’s Wyss Institute for Biologically Inspired Engineering, and his colleagues reported the first human organ-on-a-chip: a human lung. Each chip, roughly the size of a computer memory stick, is composed of a clear polymer containing hollow channels: one channel is lined with endothelial cells, the same cells that line human blood vessels, and another hosts organ-specific cells, such as liver or kidney cells.
After creating numerous individual organ chips, Ingber received a 2012 DARPA grant to try to integrate 10 organs-on-chips and use them to study how drugs are absorbed and metabolized in the body.
Eight years and three prototypes later, the team succeeded. The most recent version of the platform took four years to develop, says Richard Novak, a senior staff engineer at the Wyss Institute who built the machine. Of that time, a full year went to developing a user interface that biologists with no programming experience could easily operate.
“It enables a really complex experiment to be set up in two minutes,” says Novak.
The “Interrogator,” as Novak fondly calls it, consists of a robotic system that pipettes liquids—such as a blood substitute or a drug of choice—into the channels; a peristaltic pump to move those liquids through the microfluidic chips; custom software with an easy drag-and-drop interface; and a mobile microscope to monitor the chips and their connections without having to manually remove each chip for examination, as was done with older systems.
Best of all, says Novak, the whole machine fits into a standard laboratory incubator, which maintains living cells at constant temperature and light conditions.
Finally, the team was ready to interrogate the Interrogator. Could the system truly mimic the human body in a drug test? To find out, the scientists connected a human gut chip, liver chip, and kidney chip, then added nicotine to the gut chip to simulate oral administration of the drug, as when a person chews nicotine gum. The time it took the nicotine to reach each tissue, and the maximum nicotine concentrations in each tissue, closely matched levels previously measured in patients.
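The two quantities the team compared—how long the drug took to reach each tissue and its peak concentration there—are the classic Tmax and Cmax of pharmacokinetics. As a purely illustrative sketch (the dose, volume, and rate constants below are made-up numbers, not values from the study), a one-compartment model with first-order absorption produces exactly this kind of rise-and-fall concentration curve:

```python
import math

def concentration(t, dose, volume, ka, ke):
    """One-compartment model with first-order absorption (Bateman equation):
    drug is absorbed at rate ka and eliminated at rate ke."""
    return (dose * ka) / (volume * (ka - ke)) * (
        math.exp(-ke * t) - math.exp(-ka * t))

# Illustrative (made-up) parameters: 2 mg dose, 40 L distribution volume.
dose, volume = 2.0, 40.0   # mg, L
ka, ke = 1.4, 0.35         # absorption / elimination rate constants, 1/h

# Tmax has a closed form for this model; Cmax follows by substitution.
tmax = math.log(ka / ke) / (ka - ke)
cmax = concentration(tmax, dose, volume, ka, ke)
print(f"Tmax = {tmax:.2f} h, Cmax = {cmax:.4f} mg/L")
```

Fitting curves like this to measurements from the linked chips, and checking them against published patient data, is in essence what the comparison in the study amounts to.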
In a second test, the researchers linked liver, kidney, and bone marrow chips and administered cisplatin, a common chemotherapy drug. Once again, the drug was metabolized and cleared by the kidney and liver at levels that closely matched those measured in patients. Cells in the kidney chip even expressed the same biological markers of injury as a living kidney does during chemotherapy treatment.
“Compared against clinical studies, they matched up really nicely,” says Novak. The team is now using their linked organ chips to study the gut microbiome and influenza transmission. The Interrogator technology has been licensed to a Wyss Institute spin-off company, Boston-based Emulate, which Ingber founded.
The coffin that holds the mummified body of the ancient Egyptian Nesyamun, who lived around 1100 B.C., expresses the man’s desire for his voice to live on. Now, 3,000 years after his death, that wish has come true.
Using a 3D-printed replica of Nesyamun’s vocal tract and an electronic larynx, researchers in the UK have synthesized the dead man’s voice. The researchers described the feat today in the journal Scientific Reports.
“We’ve created sound for Nesyamun’s vocal tract exactly as it is positioned in his coffin,” says David Howard, head of the department of electrical engineering at Royal Holloway University of London, who coauthored the report. “We didn’t choose the sound. It is the sound the tract makes.”