Quantum computers could, in theory, prove more powerful than any supercomputer. And recent moves from computing giants such as Google and pharmaceutical titans such as Roche now suggest drug discovery might prove to be quantum computing’s first killer app.
Whereas classical computers switch transistors either on or off to symbolize data as ones or zeroes, quantum computers use quantum bits, or qubits, that, because of the surreal nature of quantum physics, can be in a state of superposition where they are both 1 and 0 simultaneously.
Superposition lets one qubit essentially perform two calculations at once, and if two qubits are linked through a quantum effect known as entanglement, they can help perform 2², or four, calculations simultaneously; three qubits, 2³, or eight, calculations; and so on. In theory, a quantum computer with 300 qubits could perform more calculations in an instant than there are atoms in the visible universe.
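That exponential scaling is easy to check with a few lines of code. This is just an illustration of the 2ⁿ arithmetic above; the 10⁸⁰ figure for atoms in the visible universe is the commonly cited order-of-magnitude estimate, assumed here rather than derived.

```python
# Each added qubit doubles the number of basis states a quantum
# register can hold in superposition: n qubits span 2**n states.
def state_space_size(n_qubits: int) -> int:
    return 2 ** n_qubits

# Commonly cited order-of-magnitude estimate (an assumption here):
ATOMS_IN_VISIBLE_UNIVERSE = 10 ** 80

print(state_space_size(1))   # 2
print(state_space_size(2))   # 4
print(state_space_size(3))   # 8

# A 300-qubit register spans more basis states than there are atoms:
print(state_space_size(300) > ATOMS_IN_VISIBLE_UNIVERSE)  # True
```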
There’s a new player in the effort to quickly and effectively screen populations for COVID-19: This week, a Princeton spin-off company launches a coronavirus-screening app for businesses that takes two minutes and uses data from commercial wearable devices.
In early clinical tests, the tool was 90% accurate in predicting whether a person was positive or negative for the virus—even if they had no noticeable symptoms. Current rapid diagnostic tools, such as temperature checks, are not effective at detecting and preventing the spread of COVID-19, according to federal authorities.
Artificial intelligence in healthcare is often a story of percentages. One 2017 study predicted AI could broadly improve patient outcomes by 30 to 40 percent. That context makes a manifold improvement in results particularly noteworthy.
In this case, according to one Israeli machine learning startup, AI has the potential to boost the success rate of in vitro fertilization (IVF) by as much as 3x compared to traditional methods. In other words, at least according to these results, couples struggling to conceive who use the right AI system could be multiple times more likely to get pregnant.
The Centers for Disease Control and Prevention defines assisted reproductive technology (ART) as the process of removing eggs from a woman’s ovaries, fertilizing them with sperm, and then implanting them back in the body.
The overall success rate of traditional ART is less than 30%, according to a recent study in the journal Acta Informatica Medica.
But, says Daniella Gilboa, CEO of Tel Aviv, Israel-based AiVF—which provides an automated framework for fertility and IVF treatment—help may be on the way. (However, she also cautions against simply multiplying 3x with the 30% traditional ART success rate quoted above. “Since pregnancy is very much dependent on age and other factors, simple multiplication is not the way to compare the two methods,” Gilboa says.)
In the U.S. alone, 7.3 million women are battling infertility, according to a 2020 report from the American Society for Reproductive Medicine, and 2.7 million IVF cycles are performed each year.
AiVF is using machine learning (ML) and computer vision technology to help embryologists discover which embryos have the most potential for success during intrauterine implantation. AiVF is working with eight facilities in clinical trials around the world, including in Israel, Europe and the United States. It plans to launch commercially in 2021.
Ron Maor, head of algorithm research at AiVF, says that AiVF has built its own “bespoke” layer on top of various off-the-shelf AI, ML and deep learning applications. These tools “handle the specific and often unusual aspects of embryo images, which are very different from most AI tasks,” Maor says.
AiVF’s ML technique involves creating time-lapse videos of developing embryos in an incubator. Over five days, the video shows the milestones of embryo development. Gilboa explains that previous methods yielded just one microscope image per day of the embryo compared with computer vision’s greater image-capturing success.
“By analyzing the video, you could dig out so many milestones and so many features the human eye cannot even detect,” Gilboa says. “Basically you train an algorithm on successful embryos, and you teach the algorithm what are successful embryos.”
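AiVF has not published its model, so the following is only a minimal, hypothetical sketch of the general approach Gilboa describes: reduce each embryo’s time-lapse video to a feature vector, train a binary classifier on embryos with known outcomes, and rank new embryos by predicted success. All data here is synthetic, and every name is illustrative rather than taken from AiVF’s system.

```python
import numpy as np

# Hypothetical setup: each embryo's time-lapse video is reduced to a small
# feature vector (e.g., timings of cell-division milestones), labeled 1 if
# implantation led to pregnancy and 0 otherwise. All data is synthetic.
rng = np.random.default_rng(0)
n_embryos, n_features = 200, 4
X = rng.normal(size=(n_embryos, n_features))
true_w = np.array([1.5, -2.0, 0.8, 0.0])           # synthetic "ground truth"
y = (X @ true_w + rng.normal(scale=0.5, size=n_embryos) > 0).astype(float)

# Train a logistic-regression scorer by plain gradient descent on log loss.
w = np.zeros(n_features)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))             # predicted success probability
    w -= 0.1 * (X.T @ (p - y)) / n_embryos         # averaged gradient step

# Rank embryos by predicted probability of success; the top-ranked embryo
# would be the candidate for implantation.
scores = 1.0 / (1.0 + np.exp(-(X @ w)))
ranking = np.argsort(scores)[::-1]
accuracy = float(((scores > 0.5) == (y > 0.5)).mean())
```

The design choice mirrored here is the one Gilboa hints at: the model never "diagnoses" an embryo outright; it only orders candidates by learned success probability, leaving the final selection to the embryologist.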
Typically only about one embryo out of 10 can be implanted in the uterus. Once a physician implants the embryo, the embryologist will know within 14 days whether the patient is pregnant, Gilboa says.
“As an embryologist I look at embryos, and I understand what happens to them,” Gilboa says. “If I learn on maybe thousands of embryos, the algorithm would learn on millions of embryos.”
As AiVF’s initial results suggest, computer vision and ML could potentially drive IVF’s prices down, making it less expensive and less burdensome for a woman to become pregnant.
“Once you have a digital embryologist, then you could set up clinics much easier,” Gilboa says. “Or each clinic could be much more scalable. So many more people could enjoy IVF and achieve their dream of having a child.”
Steven Cherry Hi, this is Steven Cherry for Radio Spectrum.
The saddest fact about the coronavirus pandemic is certainly the deaths it has already caused and the many more deaths to come before the world gets the virus under at least as much control as, say, chicken pox or an ordinary flu.
The second-saddest fact about the pandemic is the economic and educational havoc it has wrought.
Perhaps the third-saddest fact is the unfortunate lack of agreement about the best strategies for living with the virus, which, at least in the U.S., is responsible for many of those deaths and, arguably, much of the havoc as well. It has roiled families as well as the presidential election, by politicizing the wearing of masks, the limits on gatherings, and the openings and closings of restaurants and schools.
But yet another sad fact is that, as was said thousands of years ago, “there is nothing new under the sun,” and this too is nothing new; there is a shocking and unfortunate lack of widespread agreement about the best answers when it comes to many medical questions, even among doctors, because there is a shocking and unfortunate lack of evidence—and even respect for evidence—in the medical arena. That’s the contention of the authors of a rather prescient 2017 book, Unhealthy Politics: The Battle Over Evidence-Based Medicine, subtitled How Partisanship, Polarization, and Medical Authority Stand in the Way of Evidence-Based Medicine.
And so I’ve asked one of those authors to lay out the case that medicine isn’t nearly as evidence-based as we think it is and as it should be, and tell us what role that has played in the severity of the pandemic—and what we might do about it, both in the near pandemic future, and to improve American medicine overall.
Eric Patashnik is a professor of public policy and political science at Brown University and the director of its Master of Public Affairs Program. He joins us via Zoom.
Steven Cherry Eric, welcome to the podcast.
Eric Patashnik Thank you so much for having me. It’s a pleasure to be here.
Steven Cherry Eric, the book starts with a rather surprising fact about how little we know about an operation that’s performed millions of times in the U.S., which you call a sham procedure. What is the sham procedure and how is it possible that a procedure that’s known to be a sham can be performed millions of times?
Eric Patashnik Yeah, so the original motivation for my book, which is coauthored with Alan Gerber at Yale and Conor Dowling at the University of Mississippi, was a remarkable study published in the New England Journal of Medicine in 2002. And what the study found was that a very common surgical procedure, arthroscopy for osteoarthritis of the knee—which is an operation that many older folks who have arthritis get after they try more conservative treatments such as drugs or physical therapy—what it found was that this widely used surgical procedure worked no better at relieving joint pain or improving function than a sham operation—in other words, a placebo intervention in which the surgeon merely pretended to operate.
And this was a groundbreaking study. It received a tremendous amount of media coverage. And the three of us are social scientists, not physicians, but we were startled by the study. And we began asking very basic questions. How is it possible that a widely used surgery wouldn’t work any better than a fake operation? And so what we began to do was investigate the medical literature. And what we found was that this surgery had diffused into widespread practice in advance of any rigorous evidence that it really worked. Essentially, a number of surgeons began performing this procedure and found that their patients said they felt better, but there really wasn’t rigorous data.
And that’s not so surprising because it turns out medical procedures, such as the kinds of things that your internist might do for you when you’re ill, or even surgeries, often become widely used without rigorous trials. We have a Food and Drug Administration that looks at the efficacy of pharmaceuticals, but there’s no FDA for surgery, for example. And so this turns out to be actually a very, very wide problem.
And what we learned as we started digging into the case is that this was really illustrative of a much broader problem in health care. Indeed, some experts believe that less than half of the medical care that Americans receive is based on adequate scientific evidence about what works best. Many treatment options have never been compared head-to-head with alternatives. So, for example, let’s say you have a bad back and you know there are different ways to treat it. You could try a drug, you could try physical therapy, you could get spinal fusion surgery. What patients would like to know is which of these treatments is really going to be best for me. And often the answer is we just don’t know. The studies haven’t been done. We just lack that information.
Steven Cherry You say that people in one part of the country get four times as many hip replacements as those in another and not for any sound medical reason. So why are there these regional variations in treatments?
Eric Patashnik Yeah, so basically there’s no centralized process in the United States to investigate alternative treatments for common medical conditions and determine what works best. And scholars at the Dartmouth Institute have found that there are remarkable variations in practice. In other words, patients with the same medical condition might receive very, very different treatment options, and these variances in how patients with the same condition are treated are not driven by, for example, regional differences in disease or patient preferences or even clinical evidence.
It might just be, for example, that physicians in Cleveland use a particular drug, whereas those in Boston use a different one. And we don’t have a system to figure out expeditiously which one is best for patients. And as a result, it could be the case that some patients in one part of the country are receiving inferior treatments, and we might have no way of discovering that in short order. That’s, I think, a major problem that a lot of patients don’t realize. Even if you see a doctor that you trust, even if you have a very good relationship with your physician, the treatment that you may be receiving might be quite different from that of other patients in other parts of the country who have the very same condition. There was, I think, a recognition that this was a big problem. As part of the Affordable Care Act, an agency was created, the Patient-Centered Outcomes Research Institute [PCORI], whose mission is to fund and disseminate research findings on the comparative effectiveness of different treatments and diagnostic tests and other options. And I think that’s a very good thing. But we’ve only made a limited amount of progress in generating the information we need to answer those basic questions.
Steven Cherry So to what extent is the problem that true evidence, especially via large randomized, double-blind, controlled trials, is hard to come by? It’s expensive to do. It’s time-consuming. It’s rarely done. And if and when it’s done, it’s hardly ever replicated.
Eric Patashnik It’s true. There are major barriers, and these studies can be expensive. But the information that comes from these comparative effectiveness studies is really a public good. It benefits all patients and payers and all stakeholders in the political system. So that’s one of the justifications for having a government role in subsidizing the production of these studies, because all of us would be better off if we had reliable information about what treatments work best, for whom, and under what conditions. Individual physicians really don’t have a strong incentive to fund these studies, or the ability to fund the studies themselves. In fact, in the knee surgery case that I mentioned earlier, if it were not for the entrepreneurial initiative of a few physicians out of Texas, we might still not have those groundbreaking studies. It was really just because a few leaders decided that they wanted to answer the question of whether this knee surgery worked. There was no system in place to ensure that those research holes were filled.
Steven Cherry Under our system, trials and other tests of efficacy are often in the hands of the large drug and medical device manufacturers. To the layperson, that sounds like putting the wolf in charge of determining the best way to build the chicken coop.
Eric Patashnik We do have a Food and Drug Administration that does a lot of tremendous work, and it’s very valuable. And that’s also a benefit for the pharmaceutical industry. They, too, have an incentive to make sure that products sold to the public are seen as effective and trustworthy. So the industry itself does want a certain level of regulatory scrutiny. But the current FDA process, I think, while extremely valuable, has some limits. It’s often the case that drug companies are not required to study whether a new drug works better than, say, a cheaper alternative way of treating the same disease. If there are medical and nonmedical options for treating a condition (say, a surgery and a pharmaceutical agent), we have no process in place to ensure that those two options are forced to compete head-to-head. And so the FDA process, I think, is extremely valuable, and it does ensure that drugs work better than a placebo. But we often don’t have what we really want to know, what patients want to know, which is: which drug is best for me, and should I be taking any drug at all?
We’ve also seen over time, I think, in part under industry pressure, a bit of a lowering of the evidentiary bar in how we evaluate drugs. So oftentimes what the FDA is looking at is not answering the questions that patients most want answered: Will this drug help me live longer? Will it help improve my quality of life? Oftentimes, studies are looking at surrogate endpoints: for example, if I take this drug, will it lower my blood pressure or improve my cholesterol? And those surrogate outcomes, we hope, are correlated with the health outcomes that we really care about. But oftentimes the causal relationship between them is much weaker than we would like it to be. And so sometimes we even have studies that are well done, but the questions they’re answering are really not the ones that patients care most about.
Steven Cherry You would think that insurance companies would refuse to pay for a sham procedure, or for a procedure that’s done four times as often in one region as in another.
Eric Patashnik The insurance companies are in a difficult position because, of course, they would like to figure out what works best and conserve resources and not allocate them to low-value medical services. But in the United States, it’s very difficult for private insurers not to cover a medical treatment, for example, that is covered by the Medicare program. And the Medicare program really only looks at whether interventions are reasonable as judged essentially by physicians. The Medicare agency doesn’t really have the authority to examine whether a particular intervention is cost-effective or good value for money. And it’s also the case that individual insurers don’t have the resources to pay for the studies themselves, because, after all, if one insurer would spend hundreds of millions of dollars to answer the question, does this treatment A work better than treatment B? Well, once that study is done, the information would be available to other insurance companies, to their competitors. And so they don’t really have—the first insurance company doesn’t really have—an incentive to make that initial investment.
So, yes, payers would like this information, but individual insurance companies really don’t have a strong economic incentive to fund the studies. And it’s also difficult for them to deny coverage to treatments if the Medicare agency funds it.
The entire Medicare program, which was established in 1965, was, of course, very controversial at the start. Many physicians and the AMA [American Medical Association] originally opposed Medicare. They were very concerned that government would be intruding on their clinical autonomy and second-guessing them. In exchange for their buy-in, the government essentially said, look, we’re going to pay for health care for senior citizens, but we will defer to your professional judgment about the best way to treat patients; that’s not really government’s role. And, of course, Medicare has changed dramatically over the last decades. And I don’t want to say that Medicare doesn’t scrutinize treatments at all or make coverage decisions. It does. But that basic model of essentially deferring decisions about coverage to physicians has basically remained the same.
And so we rely on physicians to exercise their best judgment to determine appropriate care. In many cases, that works very well. But if we have, for example, a breakdown of professional authority, with physicians in a particular practice area widely using a surgical intervention that doesn’t work, we don’t really have a good way of fixing that quickly. And what we saw in that knee surgery case that I mentioned earlier was that after that landmark New England Journal of Medicine study came out—and this is what was so disconcerting—a lot of the orthopedic surgeons who perform that surgery did not embrace it at all. They reacted extremely defensively. They attacked the study authors. They made all sorts of arguments about what was wrong with the study.
And, of course, any study could have some flaws, and certainly any study should be replicated. And we might not want to change practices based on a single study. But the overall behavior of the orthopedic community was one of essentially trying to push away evidence, as opposed to embracing it as a way of learning what is best for their patients. In our research, we found that, unfortunately, that pattern has repeated itself over and over again.
There was another recent study just a couple of years ago, called Orbita, which looked at a treatment given to millions of heart patients with clogged arteries: the insertion of a stent to reduce their chest pain. And this is another example of a very widely used medical procedure. It’s expensive. It carries risks. It’s become the standard of care. And yet it diffused widely into practice on the basis of really little hard data. The Orbita study did for stents what that New England Journal study did for knee surgery: it was a sham-controlled, or placebo-controlled, trial in which some patients were randomized to receive a stent and others received no intervention at all, beyond everyone in the trial receiving basic pharmaceutical drugs for cardiac disease. And what it found was that the patients who received a stent experienced no more improvement in chest pain or exercise tolerance—that was the other endpoint of the study—compared to patients who received a placebo procedure. And interventional cardiologists really saw that study as an attack on their specialty. And they lashed out at the trial and at the investigators. It was another similar reaction of self-protective and defensive behavior from physicians who were not embracing the best available medical evidence.
Steven Cherry The book argues that the widespread disregard for scientific evidence or the lack of it is systematic within the medical community but it’s also political—that there’s little incentive for politicians to step in and demand medicine be more evidence-based. Is this physician pushback one of the reasons?
Eric Patashnik Yeah, I mean, I think what we’re talking about then is, essentially, as a society, we have a social contract. We delegate our authority to physicians and we give them the privilege of licensure and they can control who is a physician. They earn high salaries. They quite rightly are treated with great respect. And I should say, we think physicians are amazing people and we see our work as trying to help physicians for sure. But the way that social contract works is we rely on the medical community to self-regulate, to figure out if physicians are not practicing appropriate care, and to learn what would be best for patients.
And if that social contract is not working well, what we find in the book is that it’s extremely difficult for government to do anything about it. Well, why is that? The main reason is that when it comes to medical care, the public really trusts physicians. We did a bunch of public opinion surveys to try to understand how the public thinks about health care. And the bottom line of the scores of surveys we did was that, when it comes to medicine, really the only actors the American public trusts are physicians. They don’t trust pharmaceutical firms. They don’t trust insurance companies, and they certainly don’t trust government.
Steven Cherry I mean, part of that trust is grounded in the fact that medicine is pretty arcane, complex, technical … People go to school for years and years and years and train in various ways. I mean, if my life depended on my having an opinion on which version of string theory was more likely to be true, I’d study up on string theory, but I wouldn’t get very far. And I think neither would most people.
Eric Patashnik Absolutely. It’s completely understandable and rational for ordinary Americans to look to doctors for advice about what works best. I certainly do in my own life and would steer people away from just Googling medical conditions at random. And you can find a lot of misleading information on the Web. But what we really do want at the end of the day is the best available evidence about what treatments work best. And then that is a sort of systematic evidence base, and then ideally physicians would be using that kind of data and then taking into account the specific medical conditions and backgrounds and preferences of individual patients. So we certainly think there’s a major role for clinical judgment in medicine. Medicine is always going to be both an art and a science. What we argue in the book is that the ratio of art to science has gotten out of whack, and we’re relying too little on rigorous evidence and too much on idiosyncratic decision making.
Steven Cherry So I take it your solution would involve an FDA for surgery and other medical procedures.
Eric Patashnik Well we certainly think that there should be much more rigorous scrutiny of procedures.
That’s really a big part of medicine. We would like to see PCORI, the Patient Centered Outcomes Research Institute, begin taking on some of those harder questions in medicine. And then we would also like to see, I think, the Medicare agency begin looking more rigorously at the added value of particular treatments.
And if there’s a treatment that is extraordinarily expensive and there’s no evidence that it works better than a cheaper alternative, we don’t necessarily think that patients shouldn’t get it. They should certainly be free to choose it. But perhaps, for example, the Medicare agency should only pay in reimbursement up to the amount of the cheaper, equally effective treatment. So we really need, I think, more tailored kinds of coverage and reimbursement policies, as well as more rigorous studies. It’s been difficult, however, to move the needle on these kinds of policy solutions. The agency that I mentioned, PCORI, was very, very cautious in its first decade. It was just reauthorized, which I think is terrific, but it was very controversial. When it was first created back in 2010, it was seen as a rationing agency. It got caught up in charges of death panels, along with a lot of controversy, as part of the ACA.
Even in the current crisis, we’ve lacked the kind of rigorous information we need to figure out, for example, which COVID-19 therapeutics would be most effective. There’s been a paucity of randomized controlled trials on those kinds of questions. Of course, we’ve been in the middle of a pandemic, and it’s quite understandable that physicians have been trying to do the best they can with available therapies. But other countries, like the UK, have done better at getting rigorous studies going during the pandemic so we can get quick answers to these life-and-death questions.
Steven Cherry So fundamentally, these are data questions and data scientists are reinventing that sort of thing. Do you think that with electronic medical records and deep learning studies of procedures and outcomes and so forth, that even without a new federal agency, we could get better data just from the data that we have and don’t use properly?
Eric Patashnik I think that’s a fantastic question. As I mentioned, there was that landmark Orbita study about the efficacy of heart stents. My colleague Alan Gerber, a coauthor of the book, and I held a conference a couple of years ago at Yale where we brought the lead author of that study together with other leading cardiologists and social scientists to look at what had happened in that case and why we so often struggle with questions of data. And one of the things that I think was most exciting about the conversation is that there really is an opportunity, I believe, to connect medical researchers and data scientists and other kinds of scholars to figure out how we can tease out rigorous causal inferences about what treatments work best from observational data.
Now, there’s a long history in medicine of reaching conclusions on the basis of observational data and non-randomized controlled trials, where it turned out that some of our conclusions were wrong. For example, beliefs that hormone replacement therapy would reduce heart disease, or that aggressive treatments for breast cancer would be better than standard treatments of breast cancer. And it turned out that in both of those cases, that was incorrect after randomized controlled trials were done.
So for very good reason, I think some of the leading evidence-based medicine … physicians have been skeptical about learning through methods other than RCTs [randomized controlled trials]. But in recent years, I think there really have been some breakthroughs in data science and other kinds of techniques that do allow us to learn what kinds of treatments work best. I’m hopeful that that kind of partnership in the coming decades will accelerate our ability to learn. We’re certainly going to need more RCTs; we’re doing too few of them. But I think there are other methods of learning.
The COVID pandemic is going to provide a remarkable opportunity to learn about the efficacy of a wide range of treatments, because we had a kind of national experiment, particularly during the early part of the pandemic, in March and April. Basically, Americans and people around the world just stopped going to the doctor for all sorts of conditions. And of course, some of that was problematic. There were people who had serious medical conditions but were fearful of going to the emergency room. But there were also people who might have had knee pain and otherwise would have gotten the orthopedic surgery. There were people who had a sore back and put off a procedure, or didn’t get a colonoscopy, or didn’t get some kind of screening for another medical condition. And what we don’t know is what the health outcomes were of this dramatic change in people’s consumption of medical services during this period. We have a remarkable opportunity now to figure out which of those medical interventions really were necessary, such that getting them in significantly fewer quantities was actually bad for people’s health, and which of those services turned out, upon reflection, not to be so necessary, so that even though people skipped them, they had no negative health outcomes at all, or perhaps even escaped a cascade of further unnecessary treatments or overutilization. Certainly, nobody wishes that this had happened. But now that it did happen, and we had this once-in-a-century dramatic shift in people’s consumption of health care services, we really need to do our best to learn from this experience and figure out what the consequences were.
Steven Cherry Well, Eric, I started this podcast with a Bible verse, and I’m going to switch from the early Christians to the ancient Greeks. You might feel like you’re pushing a rock up a hill like Sisyphus, but like the similarly punished Prometheus, it was in the cause of empathy and imparting knowledge. So I thank you for co-authoring the book and thanks for joining us today.
Eric Patashnik Thank you so much. It’s a real pleasure to be here.
Steven Cherry We’ve been speaking with Brown University professor Eric Patashnik about the thesis of his co-authored book, Unhealthy Politics: The Battle Over Evidence-Based Medicine, subtitled How Partisanship, Polarization, and Medical Authority Stand in the Way of Evidence-Based Medicine.
Radio Spectrum is brought to you by IEEE Spectrum, the member magazine of the Institute of Electrical and Electronics Engineers, a professional organization dedicated to advancing technology for the benefit of humanity.
This interview was recorded December 16, 2020 via Zoom using Adobe Audition. Our theme music is by Chad Crouch.
You can subscribe to Radio Spectrum on the Spectrum website, spectrum.ieee.org, or on Spotify, Apple, Google—wherever you get your podcasts. We welcome your feedback on the web or in social media.
As the nasal swab probed higher into my nose, as if straining to pierce my brain, time slowed like molasses. Time stretched even longer as I waited three days for the COVID-19 test results.
That was in early August. Now it’s late December, and I’m currently on day four waiting for COVID-19 results for my three-year-old (who enjoyed the nasal swab even less than I did).
Both of us received a PCR test, the gold standard for COVID-19 testing, with a less-than-golden turnaround time of days to weeks. That’s far too slow to use testing as a method to contain the virus as we each wait our turn for a COVID-19 vaccine. Antigen tests, like the one approved by the FDA this week for at-home use, are fast but have sensitivity issues, so health authorities continue to emphasize the need for rapid, reliable tests.
Now, we may be on the cusp of their arrival.
A new wave of rapid molecular tests—which promise the sensitivity of a PCR test with the speed of an antigen test—has recently been validated and is moving from prototype toward FDA approval. In the last two weeks, studies published in the journals Science Advances and Cell have detailed ultrasensitive molecular COVID-19 tests based on gene-editing CRISPR technology.
The world desperately needs a portable, reliable COVID-19 test that can deliver immediate results. Now scientists at Stanford University say they hope that a new diagnostic assay will fill that void.
In a paper published last week in the journal PNAS, the scientists describe how they used electric fields and the genetic engineering technique CRISPR to build a microfluidic lab-on-a-chip that can detect the novel coronavirus. The test delivers results in about half an hour—a record for CRISPR-based assays, according to the authors—and uses a lower volume of scarce reagents, compared with other CRISPR-based tests in development.
“We’re showing that we have all the elements required to achieve a miniaturized and automated device with no moving parts,” says Juan Santiago, vice chair of mechanical engineering at Stanford, who led the research. He adds, however, that more work lies ahead before their test could be ready for the public.
COVID-19 has galvanized tech communities. The tens of billions we’re spending on vaccines, antivirals, tests, robots, and devices are transforming how we’ll respond to future outbreaks of infectious disease.
Using feedback, a standard tool in control engineering, we can manage our response to the pandemic for maximum survival while containing the damage to our economies. By Greg Steward, Klaske van Heusden, and Guy A. Dumont
Diagnostic testing for COVID-19 infections is one of the linchpins of the global effort to combat the deadly pandemic. But the strategy normally used for that has a few downsides. For one, the nasopharyngeal swabbing required demands the services of a health care worker, who is then put at risk of contracting the disease. Also, the sample-taking procedure, which involves sticking a very long flexible swab through a nostril and into the nasopharynx at the back of the nose and throat, is so unpleasant that some people resist being tested. And because the samples must normally be sent to a distant lab for processing, it often takes hours to days for results to become available.
What the world desperately needs is a way to diagnose infections with the virus more quickly and easily. Breakthroughs here would allow for truly widespread testing and the identification of people with asymptomatic infections, who often inadvertently spread the disease.
Researchers across the world have been racing to develop better COVID-19 tests, and biochemists and molecular biologists have made significant progress in just a handful of months. But engineers, too, have been working on technologies that might provide what everyone so dearly wants: inexpensive tests that don’t require swabbing and that can be rapidly performed by anyone, anywhere—if not for this pandemic, then perhaps in time for the next one.
In the usual method for diagnosing COVID-19 infections, the nasopharyngeal swab sample is sent to a lab. There, technicians use a procedure called reverse-transcription polymerase chain reaction (RT-PCR) to check for the presence of the SARS-CoV-2 virus, which causes COVID-19. The processing requires first converting a characteristic segment of viral RNA into DNA and then “amplifying” that DNA through a sequence of biochemical tricks and heating and cooling cycles. The presence or absence of large quantities of amplified DNA reveals whether the virus was present in the original sample.
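The power of PCR comes from that amplification step: in the idealized case, each thermal cycle doubles the number of DNA copies, so even a handful of viral RNA molecules becomes detectable within a few dozen cycles. A back-of-the-envelope sketch (the per-cycle efficiency figure is illustrative, not from the article):

```python
# Idealized PCR amplification: each thermal cycle roughly doubles the
# number of DNA copies. Real reactions run below 100% efficiency, so an
# optional efficiency factor is included here for illustration.
def copies_after_cycles(initial_copies: int, cycles: int, efficiency: float = 1.0) -> float:
    """Copies after `cycles` rounds, with per-cycle efficiency in [0, 1]."""
    return initial_copies * (1.0 + efficiency) ** cycles

# Ten starting molecules after 30 cycles of perfect doubling:
print(copies_after_cycles(10, 30))
# The same sample at a (hypothetical) 90% per-cycle efficiency:
print(copies_after_cycles(10, 30, efficiency=0.9))
```

This exponential growth is why RT-PCR is so sensitive, and also why it needs the thermal-cycling hardware that rapid tests try to avoid.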
But there are faster ways to test for infections. Perhaps the best-known example of a rapid test for this coronavirus, at least in the United States, is one being used at the White House: a COVID-19 testing system from Abbott Laboratories, which received emergency-use authorization from the U.S. Food and Drug Administration in March. It uses a toaster-size instrument called ID NOW, which Abbott introduced in 2014 to detect influenza, strep A, and respiratory syncytial virus. The device employs a biochemical strategy to amplify viral RNA without the need for heating and cooling, which allows it to produce results from a swab sample in 13 minutes or less; that’s much faster than RT-PCR–based tests, which take an hour or more to run through the necessary series of thermal cycles.
The speedy results you can get with Abbott’s ID NOW COVID-19 test make it immensely attractive, but there has been considerable controversy about the reliability of those results. A study posted to a preprint server in May found that the ID NOW test missed a third of positive samples, although Abbott has stated that it misses only a few percent of positives in real-world situations.
Other companies have developed similarly compact systems that can detect the presence of genetic material specific to the SARS-CoV-2 virus on-site. In August, the U.K. government announced that it would be rolling out equipment devised by two such companies, DnaNudge and Oxford Nanopore Technologies, to labs, hospitals, and nursing homes. Both machines can be operated by someone who’s not a trained health professional, and both companies’ tests take under 90 minutes to produce results.
In another key development, the FDA has granted emergency-use authorization for several testing protocols that do away with the nasopharyngeal swab and rely instead on saliva. These include one called SalivaDirect from the Yale School of Public Health and another developed at Rutgers University that allows people to collect the sample at home. Such advances make it more likely that spit sampling will one day become the norm for tests based on RT-PCR.
It may prove impossible, though, to create a system that amplifies genetic material of the virus rapidly, with inexpensive portable equipment, and with sufficient sensitivity to detect the virus in saliva. Fortunately, there is a fundamentally different strategy that might check all these boxes: antigen-based tests.
Antigen-based tests don’t look for genetic material specific to the virus. Instead, they use antibodies—large molecules that have just the right shape to bind with protein molecules specific to the virus, such as the now-famous spike protein that decorates the surface of SARS-CoV-2. The molecular targets of such antibodies are called antigens.
When people are infected, one part of their immune response is to manufacture antibodies to the virus. These molecules can also be mass produced and incorporated into test kits to be mixed with a sample of mucus or saliva. The tricky part is determining whether the antibodies in the test solution have indeed attached to the virus, which is what signals that the person being tested has an active infection.
Biochemists have experience with this challenge and have developed various antigen-based point-of-care tests for other viral diseases. At the time of this writing, the FDA has granted emergency-use authorization to four companies for such antigen-based tests for the COVID-19 virus. One of these, developed by Becton, Dickenson and Co., takes advantage of the company’s already-deployed Veritor handheld instruments and provides results in just 15 minutes.
But the world could still benefit from the ability to test people for this virus using simpler, cheaper equipment and by testing saliva, instead of requiring a swab. “The best situation would be like a pregnancy test,” says Angela Rasmussen, a virologist at Columbia University. “Spit on a stick or into a collection tube and have a clear result 5 minutes later.”
On 26 August, Abbott Laboratories obtained emergency-use authorization from the FDA for a test that is approaching that ideal, which it calls the BinaxNOW Ag Card. Unlike this company’s earlier rapid-testing system, this one requires no desktop analyzer. The antigen-based test is performed on a card that’s about the size of a credit card: A nasal swab is inserted into the card, a reagent is added, and 15 minutes later the technician reads the result. The only relevant device here is a smartphone.
That’s because Abbott has also developed a companion phone app called Navica. People who install the app can present their phones to the technician, who then enters the results of their tests through the Navica app on his or her own phone. Those who test negative will soon have time-stamped QR codes on their phones that attest to that result, enabling schools and workplaces to check people’s statuses using the app’s verification feature.
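Abbott hasn’t published Navica’s internals, but the general pattern—a time-stamped result that a verifier can check without trusting the bearer—is well understood. As a purely hypothetical sketch (the key, field names, and 72-hour validity window are all assumptions, not Navica’s actual scheme), a signed token might look like this:

```python
import base64
import hashlib
import hmac
import json
import time

# Hypothetical shared key; a real system would manage keys far more carefully.
SECRET = b"demo-key-not-navica"

def issue_token(person_id: str, result: str, issued_at: float) -> str:
    """Sign a time-stamped test result so a verifier can check it later."""
    payload = json.dumps({"id": person_id, "result": result, "ts": issued_at},
                         sort_keys=True)
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(f"{payload}|{sig}".encode()).decode()

def verify_token(token: str, max_age_s: float = 72 * 3600) -> bool:
    """Check the signature and that the result hasn't expired."""
    payload, sig = base64.urlsafe_b64decode(token).decode().rsplit("|", 1)
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    fresh = time.time() - json.loads(payload)["ts"] < max_age_s
    return hmac.compare_digest(sig, expected) and fresh

tok = issue_token("patient-42", "negative", time.time())
print(verify_token(tok))  # True
```

The QR code on the phone would simply encode such a token, letting a school or workplace verify it offline.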
The availability of Abbott’s BinaxNOW Ag Card, which the company expects to ship soon in large quantities (tens of millions each month) for a cost of only US $5 per test, portends a new phase of the pandemic, one in which it becomes much more straightforward to identify people infected with the COVID-19 virus. Going forward, we still require a test that is equally fast and inexpensive but that could work with saliva, so that anyone, anywhere could determine whether they’ve been infected. Here’s where some new technologies might be able to make a special contribution in the quest for such a test.
At the University of Minnesota, materials-science engineer Jian-Ping Wang had been working on a novel technique called magnetic-particle spectroscopy for detecting influenza, but at the end of March he made a big pivot, modifying his device to aid in the battle against COVID-19. Wang’s approach mixes the sample with magnetic nanoparticles coated with antibodies that target the SARS-CoV-2 virus. The challenge, as with any antigen-based technique, is figuring out whether the virus has indeed attached to them.
The clever scheme that Wang has worked out relies on the behavior of his magnetic nanoparticles when they’re subjected to an external magnetic field. Electrical engineers are well aware that the magnetization of iron-containing materials can change direction when a magnetic field is applied to them. For the 30-nanometer-diameter magnetic particles that Wang works with, that change in magnetization comes about by the physical rotation of the particle in solution. These particles are small and nimble enough to act like tiny compass needles, rotating into alignment with the applied magnetic field. If you subject them to a slowly oscillating magnetic field—say, one created inside a coil of wire—their orientations will flip back and forth in sync with the changes in the polarity of the applied field.
The surface of each magnetic particle holds more than one antibody, and the surface of each viral particle contains more than one antigenic protein. So when many of these two kinds of particles are present together in solution, they tend to link up into extended networks, with the virus acting as a sort of glue holding clusters of magnetic particles together.
Those clumps, being much larger than single magnetic particles, cannot rotate as quickly in solution, so they behave differently in an oscillating magnetic field. As the frequency of the applied field grows into the kilohertz range, the rotation of the particle clusters increasingly lags behind the applied field. You’d see something analogous if you placed a compass inside a coil and applied a current that oscillated in polarity faster than the needle could physically respond to the rapid changes in the magnetic field.
It’s difficult to measure directly how much the rotating magnetic particles lag the applied field. But for complicated reasons, when the lag is small, the oscillatory motions of the magnetic particles include frequencies that were not present in the applied field. Conversely, when the lag is substantial, those additional frequency components are much diminished. So the frequency content provides a simple measure of whether or not clusters formed. If they did form, it indicates that the target virus was present.
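The appearance of those extra frequencies is a generic consequence of a nonlinear, saturating magnetization response. A minimal numerical sketch (the saturation curve and the weak linear response for clustered particles are illustrative stand-ins, not Wang’s measured data) shows a strong third harmonic for free particles and essentially none for bound ones:

```python
import numpy as np

# Free nanoparticles track the drive field, and their saturating
# (roughly tanh-shaped) magnetization injects odd harmonics of the
# drive frequency. Clustered, slow-rotating particles respond weakly
# and linearly, so those harmonics largely vanish.
fs, f_drive = 100_000, 1_000           # sample rate and drive frequency (Hz)
t = np.arange(fs) / fs                 # one second of samples
drive = np.sin(2 * np.pi * f_drive * t)

m_free = np.tanh(3 * drive)            # free particles: saturating response
m_bound = 0.3 * drive                  # clustered particles: weak, linear response

def harmonic_amplitude(signal, harmonic):
    """Amplitude at an integer multiple of the drive frequency."""
    spectrum = np.abs(np.fft.rfft(signal)) / len(signal)
    return spectrum[harmonic * f_drive]  # bin index = frequency for a 1 s record

# Third harmonic: large for free particles, negligible for bound ones.
print(harmonic_amplitude(m_free, 3), harmonic_amplitude(m_bound, 3))
```

Detecting the virus thus reduces to watching that harmonic content shrink as particles clump together.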
The detection of those changes in the frequency content is not difficult, requiring only a suitable pickup coil, an amplifier, and a computer to do some straightforward digital signal processing. Wang is using a benchtop system connected to a laptop now, but he envisions all this being done by a handheld device. After the test sample mixes with the magnetic nanoparticles for about 10 minutes, it would be placed in that device, which in a few seconds would provide a readout of whether the virus was present. “We’re screening different technologies to figure out the best way to create a low-cost and convenient [device] for the customer to use,” says Wang.
We can certainly hope that Wang will be able to lower the costs enough for his device to become a consumer product and that it will have sufficient sensitivity to detect the COVID-19 virus in saliva. But his isn’t the only possible solution on the table.
At the University of Pennsylvania’s Penn Center for Research on Coronavirus and Other Emerging Pathogens, Ping Wang (no relation to Jian-Ping Wang at the University of Minnesota) is working on yet another new way to detect tiny quantities of the SARS-CoV-2 virus. Wang calls the technique her group has pioneered a “microbubbling digital assay.” It also uses magnetic particles to which SARS-CoV-2–specific antibodies are attached. But these magnetic particles are much larger than those used in the other Wang’s magnetic-particle spectroscopy: Rather than being tens of nanometers in diameter, these are a few micrometers across. Her approach also employs nanometer-size particles of platinum that are attached to the same antibodies.
When both of these kinds of engineered particles are added to a solution containing SARS-CoV-2, the virus hooks up to both, sometimes linking a platinum nanoparticle on one side to a much larger magnetic particle on the other. The challenge is to figure out whether such a virus sandwich has formed. The tool Wang devised to determine that is a novel microchip. Being made of various polymers, it’s very different from the silicon chips found in electronic gadgets. But like electronic chips, it’s fabricated using lithography and physical vapor deposition.
Wang’s chip is about 3 millimeters on a side. It contains an array of tiny square depressions, each 14 micrometers wide and 7 micrometers deep, which is just large enough to hold one of those virus sandwiches. A magnet is placed beneath the chip, and hydrogen peroxide is added to the sample solution on top. The magnet draws the magnetic particles downward, and some land in the tiny depressions.
This chip can reveal the presence of a virus because platinum catalyzes the decomposition of hydrogen peroxide into water and oxygen. Any virus sandwich caught in a well will generate oxygen there. Virus sandwiches that land between wells don’t form bubbles for reasons that are currently unclear. But over time, the oxygen created inside a well forms a bubble, which is big enough to be seen with a smartphone camera using just a small amount of magnification. If there is no virus present in the sample, the platinum nanoparticles won’t be linked up with the magnetic particles, and no bubbles will be generated inside the wells.
To automate the task of bubble identification and counting during her work on this technique last year, Wang trained a neural network using hundreds of images. That allowed her to develop a smartphone app that is able to tally the number of bubbles on a chip in seconds.
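Wang’s app relies on a trained neural network; a deliberately simpler classical approach conveys what the counting task involves. This sketch (synthetic image, arbitrary threshold—not Wang’s method) thresholds a grayscale chip photo and counts bright connected blobs:

```python
import numpy as np
from scipy import ndimage

# Classical stand-in for bubble counting: threshold the image, then
# count connected bright regions as candidate bubbles.
def count_bubbles(image: np.ndarray, threshold: float = 0.5) -> int:
    binary = image > threshold
    _, n_blobs = ndimage.label(binary)   # connected-component labeling
    return n_blobs

# Synthetic 20x20 "chip photo" with three well-separated bright spots.
img = np.zeros((20, 20))
for y, x in [(3, 3), (10, 12), (16, 5)]:
    img[y - 1:y + 2, x - 1:x + 2] = 1.0
print(count_bubbles(img))  # 3
```

A neural network earns its keep on real photos, where uneven lighting, debris, and bubbles of varying size defeat a fixed threshold like this one.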
Currently, Wang’s team is trying to miniaturize the equipment needed for this microbubbling assay, so that it could be carried out using a device attached to a smartphone. She’s now testing this approach using samples from COVID-19 patients and planning to apply for emergency-use authorization from the FDA if the system proves accurate enough.
It’s too soon to know whether either of these two new antigen-detection technologies will indeed provide a way to detect the COVID-19 virus with inexpensive equipment, in near real time, and with sufficient sensitivity to do so using a person’s saliva. With luck, one of them will meet that high bar. With even more luck both will, which would help enormously when it comes time to scale up production of the requisite devices to meet demand. One thing is for sure, though: If scientists and engineers are able to solve this technical challenge in the not-too-distant future, many people around the world will rapidly incorporate such tests into their daily lives.
This article appears in the October 2020 print issue as “A Here-and-Now COVID-19 Test.”
This week, a California-based company announced it will seek FDA clearance for a first-of-its-kind autism spectrum disorder (ASD) diagnostic tool. Cognoa’s technology uses artificial intelligence to make an ASD diagnosis within weeks of signs of concern—far faster than the current standard of care. If cleared by the FDA, it would be the first tool enabling primary care pediatricians to diagnose autism.
The approach is “innovative,” says Robin Goin-Kochel, a clinical autism researcher at Baylor College of Medicine and associate director for research at Texas Children’s Hospital’s Autism Center, who is not affiliated with Cognoa. The field absolutely needs a way to “minimize the time between first concerns about development or behavior and eventual ASD diagnosis,” she adds.
Cognoa’s tool is the latest application of AI to healthcare, a fast-moving field we’ve been tracking at IEEE Spectrum. In many situations, AI tools seek to replace doctors in the prediction or diagnosis of a condition. In this case, however, the application of AI could enable more doctors to make a diagnosis of autism, thereby opening a critical bottleneck in children’s healthcare.
After nearly six months of scrutinizing the smorgasbord of COVID-19 tests available globally, an independent diagnostics evaluation group has come to some conclusions: The world greatly needs rapid, affordable tests, but the quality of such tests varies widely. By contrast, slow tests that are sent off to laboratories—the kind broadly used in the U.S. and Europe—are performing well.
That’s according to the Foundation for Innovative New Diagnostics, or FIND, headquartered in Geneva, Switzerland. “People in low- and middle-income countries often do not have access to laboratories,” says Jilian Sacks, FIND’s COVID-19 Evaluation Programme Lead. “So the ability to have rapid tests, especially outside the hospital and in decentralized settings, is most critical moving forward. But we are seeing that there is a lot of variability in the performance of rapid tests,” she says.
Sacks’s organization in February took on the ambitious task of evaluating the performance of hundreds of COVID-19 tests that have been flooding the global market ever since the pandemic hit. FIND told Spectrum earlier this year that it would subject these tests to a slew of coronavirus samples and make the results public on its website. This week, we checked in to see how it’s going.
The most reliable tests, FIND has found, are molecular tests performed in a laboratory. These diagnostics are the go-to COVID-19 testing method in the U.S. and Europe. They identify the disease by looking for the virus’s genetic code using a tool called polymerase chain reaction, or PCR.
FIND has evaluated 23 molecular tests so far. All but two had a sensitivity of at least 95%, meaning the tests accurately detected positive samples 95% of the time. “We’re seeing that generally, the companies that we selected to evaluate have quite good performance with their assays,” so there are several good options from which ministries of health can choose, Sacks says.
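Sensitivity (and its counterpart, specificity) fall straight out of the counts in an evaluation panel. A quick sketch with hypothetical panel numbers chosen to match FIND’s 95% bar:

```python
# Sensitivity: fraction of truly positive samples the test flags positive.
# Specificity: fraction of truly negative samples the test flags negative.
def sensitivity(true_pos: int, false_neg: int) -> float:
    return true_pos / (true_pos + false_neg)

def specificity(true_neg: int, false_pos: int) -> float:
    return true_neg / (true_neg + false_pos)

# Hypothetical panel: 100 known-positive and 100 known-negative samples.
print(sensitivity(true_pos=95, false_neg=5))   # 0.95, i.e. meets a 95% bar
print(specificity(true_neg=99, false_pos=1))   # 0.99
```

By this arithmetic, a test that “missed a third of positive samples,” as one ID NOW study alleged, would have a sensitivity of only about 67%.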
But molecular tests are slow. Although PCR itself takes only a few hours to perform, people often wait days for results because of the logistics of transporting samples to a centralized laboratory with PCR capability.
“Global demand has continued to soar, and though there may be many companies that should be able to meet that demand, it will require continued, unprecedented scale-up of their manufacturing and distribution,” Sacks says.
Rapid tests, on the other hand, provide results right away, on the spot, without having to send off a sample to a centralized lab. This allows people to make decisions immediately about whether or not to quarantine. And rapid tests are sometimes the only way to get tested in low-resource regions where access to centralized laboratories is scarce. But these tests are not as accurate as molecular tests.
FIND is evaluating 41 different rapid tests—5 that spot antigens and 36 that look for antibodies. Antigens are the parts of the virus that the immune system recognizes, and finding evidence of them means the virus is present. Antibodies are produced by the body’s immune system in response to a virus, and finding those means the virus is, or was, in the body, and that the immune system responded. Both types of tests can be performed on the spot because they are less complex and don’t require a lab.
FIND has released results for four of the rapid antibody tests it is evaluating. Only one of them—an antibody test made by BTNX—correctly identified positive cases over 90% of the time. The performance of some of the other tests is so low that they provide little value.
In addition to its own program, FIND is aggregating the independent evaluations of other organizations from around the world and making them available online. These data are sorely needed. Global regulators, such as the FDA, which would normally require a lot more data from diagnostic companies before allowing a test to be used commercially, have largely stepped out of the way. This has boosted innovation and hastened the availability of tests. But it also allows inaccurate tests onto the market, and shifts the burden of oversight to whoever wants to take on the job.
That’s the hole that FIND is trying to fill. “We have indeed heard from various ministries of health, that as companies are approaching them to offer their tests, they are looking to our website to see if we’ve evaluated the test and whether they feel comfortable with the results,” Sacks says.
It’s good to know that someone out there is testing the tests.
Researchers have been banking on millions of citizen-scientists around the world to help identify new treatments for COVID-19. Much of that work is being done through distributed computing projects that utilize the surplus processing power of PCs to carry out various compute-intensive tasks. One such project is Folding@home, which helped model how the spike protein of SARS-CoV-2 binds with the ACE2 receptor of human cells to cause infection. Started at Stanford University in 2000, Folding@home is currently based at the Washington University School of Medicine in St. Louis; it undertakes research into various cancers, and neurological and infectious diseases by studying the movement of proteins.
Proteins are made up of a sequence of amino acids that fold into specific structural forms. A protein’s shape is critical in its ability to undertake its specific function. Viruses have proteins that enable them to suppress a host’s immune system, invade cells, and replicate.
Greg Bowman, director of Folding@home, says, “We’re basically building maps of what these viral proteins can do… [The distributed computing network] is like having people around the globe jump in their cars and drive around their local neighborhoods and send us back their GPS coordinates at regular intervals. If we can develop detailed maps of these important viral proteins, we can identify the best drug compounds or antibodies to interfere with the virus and its ability to infect and spread.”
After COVID-19 was declared a global pandemic, Folding@home prioritized research related to the new virus. The number of devices running its software shot up from some 30,000 to over 4 million as a result. Tech behemoths such as Microsoft, Amazon, AMD, Cisco, and others have loaned computing power to Folding@home. The European Organization for Nuclear Research (CERN) has freed up 10,000 CPU cores to add to the project, and the Spanish premier soccer league La Liga has chipped in with its supercomputer that is otherwise dedicated to fighting piracy.
“A big difference…is that the Rosetta@home distributed computing is…directly contributing to the design of new proteins… These calculations are trying to craft brand new proteins with new functions,” says Ian C. Haydon, science communications manager and former researcher at the University of Washington’s Institute for Protein Design (IPD). He adds that the Rosetta@home community, which comprises about 3.3 million instances of the software, has helped the research team come up with more than 2 million candidate antiviral proteins that recognize the coronavirus’s spike protein and bind very tightly to it. When that happens, the spike is no longer able to recognize or infect a human cell.
“At this point, we’ve tested more than 100,000 of what we think are the most promising options,” Haydon says. “We’re working with collaborators who were able to show that the best of these antiviral proteins…do keep the coronavirus from being able to infect human cells…. [What’s more,] they have a potency that looks at least as good if not better than the best known antibodies.”
There are many possible outcomes for this line of research, Haydon says. “Probably the fastest thing that could emerge… [is a] diagnostic…tool that would let you detect whether or not the virus is present.” Since this doesn’t have to go into a human body, the testing and approval process is likely to be quicker. “These proteins could [also] become a therapy that…slows down or blocks the virus from being able to replicate once it’s already in the human body… They may even be useful as prophylactic.”
You prick your finger or swab your nose and dab a tiny sample of the fluid onto a semiconductor chip. Slot that chip into an inexpensive, handheld reader and, within a minute or so, its small screen displays a list of results—you are negative for the new coronavirus, positive for antibodies. The likelihood of false results is extremely low. You are cleared to enter your company’s building or fly on a plane.
Since the outbreak of the COVID-19 pandemic, national and local governments have come to realize that although sheltering in place helps to “flatten the curve” of new cases, economic and other considerations mean we can’t stay home indefinitely. But if we are to regain some sense of normalcy, early detection of the novel coronavirus, SARS-CoV-2, will be essential to containing it—at least during the period between the easing of restrictions on gatherings and commercial activity and the development of a vaccine.
At this moment, the best diagnostic tests for SARS-CoV-2, based on real-time RT-PCR (rRT-PCR) assays, are sensitive, but they require expensive equipment and trained technicians. What’s more, there’s a relatively long turnaround time for getting an answer (at least 24 hours prep time after receipt of a sample, plus 6 hours to complete the actual test procedure).
Now a multidisciplinary group of researchers at the University of Minnesota, with expertise in magnetics, microbiology and electrical engineering, says it has developed a test that is relatively cheap, easy to use, and quick to deliver results. The group, led by Prof. Jian-Ping Wang, chair of the Department of Electrical & Computer Engineering, and Associate Professor Maxim C. Cheeran, from the Department of Veterinary Population Medicine, says it looks to have a commercially viable version of its prototype system available soon enough to help speed up the pace of pre-vaccine testing. IEEE Spectrum interviewed Wang, an IEEE Fellow, about his group’s portable testing system.
IEEE Spectrum: How would you describe your virus detection system in a sentence or two?
Wang: It’s a portable platform based on magnetic particle spectroscopy (MPS) that allows for rapid, sensitive detection of SARS-CoV-2 at the point of care. Eventually, it will be used for routine diagnosis in households.
Spectrum: Give us a quick primer on the science underlying the MPS testing technique.
Wang: MPS is a versatile platform for different bioassays that uses artificially designed magnetic nanoparticles that act as magnetic tracers when their surfaces are functionalized with test reagents such as antibodies, aptamers or peptides. Imagine them as tiny probes capturing target analytes from biofluid samples. For COVID-19 antigen detection, we functionalized our nanoparticles with polyclonal antibodies to two of the four structural proteins that are components of the coronavirus (nucleocapsid and spike proteins). The antibodies allow the nanoparticles to bind to epitopes, or receptor sites, on these particular proteins. The more binding that occurs, the greater the presence of the virus.
Spectrum: So, how does the MPS system answer the question “Does this person have coronavirus?”
Wang: The MPS platform’s job is to monitor and assess the real-time specific binding of the nanoparticles with these proteins. The quantity or concentration of the target analyte—in this case, the aforementioned proteins indicating the presence of coronavirus—directly affects the responses of the nanoparticles to the system’s magnetic field. The magnitude of the difference in their behavior before and after the addition of the biofluid sample tells the tale. Because biological tissues and fluids are nonmagnetic, there is negligible magnetic background noise from the biological samples. As a result, this volumetric immunoassay tool is not only uncomplicated but also accurate and effective, with minimal sample preparation.
Spectrum: Take us through the steps of a testing cycle using this device.
Wang: No problem. But let’s back up a bit and note that our group has developed the MagiCoil Android app that processes information from the device’s microcontroller in real-time. And crucially, it also guides users on how to conduct a test from start to finish. The user interacts with the MPS handheld device using their smart phone, which communicates with the device via Bluetooth. Testing results are securely transmitted to cloud storage and could be readily shared between a patient and clinicians.
To run a test, the user begins by inserting a testing vial into the device and waiting as the system collects a baseline signal for 10 seconds. Then the user adds a biofluid sample into the vial and waits again as the antigen and antibody bind for 10 minutes. The system will automatically read the ending signal for 10 seconds, then display the results.
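The decision logic behind that cycle is essentially a before-and-after comparison of the signal Wang describes. A minimal sketch of that logic (the readings and the decision threshold here are hypothetical; a real assay would be calibrated against clinical samples):

```python
import statistics

# Sketch of the MPS decision step: compare the 10-second baseline reading
# with the post-binding reading and flag a positive if the relative
# signal shift exceeds a (hypothetical) calibrated threshold.
def assess_sample(baseline_readings, post_binding_readings, threshold=0.2):
    baseline = statistics.mean(baseline_readings)
    ending = statistics.mean(post_binding_readings)
    shift = abs(ending - baseline) / abs(baseline)   # relative signal change
    return "positive" if shift > threshold else "negative"

print(assess_sample([1.00, 1.01, 0.99], [0.70, 0.72, 0.71]))  # positive
print(assess_sample([1.00, 1.01, 0.99], [0.98, 1.00, 1.02]))  # negative
```

In the actual device, this comparison runs on the microcontroller and the verdict is relayed to the MagiCoil app over Bluetooth.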
Spectrum: How did you know this would work?
Wang: This epitope binding of the nanoparticles and target analytes forms viral protein clusters, similar to what occurred when we applied this technique to the detection of the H1N1 nucleoprotein reported in our previous work.
Spectrum: What makes your team sure there won’t be a shortage of testing vials containing the functionalized nanoparticles (a situation comparable to the current shortage of reagents for COVID testing)?
Wang: For each test we use only a microgram of nanoparticles and a nanogram of reagents (antibodies, RNA fragments). We are already collaborating with nanoparticle companies, who supply us with high quality iron nitride nanoparticles for this application. Our group has also been seeking collaborations with biotechnology companies to secure sources of chemical reagents.
Spectrum: What challenges has your team faced and how did you overcome them?
Wang: Over the past decade, there was very little attention paid to filling the need in the market for a low-noise, easy-to-use, portable bioassay kit for the detection of viruses—not until we began working on detecting Influenza A virus subtype H1N1 in the past year. We have steadily grown the team, accumulated experience in this research area, and optimized the MPS platform. In addition to full-time researchers, the MPS and MagiCoil team has benefited from the efforts of many talented graduate and undergraduate students from different areas. This work has resulted in the current version of the MPS device and the accompanying app.
I feel proud of my students for forging ahead on a project because they see it as an important bit of research and never giving up. Since the outbreak of COVID-19 pandemic, we saw how critical it was for us to speed up work on our project so we could contribute to the fight against the virus by making our MPS device available as soon as possible.
Spectrum: Your team’s aim has been to get this in doctors’ offices so anyone could walk in, get tested, and walk out knowing whether they’ve contracted the coronavirus. But in places like New York City, people have been urged to stay away from hospitals and clinics unless they are experiencing acute symptoms. How important is it to go beyond the clinical setting to household use so even asymptomatic people know their status?
Wang: From the outset, we wanted to make it inexpensive and easy to use so untrained people could conduct tests at home or in the field, in remote areas far removed from clinical settings. That will let large portions of the population afford regular updates on whether they have contracted the virus. And because the system can transmit test results from distant locations to centrally located data-analysis units, governments can have real-time epidemiological data at their fingertips. This would also significantly reduce the costs associated with tracking the spread of a disease and help health authorities more quickly evaluate and refine their disease-control protocols.
Spectrum: How long before the system is commercially available?
Wang: We have just transformed the benchtop version of the system into a handheld version [which, at 212 by 84 by 72 millimeters, is about the size of an old-school brick cellphone]. We have carried out preliminary tests such as characterizing the minimum amount of magnetic nanoparticles detectable and the system’s overall antigen sensitivity. And we’re still homing in on the optimum concentration of antibody to be functionalized on the nanoparticles so they’ll be most effective.
We anticipate that clinical trials will take an additional 3 to 5 months. At that point, we will work with local companies in Minnesota to mass produce MPS devices. The University of Minnesota Office for Technology Commercialization has been helping us lay the groundwork for founding a startup company in order to accelerate the process of commercializing this handheld device the instant we receive the necessary government approvals.
Spectrum: You mentioned antigen sensitivity. How much virus must there be in a sample for the MPS system to detect it?
Wang: We are currently evaluating the lowest concentration of virus our test can detect. But based on our experience with the H1N1 flu virus, it will be fewer than 150 virus particles.
Spectrum: I know it’s hard to say definitively, but give us a ballpark figure for the eventual price of the MPS testing system.
Wang: Based on our first prototype MPS device, we foresee the unit price starting at roughly US $100; the MagiCoil app, which is already completed and available for download from the Google Play store, is free. Testing vials containing the functionalized magnetic nanoparticles targeting the coronavirus will cost between $2 and $5 each.
Eventually, we plan to make a second-generation MPS device that’s as small as today’s smartphones. That step will require expertise in microfluidic channel design, printed microcoils, high-moment magnetic nanoparticles, automatic biofluid sample loading and filtering, optimized circuit layouts, and more. We are open to collaborations with other groups (including, but certainly not limited to, IEEE members) to make a better, lighter, more sensitive, fully automatic MPS device.
As we write these words, several billion people, the majority of the world’s population, are confined to their homes or subject to physical-distancing policies in an attempt to contain one of the worst pandemics of modern times. Economic activity has plummeted, countless people are out of work, and entire industries have ground to a halt.
Quite understandably, a couple of questions are on everyone’s mind: What is the exit strategy? How will we know when it’s safe to implement it?
Around the globe, epidemiologists, statisticians, biologists, and health officials are grappling with these questions. Though engineering perspectives are uncommon in epidemiological modeling, we believe that in this case public officials could greatly benefit from one. Of course, the COVID-19 pandemic isn’t an obvious or typical engineering problem. But in its basic behavior it is an unstable, open-loop system. Left alone, it grows exponentially, as we have all been told repeatedly. However, there’s good news, too: Like many such systems, it can be stabilized effectively and efficiently by applying the principles of control theory, most notably the use of feedback.
Inspired by the important work of epidemiologists and others on the front lines of this global crisis, we have explored how feedback can help stabilize and diminish the rate of propagation of this deadly virus that now literally plagues us. We’ve drawn on proven engineering principles to come up with an approach that would offer policymakers concrete guidance, one that takes into account both medical and socioeconomic considerations. We relied on feedback-based mechanisms to devise a system that would bring the outbreak under control and then adeptly manage the longer-term caseload.
It is during this longer-term phase, the inevitable relaxing of physical distancing that is required for a functioning society, that the strengths of a response grounded in control theory are most crucial. Using one of the widely available computer models of the disease, we tested our proposal and found that it could help officials manage the enormous complexity of trade-offs and unknowns that they will face, while saving perhaps hundreds of thousands of lives.
Our goal here is to share some of our key findings and to engage a community of control experts in this vital and fascinating problem. Together, we can contribute meaningfully to the international efforts to manage this outbreak.
The COVID-19 pandemic is unlike any other recent disease outbreak for several reasons. One is that its basic reproduction number, or R0 (“R naught”), is relatively high. R0 is an indication of how many people, on average, an infected person will infect during the course of her illness. R0 is not a fixed number, depending as it does on such factors as the density of a community, the general health of its populace, its medical infrastructure and resources, and countless details of the community’s response. But a commonly cited R0 figure for ordinary seasonal influenza is 1.3, whereas a figure calculated for the experience in Wuhan, China, where COVID-19 is understood to have originated, is 2.6 [PDF]. Figures for some outbreaks in Italy range from 2.76 to 3.25 [PDF].
The goal of infectious-disease intervention is reducing the R0 to below 1, because such a value means that new infections are in decline and will eventually reach zero. But with the COVID-19 outbreak, the level of urgency is extraordinarily high due to the disease’s relatively high fatality rate. Fatality rates, too, are quite variable and depend on such factors as age, physical fitness, present pathologies, region, and access to health care. But in general they are much higher for COVID-19 than for ordinary influenza. A surprisingly large percentage of people who contract the disease develop a form of viral pneumonia that sometimes proves fatal. Many of those patients require artificial ventilation, and if their number exceeds the capacity of intensive care units to accommodate them, some number of them, perhaps a majority, will die.
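The threshold role of R0 can be seen in a toy branching-process sketch (illustrative numbers only, not an epidemiological model): each generation of new infections is roughly the previous one multiplied by R0, so the caseload grows when R0 exceeds 1 and dies out when it falls below 1.

```python
def simulate_cases(r0, generations, initial=100):
    """Toy branching process: new infections per generation under a
    constant reproduction number r0. Illustrative only."""
    cases = [initial]
    for _ in range(generations):
        cases.append(cases[-1] * r0)  # each case infects r0 others
    return cases

growing = simulate_cases(2.6, 10)    # Wuhan-like estimate: explosive growth
shrinking = simulate_cases(0.8, 10)  # suppression, R0 < 1: decline to zero
```

With R0 = 0.8 the generations shrink toward zero; with 2.6 they grow geometrically until, in reality, susceptible people run out, an effect this toy model omits.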
For that reason, enormous worldwide efforts have focused on “flattening the curve” of infections against time. A high, sharp curve indicating a surge of infections in a short time period, as occurred in China, Italy, Spain, and elsewhere, means that the number of serious cases will swamp the ability of hospitals to treat them and result in mass fatalities. So to reduce the peak demand on health care, the first priority must be to bring the caseload under control. Once that’s done, the emphasis shifts to managing a long-term return to normalcy while minimizing both death rates and economic impact.
The two basic approaches to controlling the spread of disease are mitigation, which focuses on slowing but not necessarily stopping the spread, and suppression, which aims to reverse epidemic growth. For mitigation, R0 is reduced but remains greater than 1, while for suppression, R0 is smaller than 1. Both obviously require changing R0. Officials accomplish that by introducing social measures such as restricted travel, home confinement, social distancing, and so on. These restrictions are referred to as nonpharmaceutical interventions, or NPIs. What we are proposing is a systematically designed strategy, based on feedback, to change R0 through modulation of NPIs. In effect, the strategy alternates between suppression and mitigation in order to maintain the spread at a desired level.
It may sound straightforward, but there are many challenges. Some of them arise from the fact that COVID-19 is a very peculiar disease. Despite enormous efforts to characterize the virus, biologists still do not understand why some people experience fairly mild symptoms while others spiral into a massive, uncontrolled immune response and death. And no one can explain why, among fatalities, men predominate. Other mysteries include the disease’s long incubation period—up to 14 days between infection and symptoms—and even the question of whether a person can get re-infected.
These perplexities have helped bog down efforts to deal with the pandemic. As a recent Imperial College research paper [PDF] notes: “There are very large uncertainties around the transmission of this virus, the likely effectiveness of different policies, and the extent to which the population spontaneously adopts risk reducing behaviours.” Consider the long incubation time and apparent spreading of the virus before symptoms are experienced. These undoubtedly contributed to the relatively high R0 values, because people who were infectious continued to interact with others and transmitted the virus without being aware that they were doing so.
This lag before the onset of symptoms corresponds to time delay in control-system theory. It is notorious for introducing oscillations into closed-loop systems, particularly when combined with substantial uncertainty in the model itself.
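The destabilizing effect of measurement delay can be illustrated with a minimal sketch of a generic scalar system (not a disease model; all values here are arbitrary): the same proportional feedback that damps the state smoothly when the measurement is current makes it overshoot and oscillate when the measurement is a few steps old.

```python
def regulate(gain, delay, steps=40):
    """Drive a scalar state toward zero with proportional feedback
    acting on a measurement that is `delay` steps old."""
    x = [1.0] * (delay + 1)  # initial history of the state
    for _ in range(steps):
        # correction is based on a stale reading of the state
        x.append(x[-1] - gain * x[-1 - delay])
    return x

prompt_resp = regulate(gain=0.5, delay=0)  # current measurement: smooth decay
delayed = regulate(gain=0.5, delay=3)      # stale measurement: overshoot
```

With no delay, the state simply halves each step; with a three-step delay, the identical gain keeps correcting for conditions that no longer hold and the state swings past zero.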
In addition to delays, there are very significant uncertainties. Testing, for example, has been spotty in some countries, and that inconsistency has obscured the number of actual cases, which in turn made it impossible for officials to know the true level of contagion. Even NPIs are not immutable: public compliance with policies is never 100 percent, may not be knowable with any accuracy, and may slacken over time. Health care capacity can also change, rising as new beds are added or falling after, say, a natural disaster.
The point is, a pandemic is a dynamic, fast-moving situation, and inadequate local attempts to monitor and control it can be disastrous. In the Spanish flu pandemic of 1918, cities took widely varying approaches to the lockdown and release of their citizens, with wildly varying results. Some recovered straightforwardly, others had rebound spikes larger than the initial outbreak, and still others had multiple outbreaks after the initial lockdown.
A commonly cited proposal for relaxing social-distancing measures is an on-off approach, where some restrictions are lifted when the number of new cases requiring intensive care is below a threshold and are put back into place when it exceeds a certain number. The research paper [PDF] by the Imperial College COVID-19 Response Team showed how such a strategy is “robust to uncertainty in both the reproduction number, R0, and in the severity of the virus” and offers “greater robustness to uncertainty than fixed duration interventions and can be adapted for regional use.”
This on-off approach is an example of the use of feedback, where the feedback variable is the number of cases in hospital intensive care units. A major drawback of this type of on-off control is that it can lead to oscillations—which, if this strategy is too aggressive, may overwhelm the capacity of the health care system to treat serious cases.
A major advantage of feedback here is that it lessens the impact of model uncertainty—meaning that if carefully designed, a strategy can be effective even if the models it is based on are not accurate. We do not yet have an accurate epidemiological model of COVID-19 and will likely not have one for at least several months, if at all. Furthermore, the physical distancing and confinement regimes that have been put into place are new, so we don’t really know yet exactly how effective they’ll be or even the extent to which people are complying with them.
In the absence of widespread immunity or vaccination, the only way to suppress the disease is total confinement—obviously not a viable long-term solution. A reasonable middle ground is to implement a feedback policy designed to keep R0 close to 1, with perhaps small oscillations on either side. In so doing we would maintain the critical caseload within the capacity of health care institutions while slowly and safely building immunity in our communities, and returning to normal social and economic conditions as quickly as is safely possible.
A key point is that the design of the policy be rigorous from a control-engineering viewpoint while remaining comprehensible to epidemiologists, policymakers, and others without deep knowledge of control theory. It should also be capable of generating restriction regimes that can be translated into practical public policies. If the tuning mechanism is too aggressive—for example, switching between full and zero social distancing—it will lead to severe oscillations and overwhelmed hospitals, and very likely frustration and social-distancing fatigue among the people who need to follow dramatically changing rules.
On the other hand, tuning that is too timid also courts fiasco. An example of such tuning might be a policy requiring a full month in which no new cases are recorded before officials relax restrictions. Such a hypercautious approach risks needlessly prolonging the pandemic’s economic devastation, creating a catastrophe of a different sort.
But a properly designed feedback-based policy that takes into account both dynamics and uncertainty can deliver a stable result while keeping the hospitalization rate within a desired approximate range. Furthermore, keeping the rate within such a range for a prolonged period allows a society to slowly and safely increase the percentage of people who have some sort of antibodies to the disease because they have either suffered it or they have been vaccinated—preferably the latter.
Eventually, as the percentage of the population who have suffered the disease and recovered from it becomes high enough, the number of susceptible people becomes small enough that the virus’s rate of spread is naturally lowered. This phenomenon is called herd immunity, and it is how pandemics have generally died out in the past. However, the question of whether or how such immunity to COVID-19 can be built up is still under investigation, which makes it particularly important to monitor and manage nonpharmaceutical interventions as a function of the actual spread of the virus and of hospitalization rates.
In designing our control system, we relied strongly on the fact that most nonpharmaceutical interventions are not binary, on-off quantities but instead can take on a range of values and can be implemented that way by policymakers. For example, stay-at-home directives can be applied disproportionately to specific groups of people who are particularly at risk because of age or a preexisting health condition. Then the number of people who are affected by the directive can be increased or decreased by simply changing the guidelines on who is “at risk.” We can similarly widen or narrow the definition of people who are designated “essential” and are therefore exempt from the directives. Public meetings, too, may be banned for n participants, where the value of n can be increased as things loosen up.
Restrictions on travel, too, can be quite variable. Full lockdown limits people to moving within the boundaries of their property. But as conditions improve, officials might grant access to businesses within a couple, and then perhaps a dozen, kilometers of their home, and so on. These are all obviously important levers.
To explore how such variability can save lives, we devised a series of scenarios, each indicative of a recovery strategy with a different level of feedback, and simulated the resulting policies against a commonly used infectious-disease computer model. We plotted the results in a series of graphs showing COVID-19 hospital cases as a function of time. Hospital occupancy may be a more reliable and tangible measure than total case count, which depends on extensive testing that many countries (such as the United States) do not have at the moment. Furthermore, hospital ICU bed occupancy or ventilator availability is arguably an important measure of the ability of the local health care system to treat those who are suffering from respiratory distress acute enough to require intensive care and perhaps assisted breathing.
The model we used was created by Jeffrey Kantor, professor of chemical engineering at the University of Notre Dame (Kantor’s model is available on GitHub). The model assumes we can suppress disease transmission to a very low level by choosing appropriate policy levers. Although worldwide experience with COVID-19 is still limited, at the time of this writing this assumption appears to be a realistic one.
To make the model more reflective of our current understanding of COVID-19, we added two types of uncertainty. We assumed different values of R0 to see how they affected outcomes. To consider how noncompliance with nonpharmaceutical interventions would affect results, we programmed for a range of effectiveness of these NPIs.
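As a rough illustration of how the assumed R0 drives outcomes in simulations of this kind, here is a minimal discrete-day SEIR integration. This is our own sketch, not Kantor's model; the incubation and infectious periods and all other parameter values are illustrative assumptions.

```python
def seir_peak(r0, days, incubation=5.2, infectious=4.0, n=1_000_000, e0=100):
    """Minimal discrete-day SEIR integration; returns the peak number
    of simultaneously infectious people. Parameters are illustrative."""
    s, e, i, r = n - e0, float(e0), 0.0, 0.0
    beta = r0 / infectious  # transmission rate implied by r0
    peak = 0.0
    for _ in range(days):
        new_e = beta * s * i / n   # susceptible -> exposed
        new_i = e / incubation     # exposed -> infectious
        new_r = i / infectious     # infectious -> recovered
        s, e, i, r = s - new_e, e + new_e - new_i, i + new_i - new_r, r + new_r
        peak = max(peak, i)
    return peak
```

Running it with the R0 values cited above shows why the uncertainty matters: the peak caseload, and hence the peak demand on hospitals, rises steeply with R0, and falls to almost nothing once R0 drops below 1.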
Our first, simplest simulation confirms what we all know by now, which is that not doing anything was not an option [Figure 1].
The large and lengthy peak well above the available bed capacity in the intensive care unit indicates a huge number of cases that will likely result in death. This is why, of course, most countries have put aggressive measures in place to flatten the curve.
So what do we do when the number of infections comes down? As this second simulation clearly shows [Figure 2], relaxing all restrictions when the number of infections has come down will only lead to a second surge in infections. Not only could this second surge overwhelm our hospitals, it could also lead to an even higher mortality rate than the first surge, as occurred repeatedly in several U.S. cities during the Spanish flu epidemic of 1918.
Now let’s consider the simple on-off approach to confinement, in which most of the usual restrictions on gatherings, travel, and social interaction are lifted entirely when the number of new ICU cases drops below a lower threshold, and then are put back into place when this number exceeds a higher threshold. In this case, the R0 swings sharply between two levels, a high above 2 and a low below 1, as shown in blue in the graph [Figure 3]. This approach leads to oscillations, and if it is applied too aggressively, the high points of these oscillations will exceed the health care system’s capacity to treat patients. Another likely problem with this approach has been labeled “social distance fatigue.” People become weary of the repeated changes to their routine—going back to work for a couple of weeks, then being told to stay at home for a few weeks, then given the all-clear to go back to work, and so on.
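The on-off rule just described can be written as a simple hysteresis function (the function name and thresholds here are hypothetical, for illustration):

```python
def on_off_policy(icu_cases, lower, upper, restricted):
    """Hysteresis rule: lift restrictions when new ICU cases fall below
    `lower`, reimpose them above `upper`, otherwise keep the current state."""
    if icu_cases > upper:
        return True       # reimpose restrictions
    if icu_cases < lower:
        return False      # lift restrictions
    return restricted     # between thresholds: no change
```

Because the rule only ever switches between two extremes, R0, and with it the caseload, swings between extremes as well, which is the source of the oscillations shown in Figure 3.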
We can do better. For our third experiment, we developed a scenario in which we targeted 90 percent occupancy of hospital intensive care units. To achieve this, we designed a simple feedback-based policy using the principles of control systems theory.
When R0 is high, many restrictions are put into place. People are largely confined to their homes and services are limited to the bare minimum needed for society to function—utilities, police, sanitation, and food distribution, for example. Then, as conditions begin to improve, as revealed by our feedback measure of hospital-bed occupancy, other services are gradually phased in. Recovered people are allowed to move freely as they can no longer contract, or transmit, the virus. Perhaps people are allowed to visit restaurants within walking distance, some small businesses are allowed to reopen under certain conditions, or certain age groups are subject to less-stringent restrictions. Then geographical mobility might be loosened in other ways. The point is that restrictions are eased gradually, with each new gradation based carefully on feedback.
This strategy results in a stable response that maximizes the rate of recovery. Furthermore, the demand for hospital ICU beds never exceeds a threshold, thanks to a “set point” target below that threshold [Figure 4]. The health care capacity limit is never breached. In addition, note the general upward trend for the release of restrictions, as the number of recovered and immune people grows and nonpharmaceutical interventions are gradually phased out.
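A minimal sketch of such a set-point policy, in the spirit of a proportional controller (the function, names, and gain are our own illustration, not taken from the simulation itself):

```python
def adjust_restrictions(level, occupancy, capacity, setpoint=0.90, gain=0.5):
    """Proportional update of a restriction level in [0, 1]
    (1 = full lockdown): tighten when ICU occupancy is above the
    set point, relax when below. Gain and set point are assumptions."""
    error = occupancy / capacity - setpoint
    return min(1.0, max(0.0, level + gain * error))
```

Because each update is proportional to the error, restrictions change gradually rather than lurching between extremes, which is what keeps occupancy near the 90 percent target in Figure 4.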
This simplified example shows how using feedback to modulate the restrictions imposed on a population to modify R0 leads to a policy that is robust. For example, early on in the outbreak, there will be a great deal of uncertainty about R0 because testing will still be spotty, and because an unknown number of people may have the disease without realizing it. That uncertainty will inevitably fuel a surge in initial cases. However, once the case count is stabilized by the initial restrictive regime, a policy based on feedback will prove very tolerant of variations in R0, as illustrated in Figure 5. As the graph shows, after a few months it doesn’t matter whether the R0 is 2 or 2.6 because the total case count stays well below the number of available hospital beds due to the use of feedback.
As an added benefit, using feedback makes the policy effective even in the face of likely degrees of noncompliance. In practice, what noncompliance means is that a given level of restrictions will result in an R0 that is slightly higher than expected, which in turn causes fluctuations in the number of people who are infected. Noncompliance might, for instance, result in the restrictions being 10 percent less effective than intended [Figure 6]. However, through feedback, the policy will automatically tighten to compensate.
In reality, various factors that the model treats as invariable, such as health care capacity, might be anything but. However, variations of this sort can typically be accommodated in a policy, for example by changing the threshold on ICU occupancy.
Clearly, tried-and-true principles of control theory, particularly feedback, can help officials plot more robust and optimal strategies as they attempt to deal with the devastating COVID-19 pandemic. But how to make officials aware of these powerful tools?
Imagine an online interactive tool offering detailed, specific guidance in plain language and aimed at public officials and others charged with mounting a response to the pandemic in their communities. The guidance would be based on strategies developed by a small group of control theorists, epidemiologists, and people with policy experience. The site could review the now-familiar initial response, in which nonessential workers are confined to their homes except for essential needs. Then the site could go on to give some guidance on how and when the tightest restrictions could be lifted.
The biggest challenge to the designers of this Web-based tool will be enabling nonspecialists to visualize how the various components of the epidemiological model interact with the various feedback policy options and model uncertainty. How exactly should the main feedback measure—likely some aspect of hospital or intensive care occupancy—be implemented? Which restrictions should be lifted in the first round of easing? In the second round? While monitoring the feedback measure, how frequently should officials consider whether to implement another round of easing? Feedback will help officials determine the timing of the various phases of interventions. An interactive tool that could assess different policy approaches, illustrating what conditions must be in place to alleviate uncertainty and shrink the projected caseload, would be very valuable indeed.
Working with political officials, epidemiologists, and others, control engineers can systematically design policies that take these constraints and trade-offs into account. It comes down to this: In the many months of struggle ahead, such a collaboration could save countless lives.
Greg Stewart is vice president of data science for the agriculture technology startup Ecoation and an adjunct professor at the University of British Columbia, in Vancouver. A Fellow of the IEEE, he has led the research, development, and deployment of control and machine-learning technology in such applications as microalgae cultivation, large-scale data centers, automotive power-train control, and semiconductor fabrication. He is currently developing models and strategies for controlling the spread of pests and disease in agriculture.
Klaske van Heusden is a research associate at the University of British Columbia in Vancouver. An IEEE Senior Member, her research interests include modeling, prediction, and control, with applications in medical devices, mechatronics, and robotics. Lately she has been working on a robust and provably safe automated drug-delivery device for use in operating rooms.
Guy A. Dumont is a professor of electrical and computer engineering at the University of British Columbia in Vancouver and a principal investigator at BC Children’s Hospital Research Institute. An IEEE Fellow, he has 40 years of experience applying advanced control theory in the process industries, in particular pulp and paper and, for the last 20 years, in biomedical applications such as automated drug delivery for closed-loop control of anesthesia.
Global regulators have largely stepped out of diagnostics manufacturers’ way to enable them to quickly bring COVID-19 tests to the public. That has led to a deluge of testing options on the market, and in many cases, the reliability and accuracy of these tests is unclear.
That led us to wonder: Is anyone testing the tests?
We found one organization that’s on it. The Foundation for Innovative New Diagnostics, or FIND, headquartered in Geneva, Switzerland, is evaluating its way through a list of over 300 COVID-19 tests manufactured globally, and today published its first results.
FIND’s effort, which it is undertaking in collaboration with the World Health Organization (WHO), involves running thousands of coronavirus samples through the tests and comparing their performance against a gold standard. The organization will rank the tests based on sensitivity.
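Ranking tests against a gold standard comes down to two familiar quantities: sensitivity (the fraction of true positives the test catches) and specificity (the fraction of true negatives it correctly clears). A minimal sketch, with a hypothetical function name and binary calls only; real evaluations like FIND's also report confidence intervals and sample provenance:

```python
def sensitivity_specificity(test_results, gold_standard):
    """Compare a candidate test's binary calls against gold-standard
    truth. Returns (sensitivity, specificity)."""
    tp = sum(1 for t, g in zip(test_results, gold_standard) if t and g)
    tn = sum(1 for t, g in zip(test_results, gold_standard) if not t and not g)
    pos = sum(gold_standard)            # truly infected samples
    neg = len(gold_standard) - pos      # truly uninfected samples
    return tp / pos, tn / neg
```

A test that misses one infected sample in three has a sensitivity of two-thirds, no matter how well it performs on uninfected samples; that is why sensitivity is the headline number for a screening test.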
In late March, the FDA approved the use of Cepheid’s GeneXpert rapid molecular diagnostic machines to test for the new coronavirus. The automated modules—5000 of which are already installed in U.S. health facilities, while 18,000 are in operation in other countries—don’t require a lab facility or special training to operate. What’s more, they generate accurate results in about 45 minutes. The modules use disposable cartridges, pre-filled with the required chemicals that are channeled around test chambers using microfluidics.
While the cartridges had to be adapted to test for COVID-19, the microfluidic system itself dates back to the late 1990s, when Cepheid was cofounded in Silicon Valley by IEEE Fellow Kurt Petersen, a MEMS pioneer and winner of the 2019 IEEE Medal of Honor. (Cepheid went public in 2000 and Danaher acquired it in 2016.)
The system was just a prototype in September 2001, when letters containing anthrax spores began to arrive at the offices of U.S. senators and journalists. Cepheid won a set of competitions held by the U.S. Postal Service aimed at helping it prevent anthrax from getting into the mail system. To this day, Cepheid systems, attached to mail sorting machines, screen most mail in the U.S.
Since then, the company’s GeneXpert has been adapted to test for flu, strep, norovirus, chlamydia, tuberculosis, MRSA—and now, COVID-19.
Says Petersen: “The technology we developed is very powerful, because you can have all different kinds of cartridges for the instrument. Each cartridge can have a different design of its microfluidics and hold different chemicals, creating a PCR (polymerase chain reaction) specific to any DNA sequence you want, and so sensitive that you can detect just a few segments of that DNA in a milliliter of liquid.”
The GeneXpert test, like most COVID-19 tests to date, starts with a nasal sample taken with a swab. The person collecting the sample drops the swab into a liquid-filled specimen transfer tube. To start the test, liquid containing the sample is pipetted into a disposable test cartridge, and the cartridge is inserted into the test machine; this takes no special training. After this, the process is automatic.
Petersen explains how it works:
The test cartridge contains microfluidic channels; these are made out of plastic using high-precision injection molding. All the chemicals needed for the process are stored in chambers within the system. In the center of the cartridge, a rotary valve turns to open different pathways, while a tiny plunger—like a syringe—moves fluids in and out as needed.
So, the plunger pulls the sample into the center, the valve rotates, and the plunger pushes it into another region of the cartridge to do an operation on it. The system can do that multiple times, moving the sample to different regions with different chemicals, extracting RNA, mixing it with the reverse transcriptase that synthesizes complementary DNA matching the RNA, and eventually pushing it into the PCR reaction tube, where rapid heating and cooling speeds up the process of copying the DNA. Each new copy of the DNA gets a fluorescent molecule attached, which allows an optical system to determine whether or not the targeted gene sequence is in the sample.
Petersen, now an angel investor, says he’s gratified that a technology he worked to develop is being used to help address the pandemic. “The instrument hasn’t changed that much,” he says; “it’s pretty much what we designed 20 years ago.”
While Cepheid’s test for COVID-19 was the first approved in the U.S., Abbott has also received FDA approval for a five-minute test that runs on its ID Now analyzers. Roughly 18,000 of those are installed in U.S. healthcare facilities. According to a recent study on flu virus identification that compared the Cepheid and Abbott systems along with a similar technology from Roche, “the Cepheid test showed the best performance,” and was generally more sensitive. Such comparisons with respect to identifying COVID-19 are not yet available.
Months into the COVID-19 pandemic, the United States has finally moved from relying entirely on a single, flawed diagnostic test to having what may soon be an onslaught of testing options available from private entities. The U.S. Food and Drug Administration over the last three weeks has authorized the emergency use of more than 20 diagnostic tests for the novel coronavirus known as SARS-CoV-2.
Here’s something that sounds decidedly useful right now: A home test kit that anyone could use to see if they have the coronavirus. A kit that’s nearly as easy to use as a pregnancy test, and that would give results in half an hour.
It doesn’t exist yet, but serial biotech entrepreneur Jonathan Rothberg is working on it.
At Brigham and Women’s Hospital in Boston, a potential COVID-19 patient can now drive in to the ambulance bay, roll down their window, and ask staff to swab their nose and throat.
Those swabs will be sent to a state lab for a real-time PCR test, which amplifies any viral genetic material so it can be compared to the new coronavirus, SARS-CoV-2. But this standard test must be carried out in a certified laboratory with trained technicians, takes 3 to 4 days to deliver results, and produces some false negatives.
Publicly available skin cancer detection apps, such as SkinVision, use AI-based analysis to determine if a new or changing mole is a source of concern or nothing to worry about. Yet according to a new analysis of the scientific evidence behind those apps, there’s a lot to worry about.