All posts by Mark Anderson

Electron Hole, Meet Your Fractional Cousin

Post Syndicated from Mark Anderson original https://spectrum.ieee.org/tech-talk/semiconductors/nanotechnology/electron-hole-meet-your-fractional-cousin

Researchers have designed a nano-electronic circuit that can tease into existence a strange new kind of quantum “particle.” Its existence confirms decades of speculation about the behavior of electronic circuits in very low temperatures and high magnetic fields—and opens the door for possible applications in next-generation quantum computers.

However, this quasiparticle carries only a fraction of an electron’s charge. It is not, strictly speaking, a single particle at all, but more likely an ensemble of electrons acting collectively in certain extreme quantum environments. In other significant ways, though, the excitation does behave like a particle. Much like an “electron hole” in conventional semiconductors, this “anyon” acts as if it’s its own discrete entity with its own characteristic mass, charge, and spin.

And, unlike the +1 charge of an electron hole, these newly studied anyons (a name Nobel laureate Frank Wilczek coined as a joke on their seemingly “anything goes” nature) carry just one-third of the electron’s charge.

James Nakamura, a postdoctoral researcher in the lab of Michael Manfra at Purdue University, said the quantum trajectories of the anyon are also curious. Its paths through the test circuit interact with other anyons—and indeed even with other quantum incarnations of itself moving through other elements of the circuit—and form interference patterns.

These interference patterns are analogous, Nakamura said, to the wavy patterns of ripples in a conventional laser interferometer. Except, instead of the patterns of light and darkness on a screen that a laser interferometer produces, this interferometer tracks anomalous shifts in conductance as parameters like gate voltage and magnetic field strength are slowly varied.

The circuit—cooled to 10 thousandths of a degree above absolute zero (10 millikelvin) and immersed in a powerful 9-tesla magnetic field—exhibits discrete jumps in its conductance. Manfra, Nakamura, and co-authors infer from these observations the presence of the long-hypothesized anyon.

The finding recalls Robert Millikan’s 1909 oil drop experiments that measured the electron’s fundamental charge. Only this time, the Manfra group discovered a quantum of charge that is only 33 percent of that contained by the seemingly indivisible electron.

The group, which published its findings in a recent issue of the journal Nature Physics, not only adduces the existence of these 1/3-charged anyons but also tracks how the anyons evolve as they move through the interferometer.

“Quantum mechanical phase is a very subtle thing,” Nakamura said. “But there is a way you can see phases, and that’s through interference measurements… Electrons, since they’re quantum mechanical, have a phase. Also these quasiparticles have a phase. And that’s what we’re studying.”

Nakamura says the group’s experiment and these fractionally charged anyons may not have immediate applications for any quantum technologies yet devised. However, he said, slightly weaker magnetic fields with slightly different conditions are also expected to produce an anyon with 1/4 of an electron’s charge.

This quasiparticle has already been discussed as a possible fault-tolerant qubit for an advanced “topological” quantum computer that codes its quantum information in the anyon’s changing state as it interacts with itself and other anyons moving through a circuit.

But all of that would depend on future experiments that begin by first doing what Manfra, Nakamura, and co-authors did for the 1/3 anyon: observing the quasiparticle and proving that you can track it through the circuits of a nano-sized interferometer. Then it would be possible to discover what a universe composed of fractional charges can cook up.

Exclusive: Airborne Wind Energy Company Closes Shop, Opens Patents

Post Syndicated from Mark Anderson original https://spectrum.ieee.org/energywise/energy/renewables/exclusive-airborne-wind-energy-company-closes-shop-opens-patents

This week, a 13-year experiment in harnessing wind power using kites and modified gliders finally closes down for good. But the technology behind it is open-sourced and is being passed on to others in the field.

As of 10 September, the airborne wind energy (AWE) company Makani Technologies has officially announced its closure. A key investor, the energy company Shell, also released a statement to the press indicating that “given the current economic environment” it would not be developing any of Makani’s intellectual property either. Meanwhile, Makani’s parent company, X, Alphabet’s moonshot factory, has made a non-assertion pledge on Makani’s patent portfolio. That means anyone who wants to use Makani patents, designs, software, and research results can do so without fear of legal reprisal.

Makani’s story, recounted last year on this site, is now the subject of a 110-minute documentary called Pulling Power from the Sky—also free to view.

Paula Echeverri, once Makani’s chief engineer, said that when she was emerging from graduate studies at MIT in 2009, the company was a compelling team to join, especially for a former aerospace engineering student.

“Energy kite design is not quite aircraft design and not quite wind turbine design,” she said.

The idea behind the company’s technology is to raise wind energy harvesting to hundreds of meters in the sky—where the winds are typically both stronger and steadier. Because a traditional windmill anywhere near these heights would be impractical, Makani looked into kites or gliders that could ascend to altitude first—fastened to the ground by a tether. Only then would the flyer begin harvesting energy from the wind.

Pulling Power recounts Makani’s story from its earliest days, circa 2006, when kites like the ones kite surfers use were the wind energy harvester of choice. Using kites, however, means drawing power from the tug on the kite’s tether—which, as the company’s early experiments revealed, couldn’t compete with propellers on a glider plane.

What became the Makani basic flyer, the M600 Energy Kite, looked like an oversized hobbyist’s glider but with a bank of propellers across the wing. These props would first loft the glider to its energy-harvesting altitude. Then the motors would shut off and the glider would ride the air currents—using the props as mini wind turbines.

According to The Energy Kite, a free 1,180-page ebook that Makani is also releasing online in three parts, the company soon found a potentially profitable niche in operating offshore.

Just in terms of tonnage, AWE had a big advantage over traditional offshore wind farms. Wind turbines (in shallow water) fixed to the seabed might require 200 to 400 tons of metal for every megawatt of power the turbine generated. And floating deep-water turbines, anchored to the seabed by cables, typically involve 800 tons or more per megawatt. Meanwhile, a Makani AWE platform—which can be anchored in even deeper water—weighed only 70 tons per rated megawatt of generating capacity.
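
The tonnage comparison above reduces to simple arithmetic. A minimal sketch, using only the per-megawatt figures cited in the ebook (the fixed-bottom value takes the midpoint of the 200-to-400-ton range):

```python
# Structural mass per rated megawatt for three offshore wind approaches,
# using the tonnage figures cited above (illustrative arithmetic only).
tons_per_mw = {
    "fixed-bottom turbine (shallow water)": 300,  # midpoint of 200-400 t/MW
    "floating deep-water turbine": 800,
    "Makani AWE platform": 70,
}

awe = tons_per_mw["Makani AWE platform"]
for name, tons in tons_per_mw.items():
    # Express each option relative to the airborne wind energy platform.
    print(f"{name}: {tons} t/MW ({tons / awe:.1f}x the AWE figure)")
```

By this rough measure, a floating deep-water turbine carries more than eleven times the structural mass per megawatt of Makani's platform.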

Yet, according to the ebook, in real-world tests, Makani’s M600 proved difficult to fly at optimum speed. In high winds, it couldn’t fly fast enough to pull as much power out of the wind as the designers had hoped. In low winds, it often flew too fast. In all cases, the report says, the rotors just couldn’t operate at peak capacity through much of the flyer’s maneuvers. The upshot: The company had a photogenic oversized model airplane, but not the technology that’d give regular wind turbines a run for their money.

Don’t take Makani’s word for it, though, says Echeverri. Not only is the company releasing its patents into the wild, it’s also giving away its code base, flight logs, and a Makani flyer simulation tool called KiteFAST.

“I think that the physics and the technical aspects are still such that, in floating offshore wind, there’s a ton of opportunity for innovation,” says Echeverri.

One of the factors the Makani team didn’t anticipate in the company’s early years, she said, was how precipitously electricity prices would continue to drop, leaving precious little room at the margins for new technologies like AWE to blossom and grow.

“We’re thinking about the existing airborne wind industry,” Echeverri said. “For people working on the particular problems we’d been working on, we don’t want to bury those lessons. We also found this to be a really inspiring journey for us as engineers—a joyful journey… It is worthwhile to work on hard problems.”

Solar Closing in on “Practical” Hydrogen Production

Post Syndicated from Mark Anderson original https://spectrum.ieee.org/energywise/energy/renewables/solar-closing-in-on-practical-hydrogen-production

Israeli and Italian scientists have developed a renewable energy technology that converts solar energy to hydrogen fuel — and it’s reportedly at the threshold of “practical” viability.

The new solar tech would offer a sustainable way to turn water and sunlight into storable energy for fuel cells, whether that stored power feeds into the electrical grid or goes to fuel-cell powered trucks, trains, cars, ships, planes or industrial processes.

Think of this research as a sort of artificial photosynthesis, said Lilac Amirav, associate professor of chemistry at the Technion — Israel Institute of Technology in Haifa. (If it could be scaled up, the technology could eventually be the basis of “solar factories” in which arrays of solar collectors split water into stores of hydrogen fuel—as well as, for reasons discussed below, one or more other industrial chemicals.)

“We [start with] a semiconductor that’s very similar to what we have in solar panels,” says Amirav. But rather than taking the photovoltaic route of using sunlight to liberate a current of electrons, the reaction they’re studying harnesses sunlight to efficiently and cost-effectively peel off hydrogen from water molecules.

The big hurdle to date has been that hydrogen and oxygen just as readily recombine once they’re split apart—that is, unless a catalyst can be introduced to the reaction that shunts water’s two component elements away from one another.

Enter the rod-shaped nanoparticles Amirav and co-researchers have developed. The wand-like rods (50-60 nanometers long and just 4.5 nm in diameter) are all tipped with platinum spheres 2–3 nm in diameter, like nano-size marbles fastened onto the ends of drinking straws.

Since 2010, when the team first began publishing papers about such specially tuned nanorods, they’ve been tweaking the design to maximize its ability to extract as much hydrogen and excess energy as possible from “solar-to-chemical energy conversion.”

Which brings us back to those “other” industrial chemicals. Because creating molecular hydrogen out of water also yields oxygen, they realized they had to figure out what to do with that byproduct. “When you’re thinking about artificial photosynthesis, you care about hydrogen—because hydrogen’s a fuel,” says Amirav. “Oxygen is not such an interesting product. But that is the bottleneck of the process.”

There’s no getting around the fact that oxygen liberated from split water molecules carries energy away from the reaction, too. So, unless it’s harnessed, it ultimately represents just wasted solar energy—which means lost efficiency in the overall reaction.

So, the researchers added another reaction to the process. Not only does their platinum-tipped nanorod catalyst use solar energy to turn water into hydrogen, it also uses the liberated oxygen to convert the organic molecule benzylamine into the industrial chemical benzaldehyde (commonly used in dyes, flavoring extracts, and perfumes).

All told, the nanorods convert 4.2 percent of the energy of incoming sunlight into chemical bonds. Considering the energy in the hydrogen fuel alone, they convert 3.6 percent of sunlight energy into stored fuel.

These might seem like minuscule figures. But 3.6 percent is still considerably better than the 1-2 percent range that previous technologies had achieved. And according to the U.S. Department of Energy, 5-10 percent efficiency is all that’s needed to reach what the researchers call the “practical feasibility threshold” for solar hydrogen generation.
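
To put those percentages in concrete terms, a back-of-the-envelope conversion to stored power per square meter of collector helps. The roughly 1,000 watts per square meter of peak insolation is an assumption for illustration; the article itself reports only the efficiencies:

```python
# Convert solar-to-chemical efficiencies into stored watts per square meter.
# ASSUMPTION: ~1000 W/m^2 peak insolation (a typical clear-sky figure);
# the article reports only the percentage efficiencies.
INSOLATION_W_PER_M2 = 1000.0

def stored_power_w_per_m2(efficiency_percent: float) -> float:
    """Watts of chemical energy stored per square meter of collected sunlight."""
    return INSOLATION_W_PER_M2 * efficiency_percent / 100.0

total_chemical = stored_power_w_per_m2(4.2)  # all chemical bonds formed
hydrogen_only = stored_power_w_per_m2(3.6)   # hydrogen fuel alone
prior_art = stored_power_w_per_m2(1.5)       # midpoint of the earlier 1-2 percent

print(f"total: {total_chemical:.0f} W/m^2, fuel: {hydrogen_only:.0f} W/m^2, "
      f"{hydrogen_only / prior_art:.1f}x prior art")
```

Under that assumed insolation, each square meter stores about 36 watts as hydrogen fuel, roughly 2.4 times what the earlier 1-to-2-percent technologies managed.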

Between February and August of this year, Amirav and her colleagues published these innovations in the journals Nano Energy and Chemistry Europe. They also recently presented their research at the fall virtual meeting of the American Chemical Society.

In their presentation, which hinted at future directions for their work, they teased further efficiency improvements courtesy of new work with AI data mining experts.

“We are looking for alternative organic transformations,” says Amirav. This way, she and her collaborators hope, their solar factories can produce hydrogen fuel plus an array of other useful industrial byproducts. In the future, their artificial photosynthesis process could yield low-emission energy, plus some beneficial chemical extracts as a “practical” and “feasible” side-effect.

Artificial Antibodies Bolster Hope for Inhaled Coronavirus Treatment

Post Syndicated from Mark Anderson original https://spectrum.ieee.org/the-human-os/biomedical/devices/covid19-aeronabs

Researchers have developed an inhalable artificial coronavirus antibody that they say “straitjackets” the SARS-CoV-2 virus and appears (in laboratory, albeit still non-clinical, tests) to impede its ability to infect.

These early tests, they say, suggest the new therapy is potent and powerful enough to merit clinical trials. The ultimate goal, the researchers insist, would be to make it directly available to consumers, possibly even as an over-the-counter home therapy or preventative measure. They say this inhalable COVID prophylaxis/treatment could ideally be ready for public release as soon as “a matter of months.”

The protein they’ve developed, called mNb6-tri (which they’ve dubbed “AeroNabs”), clamps efficiently and tenaciously atop the spike protein of the SARS-CoV-2 virus, hindering the coronavirus’s ability to infect human cells. And, so far as the researchers have been able to determine, once mNb6-tri locks on to the spike protein, it doesn’t readily come off. That appears to be great news for everyone but the coronavirus.

Spike proteins are the pointy spines poking out of the coronavirus that make the virus particle look like an inflated blowfish or a porcupine with its back up.

Spikes—to the world’s great chagrin and horror in 2020—are very good at latching onto a receptor (called ACE2) on many human cells, opening the door for this viral invader. Once inside a cell, the virus hotwires the cell’s machinery to kick out more viral particles, perpetuating a sometimes devastating, body-wide viral infection.

As it turns out, the antibodies of some mammals (llamas and camels, for instance) consist of simpler proteins called nanobodies. And nanobodies can be both an order of magnitude lighter than conventional antibodies and relatively easy to characterize and potentially even engineer on an atom-by-atom basis.

So, if nanobodies were to be pressed into service in the COVID-19 war, the thinking goes, one very potent target could be the invader’s spike protein. That element is such a fundamental component of the coronavirus’s infection protocol that if it were somehow rendered ineffective, the coronavirus’s progress through a human host might be stopped cold, or at least substantially hindered.

Aashish Manglik, assistant professor of pharmaceutical chemistry at the University of California San Francisco, uses nanobodies extensively in his research and has spent much of this year investigating whether his specialty could help fight the COVID-19 pandemic.

For this research, Manglik’s lab began by performing an experiment in natural selection—that is, they produced a randomized library of more than two billion nanobodies.

“We first created a pool of DNA sequences, where each strand of DNA codes for a unique nanobody,” Manglik says. After that, he recalls: “We put those DNA sequences into individual yeast cells — similar to baker’s yeast, like what we use for at-home cooking. But the cool thing here is we’ve engineered a method where each individual yeast has a unique nanobody tethered to its surface. So in a test tube full of many billions of yeast [cells], each individual yeast will have a different nanobody on its surface.”

Having done that, the team more or less went fishing, says co-researcher Peter Walter, professor of biochemistry and biophysics at UCSF.

Walter and his research group dropped a bunch of purified coronavirus spike proteins into the mixture. And because they’d also added a magnetic particle to each spike, when they turned on a magnetic field, they were able to watch as the “fishing lures” rose to the surface with the nanobodies that bound most effectively to the spikes attached.

Some nanobodies held loosely to the spike proteins; others were bound more closely. After a “two- to three-week” whittling process, according to Manglik, the researchers narrowed the pool of two billion candidates down to 20 semi-finalist nanobodies that each appeared to optimize three competing factors. The ideal nanobody would need to: bind quickly and tightly to the spike, remain robust in the presence of heat and other degrading real-world factors, and be as “human-like” as possible. (This last point was necessary to head off any potential complications such as an overzealous human immune system counterproductively attacking the nanobody.)

Ultimately, they narrowed the nanobody field down to the best of the entire lot: What they called Nb6. Using a new imaging technique called cryogenic electron microscopy, they studied Nb6 at the atomic scale as it clamped on to the COVID-19 spike protein. The team made adjustments, they say, that improved Nb6’s binding affinity to the spike by another 500 times.

Then, because the spike protein is a trimer (containing three identical proteins), the researchers turned their modified nanobody (mNb6) into a tri-fold nanobody specifically engineered to fit the precise contours of the novel coronavirus’s trimer spike. In other words, they’d now created mNb6-tri.

“This dramatically increases the tightness of binding between this trimeric construct and the spike protein,” says Manglik. “When we did this, we found we got a 200,000-fold gain in the tightness of binding—such that once one of these [AeroNabs nanobodies] binds, it basically never comes off. And we’re at the limit of detection for the kind of instrumentation and the approaches we use to look at how long the molecule, once bound, stays on. We haven’t seen it come off. We think it’ll be bound to that spike protein for weeks if not months.”
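
The two reported gains compound, which is what makes the final construct so tenacious. A quick illustrative multiplication of the fold-improvements quoted above (relative gains only, not absolute binding affinities):

```python
# Fold-improvements in spike binding reported by the researchers,
# multiplied out (illustrative arithmetic).
nb6_to_mnb6 = 500            # gain from atomic-scale adjustments to Nb6
mnb6_to_mnb6_tri = 200_000   # gain from linking three copies to match the trimer

overall_gain = nb6_to_mnb6 * mnb6_to_mnb6_tri
print(f"mNb6-tri binds roughly {overall_gain:.0e} times more tightly than Nb6")
```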

Walter said that because the nanobodies are so small and so effective in seeking out and binding to coronavirus particles, he anticipates that one dose of AeroNabs particles per day could be sufficient for testing its effectiveness for COVID-19 prevention or treatment. Walter calls AeroNabs—if its safety and effectiveness can be substantiated in clinical trials—essentially “molecular PPE.”

For patients who already have SARS-CoV-2 in their lungs, Walter anticipates a nebulizer creating an AeroNabs mist, which the patient could inhale. “A few minutes once a day should be sufficient,” he said, to test whether AeroNabs could be effective for COVID-19 patients with serious symptoms.

And because a coronavirus infection often begins in the nasal airways, an AeroNabs nasal spray might also be tested as a preventative measure for front-line workers and other potentially exposed populations as well as people who are positive for the coronavirus but don’t yet show symptoms.

“AeroNabs is stable and easy to produce in industrial quantities at very low expense,” Walter says. “And it can be shipped around the world as a dry powder. So we really hope it will become broadly available to the developing world as well.”

On the question of AeroNabs’ safety, Walter and Manglik point to the precedent of another nanobody therapy—Caplacizumab, for the blood disorder TTP (thrombotic thrombocytopenic purpura)—that was recently approved by the U.S. Food and Drug Administration.

Walter notes, too, that the name “nanobody” may be deceptive. AeroNabs, he points out, are not in fact nanoparticles or any such exotic creation of nanotechnology. The mNb6-tri protein consists, by design, almost entirely of the same amino acids that make up human proteins. Human airways and bloodstreams are awash in all kinds of proteins that are much like mNb6-tri wherever one might care to look. Only this particular protein has been engineered specifically to bind, as Manglik says, “absurdly tightly” to COVID’s primary tool for breaking and entering human cells.

For those reasons, the researchers expect toxicity concerns for AeroNabs to be low.

That said, they’re now seeking commercial partners to take on clinical trials to investigate questions of safety and effectiveness against the coronavirus in human patients in real-world clinical settings.

The paper describing the group’s AeroNabs research was published last week on the online pre-print server bioRxiv. Walter says that since then, the group has fielded interest from “quite a number of prestigious journals.”

“We are all hoping that an effective vaccine with lasting effects can be developed very fast,” Walter said. “But ramping up the production capacity and distribution capacity will take a while. And before that becomes available around the globe, we hope AeroNabs could act as a stopgap measure.”

Has the Summit Supercomputer Cracked COVID’s Code?

Post Syndicated from Mark Anderson original https://spectrum.ieee.org/the-human-os/computing/hardware/has-the-summit-supercomputer-cracked-the-covid-code

A supercomputer-powered genetic study of COVID-19 patients has spawned a possible breakthrough into how the novel coronavirus causes disease—and points toward new potential therapies to treat its worst symptoms.

The genetic data mining research uncovered a common pattern of gene activity in the lungs of symptomatic COVID-19 patients, which when compared to gene activity in healthy control populations revealed a mechanism that appears to be a key weapon in the coronavirus’s arsenal.

The good news is there are already drugs—a few of which are already FDA-approved—aimed at some of these very same pathologies.

“We think we have a core mechanism that explains a lot of the symptoms where the virus ends up residing,” said Daniel Jacobson, chief scientist for computational systems biology at Oak Ridge National Labs in Oak Ridge, Tenn.

The mechanism, detailed in Jacobson’s group’s new paper in the journal eLife, centers on a compound the body produces to regulate blood pressure, called bradykinin. A healthy body produces small amounts of bradykinin to dilate blood vessels and make them more permeable, which typically lowers blood pressure.

However, Jacobson said, lung fluid samples from COVID-19 patients consistently revealed over-expression of genes that produce bradykinin, while also under-expressing genes that would inhibit or break down bradykinin.

In other words, the new finding predicts a hyper-abundance of bradykinin in a coronavirus patient’s body at the points of infection, which can have well-known and sometimes deadly consequences. As Jacobson’s paper notes, extreme bradykinin levels in various organs can lead to dry coughs, myalgia, fatigue, nausea, vomiting, diarrhea, anorexia, headaches, decreased cognitive function, arrhythmia and sudden cardiac death. All of which have been associated with various manifestations of COVID-19.

The bradykinin genetic discovery ultimately came courtesy of Oak Ridge’s supercomputers Summit and Rhea, which crunched data sets representing some 17,000 genetic samples (from more than 1,000 patients) while comparing each of these samples to some 40,000 genes.

Summit, the world’s second-fastest supercomputer as of June, ran some 2.5 billion correlation calculations across this data set. It took Summit one week to run these numbers, compared with months of compute time on a typical workstation or cluster.
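
The kind of computation involved can be sketched in miniature. The snippet below builds a tiny random expression matrix and computes every gene-against-gene Pearson correlation. This is a toy stand-in with hypothetical data, not the study's pipeline; the real run spanned roughly 17,000 samples and 40,000 genes:

```python
import numpy as np

# Toy stand-in for the Summit analysis: rows are samples, columns are genes.
rng = np.random.default_rng(seed=0)
n_samples, n_genes = 100, 50          # the real run: ~17,000 x ~40,000
expression = rng.normal(size=(n_samples, n_genes))

# Pearson correlation of every gene's expression profile against every other's.
corr = np.corrcoef(expression, rowvar=False)  # shape: (n_genes, n_genes)

# Number of unique gene pairs actually compared.
unique_pairs = n_genes * (n_genes - 1) // 2
print(corr.shape, unique_pairs)
```

At 40,000 genes, the unique-pair count alone approaches 800 million, which is why a run of this kind lands on a supercomputer rather than a workstation.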

Jacobson said that the genetic bradykinin connection the team made may have rendered COVID-19 a little less mysterious. “Understanding some of these fundamental principles gives us places to start,” he said. “It’s not as much of a black box anymore. We think we have good indications of the mechanisms. So now how do we attack those mechanisms to have better therapeutic outcomes?”

One of the most persistent and deadly outcomes of extreme COVID disease involves the lungs of patients filling with fluid, forcing the patient to fight for every breath. There, too, the mechanism and genetic pathway the researchers have uncovered could possibly explain what’s going on.

Because bradykinin makes blood vessels more permeable, lung tissue gets inundated with fluid that begins to make it swell. “You have two interconnected pathways, and the virus can tilt the balance to these two pathways with a catastrophic outcome,” Jacobson said. “The bradykinin cascade goes out of control, and that allows fluid to leak out of the blood vessels, with immune cells infiltrating out. And you effectively have fluid pouring into your lungs.”

The presence of typically blood-borne immune cells in the lungs of some patients can, Jacobson said, also produce extreme inflammation and out-of-control immune responses, which have been observed in some coronavirus cases.

But another genetic tendency this work revealed was up-regulation in the production of hyaluronic acid. This compound is slimy to the touch. In fact, it’s the primary component in snail slime. And it has the remarkable property of being able to absorb 1000 times its own weight in water.

The team also discovered evidence of down-regulated genes in COVID patients that might otherwise have kept hyaluronic acid levels in check. So with fluid inundating the lungs and gels that absorb those fluids being over-produced as well, a coronavirus patient’s lung, Jacobson said, “fills up with a jello-like hydrogel.”

“One of the causes of death is people are basically suffocating,” Jacobson said. “And we may have found the mechanisms responsible for how this gets out of control, why all the fluid is leaking in, why you’re now producing all this hyaluronic acid—this gelatin-like substance—in your lung, and possibly why there are all these inflammatory responses.”

Jacobson’s group’s paper then highlights ten possible therapies developed for other conditions that might also address the coronavirus’s “bradykinin storm” problem. Potential therapies include compounds like icatibant, danazol, stanozolol, ecallantide, berinert, cinryze, and haegarda, all of whose predicted effect is to reduce bradykinin levels in a patient. Even Vitamin D, whose observed deficiency in COVID-19 patients is also explained by the group’s research, could play a role in future COVID-19 therapies.

None of which, it’s important to stress, has yet been tested in clinical trials. But, Jacobson said, they’re already in touch with groups who are considering testing these new findings and recommended therapies.

“We have to get this message out,” Jacobson said. “We have started to be contacted by people. But … clinical partners and funding agencies who will hopefully support this work is the next step that needs to happen.”

Video Game Approved as Prescription Medicine

Post Syndicated from Mark Anderson original https://spectrum.ieee.org/consumer-electronics/gaming/video-game-approved-as-prescription-medicine

This summer, a video game became available by prescription in the United States. This first-ever FDA-approved digital treatment builds on a tradition of gaming as a therapeutic tool that extends back more than a decade. Its game play gets good reviews, too.

As video games have expanded their reach beyond the dedicated platforms (like Nintendo consoles, Xboxes, and PlayStations) where they’d long thrived, to smartphones and tablets, therapeutic games have also made the leap. Some have ratcheted up their intentionally addictive game design—though, according to the developers, not in the service of ad clicks and in-game purchases but rather of self-improvement and therapeutic benchmarks.

On 15 June, the U.S. Food and Drug Administration announced its approval of a first-person racing game called EndeavorRx. Boston-based Akili Interactive Labs, maker of the game, says its racer was originally licensed from the lab of Adam Gazzaley, a neuroscientist at the University of California, San Francisco. The company touts four peer-reviewed studies (in PLOS One, The Lancet Digital Health, and the Journal of Autism and Developmental Disorders) as well as one paper in process as support for its claims that EndeavorRx significantly improves clinical markers of attention in patients with ADHD (attention deficit hyperactivity disorder).

“EndeavorRx looks and feels like a traditional game, but it’s very different,” says Matt Omernick, Akili cofounder and the company’s chief creative officer. “EndeavorRx uses a video-game experience to present specific sensory stimuli and simultaneous motor challenges designed to target and activate the prefrontal cortex of the brain…. As a child progresses in game play, the technology is continuously measuring their performance and using adaptive algorithms to adjust the difficulty and personalize the treatment experience for each individual.”

For Web developer and designer Craig Ferguson, anxiety and depression are both conditions that good video-game design can effectively treat. “Depression is a cycle of doing nothing, and then when you do nothing you feel worse,” he says. “If you do anything that is meaningful for you, that breaks you out of that cycle.”

Ferguson, back when he was a developer and designer at MIT’s Media Lab, worked with Chelsey Wilks, a psychology postdoc at Harvard. Wilks studies behavioral-activation therapy—a form of depression treatment that promotes activities for the depressive person that break depression’s vicious feedback loops.

“It’s a well-known fact that so many mobile games use all sorts of psychological tricks to get people to give them money,” he says. “So we wanted to use the same exact psychological tricks, but to trick [users] into doing something that’s good for them.”

For instance, he says, last year the Journal of Medical Internet Research published a study he coauthored with other Media Lab researchers that used a Pokémon-like digital-pet-collection mobile game as a backdoor route to induce patients to self-report symptoms and behaviors. (Clinicians who rely on patients to self-report—say, their mood swings or their daily activities—via paper diaries or electronic journals often have trouble with compliance.) Self-reporting via the specially designed mobile game, at 86 percent compliance, outstripped self-reporting via e-journal (78 percent compliance) and paper journals (71 percent compliance).

The Guardians: Unite the Realms, developed by Ferguson and his colleagues, is now available to download for free on both Android and iOS devices.

Using games for physical therapy—especially therapies tied to specific diseases and conditions—sometimes requires a different approach than the increasingly tablet- and phone-based psychological-therapy games, says Pier Luca Lanzi, a professor of computer engineering at Milan Polytechnic.

Lanzi teaches video-game design and development and has been studying therapeutic gaming since 2016, when he first published a paper about developing effective “exergames” as forms of therapy.

Lanzi says physical-therapy games require special setups and controllers that rule out the touch-screen-only interfaces of tablets and smartphones. Earlier this year, he collaborated on a study that used a PC-based video game to accompany therapist-designed physical spaces—outfitted with balance boards, tiles, and roll pillows—for patients with juvenile idiopathic arthritis (JIA).

“The literature is massive in this field,” Lanzi says. “But from time to time, you have new sensors that come out, and as the sensor improves and becomes smaller, it can be worn. You can actually do much more.”

So although Lanzi and his collaborators’ current-generation JIA therapy game uses the discontinued Microsoft Kinect motion sensor, their paper notes the expanded game-therapy possibilities that newer sensing technologies like the rebranded Azure Kinect and Intel RealSense depth camera can provide.

Of course, virtual reality also has applications in the therapy gaming field. However, Lanzi adds, not every therapy scenario will be ideal for VR. Physical rehabilitation, for instance, requires a patient to remain aware of his or her body and how it moves through space. VR tends to shut that sensory feedback down.

VR gaming, he said, shows much more promise as a form of cognitive rehabilitation—for, say, stroke survivors or autistic patients.

Whatever the form of screen, he said, therapy games can, at best, introduce a sense of fun to physical therapies that can otherwise be tedious or worse.

“In JIA, exercises are very painful for the joints,” Lanzi says. “They’re also very repetitive. So you try to find something that’s enjoyable. Something that makes the time pass by…. We try to make it feel like it’s not rehabilitation at all.”

This article appears in the August 2020 print issue as “Prescription-Strength Gaming.”

How Scientists Encoded “The Wizard of Oz” Into DNA

Post Syndicated from Mark Anderson original https://spectrum.ieee.org/nanoclast/semiconductors/memory/dna-data-storage-method-sets-standard-for-highdensity-data-future


Synthetic DNA as a high-density data-storage medium has fascinated digital futurists for years. The entire internet could be coded into DNA strands that fit inside a shoebox, and the DNA molecule is so stable that it can last tens of thousands or even hundreds of thousands of years. In 2013, for instance, scientists sequenced the entire genome of a 700,000-year-old horse fossil.

The trick to date has involved shoehorning vast quantities of bytes—a format tailor-made for linear, sequential stores like RAM and hard drives—into wet, squiggly forests of nano-sized deoxyribonucleic spaghetti noodles. Translating one data format to the other has been anything but straightforward.

Enter William Press’s team at the University of Texas at Austin. They’ve pioneered a set of DNA data encoding and decoding algorithms that could jumpstart a new field of high-density, long-term data storage. Their work, reminiscent in its generative ambition of the landmark BB84 protocol that launched the field of quantum cryptography, could one day form the basis for a world of genomic data storage applications that come from reimagining information in terms of petabytes per gram.

Stephen Jones, a postdoc in Press’s group and co-author of the Proceedings of the National Academy of Sciences paper that describes their research, says it’s best to begin by understanding where data storage errors typically creep in. In traditional hard-drive and flash memory devices, bit-flips and erasures are the enemies of zeroes and ones.

“We have decades of beautiful work finding solutions to these two kinds of errors,” Jones said. “But DNA is fundamentally different.”

To make a workable DNA data-storage standard, you instead need to worry about substitutions, insertions, and deletions. The first is similar to a bit flip: say, an A nucleotide substituted where a T nucleotide used to be. (The base language of DNA information is A, C, G, and T, not 0 and 1.) The latter two classes of error represent cases, as the names suggest, where nucleotides are inserted into or deleted from a strand.

Crucially, however, with DNA there is no reliable, inherent way of knowing whether the strand you’re reading contains any substitution, insertion, or deletion errors. There’s no such thing as a countable, quantifiable DNA “memory register.” Every position is just another nucleotide in a long sequence, and together they all form just another strand of DNA.
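A toy sketch (ours, not the researchers’ code) makes the problem concrete: after random substitutions, insertions, and deletions, the corrupted strand is just another valid-looking DNA sequence, with nothing flagging where, or whether, errors occurred.

```python
import random

def corrupt(strand, p_sub=0.02, p_ins=0.02, p_del=0.02, seed=0):
    """Apply the three DNA error classes at random (toy model)."""
    rng = random.Random(seed)
    bases = "ACGT"
    out = []
    for base in strand:
        r = rng.random()
        if r < p_del:
            continue                                    # deletion: the base vanishes
        elif r < p_del + p_ins:
            out.append(rng.choice(bases))               # insertion: a stray base appears
            out.append(base)
        elif r < p_del + p_ins + p_sub:
            out.append(rng.choice(bases.replace(base, "")))  # substitution: wrong base read
        else:
            out.append(base)
    return "".join(out)

noisy = corrupt("ACGT" * 25)
# the result is a perfectly plausible strand -- possibly of a different
# length -- with no "memory register" marking any of the errors
```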

The relative nature of DNA data storage, in fact, is key to Press, Jones, and their co-authors’ HEDGES protocol (short for Hash Encoded, Decoded by Greedy Exhaustive Search). No single, isolated nucleotide in their protocol contains usable data. Rather, it’s the accumulation of sequences of nucleotides that provides a robust storage system—one they predict could achieve DNA’s high-density potential while still enduring the eons.

The group used L. Frank Baum’s The Wizard of Oz, translated into Esperanto no less, as their sample data set. Synthetic DNA these days, Jones said, typically comes in strands of a hundred or so base pairs. That’s the foundation of their “hard drive,” as it were.

So the data needed to be chopped up into thousands or millions of little hundred-nucleotide sequences, each carrying the information required to reassemble the source text—even with an unknown number of substitution, insertion, and deletion errors thrown in for good measure.

Encoding The Wizard of Oz into DNA, then, involved passing the data through an “outer” coding layer and an “inner” coding layer. (Think of these steps as two separate algorithms in a complex cryptographic standard.)

The outer layer diagonalizes the source data so that any given strand of DNA contains shards of many portions of the message. The inner layer, HEDGES, then translates each bit into an A, C, T, or G according to an algorithm that depends on the bit’s zero-or-one value, its place in the data stream, and the data bits immediately preceding it.
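A heavily simplified sketch of that inner-code idea (illustrative only; the published HEDGES algorithm differs in its details, and the function name and parameters here are ours): each output nucleotide hashes together the bit’s position and a window of preceding bits, so a decoder can hypothesize bit sequences and check them greedily against the observed strand.

```python
import hashlib

def hedges_like_encode(bits, context=8):
    """Map each bit to a nucleotide that depends on the bit's value,
    its position, and the bits immediately preceding it (toy version)."""
    bases = "ACGT"
    strand = []
    for i, bit in enumerate(bits):
        window = tuple(bits[max(0, i - context):i])
        h = hashlib.sha256(repr((i, window)).encode()).digest()[0]
        # position + history pick one of two bases per bit value:
        # bit 0 -> A or C, bit 1 -> G or T
        strand.append(bases[2 * bit + (h & 1)])
    return "".join(strand)
```

Because every nucleotide is entangled with its position and history, a greedy decoder tracking candidate bit strings can resynchronize even after an insertion or deletion shifts the reading frame—the property the Texas team exploits.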

Once Oz is translated into the language of nucleotides, it’s ready to be written onto strands of synthetic DNA. The encoded strands then sat in storage, where, Jones said, his job was to artificially age the genetic information—biochemically mutating the DNA strands and subjecting the sample to heat and cold damage.

“I beat the DNA up,” he said. “Then after we’d beaten it up, we saw if we could recover The Wizard of Oz. The answer was Yes. It showed how robust DNA is. We had to really work hard to beat it up. It might be easier to do if you have 10,000 years buried in the earth or out in outer space or something. But we had to really accelerate the process.”

Decoding the data from their DNA storage entailed first sequencing the Wizard of Oz genome and then translating that genetic data back into bits. Once the researchers figured out which bits were “address” bits, they could string the remaining information bits back together into a single, concatenated data file.

Fellow postdoc and fellow co-author John Hawkins said one of the most attractive features of their new protocol is how robust it is to technological and data format changes over the centuries to come.

“Reading DNA will never become obsolete,” he said. “Data surviving into the future is only half the problem. You still need to be able to read it on the other end. [But] DNA is uniquely future-proof on this front because we are made of it. As long as humans are made of DNA, we will always want machines around that can read it.”

Attention Rogue Drone Pilots: AI Can See You!

Post Syndicated from Mark Anderson original https://spectrum.ieee.org/tech-talk/robotics/artificial-intelligence/attention-rogue-drone-pilots-ai-can-see-you

The minute details of a rogue drone’s movements in the air may unwittingly reveal the drone pilot’s location—possibly enabling authorities to bring the drone down before, say, it has the opportunity to disrupt air traffic or cause an accident. And that’s possible without expensive arrays of radio-triangulation and signal-location antennas.

So says a team of Israeli researchers who have trained an AI drone-tracking algorithm to reveal the drone operator’s whereabouts with better than 80 percent accuracy. They are now investigating whether the algorithm can also uncover the pilot’s level of expertise and possibly even their identity.

Gera Weiss—professor of computer science at Ben-Gurion University of the Negev in Beersheba, Israel—said the algorithm his team has developed partly relies on the specific terrain around an airport or other high-security location.

After testing neural nets including dense networks and convolutional neural networks, the researchers found that a kind of recurrent neural net called a “gated-recurrent unit” (GRU) network worked best for drone tracking. “Recurrent networks are good at this,” Weiss said. “They consider the sequenced reality of the data—not just in space but in time.”

So, he said, a security team at an airport, for instance, would hire white-hat operators to launch a drone from various locations around the airport. The security team would then record the drone’s exact movements on airport radar systems.

Ultimately, the GRU algorithm would then train on this data—knowing in this case the pilot’s location and the peculiar details of the drone’s flight patterns.
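For the curious, here is the shape of the computation inside a GRU, as a bare-bones NumPy sketch with random, untrained weights. The researchers haven’t published their network at this level of detail, so the dimensions, inputs, and structure below are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def gru_step(x, h, W, U):
    """One GRU update: gates decide how much of the running summary
    of the trajectory to keep versus overwrite."""
    sig = lambda a: 1.0 / (1.0 + np.exp(-a))
    z = sig(W["z"] @ x + U["z"] @ h)              # update gate
    r = sig(W["r"] @ x + U["r"] @ h)              # reset gate
    h_cand = np.tanh(W["h"] @ x + U["h"] @ (r * h))
    return (1 - z) * h + z * h_cand               # blend old state and candidate

def encode_track(track, hidden=16):
    """Fold a sequence of (x, y, z) drone positions into one hidden
    vector; a trained classifier over candidate pilot locations
    would sit on top of this."""
    d = track.shape[1]
    W = {k: 0.1 * rng.standard_normal((hidden, d)) for k in "zrh"}
    U = {k: 0.1 * rng.standard_normal((hidden, hidden)) for k in "zrh"}
    h = np.zeros(hidden)
    for x in track:                               # sequenced in time, as Weiss notes
        h = gru_step(x, h, W, U)
    return h
```

The gating is what makes recurrent networks suited to “the sequenced reality of the data”: the hidden state carries a compressed history of the whole flight, not just the current radar fix.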

Depending on the specific terrain at any given airport, a pilot operating a drone near a camouflaging patch of forest, for instance, might have an unobstructed view of the runway. But that location might also be a long distance away, possibly making the operator more prone to errors in precise tracking of the drone. Whereas a pilot operating nearer to the runway may not make those same tracking errors but may also have to contend with big blind spots because of their proximity to, say, a parking garage or control tower.

And in every case, he said, simple geometry could begin to reveal important clues about a pilot’s location, too. When a drone is far enough away, motion along a pilot’s line of sight can be harder for the pilot to detect than motion perpendicular to their line of sight. This also could become a significant factor in an AI algorithm working to discover pilot location from a particular drone flight pattern.
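That geometry is easy to quantify with a back-of-the-envelope calculation (the distances and sizes here are our own illustrative numbers):

```python
import math

def angular_cues(distance, move=5.0, drone_size=0.5):
    """Compare the visual cues a pilot gets from a `move`-meter hop
    at `distance` meters: sideways motion sweeps across the sky,
    while radial motion only changes the drone's apparent size."""
    sweep = math.degrees(math.atan2(move, distance))
    size_before = math.degrees(math.atan2(drone_size, distance))
    size_after = math.degrees(math.atan2(drone_size, distance - move))
    return sweep, size_after - size_before

sweep, growth = angular_cues(200.0)
# at 200 m, a 5 m sideways hop sweeps about 1.4 degrees of sky, while
# the same hop straight toward the pilot grows the drone's image by
# only a few thousandths of a degree
```

A pilot therefore corrects sideways drift far more crisply than radial drift, and that asymmetry is itself a clue to where the pilot is standing.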

The sum total of these various terrain-specific and terrain-agnostic effects, then, could be a giant finger pointing to the operator. This AI application would also be unaffected by any relay towers or other signal spoofing mechanisms the pilot may have put in place.

Weiss said his group tested their drone tracking algorithm using Microsoft Research’s open source drone and autonomous vehicle simulator AirSim. The group presented their work-in-progress at the Fourth International Symposium on Cyber Security, Cryptology and Machine Learning at Ben-Gurion University earlier this month.

Their paper reports a 73 percent accuracy rate in discovering drone pilots’ locations. Weiss said that in the few weeks since publishing that result, they’ve improved the accuracy rate to 83 percent.

Now that the researchers have proved the algorithm’s concept, Weiss said, they’re hoping next to test it in real-world airport settings. “I’ve already been approached by people who have the flight permissions,” he said. “I am a university professor. I’m not a trained pilot. Now people that do have the facility to fly drones [can] run this physical experiment.”

Weiss said it’s as yet unclear how terrain-agnostic their algorithm is. Could a neural net trained on the terrain surrounding one airport then be effectively deployed at another airport—or another untrained region of the same airport?

Another open question, he said, involves whether the algorithm could also be reversed: Could drone flight patterns around an unmapped terrain then be used to discover features of the terrain?

Weiss said they hope to tackle these questions in future research, alongside possible applications of a series of recent findings that attempt to rate operator skill levels from motion tracking data.

One finding even goes so far as to pinpoint and classify idiosyncrasies in piloting skills—so perhaps repeat offenders might one day be spotted by the thumbprint their copter leaves on the sky.

AI Seeks ET: Machine Learning Powers Hunt for Life in the Solar System

Post Syndicated from Mark Anderson original https://spectrum.ieee.org/tech-talk/aerospace/robotic-exploration/ai-seeks-et-machine-learning-life-solar-system

Can artificial intelligence help the search for life elsewhere in the solar system? NASA thinks the answer may be “yes”—and not just on Mars either.

A pilot AI system is now being tested for use on the ExoMars mission that is currently slated to launch in the summer or fall of 2022. The machine-learning algorithms being developed will help science teams decide how to test Martian soil samples to return only the most meaningful data.

For ExoMars, the AI system will be used only back on Earth, to analyze data gathered by the ExoMars rover. But if the system proves as useful as now suspected, a NASA mission to Saturn’s moon Titan (now scheduled for a 2026 launch) could automate the scientific sleuthing process in the field. That mission will rely on the Dragonfly octocopter drone to fly from surface location to surface location through Titan’s dense atmosphere and drill for signs of life there.

The hunt for microbial life in another world’s soil, either as fossilized remnants or as present-day samples, is very challenging, says Eric Lyness, software lead of the NASA Goddard Planetary Environments Lab in Greenbelt, Md. There is of course no precedent to draw upon, because no one has yet succeeded in astrobiology’s holy grail quest.

But that doesn’t mean AI can’t provide substantial assistance. Lyness explained that for the past few years he’d been puzzling over how to automate portions of an exploratory mission’s geochemical investigation, wherever in the solar system the scientific craft may be.

Last year he decided to try machine learning. “So we got some interns,” he said. “People right out of college or in college, who have been studying machine learning. … And they did some amazing stuff. It turned into much more than we expected.” Lyness and his collaborators presented their scientific analysis algorithm at a geochemistry conference last month.

ExoMars’s rover—named Rosalind Franklin, after the scientist whose X-ray images were key to discovering DNA’s structure—will be the first that can drill down to 2-meter depths, beyond where solar UV light might penetrate and kill any life-forms. In other words, ExoMars will be the first Martian craft able to reach soil depths where living soil bacteria could possibly be found.

“We could potentially find forms of life, microbes or other things like that,” Lyness said. However, he quickly added, very little conclusive evidence today exists to suggest that there’s present-day (microbial) life on Mars. (NASA’s Curiosity rover has sent back some inexplicable observations of both methane and molecular oxygen in the Martian atmosphere that could conceivably be a sign of microbial life forms, though non-biological processes could explain these anomalies too.)

Less controversially, the Rosalind Franklin rover’s drill could also turn up fossilized evidence of life in the Martian soil from earlier epochs when Mars was more hospitable.

NASA’s contribution to the joint Russian/European Space Agency ExoMars project is an instrument called a mass spectrometer that will be used to analyze soil samples from the drill cores. Here, Lyness said, is where AI could really provide a helping hand.

The spectrometer, which studies the mass distribution of ions in a sample of material, works by blasting the drilled soil sample with a laser and then mapping out the atomic masses of the various molecules and portions of molecules that the laser has liberated. The problem is that any given mass spectrum could originate from any number of source compounds, minerals, and components, which makes analyzing a mass spectrum a gigantic puzzle.

Lyness said his group is studying the mineral montmorillonite, a commonplace component of the Martian soil, to see the many ways it might reveal itself in a mass spectrum. Then his team sneaks in an organic compound with the montmorillonite sample to see how that changes the mass spectrometer output.

“It could take a long time to really break down a spectrum and understand why you’re seeing peaks at certain [masses] in the spectrum,” he said. “So anything you can do to point scientists into a direction that says, ‘Don’t worry, I know it’s not this kind of thing or that kind of thing,’ they can more quickly identify what’s in there.”

Lyness said the ExoMars mission will provide a fertile training ground for his team’s as-yet-unnamed AI algorithm. (He said he’s open to suggestions—though, please, no spoof Boaty McBoatface submissions need apply.)

Because the Dragonfly drone and possibly a future astrobiology mission to Jupiter’s moon Europa would be operating in much more hostile environments with much less opportunity for data transmission back and forth to Earth, automating a craft’s astrobiological exploration would be practically a requirement.

All of which points to a future in the mid-2030s in which a nuclear-powered octocopter on a moon of Saturn flies from location to location, drilling for evidence of life on that tantalizingly bio-possible world. And machine learning will help power the science.

“We should be researching how to make the science instruments smarter,” Lyness said. “If you can make it smarter at the source, especially for planetary exploration, it has huge payoffs.”

Making Blurry Faces Photorealistic Goes Only So Far

Post Syndicated from Mark Anderson original https://spectrum.ieee.org/tech-talk/computing/software/making-blurry-faces-photorealistic-goes-only-so-far

One more trope of Hollywood spy movies has now taken at least a partial turn toward science-fact. You’ve seen it: To save the world, the secret agent occasionally needs a high-res picture of a human face recovered from a very blurry, grainy, or pixelated image.

Now, artificial intelligence has delivered a partial (though probably ultimately unhelpful) basket of goods for that fictional spy. It’s only partial because Claude Shannon’s information theory always gets the last laugh. As a new algorithm demonstrates for anyone to see, an AI-generated photorealistic face “upsampled” from the low-res original probably looks very little like the person our hero is racing the clock to track down. There may be no resemblance at all.

Sorry, Mr. Bond and Ms. Faust. That handful of pixels in the source image contains only so much information. However convincingly the AI renders that imaginary face—and computer-generated faces can be quite uncanny these days—there’s no dodging the fact that the original image was very information-sparse. No one, not even someone with a license to kill, gets to extract free information out of nowhere.
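The information argument can be demonstrated in a few lines (a toy construction of ours): two genuinely different high-res images that average-pool to the identical low-res image, so no recovery algorithm, however clever, could tell them apart.

```python
import numpy as np

def downsample(img, factor=8):
    """Average-pool: the many-to-one map that throws information away."""
    h, w = img.shape
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

rng = np.random.default_rng(0)
a = rng.random((32, 32))
b = a.copy()
pixels = b[:8, :8].ravel()      # shuffle the pixels within one 8x8 block...
rng.shuffle(pixels)
b[:8, :8] = pixels.reshape(8, 8)

assert not np.array_equal(a, b)                   # the images differ...
assert np.allclose(downsample(a), downsample(b))  # ...their low-res twins don't
```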

But that’s not the end of the story, says Cynthia Rudin, professor of computer science at Duke University in Durham, N.C. There may be other kinds of value to be extracted from the AI algorithm she and her colleagues have developed.

For starters, Rudin said, “We kind of proved that you can’t do facial recognition from blurry images because there are so many possibilities. So zoom and enhance, beyond a certain threshold level, cannot possibly exist.”

However, Rudin added that “PULSE,” the Python module her group developed, could have wide-ranging applications beyond just the possibly problematic “upsampling” of pixelated images of human faces. (Though it’d only be problematic if misused for facial recognition purposes. Rudin said there are no doubt any number of unexplored artistic and creative possibilities for PULSE, too.)

Rudin and four collaborators at Duke developed their Photo Upsampling via Latent Space Exploration algorithm (accepted for presentation at the 2020 Conference on Computer Vision and Pattern Recognition earlier this month) in response to a challenge.

“A lot of algorithms in the past have tried to recover the high-resolution image from the low-res/high-res pair,” Rudin said. But according to her, that’s probably the wrong approach. Most real-world applications of this upsampling problem would involve having access to only the low-res original image. That would be the starting point from which one would try to recreate the high-resolution equivalent of that low-res original.

“When we finally abandoned trying to come up with the ground truth, we then were able to take the low-res [picture] and try to construct many very good high-res images,” Rudin said.
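That strategy can be sketched with a linear stand-in generator (the real PULSE searches the latent space of NVIDIA’s StyleGAN; everything below, from the matrix sizes down, is a toy of ours): pick a latent vector, generate an image, downsample it, compare against the observed low-res image, and descend.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "generator": a fixed random linear map from an 8-dim latent
# space to a 64-pixel "image". The real PULSE plugs StyleGAN in here;
# this toy only illustrates the search strategy.
G = rng.standard_normal((64, 8))

# Average-pooling matrix: 64 pixels -> 16 pixels (the "blurring").
M = np.zeros((16, 64))
for i in range(16):
    M[i, 4 * i:4 * (i + 1)] = 0.25

def pulse_like_search(y, steps=2000):
    """Descend through latent space until the generated image, once
    downsampled, matches the observed low-res image y."""
    A = M @ G                              # composite low-res forward map
    lr = 1.0 / np.linalg.norm(A, 2) ** 2   # step size safe for convergence
    z = np.zeros(8)
    for _ in range(steps):
        z -= lr * (A.T @ (A @ z - y))      # least-squares gradient step
    return G @ z                           # a plausible high-res image
```

With a real, nonlinear GAN, different random starting points for z land in different minima—which is exactly how PULSE produces its many distinct, equally plausible high-res candidates.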

So while PULSE looks beyond the failure point of facial recognition applications, she said, it may still find applications in fields that grapple with their own blurry images—among them, astronomy, medicine, microscopy, and satellite imagery.

Rudin cautions: So long as anyone using PULSE understands that it generates a broad range of possible images, any one of which could be the progenitor of the blurry image that’s available, PULSE has the potential to give researchers a better understanding of a given space of possible images.

Say, for instance, an astronomer has a blurry image of a black hole. Coupled with an AI imaging tool that generates astronomical images, PULSE could render many possible astrophysical scenarios that might have yielded that low-res photograph.

At the moment, PULSE is optimized for human faces, because NVIDIA already developed an AI “generative adversarial network” (GAN) that creates photorealistic human faces. So the applications the PULSE team explored built atop NVIDIA’s StyleGAN algorithm.

In other words, PULSE provides the sorting and exploring tools that sit atop the GAN, which, on its own, mindlessly sprays out endless supplies of images of whatever it has been trained to make.

Rudin also sees a possible PULSE application in the field of architecture and design.

“There’s not a StyleGAN for that many other things right now,” she said. “It’d be nice to be able to generate rooms. If you had just a few pixels, it’d be nice to develop a full picture of a room. That would be cool. And that’s probably coming.

“Anytime you have that kind of generative modeling, you can use PULSE to search through that space,” she said.

And so long as searching through that space doesn’t involve a ticking timebomb set to detonate when it hits “00:00,” this PULSE may still ultimately open more doors than it blows off its hinges.

Ultraviolet revolution: Could “far-UV” light provide widespread, safe disinfection of airborne viruses?

Post Syndicated from Mark Anderson original https://spectrum.ieee.org/tech-talk/biomedical/devices/ultraviolet-revolution-could-faruv-light-provide-widespread-safe-disinfection-of-airborne-viruses

A short-wavelength ultraviolet light technology, beyond the decades-old mercury lamp, may be the most promising breed of UV yet studied for killing airborne coronavirus and other viruses and bacteria. Two studies—one now under peer review, and one reporting preliminary results from an ongoing experiment—bolster the case that this UV is safe for human eyes and skin.

If the initial promise of so-called “far-UV” pans out, prepare to hear the words “krypton-chlorine excimer lamp” more often later this year and into 2021. That’s because KrCl lamps produce 222-nanometer UV efficiently enough that they might soon be broadly deployed to disinfect hospitals, doctors’ offices, grocery stores, office buildings, shopping centers, airports, trains, airplanes, public transit, elevators—potentially anywhere people gather in indoor spaces.

Or not. All safety and efficacy findings for far-UV are, for all their promise, still preliminary. And far-UV ultimately stands or falls on its own terms, apart from the well-established UV-C technologies—whose effectiveness against viruses and bacteria is well known, although direct exposure to longer-wavelength UV-C is also known to harm human skin and eyes.

That difference only increases the possible leap forward that far-UV light fixtures could represent with respect to taming the coronavirus pandemic, says David Brenner, director of Columbia University’s Center for Radiological Research.

Brenner and collaborators have been studying far-UV’s sterilizing effects against microbes and viruses for more than a decade, he said in an interview this week.

“All the research we’ve been doing over the years was obviously not about COVID-19 or SARS-CoV-2,” he said. “We were focused on influenza, which in some sense is the same story: airborne transmission. So we had anticipated that by the next flu season, these things would start to be installed.”

According to a new study produced by Brenner’s group that is currently undergoing peer review, a lamp emitting far-UV light, bathing a room in 222-nm ultraviolet light at levels beneath the current industry threshold limit, would inactivate 99.9 percent of coronaviruses in the air in under 25 minutes. (The group’s work examined far-UV’s effect on other airborne coronaviruses, which they say will very likely have the same response as the SARS-CoV-2 novel coronavirus. Nevertheless, Brenner said, they’re now extending their work to include the effect of far-UV on the novel coronavirus, too.)
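Assuming the first-order (log-linear) inactivation kinetics such disinfection studies typically report, that quoted figure pins down the decay rate (the arithmetic below is ours, not numbers from the paper):

```python
import math

# surviving fraction S(t) = exp(-k * t); 99.9 percent inactivated
# (S = 0.001) in about 25 minutes implies a rate constant of
k = math.log(1000) / 25        # ~0.28 per minute

# from which other kill levels follow directly, e.g. 99 percent:
t_99 = math.log(100) / k       # ~16.7 minutes
```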

Far-UV light has a shorter wavelength than traditional UV-C, which means it carries more energy per photon. It also travels a shorter distance through human and animal tissue, according to Brenner’s colleague Manuela Buonanno, an associate research scientist at the Center for Radiological Research and a specialist in far-UV’s effects on biological systems.

Buonanno said that the proteins in tissue cut the intensity of a 222-nm far-UV beam in half after traveling just 0.3 micrometers. Compare that to traditional UV-C light (with a 254-nm wavelength), whose tissue penetration depth is ten times longer, at 3 µm.

While far-UV won’t make it through even a fraction of the 5-to-10-micrometer-thick layer of dead skin on a person’s body, at least some UV-C light could make it through to live skin cells, where that UV-C might either kill them or render them cancerous.
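Buonanno’s half-depth figures make the contrast easy to check with Beer-Lambert-style attenuation (our arithmetic): intensity falls by half for each half-depth traversed.

```python
# fraction of incident light surviving the thinnest (5 um) dead-skin
# layer, given a half-depth of 0.3 um (222-nm far-UV) vs. 3 um (254-nm UV-C)
far_uv = 0.5 ** (5 / 0.3)      # ~1e-5: essentially nothing gets through
uv_c = 0.5 ** (5 / 3)          # ~0.31: roughly a third gets through
```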

The same goes for the eyes; direct exposure to regular UV-C can cause eye damage. However, Buonanno said, “We have an ongoing study in which mice are exposed to [far-UV] light five days a week, eight hours a day.” The 96 (hairless) mice in the study are given regular exams to discover if their skin has reacted to the radiation or if their eyes have been adversely affected.

Brenner shared a recent preliminary report after 43 weeks of far-UV exposure for the mice. “We see no difference between any of the [animals in the] exposure and the control [groups],” the interim report says. The research team, he said, will continue their mouse study for at least another few months—until they’ve collected data based on 60 weeks of daily far-UV exposure to the mice.

Thus far, the mouse finding is consistent with most human safety studies of far-UV, Brenner said, such as a letter in a recent issue of the journal Photodermatology, Photoimmunology & Photomedicine. In this study, Scottish scientists controverted a previous finding from 2015 which claimed that far-UV light can cause harm to human skin.

The new study performed computer simulations which found that the harm the 2015 paper discovered was most likely caused by longer-wavelength (UV-C) light that was secondarily generated by the far-UV lamp, casting the old finding into doubt.

“What needs to be done with 222-nm light is to have a filter,” Brenner said. “The companies that I’m aware of all do that. Which basically blocks out any of the 240- and 250-nm-wavelength light.”

Buonanno said that threshold levels of longer-wavelength UV light have yet to be established. If a far-UV lamp can filter out 99.9 percent of longer and more damaging UV-C rays, will that be enough? Or could the 0.1 percent of the other UV light that gets through still cause harm?

“We do not know yet,” Buonanno said. “We are planning a study to measure safety after exposure to other wavelengths—say from 225 to 255 nm—to address this question in a more systematic way.”

Brenner said that while KrCl excimer lamps are currently the only game in town for generating far-UV light, he remains hopeful that far-UV LEDs might soon be developed. LEDs have become available at wavelengths as short as 300 nm and, lately, even 250 nm. But, he said, UV LEDs that can efficiently generate 230- or even 220-nm light have not yet been invented.

The main reason, according to Brenner: “Up till now there hasn’t been a huge demand for 230-nm LEDs. There hasn’t been that much work on it. But I’m very much hoping that folks—maybe your readers—will take this on board.”

Used EV Batteries Could Power Tomorrow’s Solar Farms

Post Syndicated from Mark Anderson original https://spectrum.ieee.org/energywise/energy/batteries-storage/used-ev-batteries-could-power-tomorrows-solar-farms

As the number of electric vehicles on the world’s roads multiplies, a variety of used EV batteries will inevitably come into the marketplace. This, says a team of MIT researchers, could provide a golden opportunity for solar energy: grid-scale renewable energy storage. This application, they find, can run efficiently on batteries that aren’t quite up to snuff for your Tesla or Chevy Bolt.

Slow, Steady Progress for Two U.S. Nuclear Power Projects

Post Syndicated from Mark Anderson original https://spectrum.ieee.org/energy/nuclear/slow-steady-progress-for-two-us-nuclear-power-projects

There are 53 nuclear reactors currently under construction around the world. Only two are in the United States, once the world’s leader in nuclear energy development. And those two reactors represent expansions of a preexisting two-reactor facility, Plant Vogtle in Waynesboro, Ga.

Separately, a company in Portland, Ore., called NuScale Power is now working with the U.S. Nuclear Regulatory Commission to develop a next-generation reactor built around a smaller-scale, modular design.

These two projects together represent the leading edge of commercial U.S. nuclear-fission reactor development today. The fact that there are only two raises questions about the direction of this once-booming energy sector. Is the United States redirecting its focus onto fusion and leaving fission behind? Or could a fission renaissance be yet to come?

Congress upped the U.S. Department of Energy’s nuclear fusion budget from US $564 million to $671 million for fiscal year 2020. And such companies as AGNI Energy in Washington state and Commonwealth Fusion Systems in Massachusetts (alongside Tokamak Energy and General Fusion in the United Kingdom and Canada, respectively) are courting venture capital for their multimillion-dollar visions of fusion’s bright future.

Meanwhile, in March, construction workers at the Vogtle fission plant hoisted a 680,000-kilogram steel-and-concrete structure to cap one of the containment vessels for the new AP1000 reactors. As John Kraft, a spokesperson for Georgia Power, explained, “The shield building is a unique feature of the AP1000 reactor design for Vogtle 3 and 4, providing an additional layer of safety around the containment vessel and nuclear reactor to protect the structure from any potential impacts.”

The AP1000 pressurized-water reactor, designed by Westinghouse, is a 21st-century “new” reactor. It has been deployed in just two other places, both in China: two AP1000 reactors at the Sanmen Nuclear Power Station in Zhejiang province and two at the Haiyang Nuclear Power Plant in Shandong province. According to the International Atomic Energy Agency (IAEA), the AP1000 reactors at these locations operate at 1,157 megawatts and 1,126 MW, respectively.

In 2005, the Nuclear Regulatory Commission (NRC) certified the AP1000 design, clearing the way for its sale and installation at these three sites more than a decade later. Last year, Dan Brouillette, the U.S. secretary of energy, wrote in a blog post: “The U.S. Department of Energy (DOE) is all in on new nuclear energy.”

NuScale’s modular design—with 12 smaller reactors, each operating at a projected 60 MW—cleared Phase 4 of the NRC’s design review at the end of last year. According to Diane Hughes, vice president of marketing and communications for NuScale, “This means that the technical review by the NRC is essentially complete and that the final design approval is expected on schedule by September 2020.”

NuScale’s first customer, the Utah Associated Municipal Power Systems, plans to install a power plant with NuScale reactors at the Idaho National Laboratory site in Idaho Falls. The plant, Hughes said, is “slated for operation by the mid-2020s based on the NRC’s approved design.”

The idea of harnessing multiple smaller reactors in a single design is not new, dating back as far as the 1940s. At the time, the economics of the smaller, modular design could not compete with bigger, individual reactors, says M.V. Ramana, a nuclear physicist and professor at the University of British Columbia’s School of Public Policy and Global Affairs.

“Nuclear power is unlike almost any other energy technology, in that it’s the one tech where the costs have gone up, not down, with experience,” he said. “The way to think about it is that the more experience we have with nuclear power, the more we learn about potential vulnerabilities that can lead to catastrophic accidents.”

However, Hughes of NuScale counters that, unlike the 54 competing small modular reactor designs that the IAEA has records of, NuScale is “the first ever small modular reactor technology to undergo…NRC design certification review.”

And in 2018, an interdisciplinary MIT report on nuclear energy found that NuScale’s reactor is “quite innovative in its design. It has virtually eliminated the need for active systems to accomplish safety functions, relying instead on a combination of passive systems and the inherent features of its geometry and materials.”

Of course, while the number of catastrophic nuclear accidents (such as Three Mile Island, Chernobyl, and Fukushima) is small for the amount of energy that nuclear power has generated over the past 70 years, Ramana adds, the cost of each accident is astronomical: sacrificing human lives, uprooting countless more from disaster zones, and requiring cleanups that cost hundreds of billions of dollars. “One every other decade is not good enough,” Ramana said.

This article appears in the June 2020 print issue as “Limited Progress for U.S. Nuclear.”

How Network Science Surfaced 81 Potential COVID-19 Therapies

Post Syndicated from Mark Anderson original https://spectrum.ieee.org/the-human-os/computing/networks/network-science-potential-covid19-therapies


Researchers have harnessed the computational tools of network science to generate a list of 81 drugs used for other diseases that show promise in treating COVID-19. Some are already familiar—including the malaria and lupus treatments chloroquine and hydroxychloroquine—while many others are new, with no known clinical trials underway.

Since the concept was first proposed in 2007, network medicine has applied the science of interconnected relationships among large groups to networks of genes, proteins, interactions, and other biomedical factors. Both Harvard and MIT OpenCourseWare today offer classes in network medicine, while cancer research in particular has experienced a proliferation of network medicine studies and experimental treatments.

Albert-László Barabási, distinguished university professor at Northeastern University in Boston, is generally considered the founder of both network medicine and modern network science. In a recent interview via email, Barabási said COVID-19 represents a tremendous opportunity for a still fledgling science.

“In many ways, the COVID offers a great test for us to marshal the set of highly predictive tools that we as a community [have developed] in the past two decades,” Barabási said.

Last month, Barabási and ten co-authors from Northeastern, Harvard, and Brigham and Women’s Hospital in Boston published a preprint paper proposing a network-medicine framework for repurposing drugs as COVID-19 therapies. The paper has not yet been submitted for peer review, says Deisy Morselli Gysi, a postdoctoral researcher at Northeastern’s Network Science Institute.

“The paper is not under review anywhere,” she said. “But we are planning of course to submit it once we have [laboratory] results.”

In other words, the 81 potential COVID-19 drugs that their computational pipeline surfaced are now being investigated in wet-lab studies.

The number-one COVID-19 drug their network-based models predicted was the AIDS-related protease inhibitor ritonavir. The U.S. National Institutes of Health’s ClinicalTrials.gov website lists 108 active or recruiting trials (as of May 6) involving ritonavir, with a number of the current trials being for COVID-19 or related conditions.

However, the second-ranked potential COVID-19 drug their models surfaced was the antibacterial and anti-tuberculosis drug isoniazid. ClinicalTrials.gov, again as of May 6, listed 65 active or recruiting studies for this drug — none of which, however, were for coronavirus. The third- and fourth-ranked drugs (the antibiotic troleandomycin and cilostazol, a drug for strokes and heart conditions) also have no current coronavirus-related clinical trials, according to ClinicalTrials.gov.

Barabási said the group’s study took its lead from a massively collaborative paper from March 27 that identified 26 of the 29 proteins that make up the SARS-CoV-2 coronavirus particle. That paper also identified 332 human proteins that bind to those 26 coronavirus proteins.

Barabási, Gysi and co-researchers then mapped those 332 proteins to the larger map of all human proteins and their interactions. This “interactome” (a molecular biology concept first proposed in 1999) tracks all possible interactions between proteins.

Of the 332 human proteins that interact with the 26 known and studied coronavirus proteins, Barabási’s group found that 208 interact with one another. These 208 proteins form an interactive network, or what the group calls a “large connected component” (LCC). And the vast majority of these LCC proteins are expressed in the lung, which would explain why coronavirus manifests so frequently in the respiratory system: Coronavirus is made up of building blocks that can each chemically latch onto a network of interacting proteins, most of which are found in lung tissue.
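That “large connected component” is a standard construct from graph theory: the biggest connected subgraph of the interaction network. Here is a minimal sketch of the idea in Python, using a handful of invented placeholder protein names rather than the study’s actual interactome data:

```python
from collections import defaultdict, deque

# Hypothetical miniature interaction network (placeholder protein
# names, purely for illustration; not the study's real data).
interactions = [
    ("ACE2", "SLC6A19"), ("SLC6A19", "TMEM27"), ("ACE2", "AGT"),
    ("NUP98", "RAE1"),  # a second, smaller component
]

# Build an undirected adjacency map.
adj = defaultdict(set)
for a, b in interactions:
    adj[a].add(b)
    adj[b].add(a)

def connected_components(adj):
    """Yield each connected component as a set of nodes, via BFS."""
    seen = set()
    for start in adj:
        if start in seen:
            continue
        comp, queue = set(), deque([start])
        while queue:
            node = queue.popleft()
            if node in comp:
                continue
            comp.add(node)
            queue.extend(adj[node] - comp)
        seen |= comp
        yield comp

# The "large connected component" is simply the biggest one.
lcc = max(connected_components(adj), key=len)
print(sorted(lcc))  # ['ACE2', 'AGT', 'SLC6A19', 'TMEM27']
```

In the study itself the same operation runs over hundreds of proteins and their known interactions, but the principle is identical: the LCC is what falls out of an ordinary connected-components search.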

However, the lung was not the only site in the body where Barabási and co-authors discovered coronavirus network-based activity. They also discovered several brain regions whose expressed proteins interact in large connected networks with coronavirus proteins. That means their model predicts coronavirus could manifest in brain tissue as well for some patients.

That’s important, Gysi said, because when their models made this prediction, no substantial reporting had yet emerged about neurological COVID-19 comorbidities. Today, though, it’s well-known that some patients experience a neurological-based loss of taste and smell, while others experience strokes at higher rates.

Brains and lungs aren’t the only possible hosts for the novel coronavirus. The group’s findings also indicate that coronavirus may manifest in some patients in reproductive organs, in the digestive system (colon, esophagus, pancreas), kidney, skin and the spleen (which could relate to immune system dysfunction seen in some patients).

The first drug the FDA approved for emergency use specifically for COVID-19 is, of course, the antiviral remdesivir. However, Barabási and Gysi’s group did not surface that drug at all in their study.

There is a good reason for that, Gysi explained. Remdesivir targets the SARS-CoV-2 virus specifically, not any interactions between the virus and the human body. So remdesivir would not have shown up on the map in their network-science-based analysis, she said.

Barabási said his team is also investigating how network science can assist medical teams conducting contact tracing for COVID-19 patients.

“There is no question that the contact tracing algorithms will be network science based,” Barabási said.

Here Are the U.S. Regions Most Vulnerable to Solar Storms

Post Syndicated from Mark Anderson original https://spectrum.ieee.org/energywise/energy/the-smarter-grid/us-regions-most-vulnerable-solar-storms

A new study of solar-induced power outages in the U.S. electric grid finds that a few key regions—a portion of the American Midwest and the Eastern U.S. seaboard—appear to be more vulnerable than others.

The good news is that a few preventative measures could drastically reduce the damage done when a solar storm hits Earth. Those include stockpiling electrical transformers in national strategic reserves.

Jeffrey Love is a research geophysicist at the U.S. Geological Survey (USGS) in Golden, Colorado and co-author of the new USGS solar geoelectric hazard study. He’s one of many voices in the worldwide geophysical community warning that geoelectric “perfect storms” will happen—it’s not a question of if, but when. Such storms can last between one and three days. 

Love explains that solar flares and coronal mass ejections traveling through space can slam into Earth’s atmosphere and generate powerful electric and magnetic fields. These magnetic storms can occasionally be intense enough to interfere with the operation of high-voltage electricity lines.

Depending on the geology of a given region, the currents a geomagnetic storm induces in the power lines can destabilize the power grid’s operation and cause damage to (or even destroy) transformers. 

Surprise! 2020 Is Not the Year for Self-Driving Cars

Post Syndicated from Mark Anderson original https://spectrum.ieee.org/transportation/self-driving/surprise-2020-is-not-the-year-for-selfdriving-cars

In March, because of the coronavirus, self-driving car companies, including Argo, Aurora, Cruise, Pony, and Waymo, suspended vehicle testing and operations that involved a human driver. Around the same time, Waymo and Ford released open data sets of information collected during autonomous-vehicle tests and challenged developers to use them to come up with faster and smarter self-driving algorithms.

These developments suggest the self-driving car industry still hopes to make meaningful progress on autonomous vehicles (AVs) this year. But the industry is undoubtedly slowed by the pandemic and facing a set of very hard problems that have gotten no easier to solve in the interim.

Five years ago, several companies including Nissan and Toyota promised self-driving cars in 2020. Lauren Isaac, the Denver-based director of business initiatives at the French self-driving vehicle company EasyMile, says AV hype was “at its peak” back then—and those predictions turned out to be far too rosy.

Now, Isaac says, many companies have turned their immediate attention away from developing fully autonomous Level 5 vehicles, which can operate in any conditions. Instead, the companies are focused on Level 4 automation, which refers to fully automated vehicles that operate within very specific geographical areas or weather conditions. “Today, pretty much all the technology developers are realizing that this is going to be a much more incremental process,” she says.

For example, EasyMile’s self-driving shuttles operate in airports, college campuses, and business parks. Isaac says the company’s shuttles are all Level 4. Unlike Level 3 autonomy (which relies on a driver behind the wheel as its backup), the backup driver in a Level 4 vehicle is the vehicle itself.

“We have levels of redundancy for this technology,” she says. “So with our driverless shuttles, we have multiple levels of braking systems, multiple levels of lidars. We have coverage for all systems looking at it from a lot of different angles.”

Another challenge: There’s no consensus on the fundamental question of how an AV looks at the world. Elon Musk has famously said that any AV manufacturer that uses lidar is “doomed.” A 2019 Cornell research paper seemed to bolster the Tesla CEO’s controversial claim by developing algorithms that can derive from stereo cameras 3D depth-perception capabilities that rival those of lidar.

However, open data sets have cast doubt on the lidar doomsayers, says Sam Abuelsamid, a Detroit-based principal analyst in mobility research at the industry consulting firm Navigant Research.

Abuelsamid highlighted a 2019 open data set from the AV company Aptiv, which the AI company Scale then analyzed in two independent ways: first using camera data only, then using camera plus lidar data. The Scale team found that camera-only (2D) data sometimes drew inaccurate “bounding boxes” around vehicles and made poorer predictions about where those vehicles would be going in the immediate future—one of the most important functions of any self-driving system.

“While 2D annotations may look superficially accurate, they often have deeper inaccuracies hiding beneath the surface,” software engineer Nathan Hayflick of Scale wrote in a company blog about the team’s Aptiv data set research. “Inaccurate data will harm the confidence of [machine learning] models whose outputs cascade down into the vehicle’s prediction and planning software.”

Abuelsamid says Scale’s analysis of Aptiv’s data brought home the importance of building AVs with redundant and complementary sensors—and shows why Musk’s dismissal of lidar may be too glib. “The [lidar] point cloud gives you precise distance to each point on that vehicle,” he says. “So you can now much more accurately calculate the trajectory of that vehicle. You have to have that to do proper prediction.”
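Abuelsamid’s point about trajectories is easy to see in miniature. Given precise 3D positions for a tracked vehicle in two successive lidar frames, its velocity (and hence a short-horizon constant-velocity prediction) follows from simple arithmetic. All the numbers below are invented for illustration; real pipelines use many frames and far more sophisticated motion models:

```python
import math

# Centroid of a tracked vehicle, recovered from lidar returns,
# at two frames 0.1 s apart (meters; made-up values).
p_t0 = (12.0, 3.0, 0.2)   # position at t = 0.0 s
p_t1 = (11.2, 3.1, 0.2)   # position at t = 0.1 s
dt = 0.1

# Finite-difference velocity estimate between the two frames.
velocity = tuple((b - a) / dt for a, b in zip(p_t0, p_t1))
speed = math.hypot(*velocity)  # magnitude, in m/s

# Naive constant-velocity prediction 1 second into the future.
p_pred = tuple(p + v * 1.0 for p, v in zip(p_t1, velocity))
print(round(speed, 2))  # 8.06
print(p_pred)           # roughly (3.2, 4.1, 0.2)
```

With camera-only depth estimates, errors of even a meter or two in those positions would swing the velocity estimate wildly, which is exactly why precise lidar ranging helps prediction.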

So how soon might the industry deliver self-driving cars to the masses? Emmanouil Chaniotakis is a lecturer in transport modeling and machine learning at University College London. Earlier this year, he and two researchers at the Technical University of Munich published a comprehensive review of all the studies they could find on the future of shared autonomous vehicles (SAVs).

They found the predictions—for robo-taxis, AV ride-hailing services, and other autonomous car-sharing possibilities—to be all over the map. One forecast had shared autonomous vehicles driving just 20 percent of all miles driven in 2040, while another model forecast them handling 70 percent of all miles driven by 2035.

So autonomous vehicles (shared or not), by some measures at least, could still be many years out. And it’s worth remembering that previous predictions proved far too optimistic.

This article appears in the May 2020 print issue as “The Road Ahead for Self-Driving Cars.”

Can Quantum Computers Help Us Respond to the Coronavirus?

Post Syndicated from Mark Anderson original https://spectrum.ieee.org/tech-talk/computing/hardware/can-quantum-computing-help-us-respond-to-the-coronavirus

D-Wave Systems has offered free cloud computing time on its quantum computer to COVID-19 researchers. The offer, unveiled last week, applies to work toward vaccines and therapies as well as epidemiology, supply distribution, hospital logistics, and diagnostics.

“We have opened up our service for free, unlimited use—for businesses, for governments, for researchers—working on solving problems associated with the pandemic,” said Alan Baratz, CEO of D-Wave, based in Burnaby, British Columbia. “We also recognize that many of these companies may not have experience with quantum computers. So we’ve also reached out to our customers and partners who do have experience with using our systems to ask if they would be willing to help.”

The free quantum computing consulting services D-Wave is arranging include quantum programming expertise in scientific computing as well as in planning, management, and operations for front-line workers.

“Brita Filter for Blood” Aims to Remove Harmful Cytokines for COVID-19 Patients

Post Syndicated from Mark Anderson original https://spectrum.ieee.org/the-human-os/biomedical/devices/blood-filtration-tech-removes-harmful-cytokines-covid19-patients

In a number of critical COVID-19 cases, patients develop a hyper-vigilant immune response that, if untreated, can itself prove fatal. Fortunately, some pharmaceutical treatments are available—although it’s not yet fully understood how well they might work against the new coronavirus.

Meanwhile, a new blood filtration technology has successfully treated similar hyper-vigilant immune syndromes in people who underwent heart surgeries and in the critically ill. That track record could make it an effective therapy (albeit one still not FDA-approved) for some severe COVID-19 cases.

Inflammation, says Phillip Chan—an M.D./PhD and CEO of the New Jersey-based company CytoSorbents—is the body’s way of dealing with infection and injury. It’s why burns and sprained ankles turn red and swell up. “That’s the body’s way of bringing oxygen and nutrients to heal,” he said.

Companies Report a Rush of Electric Vehicle Battery Advances

Post Syndicated from Mark Anderson original https://spectrum.ieee.org/transportation/efficiency/companies-report-rush-electric-vehicle-battery-advances

Electric vehicles have recently boasted impressive growth rates, more than doubling in market penetration every two years between 2014 and 2018. And batteries play a key role in EV performance and price. That’s why some companies are looking to new chemistries and battery technologies to sustain EV growth rates throughout the early 2020s.
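As a quick sanity check on that growth figure (a back-of-the-envelope calculation, not a number from the article’s sources), doubling every two years corresponds to a compound annual growth rate of about 41 percent:

```python
# If market penetration doubles every 2 years, the implied annual
# growth rate g satisfies (1 + g)**2 = 2, so g = 2**(1/2) - 1.
annual_growth = 2 ** (1 / 2) - 1
print(round(annual_growth * 100, 1))  # 41.4 (percent per year)
```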

Three recent developments suggest that executives are more than just hopeful. They are, in fact, already striking deals to acquire and commercialize new EV battery advances. And progress has been broad—the new developments concern the three main electrical components of a battery: its cathode, electrolyte, and anode.

TESLA’S BIG BETS Analysts think Tesla’s upcoming annual Battery Day (the company hadn’t yet set a date at press time) will hold special significance. Maria Chavez of Navigant Research in Boulder, Colo., expects to hear about at least three big advancements.

The first one (which Reuters reported in February) is that Tesla will develop batteries with cathodes made from lithium iron phosphate for its Model 3s. These LFP batteries—with “F” standing for “Fe,” the chemical symbol for iron—are reportedly free of cobalt, which is expensive and often mined using unethical practices. LFP batteries also have higher charge and discharge rates and longer lifetimes than conventional lithium-ion cells. “The downside is that they’re not very energy dense,” says Chavez.

To combat that, Tesla will reportedly switch from standard cylindrical cells to prism-shaped cells—the second bit of news Chavez expects to hear about. Stacking prisms versus cylinders would allow Tesla to fit more batteries into a given space.

A third development, Chavez says, may concern Tesla’s recent acquisition, Maxwell Technologies. Before being bought by Tesla in May of 2019, Maxwell specialized in making supercapacitors. Supercapacitors, which are essentially charged metal plates with proprietary materials in between, boost a device’s charge capacity and performance.

Supercapacitors are famous for pumping electrons into and out of a circuit at blindingly fast speeds. So an EV power train with a supercapacitor could quickly access stores of energy for instant acceleration and other power-hungry functions. On the flip side, the supercapacitor could also rapidly store incoming charge to be metered out to the lithium battery over longer stretches of time—which could both speed up quick charging and possibly extend battery life.

So could blending supercapacitors, prismatic cells, and lithium iron phosphate chemistry provide an outsize boost for Tesla’s EV performance specs? “The combination of all three things basically creates a battery that’s energy dense, low cost, faster-to-charge, and cobalt-free—which is the promise that Tesla has been making for a while now,” Chavez said.

SOLID-STATE DEALS Meanwhile, other companies are focused on improving both safety and performance of the flammable liquid electrolyte in conventional lithium batteries. In February, Mercedes-Benz announced a partnership with the Canadian utility Hydro-Québec to develop next-generation lithium batteries with a solid and nonflammable electrolyte. And a month prior, the Canadian utility announced a separate partnership with the University of Texas at Austin and lithium-ion battery pioneer John Goodenough, to commercialize a solid-state battery with a glass electrolyte.

“Hydro-Québec is the pioneer of solid-state batteries,” said Karim Zaghib, general director of the utility’s Center of Excellence in Transportation Electrification and Energy Storage. “We started doing research and development in [lithium] solid-state batteries…in 1995.”

Although Zaghib cannot disclose the specific electrolytes his lab will be working with Mercedes to develop, he says the utility is building on a track record of successful battery technology rollouts with companies including A123 Systems in the United States, Murata Manufacturing in Japan, and Blue Solutions in Canada.

STARTUP SURPRISE Lastly, Echion Technologies, a startup based in Cambridge, England, said in February that it had developed a new anode for high-capacity lithium batteries that could charge in just 6 minutes. (Not to be outdone, a team of researchers in Korea announced that same month that its own silicon anode would charge to 80 percent in 5 minutes.)

Echion CEO Jean de la Verpilliere—a former engineering Ph.D. student at the nearby University of Cambridge—says Echion’s proprietary “mixed niobium oxide” anode is compatible with conventional cathode and electrolyte technologies.

“That’s key to our business model, to be ‘drop-in,’ ” says de la Verpilliere, who employs several former Cambridge students and staff. “We want to bring innovation to anodes. But then we will be compatible with everything else in the battery.”

In the end, the winning combination for next-generation batteries may well include one or more breakthroughs from each category—cathode, anode, and electrolyte.

This article appears in the April 2020 print issue as “EV Batteries Shift Into High Gear.”

Could Supercomputers and Rapid Treatment Trials Slow Down Coronavirus?

Post Syndicated from Mark Anderson original https://spectrum.ieee.org/the-human-os/biomedical/devices/could-supercomputers-and-rapid-treatment-trials-slow-down-coronavirus

Supercomputer-aided development of possible antiviral therapies, rapid lab testing of other prospective treatments, and grants to develop new vaccine technologies: Coronavirus responses such as these may still be long shots when it comes to quickly containing the pandemic. But these long shots represent one possible hope for ultimately turning the tide and stopping the virus’s rapid spread. (Another possible hope emerges from recent case reports out of China suggesting that many of the worst COVID-19 cases feature a hyperactive immune response called a “cytokine storm”—a phenomenon for which reliable tests and some effective therapies are currently available.)

As part of the direct attack on SARS-CoV-2—the virus that causes COVID-19—virtual and real-world tests of potential therapies represent a technology-focused angle for drug and vaccine research. And unlike traditional pharmaceutical R&D, where clinical trial timetables mean that drugs can take as long as 10 years to reach the marketplace, these accelerated efforts could yield results within about a year or so.

For instance, new supercomputer simulations have revealed a list of 77 potential so-called repurposed drugs targeted at this strain of coronavirus.

Researchers say the results are preliminary and that they expect most initial findings will ultimately not pan out in lab tests. But they’re reaching the lab-testing stage now because supercomputers have made it possible to sort through thousands of candidate therapies in days, using simulations that previously took weeks or months.

Jeremy Smith, professor of biochemistry and cellular and molecular biology at the University of Tennessee, Knoxville, says the genetic sequencing of the new coronavirus provided the clue that he and fellow researcher Micholas Dean Smith (no relation) used to move ahead in their research.

“They found it was related to the SARS virus,” Smith says. “It probably evolved from it. It’s a close cousin. It’s like a younger brother or sister.”

And since the proteins that SARS makes have been well studied, researchers had a veritable recipe book of very likely SARS-CoV-2 proteins that might be targets for potential drugs to destroy or disable.

“We know what makes up all the proteins now,” says Smith. “So you try and find drugs to stop the proteins from doing what the virus wants them to do.”

Smith said working on a short timetable of months, not years, means limiting the therapies available to be tested. There are, for starters, thousands of molecules that occur naturally in plants and microbes and have been part of human diets for many years. Meanwhile, other molecules have already been developed by pharmaceutical companies as drugs for other conditions.

“Many of them are already approved by the regulatory agencies, such as the FDA in the U.S., which means their safety has already been tested—for another disease,” Smith says. “It should be much quicker to get the approval to use it on lots of people. That’s the first stage. If that doesn’t work, then we’d have to go and design a new one. Then you’d have your 10-15 years and $1 billion of investment on average. Hopefully, that’d be shortened in this case. But you don’t know.”

According to Smith, SARS-CoV-2 is called a coronavirus because the protein spikes on the outside of the virus make it look like the sun’s corona. Here is where the supercomputing came in.

Using Oak Ridge National Laboratory’s Summit supercomputer (currently ranked the world’s fastest at 0.2 peak exaflops), the two co-authors ran detailed simulations of the spikes in the presence of 9000 different compounds that could potentially be repurposed as COVID-19 drugs.

They ultimately ranked 8000 of those compounds from best to worst in terms of gumming up the coronavirus’s spikes—which would, ideally, stop it from infecting other cells.
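The heavy lifting in that pipeline is the docking simulation itself; the final ranking step, by contrast, is conceptually simple. Here is a toy sketch in Python, with invented compound names and scores (following the usual docking convention that lower, i.e., more negative, binding energies indicate stronger predicted binding):

```python
# Hypothetical docking scores (kcal/mol-style binding energies;
# all values invented for illustration). More negative = stronger
# predicted binding to the spike protein.
docking_scores = {
    "compound_A": -9.4,
    "compound_B": -6.1,
    "compound_C": -11.2,
    "compound_D": -3.8,
}

# Rank compounds from best (most negative score) to worst.
ranked = sorted(docking_scores, key=docking_scores.get)
print(ranked)  # ['compound_C', 'compound_A', 'compound_B', 'compound_D']
```

The real study did this for thousands of compounds, with scores produced by molecular dynamics runs on Summit rather than a hand-written table.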

Running this molecular dynamics sequence on standard computers might take a month. But on Summit, the computation took a day, Smith recalls.

He said they’re now working with the University of Tennessee Health Sciences Center in Memphis, as well as possibly other wet-lab partners, to test some of the top-performing compounds from their simulations on real-world SARS-CoV-2 virus particles.

And because the supercomputer simulation can run so quickly, Smith said they’re considering taking the next step of the process.

“There’s some communication [with the wet lab]: ‘This compound works, this one doesn’t,’” he said. “‘We have a partial response from this one, a good response from that.’ And you would even use things like artificial intelligence at some point to correlate properties of compounds that are having effects.”

How soon before the scientists could take this next step?

“It could be next week, next month or next year,” he said, “Depending on the results.”

On another front, the Bill and Melinda Gates Foundation has underwritten both an “accelerator” fund for promising new COVID-19 treatments and an injector device for a SARS-CoV-2 vaccine scheduled to begin clinical trials next month. (The company developing the vaccine—Inovio Pharmaceuticals in Plymouth Meeting, Penn.—has announced an ambitious timetable in which it expects to see “one million doses” by the end of this year.)

One of the first announced funded projects by the “accelerator” (which has been co-funded by Wellcome in the U.K. and Mastercard) is a wet-lab test conducted by the Rega Institute for Medical Research in Leuven, Belgium.

Unlike the Summit supercomputer research, the Rega effort involves rapid chemical testing of 15,000 antiviral compounds (from other approved antiviral therapies) on the SARS-CoV-2 virus.

An official from the Gates Foundation, contacted by IEEE Spectrum, said they could not currently provide any further information about the research or the accelerator program; we were referred instead to a blog post by the Foundation’s CEO Mark Suzman.

“We’re optimistic about the progress that will be made with this new approach,” Suzman wrote. “Because we’ve seen what can come of similar co-operation and coordination in other parts of our work to combat epidemics.”