All posts by Mark Anderson

“Brita Filter for Blood” Aims to Remove Harmful Cytokines for COVID-19 Patients

Post Syndicated from Mark Anderson original https://spectrum.ieee.org/the-human-os/biomedical/devices/blood-filtration-tech-removes-harmful-cytokines-covid19-patients

In a number of critical cases of COVID-19, a hyper-vigilant immune response is triggered in patients that, if untreated, can itself prove fatal. Fortunately, some pharmaceutical treatments are available—although it’s not yet fully understood how well they might work in addressing the new coronavirus.

Meanwhile, a new blood filtration technology has successfully treated similar hyper-vigilant immune syndromes in heart surgery patients and the critically ill. That track record could make it an effective, though not yet FDA-approved, therapy for some severe COVID-19 cases.

Inflammation, says Phillip Chan—an M.D./PhD and CEO of the New Jersey-based company CytoSorbents—is the body’s way of dealing with infection and injury. It’s why burns and sprained ankles turn red and swell up. “That’s the body’s way of bringing oxygen and nutrients to heal,” he said.

Companies Report a Rush of Electric Vehicle Battery Advances

Post Syndicated from Mark Anderson original https://spectrum.ieee.org/transportation/efficiency/companies-report-rush-electric-vehicle-battery-advances

Electric vehicles have recently boasted impressive growth rates, more than doubling in market penetration every two years between 2014 and 2018. And batteries play a key role in EV performance and price. That’s why some companies are looking to new chemistries and battery technologies to sustain EV growth rates throughout the early 2020s.

Three recent developments suggest that executives are more than just hopeful. They are, in fact, already striking deals to acquire and commercialize new EV battery advances. And progress has been broad—the new developments concern the three main electrochemical components of a battery: its cathode, electrolyte, and anode.

TESLA’S BIG BETS Analysts think Tesla’s upcoming annual Battery Day (the company hadn’t yet set a date at press time) will hold special significance. Maria Chavez of Navigant Research in Boulder, Colo., expects to hear about at least three big advancements.

The first one (which Reuters reported in February) is that Tesla will develop batteries with cathodes made from lithium iron phosphate for its Model 3s. These LFP batteries—with “F” standing for “Fe,” the chemical symbol for iron—are reportedly free of cobalt, which is expensive and often mined using unethical practices. LFP batteries also have higher charge and discharge rates and longer lifetimes than conventional lithium-ion cells. “The downside is that they’re not very energy dense,” says Chavez.

To combat that, Tesla will reportedly switch from standard cylindrical cells to prism-shaped cells—the second bit of news Chavez expects to hear about. Stacking prisms versus cylinders would allow Tesla to fit more batteries into a given space.

A third development, Chavez says, may concern Tesla’s recent acquisition, Maxwell Technologies. Before being bought by Tesla in May of 2019, Maxwell specialized in making supercapacitors. Supercapacitors, which are essentially charged metal plates with proprietary materials in between, boost a device’s charge capacity and performance.

Supercapacitors are famous for pumping electrons into and out of a circuit at blindingly fast speeds. So an EV power train with a supercapacitor could quickly access stores of energy for instant acceleration and other power-hungry functions. On the flip side, the supercapacitor could also rapidly store incoming charge to be metered out to the lithium battery over longer stretches of time—which could both speed up quick charging and possibly extend battery life.
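To make that idea concrete, here is a purely illustrative sketch—not a description of Tesla's actual pack design—of how a controller might split power between a supercapacitor and a battery. The threshold and trickle-rate values are invented for the example.

```python
# Illustrative sketch (not Tesla's design): a simple power-split rule for a
# hypothetical supercapacitor + battery pack. Bursts of power (hard acceleration,
# regenerative braking, fast charging) are routed to the supercapacitor, which is
# later drained into or refilled from the battery at a gentler rate.

BURST_THRESHOLD_KW = 50.0   # assumed cutoff between "steady" and "burst" power
TRICKLE_RATE_KW = 10.0      # assumed rate for metering energy into the battery

def split_power(demand_kw: float) -> dict:
    """Return how much power (kW) each storage element should handle.

    Positive demand = discharging (acceleration); negative = charging
    (regenerative braking or a fast charger).
    """
    if abs(demand_kw) > BURST_THRESHOLD_KW:
        # Supercapacitor absorbs or supplies everything above the threshold.
        battery_kw = BURST_THRESHOLD_KW if demand_kw > 0 else -BURST_THRESHOLD_KW
        supercap_kw = demand_kw - battery_kw
    else:
        battery_kw, supercap_kw = demand_kw, 0.0
    return {"battery_kw": battery_kw, "supercap_kw": supercap_kw}

# Example: a 200 kW fast-charge pulse mostly lands on the supercapacitor,
# which would later be metered into the battery at TRICKLE_RATE_KW.
print(split_power(-200.0))  # {'battery_kw': -50.0, 'supercap_kw': -150.0}
```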

So could blending supercapacitors, prismatic cells, and lithium iron phosphate chemistry provide an outsize boost for Tesla’s EV performance specs? “The combination of all three things basically creates a battery that’s energy dense, low cost, faster-to-charge, and cobalt-free—which is the promise that Tesla has been making for a while now,” Chavez said.

SOLID-STATE DEALS Meanwhile, other companies are focused on improving both safety and performance of the flammable liquid electrolyte in conventional lithium batteries. In February, Mercedes-Benz announced a partnership with the Canadian utility Hydro-Québec to develop next-generation lithium batteries with a solid and nonflammable electrolyte. And a month prior, the Canadian utility announced a separate partnership with the University of Texas at Austin and lithium-ion battery pioneer John Goodenough, to commercialize a solid-state battery with a glass electrolyte.

“Hydro-Québec is the pioneer of solid-state batteries,” said Karim Zaghib, general director of the utility’s Center of Excellence in Transportation Electrification and Energy Storage. “We started doing research and development in [lithium] solid-state batteries…in 1995.”

Although Zaghib cannot disclose the specific electrolytes his lab will be working with Mercedes to develop, he says the utility is building on a track record of successful battery technology rollouts with companies including A123 Systems in the United States, Murata Manufacturing in Japan, and Blue Solutions in Canada.

STARTUP SURPRISE Lastly, Echion Technologies, a startup based in Cambridge, England, said in February that it had developed a new anode for high-capacity lithium batteries that could charge in just 6 minutes. (Not to be outdone, a team of researchers in Korea announced that same month that its own silicon anode would charge to 80 percent in 5 minutes.)

Echion CEO Jean de la Verpilliere—a former engineering Ph.D. student at the nearby University of Cambridge—says Echion’s proprietary “mixed niobium oxide” anode is compatible with conventional cathode and electrolyte technologies.

“That’s key to our business model, to be ‘drop-in,’ ” says de la Verpilliere, who employs several former Cambridge students and staff. “We want to bring innovation to anodes. But then we will be compatible with everything else in the battery.”

In the end, the winning combination for next-generation batteries may well include one or more breakthroughs from each category—cathode, anode, and electrolyte.

This article appears in the April 2020 print issue as “EV Batteries Shift Into High Gear.”

Could Supercomputers and Rapid Treatment Trials Slow Down Coronavirus?

Post Syndicated from Mark Anderson original https://spectrum.ieee.org/the-human-os/biomedical/devices/could-supercomputers-and-rapid-treatment-trials-slow-down-coronavirus

Supercomputer-aided development of possible antiviral therapies, rapid lab testing of other prospective treatments, and grants to develop new vaccine technologies: Coronavirus responses such as these may still be long shots when it comes to quickly containing the pandemic. But these long shots represent one possible hope for ultimately turning the tide and stopping the virus’s rapid development and spread. (Another possible hope emerges from recent case reports out of China suggesting that many of the worst COVID-19 cases feature a hyperactive immune response called a “cytokine storm”—a phenomenon for which reliable tests and some effective therapies are currently available.)

As part of the direct attack on SARS-CoV-2—the virus that causes COVID-19—virtual and real-world tests of potential therapies represent a technology-focused angle for drug and vaccine research. And unlike traditional pharmaceutical R&D, where clinical trial timetables mean that drugs can take as long as 10 years to reach the marketplace, these accelerated efforts could yield results within about a year or so.

For instance, new supercomputer simulations have revealed a list of 77 potential so-called repurposed drugs targeted at this strain of coronavirus.

Researchers say the results are preliminary, and that they expect most initial findings will ultimately not work in lab tests. But they’re reaching the lab-testing stage now because supercomputers have made it possible to sort through vast numbers of candidate therapies in days, using simulations that previously took weeks or months.

Jeremy Smith, professor of biochemistry and cellular and molecular biology at the University of Tennessee, Knoxville, says the genetic sequencing of the new coronavirus provided the clue that he and fellow researcher Micholas Dean Smith (no relation) used to move ahead in their research.

“They found it was related to the SARS virus,” Smith says. “It probably evolved from it. It’s a close cousin. It’s like a younger brother or sister.”

And since the proteins that SARS makes have been well studied, researchers had a veritable recipe book of very likely SARS-CoV-2 proteins that might be targets for potential drugs to destroy or disable.

“We know what makes up all the proteins now,” says Smith. “So you try and find drugs to stop the proteins from doing what the virus wants them to do.”

Smith said working with a short timetable of months, not years, means limiting the therapies available to be tested. There are, for starters, thousands of molecules that occur naturally in plants and microbes and have been part of human diets for many years. Meanwhile, other molecules have already been developed by pharmaceutical companies as drugs for other conditions.

“Many of them are already approved by the regulatory agencies, such as the FDA in the U.S., which means their safety has already been tested—for another disease,” Smith says. “It should be much quicker to get the approval to use it on lots of people. That’s the first stage. If that doesn’t work, then we’d have to go and design a new one. Then you’d have your 10-15 years and $1 billion of investment on average. Hopefully, that’d be shortened in this case. But you don’t know.”

According to Smith, SARS-CoV-2 is called a coronavirus because the protein spikes on the outside of the virus make it look like the sun’s corona. Here is where the supercomputing came in.

Using Oak Ridge National Laboratory’s Summit supercomputer (currently ranked the world’s fastest at 0.2 peak exaflops), the two co-authors ran detailed simulations of the spikes in the presence of 9000 different compounds that could potentially be repurposed as COVID-19 drugs.

They ultimately ranked 8000 of those compounds from best to worst in terms of gumming up the coronavirus’s spikes—which would, ideally, stop it from infecting other cells.
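The ranking step itself is conceptually simple, as the short sketch below shows; the compound names and scores are invented, and the real work ran detailed ensemble-docking simulations on Summit to produce its scores.

```python
# Minimal sketch of the ranking step: given docking scores for candidate
# compounds (more negative = stronger predicted binding to the spike protein),
# sort them from best to worst. The names and scores here are made up.

candidate_scores = {
    "compound_A": -9.3,
    "compound_B": -6.1,
    "compound_C": -7.8,
    "compound_D": -4.2,
}

ranked = sorted(candidate_scores.items(), key=lambda item: item[1])
for rank, (name, score) in enumerate(ranked, start=1):
    print(f"{rank:>3}. {name}  predicted binding score: {score}")
```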

Running this molecular dynamics sequence on standard computers might take a month. But on Summit, the computation took a day, Smith recalls.

He said they’re now working with the University of Tennessee Health Sciences Center in Memphis, as well as possibly other wet-lab partners, to test some of the top-performing compounds from their simulations on real-world SARS-CoV-2 virus particles.

And because the supercomputer simulation can run so quickly, Smith said they’re considering taking the next step of the process.

“There’s some communication [with the wet lab]: ‘This compound works, this one doesn’t,’” he said. “‘We have a partial response from this one, a good response from that.’ And you would even use things like artificial intelligence at some point to correlate properties of compounds that are having effects.”

How soon before the scientists could take this next step?

“It could be next week, next month, or next year,” he said, “depending on the results.”

On another front, the Bill and Melinda Gates Foundation has underwritten both an “accelerator” fund for promising new COVID-19 treatments and an injector device for a SARS-CoV-2 vaccine scheduled to begin clinical trials next month. (The company developing the vaccine—Inovio Pharmaceuticals in Plymouth Meeting, Penn.—has announced an ambitious timetable in which it expects to see “one million doses” by the end of this year.)

One of the first announced funded projects by the “accelerator” (which has been co-funded by Wellcome in the U.K. and Mastercard) is a wet-lab test conducted by the Rega Institute for Medical Research in Leuven, Belgium.

Unlike the Summit supercomputer research, the Rega effort involves rapid chemical testing of 15,000 antiviral compounds (from other approved antiviral therapies) on the SARS-CoV-2 virus.

An official from the Gates Foundation, contacted by IEEE Spectrum, said they could not currently provide any further information about the research or the accelerator program; we were referred instead to a blog post by the Foundation’s CEO Mark Suzman.

“We’re optimistic about the progress that will be made with this new approach,” Suzman wrote. “Because we’ve seen what can come of similar co-operation and coordination in other parts of our work to combat epidemics.”

AI Helps Scientists Discover Powerful New Antibiotic

Post Syndicated from Mark Anderson original https://spectrum.ieee.org/the-human-os/artificial-intelligence/medical-ai/ai-discover-powerful-new-antibiotic-mit-news

Deep learning appears to be a powerful new tool in the war against antibiotic-resistant infections. One new algorithm discovered a drug that, in real-world lab tests, killed off a broad spectrum of deadly bacteria, including some antibiotic-resistant strains. The same algorithm has unearthed another eight candidates that show promise in computer-simulated tests.

How does one make an antibiotics-discovering neural network? The answer, counter-intuitively, is not to hold its hand and teach it the rules of biochemistry. Rather, a little like Google’s successful AlphaZero super-AI chess and Go programs, this group’s deep learning model must figure things out from scratch.

“We don’t have to tell the computer anything—we just give it a molecule and a property label, which in our case is, ‘Is it antibacterial?’” says researcher Jonathan Stokes, postdoctoral fellow at MIT’s Department of Biological Engineering. “Then the model on its own learns what molecular features are important, which molecular features are more strongly or more weakly predictive of antibiotic activity.”

And as the AlphaZero researchers found, once a good deep learning model gets going on a well-defined problem, without humans jumping in to teach it a bunch of rules, new frontiers sometimes open up.

Stokes said he and his co-authors from MIT, Harvard, and McMaster University in Hamilton, Ontario, repurposed a deep learning algorithm designed to figure out the chemical properties of molecules. That algorithm had outperformed other computer simulation programs in predicting, say, a molecule’s solubility.

Stokes said the new research treated antibiotic efficacy as another chemical property for this same algorithm to predict.

The group trained its neural net on a database of more than 1,000 FDA-approved drugs and another group of natural compounds isolated from sources like plants or dirt. These 2,335 molecules all had well-known chemical structures and well-known antibiotic or non-antibiotic properties.

Once the model had been trained, they pointed it at a drug repurposing database of more than 6,000 compounds that have either been FDA approved as drugs or had at least begun the FDA approval process.
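The published model is a deep neural network that learns features directly from molecular structure, but the overall workflow—train on labeled molecules, then score an unlabeled library—can be sketched with a much simpler stand-in. The SMILES strings, the labels, and the choice of fingerprints plus a random-forest classifier below are all assumptions for illustration only.

```python
# Illustrative stand-in for the train-then-screen workflow described above.
# The MIT group's model learns molecular features on its own; here we use
# off-the-shelf Morgan fingerprints (RDKit) and a random forest (scikit-learn).
# The molecules and labels below are placeholders, not the real data.

from rdkit import Chem
from rdkit.Chem import AllChem
from sklearn.ensemble import RandomForestClassifier

def fingerprint(smiles: str):
    mol = Chem.MolFromSmiles(smiles)
    return list(AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048))

# Training set: molecules with known antibacterial (1) / non-antibacterial (0) labels.
train_smiles = ["CCO", "CC(=O)Oc1ccccc1C(=O)O", "c1ccccc1", "CCN(CC)CC"]
train_labels = [0, 1, 0, 1]  # placeholder labels

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit([fingerprint(s) for s in train_smiles], train_labels)

# Score an unlabeled repurposing library and rank by predicted antibiotic activity.
library = ["CCCCO", "c1ccncc1", "CC(C)CC(=O)O"]
scores = model.predict_proba([fingerprint(s) for s in library])[:, 1]
for smiles, score in sorted(zip(library, scores), key=lambda x: -x[1]):
    print(f"{smiles:<20} predicted activity: {score:.2f}")
```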

Stokes said the team was focused on two parameters in particular—antibiotic efficacy (which their deep learning algorithm determined) and chemical similarity to other known antibiotics (calculated by a well-known mathematical formula called the Tanimoto Score). They wanted to discover compounds in the Broad Institute’s Drug Repurposing Hub that were highly effective antibiotics. But they also wanted these potential antibiotics to be as chemically distant from any other known antibiotic as possible.

They wanted the latter because chemical cousins of known antibiotics can prove similarly ineffective against antibiotic-resistant strains.
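The Tanimoto score itself is straightforward to compute: it is the fraction of fingerprint bits two molecules share. A minimal sketch, with invented fingerprints:

```python
# The Tanimoto score between two molecular fingerprints is the ratio of shared
# "on" bits to the total "on" bits across both molecules. A score near 1 means
# the molecules are close chemical cousins; the group wanted candidates whose
# score against every known antibiotic was as low as possible.

def tanimoto(fp_a: set, fp_b: set) -> float:
    """Tanimoto similarity of two fingerprints given as sets of on-bit indices."""
    if not fp_a and not fp_b:
        return 0.0
    return len(fp_a & fp_b) / len(fp_a | fp_b)

# Toy fingerprints (sets of bit positions), purely for illustration.
candidate = {3, 17, 42, 101, 230}
known_antibiotic = {17, 42, 99, 230, 512, 730}

print(round(tanimoto(candidate, known_antibiotic), 3))  # 0.375 -> fairly dissimilar
```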

And that is how the group lit on a drug they called halicin. Originally developed as an anti-diabetes drug, halicin seemed to be an antibiotic nothing like, for instance, the tetracycline family of antibiotics or the beta-lactam antibiotic group (of which penicillin is a member).

“It didn’t obviously fit into an existing antibiotic class,” he said. “You could kind of stretch your imagination and say, ‘Maybe it belongs to this class.’ But there was nothing obvious, nothing clean about it. And that was cool.”

So they tested halicin against known dangerous bacteria like E. coli. They also tested halicin as a topical cream against a skin infection—grown on a lab mouse—that is generally not treatable by any antibiotic today.

“We took halicin and applied it topically. We applied it periodically over the course of a day,” Stokes said. “Then we looked to see how many live Acinetobacter baumannii existed at the end of a day’s worth of treatment. And we found it had eradicated the infection.”

Bolstered by this success, the group then applied the model to a much broader virtual repository, the so-called ZINC 15 online database of more than 120 million molecules.

Again they searched for the intersection of an effective antibiotic that was also as chemically distinct as possible from known antibiotics. That’s how they turned up another eight candidates. None of these eight have, however, been tested in the lab as halicin has.

The group describes the entire process in a recent paper in the journal Cell.

Stokes said the group is now applying its deep learning model to discovering novel so-called narrow-spectrum antibiotics.

“We’re training new models to find antibiotics that are only active against a specific bacterial pathogen—and that do not have activity against the microbes living in your gut,” he said.

Plus, he said, narrow-spectrum antibiotics will be much less likely to trigger broad-spectrum antibiotic resistance. “Our current antibiotics are active against a ton of different bacteria. And the fact that they’re active against a ton of different bacteria promotes the dissemination of antibiotic resistance. So these narrow-spectrum therapies… will less strongly promote the rampant dissemination of resistance.”

Hydro-Québec to Commercialize Glass Battery Co-Developed by John Goodenough

Post Syndicated from Mark Anderson original https://spectrum.ieee.org/energywise/energy/batteries-storage/john-goodenough-glass-battery-news-hydroquebec

A rapid-charging and non-flammable battery developed in part by 2019 Nobel Prize winner John Goodenough has been licensed for development by the Canadian electric utility Hydro-Québec. The utility says it hopes to have the technology ready for one or more commercial partners in two years.

Hydro-Québec, according to Karim Zaghib, general director of the utility’s Center of Excellence in Transportation Electrification and Energy Storage, has been commercializing patents with Goodenough’s parent institution, the University of Texas at Austin, for the past 25 years.

3D Print Jobs Are More Accurate With Machine Learning

Post Syndicated from Mark Anderson original https://spectrum.ieee.org/tech-talk/artificial-intelligence/machine-learning/3d-print-jobs-news-accurate-machine-learning


3D printing is already being used to produce electric bikes, chocolate bars, and even human skin. Now, a new AI algorithm that learns each printer’s imprecisions can tweak print jobs to ensure greater accuracy.

The engineers who developed the algorithm found it increases a 3D printer’s accuracy by up to 50 percent. That can make a big difference for high-precision industrial jobs, says Qiang Huang, an associate professor of industrial and systems engineering at the University of Southern California, who helped create it.
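The researchers’ specific algorithm isn’t described here, but the general idea—learn a printer’s systematic error from measured test prints, then pre-compensate the next job—can be sketched roughly as follows. The calibration numbers and the simple linear error model are assumptions for illustration.

```python
# A hedged sketch of the general idea (not the paper's actual algorithm):
# learn a printer's systematic size error from measured test prints, then
# pre-compensate the next job. A simple linear fit stands in for the learned
# error model.

import numpy as np

# Intended vs. measured dimensions (mm) from calibration prints -- made-up data.
intended = np.array([10.0, 20.0, 30.0, 40.0, 50.0])
measured = np.array([10.3, 20.5, 30.9, 41.1, 51.4])

# Fit measured = a * intended + b, then invert it to pre-compensate.
a, b = np.polyfit(intended, measured, 1)

def compensate(target_mm: float) -> float:
    """Dimension to put in the print file so the part comes out at target_mm."""
    return (target_mm - b) / a

print(f"To get a 25.0 mm feature, print it as {compensate(25.0):.2f} mm")
```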

New Math Makes Scientists More Certain About Quantum Uncertainties

Post Syndicated from Mark Anderson original https://spectrum.ieee.org/nanoclast/semiconductors/nanotechnology/new-math-makes-scientists-more-certain-about-quantum-uncertainties

Quantum measurements, at the core of next-generation technologies including quantum computing, quantum cryptography, and ultra-sensitive electronics, may face a new hurdle as system sensitivities brush up against Heisenberg’s Uncertainty Principle.

The practical Heisenberg limit on how precisely some quantities can ultimately be measured may be larger than expected—by a factor of pi. This new finding would, according to physicist Wojciech Górecki of the University of Warsaw in Poland, represent “an impediment compared to previous expectations.”

Górecki said he and his collaborators arrived at this theoretical limit by applying a branch of math known as Bayesian statistics to familiar quantum measurement problems.

The standard problem posed in many Intro to Quantum classes involves the push-and-pull conflict between measuring a particle’s position with high precision and knowing that same particle’s momentum with equally high precision.

As Werner Heisenberg famously theorized in 1927, the product of uncertainties of these two observables can never dip below a very small number equal to Planck’s constant divided by four times pi (h/4π).

So, down at the quantum scale, there are always tradeoffs. Measuring a particle’s position with very high precision calls for sacrificing how precisely you can determine the speed and direction of its travel.

Yet, said Górecki, plenty of quantum scale measurements involve neither position nor momentum. For instance, some photonics instruments measure quantities like the phase of a wavefront versus the number of photons counted in a given energy range.

Górecki notes that canonical Heisenberg isn’t as much help here as is a related concept called the “Heisenberg limit.” The Heisenberg Limit, he says, delineates the smallest possible uncertainty in a measurement, given a set number of times a system is probed. “It is a natural consequence of Heisenberg’s uncertainty principle, interpreted in a slightly broader context,” says Górecki.

It was long believed that, with a hypothetical technology trying to discover phase as precisely as possible using only n photons, the Heisenberg Limit to the uncertainty in phase was 1/n. But no technology had been devised to prove that 1/n was the ultimate universal “Heisenberg Limit.”

There’s a good reason why. Górecki and colleagues report in a new paper in the journal Physical Review Letters that the Heisenberg Limit in this case scales as π/n instead of 1/n. In other words, the smallest measurable uncertainty is more than three times as much as previously believed. And so now we know that our observations of the universe are a little bit fuzzier than we imagined.

(To be clear, “n” here is not necessarily just the number of photons used in a measurement. It could also represent a number of other limits on the amount of resources expended in making a precision observation. The variable “n” here could also be, Górecki notes, the number of quantum gates in a measurement or the total time spent interrogating the system.)
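For reference, the two relations discussed above can be written out as follows; the notation here is ours, not the paper's.

```latex
% Heisenberg's 1927 position-momentum bound, and the phase-estimation
% "Heisenberg limit," where the long-assumed 1/n scaling is replaced by pi/n.
\Delta x \,\Delta p \;\ge\; \frac{h}{4\pi},
\qquad
\Delta\varphi \;\gtrsim\; \frac{1}{n} \;\;(\text{long assumed})
\quad\longrightarrow\quad
\Delta\varphi \;\gtrsim\; \frac{\pi}{n} \;\;(\text{new bound})
```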

Górecki says the new finding may not remain purely theoretical for too much longer. A 2007 experiment in precision phase measurement came within 56 percent of the new Heisenberg Limit.

“Our paper has attracted the interest of eminent researchers in the field of statistics, who find this idea worth spreading,” says Górecki. “Perhaps it would be possible to construct a simpler proof that could be included in standard textbooks.”

Extending Quantum Entanglement Across Town

Post Syndicated from Mark Anderson original https://spectrum.ieee.org/nanoclast/computing/networks/taking-quantum-entanglement-across-town

A team of German researchers has stretched the distance quantum information can travel from stationary quantum memory to optical telecom pulse. The group’s new experiment transfers the information contained in a single quantum bit from an atomic state to a single photon, then sends it through some 20 kilometers of fiber optic cable.

This finding begins to extend the distance over which quantum systems (including quantum computers and quantum communications hubs) can be physically separated while still remaining connected. It also serves as a milestone on the road toward a so-called quantum repeater, which would broadly expand the footprint of quantum technologies toward regional, national, or even international connectivity.

“One of the grand goals is a quantum network, which then would link together different quantum computers,” says Harald Weinfurter, professor of physics at Ludwig Maximilian University of Munich, Germany. “And if we can establish entanglement between many such segments, then we can link all the segments together. And then link, in an efficient manner, two atoms over a really long distance.”

The researchers, who reported their findings in a recent issue of the journal Physical Review Letters, have only one-half of a full-fledged communications system from one stationary qubit to another.

Weinfurter noted that, to complete such a quantum communications channel, the team would have to also complete the process in reverse. So, the data in an individual qubit would be transferred to a photon, travel some distance, and then be transferred back to a single atom at the other end of the chain.

“At the end of the day it will work out; we are very positive,” says Weinfurter.

In the experiments reported in the paper, rubidium atoms were captured in a tabletop laser atom trap and cooled down to millionths of a degree above absolute zero. The researchers then picked out an individual atom from the rubidium atom cloud using optical tweezers (a focused laser beam that nudges atoms as if they’re being manipulated by physical tweezers).

They zapped the single atom, pushing it up to an excited state (actually two nearby energy states, separated by the spin state of the electron). The atom, in other words, exists in a superposition of the spin-up and spin-down versions of that excited state. When the atom decays, the polarization of the ensuing photon depends on the spin-up or -down nature of the excited state that produced it.

The next step involved trapping the polarized photon and converting it to a fiber optic S-band photon, which can travel through 20 km of fiber on average before being absorbed or attenuated. The researchers found they can preserve on average some 78 percent of the entanglement between the rubidium atom and the fiber optic photon.
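As a back-of-envelope illustration (not a calculation from the paper), an attenuation length of 20 kilometers implies the following survival odds for a single photon; note that the 78 percent figure above refers to preserved entanglement, which involves more than fiber loss alone.

```python
# Back-of-envelope sketch: if a photon survives 20 km of fiber "on average,"
# its survival probability after a distance d is roughly exp(-d / 20 km).
# This covers only fiber attenuation, not other sources of entanglement loss.

import math

ATTENUATION_LENGTH_KM = 20.0  # assumed 1/e length implied by "20 km on average"

def survival_probability(distance_km: float) -> float:
    return math.exp(-distance_km / ATTENUATION_LENGTH_KM)

for d in (5, 10, 20, 40):
    print(f"{d:>3} km: {survival_probability(d):.0%} of photons survive")
```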

The next challenge, says Weinfurter, is to build out the full atom-to-photon-to-atom quantum communication system within their lab. And then, from there, to actually physically separate one apparatus from another by roughly 20 km and try to make it all work.

As it happens, Weinfurter notes, the Max Planck Institute for Quantum Optics in Munich is about 20 km from the team’s lab. “It’s good to have them that close.”

Then, as quantum computer makers today know all too well, the challenge of error correction will rear its head. As with quantum computer error correction, entanglement purification for a quantum communication system like this is not an easy challenge to overcome. But it’s a necessary step if present-day technologies are to be scaled up into a system that can transmit quantum entanglement from one stationary qubit to another at dozens or hundreds of kilometers distance or more.

“In the whole process, we lose the quality of the entangled states,” Weinfurter said. “Then we have to recover this. There are lots of proposals how to do this. But one has to implement it first.”

Quantum Entanglement Meets Superconductivity in Novel Experiment

Post Syndicated from Mark Anderson original https://spectrum.ieee.org/nanoclast/computing/hardware/quantum-entanglement-superconductivity-technology-news-novel-experiment

Two mysterious components of quantum technology came together in a lab at Rice University in Houston recently. Quantum entanglement—the key to quantum computing—and quantum criticality—an essential ingredient for high-temperature superconductors—have now been linked in a single experiment.

The preliminary results suggest something approaching the same physics is behind these two essential but previously distinct quantum technologies. The temptation, then, is to imagine a future in which a sort of grand unified theory of entanglement and superconductivity might be developed, where breakthroughs in one field could be translated into the other.

Holding Light (Temporarily) in Place

Post Syndicated from Mark Anderson original https://spectrum.ieee.org/tech-talk/semiconductors/optoelectronics/holding-light-temporarily-in-place

Storing light beams—putting an ensemble of photons, traveling through a specially prepared material, into a virtual standstill—has come a step closer to reality with a new discovery involving microwaves.

The research finds that microwaves traveling through a particular configuration of ceramic aluminum oxide rods can be made to hold in place for several microseconds. If the optical or infrared equivalent of this technology can be fabricated, then Internet and computer communications networks (each carried by optical and infrared laser pulses) might gain a new versatile tool that enables temporary, stationary storage of a packet of photons.

Will China Attain Exascale Supercomputing in 2020?

Post Syndicated from Mark Anderson original https://spectrum.ieee.org/computing/hardware/will-china-attain-exascale-supercomputing-in-2020


To the supercomputer world, what separates “peta” from “exa” is more than just three orders of magnitude.

As measured in floating-point operations per second (a.k.a. FLOPS), one petaflop (10^15 FLOPS) falls in the middle of what might be called commodity high-performance computing (HPC). In this domain, hardware is hardware, and what matters most is increasing processing speed as cost-effectively as possible.

Now the United States, China, Japan, and the European Union are all striving to reach the exaflop (10^18 FLOPS) scale. The Chinese have claimed they will hit that mark in 2020. But they haven’t said so lately: Attempts to contact officials at the National Supercomputer Center, in Guangzhou; Tsinghua University, in Beijing; and Xi’an Jiaotong University yielded either no response or no comment.

It’s a fine question of when exactly the exascale barrier is deemed to have been broken—when a computer’s theoretical peak performance exceeds 1 exaflop or when its maximum real-world compute speed hits that mark. Indeed, the sheer volume of compute power is less important than it used to be.

“Now it’s more about customization, special-purpose systems,” says Bob Sorensen, vice president of research and technology with the HPC consulting firm Hyperion Research. “We’re starting to see almost a trend towards specialization of HPC hardware, as opposed to a drive towards a one-size-fits-all commodity” approach.

The United States’ exascale computing efforts, involving three separate machines, total US $1.8 billion for the hardware alone, says Jack Dongarra, a professor of electrical engineering and computer science at the University of Tennessee. He says exascale algorithms and applications may cost another $1.8 billion to develop.

And as for the electric bill, it’s still unclear exactly how many megawatts one of these machines might gulp down. One recent ballpark estimate puts the power consumption of a projected Chinese exaflop system at 65 megawatts. If the machine ran continuously for one year, the electricity bill alone would come to about $60 million.
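That ballpark can be checked with simple arithmetic; the electricity price below is an assumption (the article does not state one), chosen near typical industrial rates.

```python
# Rough check of the electricity estimate above. The price per kWh is an
# assumption chosen to land near the quoted ~$60 million per year.

POWER_MW = 65.0
HOURS_PER_YEAR = 8760
PRICE_PER_KWH = 0.105  # assumed industrial electricity price, US dollars

energy_kwh = POWER_MW * 1000 * HOURS_PER_YEAR   # 569,400,000 kWh
annual_cost = energy_kwh * PRICE_PER_KWH        # ~$59.8 million

print(f"Energy: {energy_kwh/1e6:.0f} GWh/year, cost: ${annual_cost/1e6:.1f} million")
```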

Dongarra says he’s skeptical that any system, in China or anywhere else, will achieve one sustained exaflop anytime before 2021, or possibly even 2022. In the United States, he says, two exascale machines will be used for public research and development, including seismic analysis, weather and climate modeling, and AI research. The third will be reserved for national-security research, such as simulating nuclear weapons.

“The first one that’ll be deployed will be at Argonne [National Laboratory, near Chicago], an open-science lab. That goes by the name Aurora or, sometimes, A21,” Dongarra says. It will have Intel processors, with Cray developing the interconnecting fabric between the more than 200 cabinets projected to house the supercomputer. A21’s architecture will reportedly include Intel’s Optane memory modules, which represent a hybrid of DRAM and flash memory. Peak capacity for the machine should reach 1 exaflop when it’s deployed in 2021.

The other U.S. open-science machine, at Oak Ridge National Laboratory, in Tennessee, will be called Frontier and is projected to launch later in 2021 with a peak capacity in the neighborhood of 1.5 exaflops. Its AMD processors will be dispersed in more than 100 cabinets, with four graphics processing units for each CPU.

The third, El Capitan, will be operated out of Lawrence Livermore National Laboratory, in California. Its peak capacity is also projected to come in at 1.5 exaflops. Launching sometime in 2022, El Capitan will be restricted to users in the national security field.

China’s three announced exascale projects, Dongarra says, also each have their own configurations and hardware. In part because of President Trump’s China trade war, China will be developing its own processors and high-speed interconnects.

“China is very aggressive in high-performance computing,” Dongarra notes. “Back in 2001, the Top 500 list had no Chinese machines. Today they’re dominant.” As of June 2019, China had 219 of the world’s 500 fastest supercomputers, whereas the United States had 116. (Tally together the number of petaflops in each machine and the numbers come out a little different. In terms of performance, the United States has 38 percent of the world’s HPC resources, whereas China has 30 percent.)

China’s three exascale systems are all built around CPUs manufactured in China. They are to be based at the National University of Defense Technology, using a yet-to-be-announced CPU; the National Research Center of Parallel Computer Engineering and Technology, using a nonaccelerated ARM-based CPU; and the Chinese HPC company Sugon, using an AMD-licensed x86 with accelerators from the Chinese company HyGon.

Japan’s future exascale machine, Fugaku, is being jointly developed by Fujitsu and Riken, using ARM architecture. And not to be left out, the EU also has exascale projects in the works, the most interesting of which centers on a European processor initiative, which Dongarra speculates may use the open-source RISC-V architecture.

All four of the major players—China, the United States, Japan, and the EU—have gone all-in on building out their own CPU and accelerator technologies, Sorensen says. “It’s a rebirth of interesting architectures,” he says. “There’s lots of innovation out there.”

Building a Quantum Computer From Off-the-Shelf Parts

Post Syndicated from Mark Anderson original https://spectrum.ieee.org/tech-talk/computing/hardware/scalable-qubits-quantum-computer-news-silicon-wafer

A new technique for fabricating quantum bits in silicon carbide wafers could provide a scalable platform for future quantum computers. The quantum bits, to the surprise of the researchers, can even be fabricated from a commercial chip built for conventional computing.

The recipe was surprisingly simple: Buy a commercially available wafer of silicon carbide (a temperature-robust semiconductor used in electric vehicles, LED lights, solar cells, and 5G gear) and shoot an electron beam at it. The beam creates a defect in the wafer that behaves, essentially, as a single electron spin, which can be manipulated electrically, magnetically, or optically.

Bon Voyage for the Autonomous Ship Mayflower

Post Syndicated from Mark Anderson original https://spectrum.ieee.org/energy/renewables/bon-voyage-for-the-autonomous-ship-mayflower


In September a modern-day Mayflower will launch from Plymouth, a seaside town on the English Channel. And as its namesake did precisely 400 years earlier, this boat will set a course for the other side of the Atlantic.

Weather permitting, the 2020 voyage will follow the same course, but that’s about the only thing the two ships will have in common. Instead of carrying pilgrims intent on beginning a new life in the New World, this ship will be fully autonomous, with no crew or passengers on board. It will be powered in part by solar panels and a wind turbine at its stern. The boat has a backup electrical generator on board, although there are no plans to refuel the boat at sea if the generator backup runs dry.

The ship will cross the Atlantic to Plymouth, Mass., in 12 days instead of the 60 days of the 1620 voyage. It’ll be made of aluminum and composite materials. And it will measure 15 meters and weigh 5 metric tons—half as long and 1/36 as heavy as the original wooden boat. Just as a spacefaring mission would, the new Mayflower will contain science bays for experiments to measure oceanographic, climate, and meteorological data. And its trimaran design makes it look a little like a sleek, scaled-down, seagoing version of the Battlestar Galactica, from the TV series of the same name.

“It doesn’t conform to any specific class, regulations, or rules,” says Rachel Nicholls-Lee, the naval architect designing the boat. But because the International Organization for Standardization has a set of standards for oceangoing vessels under 24 meters in length, Nicholls-Lee is designing the boat as close to those specs as she can. Of course, without anyone on board, the new Mayflower also sets some of its own standards, too. For instance, the tightly waterproofed interior will barely have room for a human to crawl around in and to access its computer servers.

“It’s the best access we can have, really,” says Nicholls-Lee. “There won’t be room for people. So it is cozy. It’s doable, but it’s not somewhere you’d want to spend much time.” She adds that there’s just one meter between the waterline and the top of the boat’s hull. Atop the hull will also be a “sail fin” that juts up vertically to exploit wind power for propulsion, while a vertical turbine exploits it to generate electricity.

Nicholls-Lee’s architectural firm, Whiskerstay, based in Cornwall, England, provides the nautical design expertise for a team that also includes the project’s cofounders, Greg Cook and Brett Phaneuf. Cook (based in Chester, Conn.) and Phaneuf (based in Plymouth, England) jointly head up the marine exploration nonprofit Promare.

Phaneuf, who’s also the managing director of the Plymouth-based submersibles consulting company MSubs, says the idea for the autonomous Mayflower quadricentennial voyage came up at a Plymouth city council meeting in 2016. With the 400th anniversary of the original voyage fast approaching, Phaneuf said Plymouth city councillors were chatting about ideas for commemorating the historical event.

“Someone said, ‘We’re thinking about building a replica,’ ” Phaneuf says. “[I said], ‘That’s not the best idea. There’s already a replica in Massachusetts, and I grew up not far from it and Plymouth Rock.’ Instead of building something that is a 17th-century ship, we should build something that represents what the marine enterprise for the next 400 years is going to look like.” The town’s officials liked the idea, and they gave him the resources to start working on it.

The No. 1 challenge was clear from the start: “How do you get a ship across the ocean without sinking?” Phaneuf says. “The big issue isn’t automation, because automation is all around us in our daily lives. Much of the modern world is automated—people just don’t realize it. Reliability at sea is really the big challenge.”

But the team’s budget constrained its ability to tackle the reliability problem head-on. The ship will be at sea on its own with no crewed vessel tailing it. In fact, its designers are assuming that much of the Atlantic crossing will have to be done with spotty satellite communications at best.

Phaneuf says that the new Mayflower will have little competition in the autonomous sailing ship category. “There are lots of automated boats, ranging in size from less than one meter to about 10 meters,” he says. “But are they ships? Are they fully autonomous? Not really.” Not that the team’s Mayflower is going to be vastly larger than 10 meters in length. “We only have enough money to build a boat that’s 15 meters long,” he says. “Not big, by the ocean’s standards. And even if it was as big as an aircraft carrier, there’s a few of them at the bottom of the ocean from the years gone by.”

Cook, who consults with Phaneuf and the Mayflower project from a distance in his Connecticut office, says the 400-year-anniversary deadline was always on researchers’ minds.

“There are a lot of days when you think we’ll never get this done,” Cook says. “And you just keep your head down and power through it. You go over it, you go under it, you go around it, or you go through it. Because you’ve got to get it. And we will.”

When IEEE Spectrum contacted Cook in October, he was negotiating with the shipyard in Gdańsk, Poland, that’s building the new Mayflower’s hull. The yard needed plans executed to a level of detail that the team was not quite ready to provide. But parts of the boat needed to be completed promptly, so the day’s balancing act was already under way.

The next day’s challenge, Cook says, involved the output from the radar systems on the boat. The finest commercial radars in the world, he says, are worthless if they can’t output raw radar data—which the computers on the ship will need to process. So finding a radar system that represents the best mix of quality, affordability, and versatility with respect to output was another struggle.

Nicholls-Lee specializes in designing sustainable energy systems for boats, so she was up to the challenge of developing a boat that one day might not need to refuel. The ship will have 15 solar panels, each just 3 millimeters thick, which means they’ll follow the curve of the hull. “They’re very low profile; they’re not going to get washed off or anything like that,” Nicholls-Lee says. On a clear day, the panels could potentially generate some 2.5 kilowatts.

The sail fin is expected to propel the boat to its currently projected average cruising speed of 10 knots. When it operates just on electricity—to be stored in a half-ton battery bank in the hull—the Mayflower should make from 4 to 5 knots.

The ship’s eyes and ears sit near the stern, Nicholls-Lee says. Radar, cameras, lights, antennas, satellite-navigation equipment, and sonar pods will all be perched above the hull on a specially outfitted mast.

Nicholls-Lee says she’s been negotiating “with the AI team, who want the mast with all the equipment on it as high up as possible.” The mast really can’t be placed further forward on the boat, she says, because anything that’s closer to the bow gets the worst of the waves and the weather. And although the boat could keep moving if its sail fin snapped off, the loss of the mast would leave the Mayflower unable to navigate, leaving it more or less dead in the water.

The problem with putting the sensors behind the sail fin, Nicholls-Lee says, is that it means losing a fair portion of the field of view. That’s a trade-off the engineers are willing to work with if it helps to reduce their chances of being dismasted by a particularly nasty wave or swell. In the worst case, in which the sail fin gets stuck in one position, blocking the radar, sonar, and cameras, the fin has an emergency clutch. Resorting to that clutch would deprive the ship of the wind’s propulsive power, but at least it wouldn’t blind the ship.

Behind all that hardware is the software, which of course ultimately does the piloting. IBM supplies the AI package, together with cloud computing power.

The 8 to 10 core team members are now adapting the hardware and software to the problem of transatlantic navigation, Phaneuf says. An example of what they’re tweaking is an element of the Mayflower’s software stack called the operational decision manager.

“It’s a thing that parses rules,” Phaneuf says. “It’s used in fiscal markets. It looks at bank swaps or credit card applications, making tens of thousands or millions of decisions, over and over again, all day. You put in a set of rules textually, and it keeps refining as you give it more input. So in this case we can put in all the collision regulations and all sorts of exceptions and alternative hypotheses that take into account when people don’t follow the rules.”

Eric Aquaronne, a cloud-and-AI strategist for IBM in Nice, France, says that ultimately the Mayflower’s software must output a perhaps deceptively simple set of decisions. “In the end, it has to decide, Do I go right, left, or change my speed?” Aquaronne says.
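The sketch below is purely illustrative—it is not IBM’s Operational Decision Manager—but it shows the flavor of that final step: many inputs boiled down to a small set of actions, with rules checked in priority order. Every threshold and rule here is invented.

```python
# Toy rules engine, for illustration only. It reduces a handful of inputs to
# one of the decisions described above: turn right, turn left, change speed,
# or hold course. Thresholds and rules are made up, not COLREGs-accurate.

def decide(contact_bearing_deg: float, contact_range_nm: float,
           closing: bool, wind_ok: bool) -> str:
    rules = [
        # (condition, action) -- first matching rule wins.
        (contact_range_nm < 0.5 and closing,               "slow down"),
        (closing and -10 <= contact_bearing_deg <= 112.5,  "turn right"),  # contact off starboard
        (closing and -112.5 <= contact_bearing_deg < -10,  "turn left"),   # contact off port
        (not wind_ok,                                       "change speed"),
    ]
    for condition, action in rules:
        if condition:
            return action
    return "hold course and speed"

print(decide(contact_bearing_deg=45.0, contact_range_nm=2.0, closing=True, wind_ok=True))
# -> "turn right"
```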

Yet within those options, at every instant during the boat’s voyage are hidden a whole universe of weather, sensor, and regulatory data, as well as communications with the IBM systems onshore that continue to train the AI algorithms. (The boat will sometimes lose the satellite connection, Phaneuf notes, at which point it is really on its own, running its AI inference algorithms locally.)

Today very little weather data is collected from the ocean’s surface, Phaneuf notes. A successful Mayflower voyage that gathered such data for months on end could therefore make a strong case for having more such autonomous ships out in the ocean.

“We can help refine weather models, and if you have more of these things out on the ocean, you could make weather prediction ever more resolute,” he says. But, he adds, “it’s the first voyage. So we’re trying not to go too crazy. I’m really just worried about getting across. I’m letting the other guys worry about the science packages. I’m mostly concerned with the ‘not sinking’ part now—and the ‘get there relatively close to where it’s supposed to be’ part. After that, the sky’s the limit.”

IBM Reveals “Staggering” New Battery Tech, Withholds Technical Details

Post Syndicated from Mark Anderson original https://spectrum.ieee.org/energywise/energy/environment/ibm-new-seawater-battery-technology

IBM lifted the veil this week on a new battery for EVs, consumer devices, and electric grid storage that it says could be built from minerals and compounds found in seawater. (By contrast, many present-day batteries must source precious minerals like cobalt from dangerous and exploitative political regimes.) The battery is also touted as being non-flammable and able to recharge 80 percent of its capacity in five minutes.

The battery’s specs are, says Donald Sadoway, MIT professor of materials chemistry, “staggering.” Some details are available in a Dec. 18 blog posted to IBM’s website. Yet, Sadoway adds, lacking any substantive data on the device, he has “no basis with which to be able to confirm or deny” the company’s claims. 

How Green Was My Data Center? (Not Very)

Post Syndicated from Mark Anderson original https://spectrum.ieee.org/energywise/green-tech/conservation/how-green-was-my-data-center-not-very

A new data center industry report reveals that there’s a long way to go before anyone can claim the sector is in any way “green.” By one standard, just 12 percent of the data centers the report’s authors surveyed can be called markedly efficient, sustainable, or, yes, green.

The report, assembled by officials at the IT company SuperMicro, considered what it called “power effectiveness” as a standard for judging a data center’s sustainability. (Michael McNerney, SuperMicro vice president of marketing and network security, noted that the survey did not factor in a center’s power usage effectiveness or PUE score. That figure is calculated by dividing the facility’s total energy use by the amount of energy consumed by its IT equipment.)
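For reference, that PUE formula is trivial to compute; the numbers below are placeholders.

```python
# Power usage effectiveness (PUE) as defined in the parenthetical above:
# total facility energy divided by the energy consumed by IT equipment.
# A PUE of 1.0 would mean every watt goes to IT gear.

def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    return total_facility_kwh / it_equipment_kwh

print(round(pue(total_facility_kwh=1_500_000, it_equipment_kwh=1_000_000), 2))  # 1.5
```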

By the report’s standards, a data center’s power effectiveness was based on two primary factors: its power density per rack (higher numbers are better) and its reliance on ambient air cooling.

“Instead of just calling your HVAC vendor to get a new air conditioning [unit], there are ways to run these data centers a little hotter and reduce the power cooling cost,” McNerney said. “Systems are becoming more efficient, more reliable, and can run at higher temperatures.”

The report notes that a data center achieves an additional 4–5 percent energy savings for every 0.56 degrees Celsius (1 degree Fahrenheit) by which it allows its server inlet temperatures to increase.

So a small data center that had previously air-conditioned its servers down to between 21 and 24 ºC would stand to save more than US $6,000 per rack in annual operating expenses by letting the mercury climb to between 25 and 28 ºC. The savings climb further, to more than $12,500 per rack, if data center operators allow the operating temperature to rise as high as 32 ºC.
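The arithmetic behind those figures can be sketched as follows; the baseline cooling cost per rack and the simple compounding model are assumptions for illustration, not numbers from the report.

```python
# Sketch of the savings arithmetic: roughly 4-5 percent less cooling energy per
# degree Fahrenheit of allowed inlet-temperature increase. The baseline cooling
# cost per rack is an assumed figure, and the compounding model is a simplification.

SAVINGS_PER_DEG_F = 0.045          # midpoint of the 4-5 percent range
BASELINE_COOLING_COST = 20_000.0   # assumed annual cooling cost per rack, USD

def annual_savings(temp_increase_c: float) -> float:
    temp_increase_f = temp_increase_c * 9 / 5
    remaining_fraction = (1 - SAVINGS_PER_DEG_F) ** temp_increase_f
    return BASELINE_COOLING_COST * (1 - remaining_fraction)

for delta_c in (4, 9.5):   # roughly 21-24 C -> 25-28 C, and -> ~32 C
    print(f"+{delta_c} C: ~${annual_savings(delta_c):,.0f} saved per rack per year")
```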

Another 10- to 20-percent savings off the top could be extracted, McNerney said, by consolidating power and cooling infrastructure. This can be achieved with a single central cooling unit and a single power supply serving the entire center, rather than individual power supplies and fans dedicated to each rack or row.

The final component in assessing a data center’s green score, McNerney said, had more mitigating factors pulling in multiple directions.

“If you leave a server in place longer, you [generate] less e-waste,” McNerney said. “The flipside of that is old servers aren’t necessarily more efficient.”

Of course, refreshing only certain components of a server—for instance, its CPU cores, whose speed and efficiencies tend to increase faster than the rest of a server blade—can be part of the answer.

On average, survey respondents refreshed their servers every 4.1 years. The Intel x86 architecture roadmap calls for generational efficiency and speed improvements every two to two-and-a-half years. On the other hand, power supplies, cooling, storage and of course a server’s physical chassis don’t need refreshing nearly as promptly.

McNerney says many survey respondents found that software licensing costs, which are typically priced per core, are a key driver of server update timing. (A faster core can of course provide a better return on every software license dollar than can an older, slower core.)

“We really wanted to highlight that data centers going down this green route can actually save money—operation cost, acquisition cost,” he said. “We need to move this from a corporate charity discussion to ‘Clean up the data center and save a bunch of money while you’re doing it.’”

The SuperMicro report gathered 1,362 survey responses from a worldwide assortment of data center operators and affiliated IT professionals. The respondents were mostly companies whose data centers were based in North America (79 percent), although 32 percent and 22 percent also operated data centers in Europe/Africa and Asia, respectively.

AI and the future of work: The prospects for tomorrow’s jobs

Post Syndicated from Mark Anderson original https://spectrum.ieee.org/tech-talk/robotics/artificial-intelligence/ai-and-the-future-of-work-the-prospects-for-tomorrows-jobs

AI experts gathered at MIT last week, with the aim of predicting the role artificial intelligence will play in the future of work. Will it be the enemy of the human worker? Will it prove to be a savior? Or will it be just another innovation—like electricity or the internet?

As IEEE Spectrum previously reported, this conference (“AI and the Future of Work Congress”), held at MIT’s Kresge Auditorium, offered sometimes pessimistic outlooks on the job- and industry-destroying path that AI and automation seems to be taking: Self-driving technology will put truck drivers out of work; smart law clerk algorithms will put paralegals out of work; robots will (continue to) put factory and warehouse workers out of work.

Andrew McAfee, co-director of MIT’s Initiative on the Digital Economy, said even just in the past couple years, he’s noticed a shift in the public’s perception of AI. “I remember from previous versions of this conference, it felt like we had to make the case that we’re living in a period of accelerating change and that AI’s going to have a big impact,” he said. “Nobody had to make that case today.”

Elisabeth Reynolds, executive director of MIT’s Task Force on the Work of the Future, noted that following the path of least resistance is not a viable way forward. “If we do nothing, we’re in trouble,” she said. “The future will not take care of itself. We have to do something about it.”

Panelists and speakers spoke about championing productive uses of AI in the workplace, which ultimately benefit both employees and customers.

As one example, Zeynep Ton, professor at MIT Sloan School of Management, highlighted retailer Sam’s Club’s recent rollout of a program called Sam’s Garage. Previously, customers shopping for tires for their car spent somewhere between 30 and 45 minutes with a Sam’s Club associate paging through manuals and looking up specs on websites.

But with an AI algorithm, they were able to cut that spec hunting time down to 2.2 minutes. “Now instead of wasting their time trying to figure out the different tires, they can field the different options and talk about which one would work best [for the customer],” she said. “This is a great example of solving a real problem, including [enhancing] the experience of the associate as well as the customer.”

“We think of it as an AI-first world that’s coming,” said Scott Prevost, VP of engineering at Adobe. Prevost said AI agents in Adobe’s software will behave something like a creative assistant or intern who will take care of more mundane tasks for you.

Prevost cited an internal survey of Adobe customers that found 74 percent of respondents’ time was spent doing repetitive work—the kind that might be automated by an AI script or smart agent.

“It used to be you’d have the resources to work on three ideas [for a creative pitch or presentation],” Prevost said. “But if the AI can do a lot of the production work, then you can have 10 or 100. Which means you can actually explore some of the further out ideas. It’s also lowering the bar for everyday people to create really compelling output.”

In addition to changing the nature of work, noted a number of speakers at the event, AI is also directly transforming the workforce.

Jacob Hsu, CEO of the recruitment company Catalyte, spoke about using AI as a job placement tool. The company seeks to fill myriad positions including auto mechanics, baristas, and office workers—with its sights on candidates including young people and mid-career job changers. To find them, it advertises on Craigslist, social media, and traditional media.

The prospects who sign up with Catalyte take a battery of tests. The company’s AI algorithms then match each prospect’s skills with the field best suited for their talents.

“We want to be like the Harry Potter Sorting Hat,” Hsu said.

Guillermo Miranda, IBM’s global head of corporate social responsibility, said IBM has increasingly been hiring based not on credentials but on skills. For instance, he said, as much as 50 percent of the company’s new hires in some divisions do not have a traditional four-year college degree. “As a company, we need to be much more clear about hiring by skills,” he said. “It takes discipline. It takes conviction. It takes a little bit of enforcing with H.R. by the business leaders. But if you hire by skills, it works.”

Ardine Williams, Amazon’s VP of workforce development, said the e-commerce giant has been experimenting with developing skills of the employees at its warehouses (a.k.a. fulfillment centers) with an eye toward putting them in a position to get higher-paying work with other companies.

She described an agreement Amazon had made in its Dallas fulfillment center with aircraft maker Sikorsky, which had been experiencing a shortage of skilled workers for its nearby factory. So Amazon offered its employees free certification training so they could seek higher-paying work at Sikorsky.

“I do that because now I have an attraction mechanism—like a G.I. Bill,” Williams said. The program is also only available for employees who have worked at least a year with Amazon. So their program offers medium-term job retention, while ultimately moving workers up the wage ladder.

Radha Basu, CEO of AI data company iMerit, said her firm aggressively hires from the pool of women and under-resourced minority communities in the U.S. and India. The company specializes in turning unstructured data (e.g. video or audio feeds) into tagged and annotated data for machine learning, natural language processing, or computer vision applications.

“There is a motivation with these young people to learn these things,” she said. “It comes with no baggage.”
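In concrete terms, that annotation work turns raw, unstructured files into structured, labeled records a model can train on. The sketch below shows what one such record might look like for a computer-vision task; the schema and values are invented, not iMerit’s actual format:

```python
# A minimal, hypothetical example of a single annotation record: a raw image
# reference plus human-drawn bounding boxes. The schema is invented for
# illustration and is not iMerit's actual format.
import json

annotation = {
    "source": "frame_000412.jpg",          # the raw, unstructured input
    "labels": [                            # structure added by a human annotator
        {"class": "pedestrian",    "bbox": [412, 118, 77, 190]},  # x, y, w, h in pixels
        {"class": "traffic_light", "bbox": [903, 44, 28, 61]},
    ],
    "annotator_id": "a-1047",
    "reviewed": True,
}

# Serialized, records like this become supervised training examples
# for a computer-vision model.
print(json.dumps(annotation, indent=2))
```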

Alastair Fitzpayne, executive director of The Aspen Institute’s Future of Work Initiative, said the future of work ultimately means, in bottom-line terms, the future of human capital. “We have an R&D tax credit,” he said. “We’ve had it for decades. It provides credit for companies that make new investment in research and development. But we have nothing on the human capital side that’s analogous.”

So a company that’s making a big investment in worker training does so on its own dime, without any of the tax benefits it might accrue if it, say, spent that money on new equipment or new technology. Fitzpayne said a simple tweak to the R&D tax credit could make a big difference by incentivizing new investment programs in worker training. That would still mean Amazon’s pre-existing worker training programs—for a company that already famously pays no taxes—would not count.

“We need a different way of developing new technologies,” said Daron Acemoglu, MIT Institute Professor of Economics. He pointed to the clean energy sector as an example. First a consensus around the problem needs to emerge. Then a broadly agreed-upon set of goals and measurements needs to be developed (for instance, that AI and automation should create at least X new jobs for every Y jobs they eliminate).

Then it just needs to be implemented.

“We need to build a consensus that, along the path we’re following at the moment, there are going to be increasing problems for labor,” Acemoglu said. “We need a mindset change. That it is not just about minimizing costs or maximizing tax benefits, but really worrying about what kind of society we’re creating and what kind of environment we’re creating if we keep on just automating and [eliminating] good jobs.”

AI and the Future of Work: The Economic Impacts of Artificial Intelligence

Post Syndicated from Mark Anderson original https://spectrum.ieee.org/tech-talk/robotics/artificial-intelligence/ai-and-the-future-of-work-the-economic-impact-of-artificial-intelligence

This week at MIT, academics and industry officials compared notes, studies, and predictions about AI and the future of work. During the discussions, an insurance company executive shared details about one AI program that rolled out at his firm earlier this year. A chatbot the company introduced, the executive said, now handles 150,000 calls per month.

Later in the day, a panelist—David Fanning, founder of PBS’s Frontline—remarked that this statistic is emblematic of broader fears he saw when reporting a new Frontline documentary about AI. “People are scared,” Fanning said of the public’s AI anxiety.

Fanning was part of a daylong symposium about AI’s economic consequences—good, bad, and otherwise—convened by MIT’s Task Force on the Work of the Future.

AI and the Future of Work: What to look out for

Post Syndicated from Mark Anderson original https://spectrum.ieee.org/tech-talk/computing/it/ai-and-the-future-of-work-what-to-look-out-for

The robots have come for our jobs. This is the fear that artificial intelligence increasingly stokes among both the tech and policy elite and the public at large. But how worried should we really be?

To consider what impact AI will have on employment, a conference at MIT titled The Future of Work is convening this week, bringing together some leading thinkers and analysts. In advance of the conference, IEEE Spectrum talked with R. David Edelman, director of MIT’s Project on Technology, Economy & National Security, about his take on AI’s coming role.

Edelman says he’s lately seen AI-related worry (mixed with economic anxiety) ramp up in much the same way that cybersecurity worries began ratcheting up ten or fifteen years ago.

“Increasingly, issues like the implications of AI on the future of work are table stakes for understanding our economic future and our ability to deliver prosperity for Americans,” he says. “That’s why you’re seeing broad interest, not just from the Department of Labor, not just from the Council of Economic Advisors, but across the government and, in turn, across society.”

Before coming to MIT, Edelman worked in the White House from 2010-’17 under a number of titles, including Special Assistant to the President on Economic and Technology Policy. Edelman also organizes a related conference in the spring at MIT, the MIT AI Policy Congress.

At this week’s Future of Work conference, though, Edelman says he’ll be keeping his ears open for a number of issues that he thinks are not quite on everyone’s radar yet. But they may be soon.

For starters, Edelman says, mainstream conversations pay too little attention to the boundary between AI-controlled systems and human-controlled ones.

“We need to figure out when people are comfortable handing decisions over to robots, and when they’re not,” he says. “There is a yawning gap between the percentage of Americans who are willing to turn their lives over to autopilot on a plane, and the percentage of Americans willing to turn their lives over to autopilot on a Tesla.”

To be clear, Edelman is not saying that gap represents any sort of unfounded fear. His point is that public discussion of self-driving or self-piloting systems is very either/or: Either a self-driving system is seen as 100 percent reliable for all situations, or it’s seen as immature and never to be used.

Second, not enough attention has yet been devoted to the question of metrics we can put in place to understand when an AI system has earned public trust and when it has not.

AI systems are, Edelman points out, only as reliable as the data that created them. So questions about racial and socioeconomic bias in, for instance, AI hiring algorithms are entirely appropriate.

“Claims about AI-driven hiring are careening evermore quickly forward,” Edelman says. “There seems to be a disconnect. I’m eager to know, Are we in a place where we need to pump the brakes on AI-influenced hiring? Or do we have some models in a technical or legal context that can give us the confidence we lack today that these systems won’t create a second source of bias?”
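One concrete example of such a metric, long used as a rule of thumb in U.S. employment guidelines, is the ratio of selection rates between demographic groups (the so-called four-fifths rule). The sketch below uses invented numbers and is not tied to any system Edelman mentioned; it simply shows how basic the check itself can be:

```python
# A minimal sketch of one common fairness check for hiring pipelines:
# compare hire rates across groups and flag ratios below 0.8 (the
# "four-fifths rule" heuristic). All numbers here are invented.
from collections import Counter

def selection_rates(decisions):
    """decisions: iterable of (group, hired_bool). Returns hire rate per group."""
    totals, hires = Counter(), Counter()
    for group, hired in decisions:
        totals[group] += 1
        hires[group] += int(hired)
    return {group: hires[group] / totals[group] for group in totals}

def disparate_impact_ratio(decisions):
    """Ratio of the lowest group hire rate to the highest; below 0.8 is a red flag."""
    rates = selection_rates(decisions)
    return min(rates.values()) / max(rates.values())

# Hypothetical screening outcomes from an AI-influenced pipeline.
sample = ([("group_a", True)] * 40 + [("group_a", False)] * 60
          + [("group_b", True)] * 25 + [("group_b", False)] * 75)
print(disparate_impact_ratio(sample))  # 0.25 / 0.40 = 0.625 -> time to pump the brakes
```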

A third area of the conversation that Edelman says deserves more media and policy attention is the question of which industries AI threatens most. While there’s been discussion about jobs that have been put in the AI crosshairs, less discussed, he says, is the bias inherent in the question itself.

A 2017 study by Yale University and the University of Oxford’s Future of Humanity Institute surveyed AI experts for their predictions about, in part, the gravest threats AI poses to jobs and economic prosperity. Edelman points out that the professionals surveyed all tipped their hands a bit: The very last profession AI researchers said would ever be automated was—surprise, surprise—AI researcher.

“Everyone believes that their job will be the last job to be automated, because it’s too complex for machines to possibly master,” Edelman says.

“It’s time we make sure we’re appropriately challenging this consensus that the only sort of preparation we need to do is for the lowest-wage and lowest-skilled jobs,” he says. “Because it may well be that what we think of as good middle-income jobs, maybe even requiring some education, might be displaced or have major skills within them displaced.”

Last is the belief that AI’s effect on industries will be to eliminate jobs and only to eliminate jobs. In fact, Edelman says, the evidence suggests any such threats could be more nuanced.

AI may indeed eliminate some categories of jobs but may also spawn hybrid jobs that incorporate the new technology into an old format. As was the case with the rollout of electricity at the turn of the 20th century, new fields of study spring up too. Electrical engineers weren’t really needed before electricity became something more than a parlor curiosity, after all. Could AI engineering one day be a field unto itself? (With, surely, its own categories of jobs and academic fields of study and professional membership organizations?)

“We should be doing the hard and technical and often untechnical and unglamorous work of designing systems to earn our trust,” he says. “Humanity has been spectacularly unsuccessful in placing technological genies back in bottles. … We’re at the vanguard of a revolution in teaching technology how to play nice with humans. But that’s gonna be a lot of work. Because it’s got a lot to learn.”

Supercomputers Simulate Solar Flares to Help Physicists Understand Magnetic Reconnection

Post Syndicated from Mark Anderson original https://spectrum.ieee.org/tech-talk/aerospace/astrophysics/supercomputers-simulate-solar-flares-to-explore-magnetic-reconnection

New supercomputer simulations have successfully modeled a mysterious process believed to produce some of the hottest and most dangerous solar flares—flares that can disrupt satellites and telecommunications networks, cause power outages, and otherwise wreak havoc on the grid. And what researchers have learned may also help physicists design more efficient nuclear fusion reactors.

In the past, solar physicists have had to get creative when trying to understand and predict flares and solar storms. It’s difficult, to put it mildly, to simulate the surface of the sun in a lab. Doing so would involve creating and then containing an extended region of dense plasma with extremely high temperatures (between thousands of degrees and one million degrees Celsius) as well as strong magnetic fields (of up to 100 teslas).

However, a team of researchers based in the United States and France developed a supercomputer simulation (originally run on Oak Ridge National Lab’s recently retired Titan machine) that successfully modeled a key part of a mysterious process that produces solar flares. The group presented its results last month at the annual meeting of the American Physical Society’s (APS) Plasma Physics division, in Fort Lauderdale, Fla.

Alphabet’s Makani Tests Wind Energy Kites in the North Sea

Post Syndicated from Mark Anderson original https://spectrum.ieee.org/energywise/energy/renewables/alphabets-makani-tests-wind-energy-kites-in-the-north-sea

The idea is simple: Send kites or tethered drones hundreds of meters up in the sky to generate electricity from the persistent winds aloft. With such technologies, it might even be possible to produce wind energy around the clock. However, the engineering required to realize this vision is still very much a work in progress.
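Part of the appeal is basic physics: the power available to any wind-harvesting device scales with the cube of wind speed, so the steadier, faster winds a few hundred meters up carry far more energy than winds near the ground. A back-of-envelope sketch (the wind speeds below are illustrative assumptions, not Makani measurements) makes the point:

```python
# Rough wind-power-density comparison: P/A = 0.5 * rho * v^3.
# Wind speeds are illustrative assumptions, not Makani data; air density
# changes little over a few hundred meters, so it's held constant here.
def power_density(wind_speed_m_s: float, air_density: float = 1.225) -> float:
    """Kinetic power flowing through 1 square meter of swept area, in W/m^2."""
    return 0.5 * air_density * wind_speed_m_s ** 3

near_surface = power_density(6.0)    # a typical near-surface site, ~6 m/s
kite_altitude = power_density(10.0)  # steadier, faster wind a few hundred meters up
print(f"{near_surface:.0f} W/m^2 near the surface vs {kite_altitude:.0f} W/m^2 aloft "
      f"({kite_altitude / near_surface:.1f}x more available power)")
```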

Dozens of companies and researchers devoted to developing technologies that produce wind power while adrift high in the sky gathered at a conference in Glasgow, Scotland, last week. They presented studies, experiments, field tests, and simulations describing the efficiency and cost-effectiveness of various technologies collectively described as airborne wind energy (AWE).

In August, Alameda, Calif.-based Makani Technologies ran demonstration flights of its airborne wind turbines—which the company calls energy kites—in the North Sea, some 10 kilometers off the coast of Norway. According to Makani CEO Fort Felker, the North Sea tests consisted of a launch and “landing” test for the flyer followed by a flight test, in which the kite stayed aloft for an hour in “robust crosswind(s).” The flights were the first offshore tests of the company’s kite-and-buoy setup. The company has, however, been conducting onshore flights of various incarnations of its energy kites in California and Hawaii.