All posts by Jeremy Hsu

Paper Cards and Digital Codes Target Vaccination Chaos

Post Syndicated from Jeremy Hsu original

Apart from a few bright spots such as Israel and the United Arab Emirates, the COVID-19 vaccine rollout around the world has had a rocky start. Some places, like Canada and the European Union, have suffered stumbles so far. Others, like the United States, have been in a state of chaos since the vaccines were first approved at the end of last year. Many in the U.S. have scrambled online for ephemeral appointment slots, while pharmacies desperately look to use up leftover doses before they expire.

One low-tech solution may help: An MIT-led coalition has unveiled an augmented vaccination card that works with or without user apps to help enable a much smoother vaccination process for everyone.

The simplest form of the vaccination card would include QR codes that can be applied as stickers to the cards already distributed by the U.S. Centers for Disease Control and Prevention. Each code would contain the encrypted digital information needed for a given stage of a person’s vaccination process, which the relevant authorities could scan to check the person’s status. And because no personally identifiable information would be stored in central databases, the scheme also aims to respect individual privacy.
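The privacy idea here can be sketched in a few lines: the card carries a self-contained, authenticated token, and a scanner checks its validity offline rather than querying a central database of personal information. The sketch below is purely illustrative and assumes a simple HMAC scheme; the coalition's actual card format is not specified in this article, and a real deployment would use asymmetric signatures.

```python
import hmac
import hashlib
import json

# Hypothetical sketch: the QR payload is a self-contained token signed by the
# health authority. A scanner verifies the tag offline; no central database
# of personally identifiable information is ever consulted.
AUTHORITY_KEY = b"demo-signing-key"  # illustrative; a real system would use a key pair

def make_qr_payload(card_id: str, dose: int) -> str:
    """Build the string a QR sticker would encode for one vaccination stage."""
    body = json.dumps({"card": card_id, "dose": dose}, sort_keys=True)
    tag = hmac.new(AUTHORITY_KEY, body.encode(), hashlib.sha256).hexdigest()
    return body + "." + tag

def verify_qr_payload(payload: str) -> bool:
    """Check authenticity without looking anything up centrally."""
    body, _, tag = payload.rpartition(".")
    expected = hmac.new(AUTHORITY_KEY, body.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(tag, expected)

payload = make_qr_payload("card-12345", dose=1)
tampered = payload.replace('"dose": 1', '"dose": 2')
print(verify_qr_payload(payload), verify_qr_payload(tampered))  # True False
```

The card ID here stands in for an opaque identifier; nothing in the token needs to name its holder.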

“We should start sending these unique vaccination cards to everybody right now, like a mail-in ballot or census form,” said Ramesh Raskar, an associate professor of media arts and sciences at the MIT Media Lab.

New and Hardened Quantum Crypto System Notches “Milestone” Open-Air Test

Quantum cryptography is often billed as impervious, yet researchers continue to uncover new hacks and potential vulnerabilities—perhaps ultimately calling into question its alleged unhackability.

Yet now comes word of a successful Chinese open-air test of a new-generation quantum crypto system that allows for untrustworthy receiving stations.

The previous iteration of quantum cryptography did not have this feature. Then again, the previous iteration did have successful ground and satellite experiments, establishing quantum links to transmit secret messages over hundreds of kilometers.

This new and more secure quantum crypto system is, perhaps not surprisingly, much more challenging to implement.

It creates its secret cryptographic keys based on the quantum interference of single photons—light particles—that are made to be indistinguishable despite being generated by independent lasers. As long as sender and receiver use trusted laser sources, they can securely communicate with each other regardless of whether they trust the detectors performing the measurements of the photons’ quantum interference.

Because the cryptographic key can be securely transmitted even in the case of a potentially hacked receiver, the new method is called measurement-device independent quantum key distribution (MDI-QKD).

“It is not far-fetched to say that MDI-QKD could be the de facto [quantum cryptography] protocol in future quantum networks, be it in terrestrial networks or across satellite communications,” says Charles Ci Wen Lim, an assistant professor in electrical and computer engineering and principal investigator at the Centre for Quantum Technologies at the National University of Singapore; he was not involved in the recent experiment.

In their unprecedented demonstration, Chinese researchers figured out how to overcome many of the experimental challenges of implementing MDI-QKD in the open atmosphere. Their paper detailing the experiment was published last month in the journal Physical Review Letters.

Such an experimental system bodes well for future demonstrations involving quantum links between ground stations and experimental quantum communication satellites such as China’s Micius, says Qiang Zhang, a professor of physics at the University of Science and Technology of China and an author of the recent paper.

The experiment demonstrated quantum interference between photons even in the face of atmospheric turbulence. Such turbulence is typically stronger across horizontal rather than vertical distances. The fact that the present experiment traverses horizontal distances bodes well for future ground-to-satellite systems. And the 19.2-kilometer distance involved in the demonstration already exceeds that of the thickest part of the Earth’s atmosphere.

To cross so much open air, the Chinese researchers developed an adaptive optics system similar to the technology that helps prevent atmospheric disturbances from interfering with astronomers’ telescope observations.

Even MDI-QKD is not 100 percent secure—it remains vulnerable to hacking based on attackers compromising the quantum key-generating lasers. Still, the MDI-QKD security scheme offers, Zhang claims, “near perfect information theoretical security.” It’s entirely secure, in other words, in theory. 

The remaining security vulnerabilities on the photon source side can be “pretty well taken care of by solid countermeasures,” Lim says. He and his colleagues at the National University of Singapore described one possible countermeasure in the form of a “quantum optical fuse” that can limit the input power of untrusted photon sources. Their paper was recently accepted for presentation during the QCRYPT 2020 conference.

All in, Lim says, the Chinese team’s “experiment demonstrated that adaptive optics will be essential in ensuring that MDI-QKD works properly over urban free-space channels, and it represents an important step towards deploying MDI-QKD over satellite channels.” From his outside perspective, he described the Chinese team’s work as a “milestone experiment for MDI-QKD.”

Quantum Computers Will Speed Up the Internet’s Most Important Algorithm

The fast Fourier transform (FFT) is the unsung digital workhorse of modern life. It’s a clever mathematical shortcut that makes possible the many signals in our device-connected world. Every minute of every video stream, for instance, entails computing some hundreds of FFTs. The FFT’s importance to practically every data-processing application in the digital age explains why some researchers have begun exploring how quantum computing can run the FFT algorithm more efficiently still.
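The "shortcut" is easy to see in code: the FFT computes exactly the same discrete Fourier transform as the naive O(N²) summation, just in O(N log N) time. A quick NumPy check (illustrative, not from the article):

```python
import numpy as np

# The FFT is a fast algorithm for the discrete Fourier transform:
# X[k] = sum_n x[n] * exp(-2j*pi*k*n/N). Build the naive O(N^2) version
# and confirm np.fft.fft produces the same result.
rng = np.random.default_rng(0)
x = rng.standard_normal(256)

N = len(x)
n = np.arange(N)
dft_matrix = np.exp(-2j * np.pi * np.outer(n, n) / N)
X_naive = dft_matrix @ x          # O(N^2) matrix-vector product

X_fast = np.fft.fft(x)            # O(N log N) fast Fourier transform
print(np.allclose(X_naive, X_fast))  # True
```

The savings compound quickly: for a million-point signal, the fast version does roughly 20 million operations instead of a trillion.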

“The fast Fourier transform is an important algorithm that’s had lots of applications in the classical world,” says Ian Walmsley, physicist at Imperial College London. “It also has many applications in the quantum domain. [So] it’s important to figure out effective ways to be able to implement it.”

The first proposed killer app for quantum computers—finding a number’s prime factors—was discovered by mathematician Peter Shor at AT&T Bell Laboratories in 1994. Shor’s algorithm scales up its factorization of numbers more efficiently and rapidly than any classical computer anyone could ever design. And at the heart of Shor’s phenomenal quantum engine is a subroutine called—you guessed it—the quantum Fourier transform (QFT).

Here is where the terminology gets a little out of hand. There is the QFT at the center of Shor’s algorithm, and then there is the QFFT—the quantum fast Fourier transform. They represent different computations that produce different results, although both are based on the same core mathematical concept, known as the discrete Fourier transform.
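That shared core is concrete: for n qubits (N = 2ⁿ), the QFT is the N×N discrete Fourier transform matrix scaled by 1/√N, which makes it a valid (unitary) quantum operation. A small NumPy check of this standard fact, for two qubits:

```python
import numpy as np

# The QFT on n qubits is the N x N DFT matrix (N = 2**n) scaled by
# 1/sqrt(N). The scaling makes it unitary, i.e. a legal quantum gate.
n_qubits = 2
N = 2 ** n_qubits
omega = np.exp(2j * np.pi / N)
qft = np.array([[omega ** (j * k) for k in range(N)]
                for j in range(N)]) / np.sqrt(N)

# Unitarity: QFT^dagger @ QFT should be the identity.
print(np.allclose(qft.conj().T @ qft, np.eye(N)))  # True
```

The QFFT, by contrast, reorganizes the classical FFT's butterfly operations into quantum arithmetic circuits rather than applying this single unitary.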

The QFT is poised to find technological applications first, though neither appears destined to become the new FFT. Instead, QFT and QFFT seem more likely to power a new generation of quantum applications.

The quantum circuit for QFFT is just one part of a much bigger puzzle that, once complete, will lay the foundation for future quantum algorithms, according to researchers at the Tokyo University of Science. The QFFT algorithm would process a single stream of data at the same speed as a classical FFT. However, the QFFT’s strength comes not from processing a single stream of data on its own but rather multiple data streams at once. The quantum paradox that makes this possible, called superposition, allows a single group of quantum bits (qubits) to encode multiple states of information simultaneously. So, by representing multiple streams of data, the QFFT appears poised to deliver faster performance and to enable power-saving information processing.

The Tokyo researchers’ quantum-circuit design uses qubits efficiently without producing so-called garbage bits, which can interfere with quantum computations. One of their next big steps involves developing quantum random-access memory for preprocessing large amounts of data. They laid out their QFFT blueprints in a recent issue of the journal ­Quantum Information Processing.

“QFFT and our arithmetic operations in the paper demonstrate their power only when used as subroutines in combination with other parts,” says Ryo Asaka, a physics graduate student at Tokyo University of Science and lead author on the study.

Greg Kuperberg, a mathematician at the University of California, Davis, says the Japanese group’s work provides a scaffolding for future quantum algorithms. However, he adds, “it’s not destined by itself to be a magical solution to anything. It’s trundling out the equipment for somebody else’s magic show.”

It is also unclear how well the proposed QFFT would perform when running on a quantum computer under real-world constraints, says Imperial’s Walmsley. But he suggested it might benefit from running on one kind of quantum computer versus another (for example, a ­magneto-optical trap versus nitrogen vacancies in diamond) and could eventually become a specialized coprocessor in a quantum-classical hybrid computing system.

University of Warsaw physicist Magdalena Stobińska, a main coordinator for the European Commission’s AppQInfo project—which will train young researchers in quantum information processing starting in 2021—notes that one main topic involves developing new quantum algorithms such as the QFFT.

“The real value of this work lies in proposing a different data encoding for computing the [FFT] on quantum hardware,” she says, “and showing that such out-of-box thinking can lead to new classes of quantum algorithms.”

This article appears in the January 2021 print issue as “A Quantum Speedup for the Fast Fourier Transform.”

Photonic Quantum Computer Displays “Supremacy” Over Supercomputers

When Google unveiled its unprecedented demonstration of quantum computational advantage over classical computing in late 2019, Chinese researchers reported progress toward their own milestone demonstration.

Just over a year later, the Chinese team has shown a different but no less exciting path forward for quantum computing by achieving the second known demonstration of quantum computational advantage—also known as quantum supremacy.

This new finding does more than just confirm that quantum computers can solve certain problems faster than conventional supercomputers. (That’s the “supremacy” part.) It also adds a surprising twist: It now appears possible to achieve phenomenal computational feats with lasers and nonlinear crystals. Photonic systems have sometimes been considered more of a bridge technology between quantum devices than the core of a quantum computer itself. 

No longer. 

Quantum Memory Milestone Boosts Quantum Internet Future

An array of cesium atoms just 2.5 centimeters long has demonstrated a record level of storage-and-retrieval efficiency for quantum memory. It’s a pivotal step on the road to eventually building large-scale quantum communication networks spanning entire continents.

Intel Creating Cryptographic Codes That Quantum Computers Can’t Crack

The world will need a new generation of cryptographic algorithms once quantum computing becomes powerful enough to crack the codes that protect everyone’s digital privacy. An Intel team has created an improved version of such a quantum-resistant cryptographic algorithm that could work more efficiently on the smart home and industrial devices making up the Internet of Things.

The Intel team’s Bit-flipping Key Encapsulation (BIKE) provides a way to create a shared secret that encrypts sensitive information exchanged between two devices. The encryption process requires computationally complex operations involving mathematical problems that could strain the hardware of many Internet of Things (IoT) devices. But Intel researchers figured out how to create a hardware accelerator that enables the BIKE software to run efficiently on less powerful hardware.

“Software execution of BIKE, especially on lightweight IoT devices, is latency and power intensive,” says Manoj Sastry, principal engineer at Intel. “The BIKE hardware accelerator proposed in this paper shows feasibility for IoT-class devices.”

Intel has been developing BIKE as one possible quantum-resistant algorithm among the many being currently evaluated by the U.S. National Institute of Standards and Technology. This latest version of BIKE was presented in a paper [PDF] during the IEEE International Conference on Quantum Computing and Engineering on 13 October 2020. 

BIKE securely establishes a shared secret between two devices through a three-step process, says Santosh Ghosh, a research scientist at Intel and coauthor on the paper. First, the host device creates a public-private key pair and sends the public key to the client. Second, the client sends an encrypted message using the public key to the host. And third, the host decodes the encrypted message through a BIKE decode procedure using the private key. “Of these three steps, BIKE decode is the most compute intensive operation,” Ghosh explains.
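The three-step flow Ghosh describes is the standard key-encapsulation pattern. The toy sketch below illustrates that pattern only; it uses classical Diffie-Hellman-style arithmetic as a stand-in, since BIKE's actual QC-MDPC code-based operations are far more involved and are not reproduced here.

```python
import hashlib
import secrets

# Toy key-encapsulation sketch of the three-step flow described above.
# Discrete-log arithmetic stands in for BIKE's code-based math; the
# parameters are demo-sized, not secure.
P = 0xFFFFFFFFFFFFFFC5  # 2**64 - 59, a small demo prime
G = 5

def keygen():
    x = secrets.randbelow(P - 2) + 1         # host private key
    return x, pow(G, x, P)                   # (private, public)

def encapsulate(public):
    r = secrets.randbelow(P - 2) + 1
    ciphertext = pow(G, r, P)                # sent to the host
    shared = hashlib.sha256(str(pow(public, r, P)).encode()).digest()
    return ciphertext, shared

def decapsulate(private, ciphertext):
    return hashlib.sha256(str(pow(ciphertext, private, P)).encode()).digest()

# Step 1: host creates a key pair and sends the public key to the client.
priv, pub = keygen()
# Step 2: client encapsulates a shared secret under the public key.
ct, client_secret = encapsulate(pub)
# Step 3: host decodes the ciphertext using its private key.
host_secret = decapsulate(priv, ct)
print(client_secret == host_secret)  # True
```

In BIKE, step 3 (decoding) is the expensive part, which is why it is the focus of the hardware accelerator.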

The improved version of BIKE takes advantage of a new decoder that requires less computing power. In testing, a single BIKE decode operation completed in 1.3 million cycles, or about 12 milliseconds at 110 MHz, on an Intel Arria 10 FPGA, which is fairly competitive compared with other options.
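The quoted figures are consistent with each other, as a quick back-of-the-envelope check shows:

```python
# Latency = cycle count / clock frequency.
cycles = 1.3e6       # cycles per BIKE decode
clock_hz = 110e6     # 110 MHz FPGA clock
latency_ms = cycles / clock_hz * 1000
print(round(latency_ms, 1))  # 11.8 -- i.e., about 12 milliseconds
```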

“BIKE is well suited for applications where IoT devices are used for encapsulation and a more capable device takes the role of host to generate the keys and perform the decapsulation procedure,” Ghosh says.

This also represents the first hardware implementation of BIKE suitable for Level 5 keys and ciphertexts, with Level 5 representing the highest level of security as defined by the U.S. National Institute of Standards and Technology. Each higher security level requires bigger keys and ciphertexts—the encrypted forms of data that look unintelligible to prying eyes—which in turn require more compute-intensive operations.

The team was previously focused on BIKE implementations suitable for the lower security levels of 1 and 3, which meant the public hardware implementation submitted to NIST as reference did not support level 5, says Rafael Misoczki, coauthor on the paper who was formerly at Intel and is now a cryptography engineer at Google.

The latest hardware implementation for BIKE takes the security up a couple of notches.

“Our BIKE decoder supports keys and ciphertexts for Level 5, which provides security equivalent to AES-256,” says Andrew Reinders, security researcher at Intel and coauthor on the paper. “This means a quantum computer needs 2¹²⁸ operations to break it.”

The latest version of the BIKE hardware accelerator has a design that offers additional security against side-channel attacks in which attackers attempt to exploit information about the power consumption or even timing of software processes. For example, differential power analysis attacks can track the power consumption patterns associated with running certain computational tasks to reveal some of the underlying computations.

In theory, such attacks against BIKE might attempt to target a small block of the secret being shared between two devices. That block size depends upon how many secret bits work together in underlying sub-operations, Reinders says. Because a BIKE block size is 1-bit, a BIKE hardware design that processed a single secret bit at a time would be highly vulnerable to such a differential power analysis attack.

But the new BIKE hardware accelerator offers protection against such attacks because it performs all the computations for the 128 bits of the secret in parallel, which makes it difficult to single out power consumption patterns associated with individual computations.

The BIKE hardware accelerator also has protection against timing attacks, because its decoder always runs for a fixed number of rounds that are each the same amount of time.

BIKE was previously chosen as one of eight alternates in the third round of NIST’s Post-Quantum Cryptography Standardization Process that is currently narrowing down the best candidates to replace modern cryptography standards. Seven other algorithms were selected as finalists, making for a total of 15 algorithms remaining under consideration from a starting group of 69 submissions.

The Intel team has submitted their revised version of BIKE for the third round of the NIST challenge and are currently awaiting comments. If all goes well, the federal agency plans to release the initial standard for quantum-resistant cryptography in 2022.

Measuring Progress in the ‘Noisy’ Era of Quantum Computing

Measuring the progress of quantum computers can prove tricky in the era of “noisy” quantum computing technology. One concept, known as “quantum volume,” has become a favored measure among companies such as IBM and Honeywell. But not every company or researcher agrees on its usefulness as a yardstick in quantum computing.

A Simple Neural Network Upgrade Boosts AI Performance

A huge number of machine learning applications could receive a performance upgrade, thanks to a relatively minor modification to their underlying neural networks. 

If you are a developer creating a new machine learning application, you typically build on top of an existing neural network architecture, one that is already tuned for the kind of problem you are trying to solve—creating your own architecture from scratch is a difficult job that’s typically more trouble than it’s worth. Even with an existing architecture in hand, reengineering it for better performance is no small task. But one team has come up with a new neural network module that can boost AI performance when plugged into four of the most widely used architectures.

Critically, the research funded by the U.S. National Science Foundation and Army Research Office achieves this performance boost through the new module without requiring much of an increase in computing power. It’s part of a broader project by North Carolina State University researchers to rethink the architecture of the neural networks involved in modern AI’s deep learning capabilities. 

“At the macro level, we try to redesign the entire neural network as a whole,” says Tianfu Wu, an electrical and computer engineer at North Carolina State University in Raleigh. “Then we try to focus on the specific components of the neural network.”

Wu and his colleagues presented their work (PDF) on the new neural network component, or module, named Attentive Normalization, at the virtual version of the 16th European Conference on Computer Vision in August. They have also released the code so that other researchers can plug the module into their own deep learning models.

In preliminary testing, the group found that the new module improved performance in four mainstream neural network architectures: ResNets, DenseNets, MobileNetsV2 and AOGNets. The researchers checked the upgraded networks’ performances against two industry benchmarks for testing visual object recognition and classification, including ImageNet-1000 and MS-COCO 2017. For example, the new module boosted the top-1 accuracy in the ImageNet-1000 benchmark by between 0.5 percent and 2.7 percent. This may seem small, but it can make a significant difference in practice, not least because of the large scale of many machine learning deployments. 

Altogether, this diverse array of architectures is suitable for performing AI-driven tasks on both large computing systems and mobile devices with more limited computing power. But the most noticeable improvement in performance came in the neural network architectures suited for mobile platforms such as smartphones.

The key to the team’s success came from combining two neural network modules that usually operate separately. “In order to make a neural network more powerful or easier to train, feature normalization and feature attention are probably two of the most important components,” Wu says.

The feature normalization module helps to make sure that no single subset of the data used to train a neural network outweighs the other subsets in shaping the deep learning model. By comparing neural network training to driving a car on a dark road, Wu describes feature normalization as the car’s suspension system smoothing out the jolts from any bumps in the road.

By comparison, the feature attention module helps to focus on certain features in the training data that could better achieve the learning task at hand. Going back to the car analogy for training neural networks, the feature attention module represents the vehicle headlights showing what to look out for on the dark road ahead.

After scrutinizing both modules, the researchers realized that certain sub-processes in both modules overlap in the shared goal of re-calibrating certain features in the training data. That provided a natural integration point for combining feature normalization and feature attention in the new module. “We want to see different micro components in neural architecture that can be and should be integrated together to make them more effective,” Wu says. 
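A rough NumPy sketch of that combination follows. This is an illustrative simplification, not the paper's exact formulation: features are normalized as usual, but the re-calibrating scale and shift become an input-dependent mixture of several learned parameter sets, chosen by attention weights. All array names and the number of parameter sets K are invented for the example.

```python
import numpy as np

# Illustrative sketch of the idea behind Attentive Normalization (not the
# paper's exact math): normalize features, then re-calibrate them with an
# input-dependent mixture of K learned affine parameter sets instead of a
# single fixed scale and shift.
rng = np.random.default_rng(1)
batch, channels, K = 8, 16, 4

x = rng.standard_normal((batch, channels))
gamma_bank = rng.standard_normal((K, channels))   # K candidate scale vectors
beta_bank = rng.standard_normal((K, channels))    # K candidate shift vectors
attn_proj = rng.standard_normal((channels, K))    # stand-in attention weights

# 1) Feature normalization (batch-norm style, per channel).
x_norm = (x - x.mean(axis=0)) / (x.std(axis=0) + 1e-5)

# 2) Feature attention: each example picks its own softmax mixture over
#    the K parameter sets.
logits = x @ attn_proj                             # (batch, K)
logits -= logits.max(axis=1, keepdims=True)        # numerical stability
weights = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

# 3) Re-calibrate with the attended, example-specific affine parameters.
gamma = weights @ gamma_bank                       # (batch, channels)
beta = weights @ beta_bank
out = gamma * x_norm + beta
print(out.shape)  # (8, 16)
```

The key design point is step 3: in a standard normalization layer, gamma and beta are fixed after training, whereas here they vary per input.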

Wu and his colleagues also designed the new module so that it could perform the re-calibration task in a more dynamic and adaptive way than the standard modules. That may offer benefits when it comes to transfer learning—taking AI trained on one set of data to perform a given task and applying it to new data for a related task (for example, in a face recognition application, developers typically start with a network that’s good at identifying what objects in a camera’s view are faces, and then train it to recognize specific people).

The new module represents just one small part of the North Carolina State group’s vision for redesigning modern AI. For example, the researchers are trying to develop interpretable AI systems that allow humans to better understand the logic of AI decisions—a not insignificant problem for deep learning models based on neural networks. As one possible step toward that goal, Wu and his colleagues previously developed a framework for building deep neural networks based on a compositional grammar system.

Meanwhile, Wu still sees many other opportunities for fine-tuning smaller parts of neural networks without requiring a complete overhaul of the main architecture.

“There are so many other components in deep neural networks,” Wu says. “We probably also can take a similar angle and try to look at whether there are natural integration points to put them together, or try to redesign them in a better form.”

Contact Tracing Apps Struggle to Be Both Effective and Private

In June, as the coronavirus swept across the United States, Paloma Beamer spent hours each day helping her university plan for a September reopening. Beamer, an associate professor of public health at the University of Arizona, was helping to test a mobile app that would notify users if they crossed paths with confirmed COVID-19 patients.

A number of such “contact tracing” apps have recently had their trials by fire, and many of the developers readily admit that the technology has not yet proven that it can slow the spread of the virus. But that caveat has not stopped national governments and local communities from using the apps.

“Right now, in Arizona, we’re in the full-blown pandemic phase,” Beamer said, speaking in June, well before the new-case count had peaked. “And even manual contact tracing is very limited here—we need whatever tool we can get right now to curb our epidemic.”

Traditionally, tracers would ask newly diagnosed patients to list the people they’d spent time with recently, then ask those people to provide contacts of their own. Such legwork has helped to control other infectious-disease outbreaks, such as syphilis in the United States and Ebola in West Africa. However, while these methods can extinguish the first spark or the last embers of an epidemic, they’re no good in the wildfire stage, when the caseload expands exponentially.

That’s the reason to automate the job. Digital contact tracing may also jog fuzzy memories by dredging up relevant information on where a patient has been, and with whom. Some technologies can go further by automatically alerting people who have been in close proximity to a patient and thus may need to get tested or go into isolation. Speedy notification is particularly important during the COVID-19 pandemic, given that asymptomatic people seem capable of transmitting the virus.

Automatic alerts may sound great, but there are “limited real-world use cases” and “limited evidence for their effectiveness,” says Joseph Ali, associate director for global programs at the Johns Hopkins Berman Institute of Bioethics and coauthor of the book Digital Contact Tracing for Pandemic Response, published in May. Rushed deployment of unproven technologies runs the risk of misidentifying moments of exposure that in fact never happened—false positives—and missing moments that did happen, or false negatives.

Some governments have embraced these apps; others have struggled with the decision. The United Kingdom, for example, initially spent millions developing an app that would collect data and send it to a centralized data storage system run by the National Health Service. But privacy advocates raised concern about the system, and in June the government announced that it would abandon that effort and switch to a less-centralized alternative built on technology from the tech giants Apple and Google.

The U.K.’s indecision shows how the choice of strategy revolves around privacy trade-offs. Some countries have staked everything on effectiveness and nothing on privacy.

Wuhan, the Chinese city at the heart of the pandemic, squashed the virus, eased the lockdown, then saw a small resurgence of the contagion in May. Public-health authorities went all out: They tested the entire population of 11 million and instituted the tracking of each person’s movements. Would-be customers could enter a shop only by having their temperature taken and exchanging personal bar codes, displayed on their phones, with the shop’s own identifying bar code. They then had to repeat the exchange upon leaving. That way, if anyone in the shop ended up testing positive, the authorities would be able to find whoever was in the same place at the same time, test those people, and, if necessary, quarantine them.

It worked. As of mid-July, Wuhan was reporting that no new cases of the virus had been recorded for 50 consecutive days. But such a gargantuan effort is not always an option. In many parts of the world, most people will willingly participate only if they trust in the system.

This may prove especially challenging in the United States, where early apps rolled out by states such as Utah and North Dakota failed to catch on. Making matters even more awkward, an independent security analysis found that the North Dakota app violated its own privacy policy by sharing location data with the company Foursquare.

In an online survey of Americans conducted by Avira, a security software company, 71 percent of respondents said they don’t plan to use a COVID contact-tracing app. Respondents cited privacy as their main concern. In a telling contrast, 32 percent said they would trust apps from Google and Apple to keep their data secure and private—but just 14 percent said they would trust apps from the government.

One shining example of effective digital contact tracing is South Korea, which built a centralized system that scrutinized patients’ movements, identified people who had been in contact with patients, and used apps to monitor people under quarantine. To date, South Korea has successfully contained its COVID-19 outbreaks without closing national borders or imposing local lockdowns.

The South Korean government’s system gave contact tracers access to multiple information sources, including footage from security cameras, GPS data from mobile phones, and credit card transaction data, says Uichin Lee, an associate professor of industrial and systems engineering at the Korea Advanced Institute of Science and Technology (KAIST), in Daejeon. “This system helps them to quickly identify hot spots and close contacts,” Lee says.

But South Korea’s system also publicly shares patients’ contact-trace data—including pseudonymized information on demographics, infection information, and travel logs. This approach raises serious privacy concerns, as Lee and his colleagues outlined in the journal Frontiers in Public Health. The travel logs alone could enable observers to infer where a patient lives and works.

By comparison, public-health authorities in Europe and the United States have shied away from publicly sharing such patient data. There’s also a middle way: A person’s phone may store data identifying people or location, and it may be left to the owner of the phone whether to share that information with public-health officials.

And then there’s the radical idea of not storing such data at all. That’s the approach taken by the Google/Apple Exposure Notification (GAEN) system. As these tech giants own, respectively, the Android and iOS smartphone standards, the GAEN system enables independent developers to build apps that can run on either standard. The system records Bluetooth transmissions between phones in close proximity to one another, and stores that data as anonymized beacons on each phone for a limited time. If one phone user tests positive for COVID-19 and enters that positive status in a mobile app built upon GAEN, the system will alert other phone users who have been in close proximity within the potentially infectious time period.

To protect user privacy, the system does all these things without ever recording the exact location of such encounters. It also limits the reported exposure time for each encounter to 5-minute increments, with a maximum possible total of 30 minutes. That constraint makes it more difficult for users to guess the source of their exposure.
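The matching and time-capping logic described above can be sketched in a few lines. This is a simplified illustration, not the real GAEN protocol: actual rolling identifiers are derived cryptographically and rotate on a schedule, whereas here they are plain random tokens.

```python
import secrets

# Illustrative sketch of GAEN-style matching: phones broadcast rotating
# anonymous IDs, store IDs they hear locally, and check for matches only
# after a patient's broadcast IDs are published. No locations are recorded.
def new_beacon() -> str:
    return secrets.token_hex(16)  # stand-in for a rolling proximity identifier

def report_exposure_minutes(raw_minutes: float) -> int:
    """Report exposure in 5-minute increments, capped at 30 minutes."""
    return min(30, int(raw_minutes // 5) * 5)

# Phone A broadcasts beacons; phone B stores what it hears, with durations.
phone_a_broadcasts = [new_beacon() for _ in range(3)]
phone_b_heard = {phone_a_broadcasts[1]: 17.0,   # 17 minutes near phone A
                 new_beacon(): 4.0}             # some other passerby

# Phone A's user tests positive, so its beacons are published. Phone B
# checks for overlap locally; neither identities nor places are uploaded.
published = set(phone_a_broadcasts)
matches = {b: report_exposure_minutes(m)
           for b, m in phone_b_heard.items() if b in published}
print(list(matches.values()))  # [15] -- 17 minutes rounds down to 15
```

The rounding in `report_exposure_minutes` is what makes it hard for a user to work out exactly which encounter triggered an alert.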

The GAEN system also appeals to those wary of increased surveillance in the name of public health. Germany, Italy, and Switzerland have already deployed exposure-notification apps based on GAEN, and other countries will likely follow. In the United States, Virginia was the first to introduce one.

“If you collect identifying information along with Bluetooth data, it could potentially lead to new forms of surveillance,” says Tina White, founder and executive director of the nonprofit COVID Watch and a Ph.D. candidate at the Stanford Institute for Human-Centered Artificial Intelligence. “And that’s exactly what we don’t want to see.” COVID Watch is working with the University of Arizona on a privacy-centric app based on the GAEN system. Preliminary testing involving two phones placed at different indoor locations has ramped up to more real-life campus scenarios inside classrooms, dining halls, and the Cat Tran student shuttle, followed by a campuswide rollout in mid-August.

There’s one big hitch: Repurposing Bluetooth from its original communication function poses serious technical difficulties. At Trinity College Dublin, researchers found that Bluetooth can perform poorly on the crucial task of proximity detection when a phone is in the presence of reflective metal surfaces. In one experiment on a commuter bus, a Swiss COVID-19 app built on the GAEN system failed to trigger exposure notifications even though the phones were within 2 meters (a little over 6 feet) of each other for 15 minutes.

“Public transport, which seems kind of mundane, is actually one of the core use cases for contact-tracing apps, but it’s also a terrible radio environment,” says Douglas Leith, a professor of computer science and statistics at Trinity College. “All our measurements suggest that it probably won’t work on buses and trains.”

Another problem is the variation in antenna configuration over the thousands of Android phone models. Engineers must calibrate the software to make up for any loss in signal strength, Leith explains. And although Bluetooth-signal “chirps” require minimal power, simply listening for such chirps requires that the main phone processors be turned on, which can quickly drain battery power unless the apps are restricted to short listening periods.
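
The calibration problem comes down to mapping a received signal strength (RSSI) to a distance estimate. A common starting point is a log-distance path-loss model; the constants below are illustrative assumptions, not values from any shipped app:

```python
def estimate_distance(rssi_dbm: float, tx_power_dbm: float = -59.0,
                      path_loss_exponent: float = 2.0,
                      calibration_offset_db: float = 0.0) -> float:
    """Estimate distance in meters from a Bluetooth RSSI reading using a
    log-distance path-loss model. tx_power_dbm is the expected RSSI at
    1 meter; calibration_offset_db compensates for per-model antenna
    differences -- the very thing engineers must tune per Android model."""
    corrected = rssi_dbm + calibration_offset_db
    return 10 ** ((tx_power_dbm - corrected) / (10 * path_loss_exponent))
```

With a reference of -59 dBm at 1 meter, a reading of -79 dBm maps to roughly 10 meters in free space; indoors, reflections off metal surfaces, as on the Swiss commuter bus, can make the same reading meaningless.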

Beyond the technical challenges faced by Bluetooth-based apps, all contact-tracing apps suffer from the same general problem: Unless a certain percentage of the population installs an app, it can’t do its work. People won’t opt in unless they believe in the public-health strategy behind an app and in the personal advantages they can hope to gain from it. Making that sale has been tough. In Germany, which has had some of the best results of any country in containing the virus, only 41 percent of the population has said it was willing to download what is known as the Corona-Warn-App.

Some researchers point to a University of Oxford study that modeled the coronavirus’s spread through a simulated city of 1 million people; it found that 60 percent adoption is needed to stop the pandemic and keep countries out of lockdown (although the study suggested that lower rates of adoption could still prove helpful).
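
One reason adoption matters so much: both parties in an encounter must be running an app for that encounter to be recorded at all. Under a crude random-mixing assumption (real contact networks are not random, so this is only a back-of-the-envelope illustration), coverage scales with the square of the adoption rate:

```python
def contact_coverage(adoption_rate: float) -> float:
    """Fraction of contacts where BOTH parties run the app -- the only
    encounters the system can possibly notify. Assumes contacts mix
    randomly, which real social networks do not."""
    return adoption_rate ** 2

for p in (0.41, 0.60):
    print(f"{p:.0%} adoption covers {contact_coverage(p):.0%} of contacts")
```

Even at the Oxford study's 60 percent threshold, only about a third of encounters involve two app users, which helps explain why the modeled threshold is so high.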

The tech giants are making widespread adoption easier by deploying an app-less Exposure Notifications Express function for iOS and Android devices. If a phone user opts in, the phone begins listening for nearby Bluetooth beacons from other phones. And later, if a stored Bluetooth beacon proves to be a match for someone confirmed to be positive for COVID-19, the system will prompt the user to download an exposure-notification app for more information.

Bluetooth is not the only way forward. The COVID Safe Paths project led by the nonprofit PathCheck Foundation, an MIT spin-off, has been developing and fielding a mobile app that uses GPS location data instead. The GPS approach provides more location data than Bluetooth does, in exchange for less user privacy. But Safe Paths also aims to build a Bluetooth-based version with the GAEN system. “We have been mostly agnostic to the technology we want to use,” says Abhishek Singh, a Ph.D. candidate in machine learning at the MIT Media Lab and a member of Safe Paths.

“It matters less what’s actually happening in the back end, and more about communication and perception,” says Kyle Towle, a member of the technology team at Safe Paths and former senior director of cloud technology at PayPal. The crucial component, he says, is the “appeal to our community members to gain that trust in the first place.”

The best path to success may come from ample preparation. South Korea’s experience with an outbreak of Middle East respiratory syndrome (MERS) in 2015 prompted the government to update national laws and lay the bureaucratic and technological foundations for an efficient contact-tracing system. The resulting public-private partnership enables human contact tracers to pull together digital data on a suspected or confirmed case’s travel history within 10 minutes.

“A country like South Korea, maybe because they went through this before with other viruses five years ago, really got a head start, and they didn’t mess around,” says Marc Zissman, associate head of the cybersecurity and information sciences division at MIT Lincoln Laboratory. The lab is among those in the PACT (Private Automated Contact Tracing) project, which is testing GAEN’s Bluetooth-based app performance.

Zissman says that developing digital contact tracing during a pandemic is like building a plane and flying it at the same time while also measuring how well everything works. “In a perfect world, something like this would have taken a couple years to implement,” he says. “There just isn’t the time, so instead what’s happening is people are doing the best they can, and making the best engineering judgments they can, with the data they have and the time that they have.”

This article appears in the October 2020 print issue as “The Dilemma of Contact-Tracing Apps.”

Can AI Make Bluetooth Contact Tracing Better?

Post Syndicated from Jeremy Hsu original

IEEE COVID-19 coverage logo, link to landing page

During the COVID-19 pandemic, digital contact tracing apps based on the Bluetooth technology found in smartphones have been deployed by various countries despite the fact that Bluetooth’s baseline performance as a proximity detector remains mostly a mystery.

That is why the U.S. National Institute of Standards and Technology organized a months-long event that leveraged the talents of AI researchers around the world to help evaluate and potentially improve upon that baseline Bluetooth performance for helping detect when smartphone users are standing too close to one another.

The appropriately named Too Close for Too Long (TC4TL) Challenge has yielded a mixed bag for anyone looking to be optimistic about the performance of Bluetooth-based contact tracing and exposure notification apps. The challenge attracted a diverse array of AI research teams from around the world who showed how machine learning might help boost proximity detection by analyzing the patterns in Bluetooth signals and data from other phone sensors. But the teams’ testing results presented during a final evaluation workshop held on August 28 also showed that Bluetooth’s capability alone in detecting nearby phones is shaky at best.

“We showed that if you hold both phones in hand, you’re going to get relatively better proximity detection results on this pilot dataset,” says Omid Sadjadi, a computer scientist at the National Institute of Standards and Technology. “When it comes to everyday scenarios where you put your phones in your pocket or your purse or in other carrier states and in other locations, that’s when the performance of this proximity detection technology seems to start to degrade.”

Bluetooth Low Energy (BLE) technology was not originally designed to use Bluetooth signals from phones to accurately estimate the distance between phones. But the technology has been thrown into the breach to help hold the line for contact tracing during the pandemic. The main reason why many countries have gravitated toward Bluetooth-based apps is that they generally represent a more privacy-preserving option compared to using location-based technologies such as GPS.

Given the highly improvised nature of this Bluetooth-based solution, it made sense for the U.S. National Institute of Standards and Technology (NIST) to assist in evaluating the technology’s performance. NIST has previously helped establish testing benchmarks and international standards for evaluating widely-used modern technologies such as online search engines and facial recognition. And so the agency was more than willing to step up again when asked by researchers working on digital contact tracing technologies through the MIT PACT project.

But repeating the same evaluation process during a global public health emergency proved far from easy. NIST found itself condensing the typical full-year cycle for the evaluation challenge down into just five months starting in April and ending in August. “It was a tight timeline and could have been stressful at times,” Sadjadi says. “Hopefully we were able to make it easier for the participating teams.” 

But the sped-up schedule did put pressure on the research teams that signed up. A total of 14 groups hailing from six continents registered at the beginning, including seven teams from academia and seven teams from industry. Just four teams ended up meeting the challenge deadline: Contact-Tracing-Project from the Hong Kong University of Science and Technology, LCD from the National University of Córdoba in Argentina, PathCheck representing the MIT-based PathCheck Foundation, and the MITRE team representing the U.S. nonprofit MITRE Corporation.

The challenge did not specifically test Bluetooth-based app frameworks such as the Google/Apple Exposure Notification (GAEN) protocol that are currently used in exposure notification or contact tracing apps. Instead, the challenge focused on evaluating whether teams’ machine learning models could improve on the process of detecting a phone’s proximity based on the combination of Bluetooth signal information and data from other common phone sensors such as accelerometers, gyroscopes, and magnetometers.

To provide the training data necessary for the teams’ machine learning models, MITRE Corporation and MIT Lincoln Laboratory staff members helped collect data from pairs of phones held at certain distances and heights near one another. They also included data from different scenarios such as both people holding the phones in their hands, as well as one or both people having the phones in their pockets. The latter is important given how Bluetooth signals can be weakened or deflected by a number of different materials.

“If you’re collecting data for the purpose of training and evaluating automated proximity detection technologies, you need to consider all possible scenarios and phone carriage states that could happen in everyday conditions, whether people are moving around and going shopping, or in nursing homes, or they’re sitting in a classroom or they are sitting at their desk at their work organization,” Sadjadi says. 

One unexpected hiccup occurred when the original NIST development data set—based on 10 different recording sessions with MIT Lincoln Laboratory researchers holding phone pairs in different positions—led to the classic “overfitting” problem where machine learning performance is tuned too specifically to the conditions in a particular data set. The machine learning models were able to identify specific recording sessions by using air pressure information from the altitude sensors of the iPhones. That gave the models a performance boost in phone proximity detection for that specific training data set, but their performance could fall when faced with new data in real-world situations.

Luckily, one of the teams participating in the challenge reported the issue to NIST when it noticed its machine learning model prioritizing data from the altitude sensors. Once Sadjadi and his colleagues figured out what happened, they enlisted the help of the MITRE Corporation to collect new data based on the same data collection protocol and released the new training data set within a few days.

The team results on the final TC4TL leaderboard reflect the machine learning models’ performances based on the new training data set. But NIST still included a second table below the leaderboard results showing how the overfitted models performed on the original training data set. Such results are presented as a normalized decision cost function (NDCF) that represents proximity detection performance when accounting for the combination of false negatives (failing to detect a nearby phone) and false positives (falsely saying a nearby phone has been detected).

If the machine learning models only performed as accurately as flipping a coin on those binary yes-or-no questions about false positives and false negatives, their NDCF values would be 1. The fact that most of the machine learning models seemed to get values significantly below 1 represents a good sign for the promise of applying machine learning to boosting digital contact tracing down the line.
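
A minimal sketch of an NDCF-style score, with placeholder weights and a normalization chosen so that a coin-flip system scores 1, consistent with the description above; NIST's exact formula may differ:

```python
def normalized_dcf(p_miss: float, p_fa: float,
                   w_miss: float = 1.0, w_fa: float = 1.0) -> float:
    """Weighted sum of the miss (false-negative) and false-alarm
    (false-positive) rates, normalized by the cost of the best trivial
    system. The equal default weights are a placeholder assumption."""
    dcf = w_miss * p_miss + w_fa * p_fa
    # Always answering "no exposure" costs w_miss; always "exposure" costs w_fa.
    return dcf / min(w_miss, w_fa)

print(normalized_dcf(0.5, 0.5))  # 1.0 -- the coin-flip baseline
```

Any system scoring below 1 is doing better than guessing, which is the bar most of the challenge entries cleared.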

However, it’s still unclear what these normalized DCF values would actually mean for a person’s infection risk in real life. For future evaluations, the NIST team may focus on figuring out the best way to weight both the false positive and false negative error measures. “The next question is ‘What’s the relative importance of false-positives and false-negatives?’” Sadjadi explains. “How can the metric be adjusted to better correlate with realistic conditions?” 

It’s also hard to tell which specific machine learning models perform the best for enhancing phone proximity detection. The four teams ended up trying out a variety of different approaches without necessarily finding the most optimal method. Still, Sadjadi seemed encouraged by the fact that even these early results suggest that machine learning can improve upon the baseline performance of Bluetooth signal detection alone.

“We hope that in the future the participants use our datasets and our metrics to drive the errors down further,” Sadjadi says. “But these results are far better than random.”

The fact that the baseline performance of Bluetooth signal detection for detecting nearby phones still seems quite weak may not bode well for many of the current digital contact tracing efforts using Bluetooth-based apps—especially given the much higher error rates for situations when one or both phones is in someone’s pocket or purse. But Sadjadi suggests that current Bluetooth-based apps could still provide some value for overwhelmed public health systems and humans doing manual contact tracing.

“It seems like we’re not there yet when you consider everyday scenarios and real-life scenarios,” Sadjadi says. “But again, even in case of not so strong performance, it can still be useful, and it can probably still be used to augment manual contact tracing, because as humans we don’t remember exactly who we were in contact with or where we were.”

Many future challenges remain before researchers can deliver enhanced Bluetooth-based proximity detection and a possible performance boost from machine learning. For example, Bluetooth-based proximity detection could likely become more accurate if phones spent more time listening for Bluetooth chirps from nearby phones, but tech companies such as Google and Apple have currently limited that listening time period in the interest of preserving phone battery life.

The NIST team is also thinking about how to collect more training data for what comes next beyond the TC4TL Challenge. Some groups such as the MIT Lincoln Laboratory have been testing the use of robots to conduct phone data collection sessions, which could improve the accuracy of reported distances and other test conditions. That may be useful for collecting training data. But Sadjadi believes that it would still be best to use humans in collecting the data used for the test data sets that measure machine learning models’ performances, so that the conditions match real life as closely as possible.

“This is not the first pandemic and it does not seem to be the last one,” Sadjadi says. “And given how important contact tracing is—either manual or digital contact tracing—for this kind of pandemic and health crisis, the next TC4TL challenge cycle is definitely going to be longer.”

These Underwater Drones Use Water Temperature Differences To Recharge

Post Syndicated from Jeremy Hsu original

Yi Chao likes to describe himself as an “armchair oceanographer” because he got incredibly seasick the one time he spent a week aboard a ship. So it’s maybe not surprising that the former NASA scientist has a vision for promoting remote study of the ocean on a grand scale by enabling underwater drones to recharge on the go using his company’s energy-harvesting technology.

Many of the robotic gliders and floating sensor stations currently monitoring the world’s oceans are effectively treated as disposable devices because the research community has limited ships and funding with which to retrieve drones after they’ve accomplished their mission of beaming data back home. That’s not only a waste of money, but may also contribute to a growing assortment of abandoned lithium-ion batteries polluting the ocean with their leaking toxic materials—a decidedly unsustainable approach to studying the secrets of the underwater world.

“Our goal is to deploy our energy harvesting system to use renewable energy to power those robots,” says Chao, president and CEO of the startup Seatrec. “We’re going to save one battery at a time, so hopefully we’re not going to dispose of more toxic batteries in the ocean.”

Chao’s California-based startup claims that its SL1 Thermal Energy Harvesting System can already help save researchers money equivalent to an order of magnitude reduction in the cost of using robotic probes for oceanographic data collection. The startup is working on adapting its system to work with autonomous underwater gliders. And it has partnered with defense giant Northrop Grumman to develop an underwater recharging station for oceangoing drones that incorporates Northrop Grumman’s self-insulating electrical connector capable of operating while the powered electrical contacts are submerged.

Seatrec’s energy-harvesting system works by taking advantage of how certain substances transition from solid-to-liquid phase and liquid-to-gas phase when they heat up. The company’s technology harnesses the pressure changes that result from such phase changes in order to generate electricity. 

To make the phase changes happen, Seatrec’s solution taps the temperature differences between warmer water at the ocean surface and colder water at the ocean depths. Even a relatively simple robotic probe can generate additional electricity by changing its buoyancy to either float at the surface or sink down into the colder depths.
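
The temperature difference that drives the harvester also bounds how much energy it can ever extract. A quick Carnot-limit estimate, using illustrative surface and deep-water temperatures (real phase-change systems operate far below this ideal bound):

```python
def carnot_efficiency(t_hot_c: float, t_cold_c: float) -> float:
    """Thermodynamic upper bound on the fraction of heat convertible to
    work when cycling between warm surface water and cold deep water.
    Temperatures are in degrees Celsius; illustrative values only."""
    t_hot_k = t_hot_c + 273.15
    t_cold_k = t_cold_c + 273.15
    return 1.0 - t_cold_k / t_hot_k

# Roughly 25 C at the tropical surface vs. 5 C at depth.
print(f"{carnot_efficiency(25.0, 5.0):.1%}")  # 6.7%
```

Even a few percent is enough here, because the probes need only modest power and the heat reservoir is effectively inexhaustible.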

By attaching an external energy-harvesting module, Seatrec has already begun transforming robotic probes into assets that can be recharged and reused more affordably than sending out a ship each time to retrieve the probes. This renewable energy approach could keep such drones going almost indefinitely barring electrical or mechanical failures. “We just attach the backpack to the robots, we give them a cable providing power, and they go into the ocean,” Chao explains. 

The early buyers of Seatrec’s products are primarily academic researchers who use underwater drones to collect oceanographic data. But the startup has also attracted military and government interest. It has already received small business innovation research contracts from both the U.S. Office of Naval Research and National Oceanic and Atmospheric Administration (NOAA).

Seatrec has also won two $10,000 prizes under the Powering the Blue Economy: Ocean Observing Prize administered by the U.S. Department of Energy and NOAA. The prizes awarded during the DISCOVER Competition phase back in March 2020 included one prize split with Northrop Grumman for the joint Mission Unlimited UUV Station concept. The startup and defense giant are currently looking for a robotics company to partner with for the DEVELOP Competition phase of the Ocean Observing Prize that will offer a total of $3 million in prizes.

In the long run, Seatrec hopes its energy-harvesting technology can support commercial ventures such as the aquaculture industry that operates vast underwater farms. The technology could also support underwater drones carrying out seabed surveys that pave the way for deep sea mining ventures, although those are not without controversy because of their projected environmental impacts.

Among all the possible applications Chao seems especially enthusiastic about the prospect of Seatrec’s renewable power technology enabling underwater drones and floaters to collect oceanographic data for much longer periods of time. He spent the better part of two decades working at the NASA Jet Propulsion Laboratory in Pasadena, Calif., where he helped develop a satellite designed for monitoring the Earth’s oceans. But he and the JPL engineering team that developed Seatrec’s core technology believe that swarms of underwater drones can provide a continuous monitoring network to truly begin understanding the oceans in depth.

The COVID-19 pandemic has slowed production and delivery of Seatrec’s products somewhat given local shutdowns and supply chain disruptions. Still, the startup has been able to continue operating in part because it’s considered to be a defense contractor that is operating an essential manufacturing facility. Seatrec’s engineers and other staff members are working in shifts to practice social distancing.

“Rather than building one or two for the government, we want to scale up to build thousands, hundreds of thousands, hopefully millions, so we can improve our understanding and provide that data to the community,” Chao says. 

AI Recruiting Tools Aim to Reduce Bias in the Hiring Process

Post Syndicated from Jeremy Hsu original

Two years ago, Amazon reportedly scrapped a secret artificial intelligence hiring tool after realizing that the system had learned to prefer male job candidates while penalizing female applicants—the result of the AI training on resumes that mostly male candidates had submitted to the company. The episode raised concerns over the use of machine learning in hiring software that would perpetuate or even exacerbate existing biases.

Now, with the Black Lives Matter movement spurring new discussions about discrimination and equity issues within the workforce, a number of startups are trying to show that AI-powered recruiting tools can in fact play a positive role in mitigating human bias and help make the hiring process fairer.

These companies claim that, with careful design and training of their AI models, they were able to specifically address various sources of systemic bias in the recruitment pipeline. It’s not a simple task: AI algorithms have a long history of being unfair regarding gender, race, and ethnicity. The strategies adopted by these companies include scrubbing identifying information from applications, relying on anonymous interviews and skillset tests, and even tuning the wording of job postings to attract as diverse a field of candidates as possible.

One of these firms is GapJumpers, which offers a platform for applicants to take “blind auditions” designed to assess job-related skills. The startup, based in San Francisco, uses machine learning to score and rank each candidate without including any personally identifiable information. Co-founder and CEO Kedar Iyer says this methodology helps reduce traditional reliance on resumes, which as a source of training data is “riddled with bias,” and avoids unwittingly replicating and propagating such biases through the scaled-up reach of automated recruiting.

That deliberate approach to reducing discrimination may be encouraging more companies to try AI-assisted recruiting. As the Black Lives Matter movement gained widespread support, GapJumpers saw an uptick in queries from potential clients. “We are seeing increased interest from companies of all sizes to improve their diversity efforts,” Iyer says.

AI with humans in the loop

Another lesson from Amazon’s gender-biased AI is that paying close attention to the design and training of the system is not enough: AI software will almost always require constant human oversight. For developers and recruiters, that means they cannot afford to blindly trust the results of AI-powered tools—they need to understand the processes behind them, how different training data affects their behavior, and monitor for bias.

“One of the unintended consequences would be to continue this historical trend, particularly in tech, where underserved groups such as African Americans are not within a sector that happens to have a compensation that is much greater than others,” says Fay Cobb Payton, a professor of information technology and analytics at North Carolina State University, in Raleigh. “You’re talking about a wealth gap that persists because groups cannot enter [such sectors], be sustained, and play long term.”

Payton and her colleagues highlighted several companies—including GapJumpers—that take an “intentional design justice” approach to hiring diverse IT talent in a paper published last year in the journal Online Information Review.

According to the paper’s authors, there is a broad spectrum of possible actions that AI hiring tools can perform. Some tools may just provide general suggestions about what kind of candidate to hire, whereas others may recommend specific applicants to human recruiters, and some may even make active screening and selection decisions about candidates. But whatever the AI’s role in the hiring process, there is a need for humans to have the capability to evaluate the system’s decisions and possibly override them.

“I believe that human-in-the-loop should not be at the end of the recommendation that the algorithms suggest,” Payton says. “Human-in-the-loop means in the full process of the loop from design to hire, all the way until the experience inside of the organization.”

Each stage of an AI system’s decision point should allow for an auditing process where humans can check the results, Payton adds. And of course, it’s crucial to have a separation of duties so that the humans auditing the system are not the same as those who designed the system in the first place.

“When we talk about bias, there are so many nuances and spots along this talent acquisition process where bias and bias mitigation come into play,” says Lynette Yarger, a professor of information sciences and technology at Pennsylvania State University and lead author on the paper with Payton. She added that “those companies that are trying to mitigate these biases are interesting because they’re trying to push human beings to be accountable.”

Another example highlighted by Yarger and Payton is a Seattle-based startup called Textio that has trained its AI systems to analyze job advertisements and predict their ability to attract a diverse array of applicants. Textio’s “Tone Meter” can help companies offer job listings with more inclusive language: Phrases like “rock star” that attract more male job seekers could be swapped out for the software’s suggestion of “high performer” instead.
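
A hypothetical miniature version of such a tone check can be sketched as a phrase-swap pass; Textio's actual model is statistical, and this fixed word list is purely illustrative:

```python
import re

# Phrases reported to skew job ads toward male applicants, paired with
# neutral alternatives. Only "rock star" -> "high performer" comes from
# the article; the other entries are invented for illustration.
SWAPS = {
    "rock star": "high performer",
    "ninja": "expert",
    "dominate": "lead",
}

def suggest_inclusive(text: str) -> str:
    """Replace each flagged phrase with its suggested neutral wording."""
    for phrase, neutral in SWAPS.items():
        text = re.sub(re.escape(phrase), neutral, text, flags=re.IGNORECASE)
    return text

print(suggest_inclusive("We need a rock star engineer"))
# -> We need a high performer engineer
```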

“We use Textio for our own recruiting communication and have from the beginning,” says Kieran Snyder, CEO and co-founder of Textio. “But perhaps because we make the software, we know that Textio on its own is not the whole solution when it comes to building an equitable organization—it’s just one piece of the puzzle.”

Indeed, many tech companies, including those that develop AI-powered hiring tools, are still working on inclusion and equity. Enterprise software company Workday, founded by former PeopleSoft executives and headquartered in Pleasanton, Calif., has more than 3,700 employees worldwide and clients that include half the Fortune 100. During a company forum on diversity and racial bias in June, Workday acknowledged that Black employees make up just 2.4 percent of its U.S. workforce versus the average of 4.4 percent for Silicon Valley firms, according to SearchHRSoftware, a human resources technology news site.

AI hiring tools: not a quick fix

Another challenge for AI-powered recruiting tools is that some customers expect them to offer a quick fix to a complex problem, when in reality that is not the case. James Doman-Pipe, head of product marketing at Headstart, a recruiting software startup based in London, says any business interested in reducing discrimination with AI or other technologies will need significant buy-in from the leadership and other parts of the organization.

Headstart’s software uses machine learning to evaluate job applicants and generate a “match score” that shows how well the candidates fit with a job’s requirements for skills, education, and experience. “By generating a match score, recruiters are more likely to consider underprivileged and underrepresented minorities to move forward in the recruiting process,” Doman-Pipe says. The company claims that in tests comparing the AI-based approach to traditional recruiting methods, clients using its software saw significant improvements in the diversity makeup of new hires.
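
A requirements-based match score could, in its simplest hypothetical form, be the weighted fraction of a job's stated requirements a candidate satisfies. Headstart's real model is proprietary and certainly more sophisticated; everything below is an illustrative assumption:

```python
def match_score(candidate: dict, requirements: dict,
                weights: dict = None) -> float:
    """Score a candidate 0-100 by the weighted fraction of required
    skills, education, and experience they cover. Weights are invented
    placeholders, not Headstart's."""
    weights = weights or {"skills": 0.5, "education": 0.2, "experience": 0.3}
    score = 0.0
    for area, weight in weights.items():
        required = set(requirements.get(area, []))
        if not required:
            continue  # no requirements in this area; contributes nothing
        met = len(required & set(candidate.get(area, [])))
        score += weight * met / len(required)
    return round(100 * score, 1)

candidate = {"skills": ["python", "sql"], "education": ["bsc"],
             "experience": ["backend"]}
job = {"skills": ["python", "sql", "go"], "education": ["bsc"],
       "experience": ["backend"]}
print(match_score(candidate, job))  # 83.3
```

Note that a score like this contains no names, schools, or addresses, which is what lets it sidestep some of the identity cues a human screener would see.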

Still, one of the greatest obstacles AI-powered recruiting tools face before they can gain widespread trust is the lack of public data showing how different tools can help—or hinder—efforts to make tech hiring more equitable.

“I do know from interviews with software companies that they do audit, and they can go back and recalibrate their systems,” Yarger, the Pennsylvania State University professor, says. But the effectiveness of efforts to improve algorithmic equity in recruitment remains unclear. She explains that many companies remain reluctant to publicly share such information because of liability issues surrounding equitable employment and workplace discrimination. Companies using AI tools could face legal consequences if the tools were shown to discriminate against certain groups.

For North Carolina State’s Payton, it remains to be seen whether corporate commitments to addressing diversity and racial bias will have a broader and lasting impact in the hiring and retention of tech workers—and whether or not AI can prove significant in helping to create an equitable workforce.

“Association and confirmation biases and networks that are built into the system, those don’t change overnight,” she says. “So there’s much work to be done.”

Risk Dashboard Could Help the Power Grid Manage Renewables

Post Syndicated from Jeremy Hsu original

To fully embrace wind and solar power, grid operators need to be able to predict and manage the variability that comes from changes in the wind or clouds dimming sunlight.

One solution may come from a $2-million project backed by the U.S. Department of Energy that aims to develop a risk dashboard for handling more complex power grid scenarios.

Grid operators now use dashboards that report the current status of the power grid and show the impacts of large disturbances—such as storms and other weather contingencies—along with regional constraints in flow and generation. The new dashboard being developed by Columbia University researchers and funded by the Advanced Research Projects Agency–Energy (ARPA-E) would improve upon existing dashboards by modeling more complex factors. This could help the grid better incorporate both renewable power sources and demand response programs that encourage consumers to use less electricity during peak periods.

“[Y]ou have to operate the grid in a way that is looking forward in time and that accepts that there will be variability—you have to start talking about what people in finance would call risk,” says Daniel Bienstock, professor of industrial engineering and operations research, and professor of applied physics and applied mathematics at Columbia University.

The new dashboard would not necessarily help grid operators prepare for catastrophic black swan events that might happen only once in 100 years. Instead, Bienstock and his colleagues hope to apply some lessons from financial modeling to measure and manage risk associated with more common events that could strain the capabilities of the U.S. regional power grids managed by independent system operators (ISOs). The team plans to build and test an alpha version of the dashboard within two years, before demonstrating the dashboard for ISOs and electric utilities in the third year of the project.

Variability already poses a challenge to modern power grids that were designed to handle steady power output from conventional power plants to meet an anticipated level of demand from consumers. Power grids usually rely on gas turbine generators to kick in during peak periods of power usage or to provide backup to intermittent wind and solar power.

But such generators may not provide a fast enough response to compensate for the expected variability in power grids that include more renewable power sources and demand response programs driven by fickle human behavior. In the worst cases, grid operators may shut down power to consumers and create deliberate blackouts in order to protect the grid’s physical equipment.

One of the dashboard project’s main goals involves developing mathematical and statistical models that can quantify the risk from having greater uncertainty in the power grid. Such models would aim to simulate different scenarios based on conditions—such as changes in weather or power demand—that could stress the power grid. Repeatedly playing out such scenarios would force grid operators to fine-tune and adapt their operational plans to handle such surprises in real life.

For example, one scenario might involve a solar farm generating 10 percent less power and a wind farm generating 30 percent more power within a short amount of time, Bienstock explains. The combination of those factors might mean too much power begins flowing on a particular power line, which subsequently starts running hot and risks damage.
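A scenario like this can be stress-tested with a simple Monte Carlo sketch. The sketch below is illustrative only, not the Columbia team's model: all the numbers (baseline flows, line limit, the sensitivity factors) are hypothetical, and the line-flow approximation uses simplified power transfer distribution factors (PTDFs).

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Hypothetical baseline values in megawatts (not from the article).
solar_mw, wind_mw = 100.0, 50.0
baseline_flow_mw, line_limit_mw = 80.0, 90.0

# Assumed sensitivity of the line's flow to each farm's output deviation
# (simplified power transfer distribution factors, PTDFs).
ptdf_solar, ptdf_wind = 0.6, 0.8

def line_flow(solar_delta, wind_delta):
    """Approximate line flow after deviations from forecast output."""
    return (baseline_flow_mw
            + ptdf_solar * solar_delta
            + ptdf_wind * wind_delta)

# Monte Carlo: sample forecast errors around the article's example scenario
# (solar ~10 percent short, wind ~30 percent over) and count overloads.
n = 100_000
solar_delta = rng.normal(-0.10 * solar_mw, 0.05 * solar_mw, n)
wind_delta = rng.normal(0.30 * wind_mw, 0.10 * wind_mw, n)
flows = line_flow(solar_delta, wind_delta)
overload_risk = np.mean(flows > line_limit_mw)
print(f"Estimated overload probability: {overload_risk:.1%}")
```

Repeating this kind of sampled scenario across many lines and conditions is, in spirit, what a risk dashboard would quantify for operators.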

Such models would only be as good as the data that trains them. Some ISOs and electric utilities have already been gathering useful data from the power grid for years. Those that already have more experience dealing with the variability of renewable power have been the most proactive. But many of the ISOs are reluctant to share such data with outsiders.

“One of the ISOs has told us that they will let us run our code on their data provided that we actually physically go to their office, but they will not give us the data to play with,” Bienstock says. 

For this project, ARPA-E has been working with one ISO to produce synthetic data covering many different scenarios based on historical data. The team is also using publicly available data on factors such as solar irradiation, cloud cover, wind strength, and the power generation capabilities of solar panels and wind turbines.

“You can look at historical events and then you can design stress that’s somehow compatible with what we observe in the past,” says Agostino Capponi, associate professor of industrial engineering and operations research at Columbia University and external consultant for the U.S. Commodity Futures Trading Commission.

A second big part of the dashboard project involves developing tools that grid operators could use to help manage the risks that come from dealing with greater uncertainty. Capponi is leading the team’s effort to design customized energy volatility contracts that could allow grid operators to buy such contracts for a fixed amount and receive compensation for all the variance that occurs over a historical period of time.

But he acknowledges that financial contracts designed to help offset risk in the stock market won’t apply in a straightforward manner to the realities of the power grid, which include delays in power transmission, physical constraints, and weather events.

“You cannot really directly use existing financial contracts because in finance you don’t have to take into account the physics of the power grid,” Capponi says.
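The payoff structure Capponi describes resembles a variance swap from finance: the buyer pays a fixed amount and receives compensation proportional to the variance actually realized over the period. Below is a minimal sketch of that payoff under hypothetical numbers; the function names, the hourly price series, and the strike are all assumptions for illustration, not the team's actual contract design.

```python
import numpy as np

def realized_variance(prices):
    """Annualized variance of hourly log returns (illustrative convention)."""
    log_returns = np.diff(np.log(prices))
    return float(np.var(log_returns, ddof=1) * 24 * 365)

def variance_contract_payoff(prices, strike_variance, notional):
    """Buyer's payoff: notional per unit of realized variance above the
    fixed strike (negative if realized variance falls short)."""
    return notional * (realized_variance(prices) - strike_variance)

# Hypothetical hourly electricity prices ($/MWh) over one day.
rng = np.random.default_rng(0)
prices = 40.0 * np.exp(np.cumsum(rng.normal(0, 0.02, 25)))
payoff = variance_contract_payoff(prices, strike_variance=2.0, notional=1000.0)
```

The grid-specific complications Capponi mentions, such as transmission delays and physical line constraints, are exactly what this naive financial formula leaves out.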

Once the new dashboard is up and running, it could help grid operators deal with both near-term and long-term challenges for the U.S. power grid. One recent example comes from the COVID-19 pandemic, in which behavioral changes such as more people working from home have already increased variability in energy consumption across New York City and other parts of the United States. In the future, the risk dashboard might help grid operators quickly identify areas at higher risk of imbalances between supply and demand and act to avoid straining the grid or causing blackouts.

Knowing the long-term risks in specific regions might also drive more investment in additional energy storage technologies and improved transmission lines to help offset such risks. The situation is different for every grid operator’s particular region, but the researchers hope that their dashboard can eventually help level the speed bumps as the U.S. power grid moves toward using more renewable power.

“The ISOs have different levels of renewable penetration, and so they have different exposures and visibility to risk,” Bienstock says. “But this is just the right time to be doing this sort of thing.”

Survey Finds Americans Skeptical of Contact Tracing Apps

Post Syndicated from Jeremy Hsu original

Confusion and skepticism may confound efforts to make use of digital contact tracing technologies during the COVID-19 pandemic. A recent survey found that just 42 percent of American respondents support using so-called contact tracing apps—an indication of a lack of confidence that could weaken or even derail effective deployment of such technologies.

How the Pandemic Impacts U.S. Electricity Usage

Post Syndicated from Jeremy Hsu original

As the COVID-19 outbreak swept through Manhattan and the surrounding New York City boroughs earlier this year, electricity usage dropped as businesses shuttered and people hunkered down in their homes. Those changes in human behavior became visible from space as the nighttime lights of the city that never sleeps dimmed by 40 percent between February and April.

That striking visualization of the COVID-19 impact on U.S. electricity consumption came from NASA’s “Black Marble” satellite data. U.S. and Chinese researchers are currently using such data sources in what they describe as an unprecedented effort to study how electricity consumption across the United States has been changing in response to the pandemic. One early finding suggests that mobility in the retail sector—defined as daily visits to retail establishments—is an especially significant factor in the reduction of electricity consumption seen across all major U.S. regional markets.

“I was previously not aware that there is such a strong correlation between the mobility in the retail sector and the public health data on the electricity consumption,” says Le Xie, professor in electrical and computer engineering and assistant director of energy digitization at the Texas A&M Energy Institute. “So that is a key finding.”
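The kind of relationship Xie describes can be illustrated with a simple Pearson correlation between a retail-mobility index and electricity load. The sketch below uses synthetic data, not the COVID-EMDA data: the 60-day window, the mobility trajectory, and the 0.3 load sensitivity are all invented for illustration.

```python
import numpy as np

# Synthetic illustration: daily retail-mobility index and electricity load,
# both as percent change from a pre-pandemic baseline, over 60 days.
rng = np.random.default_rng(1)
mobility_pct = np.linspace(0, -40, 60) + rng.normal(0, 3, 60)  # visits fall
load_pct = 0.3 * mobility_pct + rng.normal(0, 2, 60)           # load follows

# Pearson correlation between mobility change and consumption change.
corr = float(np.corrcoef(mobility_pct, load_pct)[0, 1])
print(f"mobility-load correlation: {corr:.2f}")
```

A strong positive correlation on data shaped like this is what makes retail mobility a useful explanatory variable, though correlation alone does not establish that mobility drives the consumption drop.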

Xie and his colleagues from Texas A&M, MIT, and Tsinghua University in Beijing, China, are publicly sharing their Coronavirus Disease-Electricity Market Data Aggregation (COVID-EMDA) project and the software codes they have used in their analyses in an online GitHub repository. They first uploaded a preprint paper describing their initial analyses to arXiv on 11 May 2020.

Most previous studies that focused on public health and electricity consumption tried to examine whether changes in electricity usage could provide an early warning sign of health issues. But when the U.S. and Chinese researchers first put their heads together on studying COVID-19 impacts, they did not find other prior studies that had examined how a pandemic can affect electricity consumption.

Beyond using the NASA satellite imagery of the nighttime lights, the COVID-EMDA project also taps additional sources of data about the major U.S. electricity markets from regional transmission organizations, weather patterns, COVID-19 cases, and the anonymized GPS locations of cellphone users.

“Before when people study electricity, they look at data on the electricity domain, perhaps the weather, maybe the economy, but you would have never thought about things like your cell phone data or mobility data or the public health data from COVID cases,” Xie says. “These are traditionally totally unrelated data sets, but in these very special circumstances they all suddenly became very relevant.”

The unique compilation of different data sources has already helped the researchers spot some interesting patterns. The most notable finding suggests that the largest portion of the drop in electricity consumption likely comes from the decline in people’s daily visits to retail establishments as individuals began practicing social distancing and home isolation. By comparison, the number of new confirmed COVID-19 cases does not seem to have a strong direct influence on changes in electricity consumption.

The Northeastern region of the U.S. electricity sector that includes New York City seems to be experiencing the most volatile changes so far during the pandemic. Xie and his colleagues hypothesize that larger cities with higher population density and commercial activity would likely see bigger COVID-19 impacts on their electricity consumption. But they plan to continue monitoring electricity consumption changes in all the major regions as new COVID-19 hotspots have emerged outside the New York City area.

The biggest limitation of such an analysis comes from the lack of available higher-resolution data on electricity consumption. Each of the major regional transmission organizations publishes power load and price numbers daily for their electricity markets, but this reflects a fairly large geographic area that often covers multiple states. 

“For example, if we could know exactly how much electricity is used in each of the commercial, industrial, and residential categories in a city, we could have a much clearer picture of what is going on,” Xie says.

That could change in the near future. Some Texas utility companies have already approached the COVID-EMDA group about possibly sharing such higher-resolution data on electricity consumption for future analyses. The researchers have also heard from economists curious about analyzing and perhaps predicting near-term economic activities based on electricity consumption changes during the pandemic.

One of the next big steps is to “develop a predictive model with high confidence to estimate the impact to electricity consumption due to social-distancing policies,” Xie says. “This could potentially help the public policy people and [regional transmission organizations] to prepare for similar situations in the future.”

Spherical Solar Cells Soak Up Scattered Sunlight

Post Syndicated from Jeremy Hsu original

Flat solar panels still face big limitations when it comes to making the most of the available sunlight each day. A new spherical solar cell design aims to boost solar power harvesting potential from nearly every angle without requiring expensive moving parts to keep tracking the sun’s apparent movement across the sky. 

Optical Atomic Clocks Are Ready to Redefine Time

Post Syndicated from Jeremy Hsu original

Optical atomic clocks will likely redefine the international standard for measuring a second in time. They are far more accurate and stable than the current standard, which is based on microwave atomic clocks.

Now, researchers in the United States have figured out how to convert high-performance signals from optical clocks into a microwave signal that can more easily find practical use in modern electronic systems.

How to Protect Privacy While Mining Millions of Patient Records

Post Syndicated from Jeremy Hsu original

When people in the United Kingdom began dying from COVID-19, researchers saw an urgent need to understand all the possible factors contributing to such deaths. So in six weeks, a team of software developers, clinicians, and academics created an open-source platform designed to securely analyze millions of electronic health records while protecting patient privacy. 

Preventing AI From Divulging Its Own Secrets

Post Syndicated from Jeremy Hsu original

One of the sneakiest ways to spill the secrets of a computer system involves studying its pattern of power usage while it performs operations. That’s why researchers have begun developing ways to shield the power signatures of AI systems from prying eyes.

Sabrewing Cargo Drone Rises to Air Force Challenge

Post Syndicated from Jeremy Hsu original

Named after a dragon from the “Game of Thrones” fantasy series, the Rhaegal-A won’t be making its mark by burninating the countryside. Instead the hybrid-electric cargo drone capable of taking off and landing like a helicopter is in the spotlight today during a U.S. Air Force conference about “flying car” technologies.