A huge number of machine learning applications could receive a performance upgrade, thanks to a relatively minor modification to their underlying neural networks.
If you are a developer creating a new machine learning application, you typically build on top of an existing neural network architecture, one that is already tuned for the kind of problem you are trying to solve—creating your own architecture from scratch is a difficult job that’s typically more trouble than it’s worth. Even with an existing architecture in hand, reengineering it for better performance is no small task. But one team has come up with a new neural network module that can boost AI performance when plugged into four of the most widely used architectures.
Critically, the research, funded by the U.S. National Science Foundation and the Army Research Office, achieves this performance boost without requiring much of an increase in computing power. It’s part of a broader project by North Carolina State University researchers to rethink the architecture of the neural networks behind modern AI’s deep learning capabilities.
“At the macro level, we try to redesign the entire neural network as a whole,” says Tianfu Wu, an electrical and computer engineer at North Carolina State University in Raleigh. “Then we try to focus on the specific components of the neural network.”
Wu and his colleagues presented their work (PDF) on the new neural network component, or module, named Attentive Normalization, at the virtual version of the 16th European Conference on Computer Vision in August. They have also released the code so that other researchers can plug the module into their own deep learning models.
In preliminary testing, the group found that the new module improved performance in four mainstream neural network architectures: ResNets, DenseNets, MobileNetsV2 and AOGNets. The researchers checked the upgraded networks’ performances against two industry benchmarks for visual object recognition and classification: ImageNet-1000 and MS-COCO 2017. For example, the new module boosted the top-1 accuracy in the ImageNet-1000 benchmark by between 0.5 percent and 2.7 percent. This may seem small, but it can make a significant difference in practice, not least because of the large scale of many machine learning deployments.
Altogether, the diverse array of architectures is suitable for performing AI-driven tasks on both large computing systems and mobile devices with more limited computing power. But the most noticeable improvement in performance came in the neural network architectures suited for mobile platforms such as smartphones.
The key to the team’s success came from combining two neural network modules that usually operate separately. “In order to make a neural network more powerful or easier to train, feature normalization and feature attention are probably two of the most important components,” Wu says.
The feature normalization module helps to make sure that no single subset of the data used to train a neural network outweighs the other subsets in shaping the deep learning model. By comparing neural network training to driving a car on a dark road, Wu describes feature normalization as the car’s suspension system smoothing out the jolts from any bumps in the road.
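Wu’s suspension-system analogy maps onto a simple computation. The sketch below, in plain NumPy, shows the core of feature normalization—rescaling each feature so no single one dominates. It is an illustrative simplification; real networks use batch or layer normalization with learned scale and shift parameters.

```python
import numpy as np

def normalize_features(x, eps=1e-5):
    """Standardize each feature column to zero mean, unit variance,
    so no single feature's scale dominates training -- the
    'suspension system' in Wu's analogy. (Illustrative only: real
    networks use batch/layer norm with learned scale and shift.)"""
    return (x - x.mean(axis=0)) / (x.std(axis=0) + eps)
```

After normalization, a feature measured in the thousands and a feature measured in fractions contribute on the same scale.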
By comparison, the feature attention module helps to focus on certain features in the training data that could better achieve the learning task at hand. Going back to the car analogy for training neural networks, the feature attention module represents the vehicle headlights showing what to look out for on the dark road ahead.
After scrutinizing both modules, the researchers realized that certain sub-processes in both modules overlap in the shared goal of re-calibrating certain features in the training data. That provided a natural integration point for combining feature normalization and feature attention in the new module. “We want to see different micro components in neural architecture that can be and should be integrated together to make them more effective,” Wu says.
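As a rough sketch of that integration idea, the toy function below normalizes the features and then re-calibrates them with a data-dependent mixture of K candidate scale-and-shift pairs, rather than a single fixed pair. The shapes and the attention mapping here are illustrative assumptions, not the paper’s exact formulation.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def attentive_normalization(x, gammas, betas, w, eps=1e-5):
    """Toy sketch of combining feature normalization with feature
    attention: normalize, then re-calibrate with a data-dependent
    blend of K candidate scale/shift pairs instead of one fixed
    learned pair. Shapes and the attention map are assumptions.

    x:      (N, C) features
    gammas: (K, C) candidate scales
    betas:  (K, C) candidate shifts
    w:      (C, K) projection producing attention logits
    """
    # Standard normalization: zero mean, unit variance per feature.
    x_hat = (x - x.mean(axis=0)) / (x.std(axis=0) + eps)
    # Attention: each sample chooses its own blend of the K candidates.
    attn = softmax(x @ w)   # (N, K), rows sum to 1
    gamma = attn @ gammas   # (N, C) per-sample scales
    beta = attn @ betas     # (N, C) per-sample shifts
    return x_hat * gamma + beta
```

With K = 1 this collapses to ordinary normalization with a fixed affine transform; with K > 1 the re-calibration becomes adaptive, which is the dynamic behavior the passage describes.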
Wu and his colleagues also designed the new module so that it could perform the re-calibration task in a more dynamic and adaptive way than the standard modules. That may offer benefits when it comes to transfer learning—taking AI trained on one set of data to perform a given task and applying it to new data for a related task (for example, in a face recognition application, developers typically start with a network that’s good at identifying what objects in a camera’s view are faces, and then train it to recognize specific people).
The new module represents just one small part of the North Carolina State group’s vision for redesigning modern AI. For example, the researchers are trying to develop interpretable AI systems that allow humans to better understand the logic of AI decisions—a not insignificant problem for deep learning models based on neural networks. As one possible step toward that goal, Wu and his colleagues previously developed a framework for building deep neural networks based on a compositional grammar system.
Meanwhile, Wu still sees many other opportunities for fine-tuning smaller parts of neural networks without requiring a complete overhaul of the main architecture.
“There are so many other components in deep neural networks,” Wu says. “We probably also can take a similar angle and try to look at whether there are natural integration points to put them together, or try to redesign them in a better form.”
In June, as the coronavirus swept across the United States, Paloma Beamer spent hours each day helping her university plan for a September reopening. Beamer, an associate professor of public health at the University of Arizona, was helping to test a mobile app that would notify users if they crossed paths with confirmed COVID-19 patients.
A number of such “contact tracing” apps have recently had their trials by fire, and many of the developers readily admit that the technology has not yet proven that it can slow the spread of the virus. But that caveat has not stopped national governments and local communities from using the apps.
“Right now, in Arizona, we’re in the full-blown pandemic phase,” Beamer said, speaking in June, well before the new-case count had peaked. “And even manual contact tracing is very limited here—we need whatever tool we can get right now to curb our epidemic.”
Traditionally, tracers would ask newly diagnosed patients to list the people they’d spent time with recently, then ask those people to provide contacts of their own. Such legwork has helped to control other infectious-disease outbreaks, such as syphilis in the United States and Ebola in West Africa. However, while these methods can extinguish the first spark or the last embers of an epidemic, they’re no good in the wildfire stage, when the caseload expands exponentially.
That’s the reason to automate the job. Digital contact tracing may also jog fuzzy memories by dredging up relevant information on where a patient has been, and with whom. Some technologies can go further by automatically alerting people who have been in close proximity to a patient and thus may need to get tested or go into isolation. Speedy notification is particularly important during the COVID-19 pandemic, given that asymptomatic people seem capable of transmitting the virus.
Automatic alerts may sound great, but there are “limited real-world use cases” and “limited evidence for their effectiveness,” says Joseph Ali, associate director for global programs at the Johns Hopkins Berman Institute of Bioethics and coauthor of the book Digital Contact Tracing for Pandemic Response, published in May. Rushed deployment of unproven technologies runs the risk of misidentifying moments of exposure that in fact never happened—false positives—and missing moments that did happen, or false negatives.
Some governments have embraced these apps; others have struggled with the decision. The United Kingdom, for example, initially spent millions developing an app that would collect data and send it to a centralized data storage system run by the National Health Service. But privacy advocates raised concern about the system, and in June the government announced that it would abandon that effort and switch to a less-centralized alternative built on technology from the tech giants Apple and Google.
The U.K.’s indecision shows how the choice of strategy revolves around privacy trade-offs. Some countries have staked everything on effectiveness and nothing on privacy.
Wuhan, the Chinese city at the heart of the pandemic, squashed the virus, eased the lockdown, then saw a small resurgence of the contagion in May. Public-health authorities went all out: They tested the entire population of 11 million and instituted the tracking of each person’s movements. Would-be customers could enter a shop only by having their temperature taken and exchanging personal bar codes, displayed on their phones, with the shop’s own identifying barcode. They then had to repeat the exchange upon leaving. That way, if anyone in the shop ended up testing positive, the authorities would be able to find whoever was in the same place at the same time, test those people, and, if necessary, quarantine them.
It worked. As of mid-July, Wuhan was reporting that no new cases of the virus had been recorded for 50 consecutive days. But such a gargantuan effort is not always an option. In many parts of the world, most people will willingly participate only if they trust in the system.
In an online survey of Americans conducted by Avira, a security software company, 71 percent of respondents said they don’t plan to use a COVID contact-tracing app. Respondents cited privacy as their main concern. In a telling contrast, 32 percent said they would trust apps from Google and Apple to keep their data secure and private—but just 14 percent said they would trust apps from the government.
One shining example of effective digital contact tracing is South Korea, which built a centralized system that scrutinized patients’ movements, identified people who had been in contact with patients, and used apps to monitor people under quarantine. To date, South Korea has successfully contained its COVID-19 outbreaks without closing national borders or imposing local lockdowns.
The South Korean government’s system gave contact tracers access to multiple information sources, including footage from security cameras, GPS data from mobile phones, and credit card transaction data, says Uichin Lee, an associate professor of industrial and systems engineering at the Korea Advanced Institute of Science and Technology (KAIST), in Daejeon. “This system helps them to quickly identify hot spots and close contacts,” Lee says.
But South Korea’s system also publicly shares patients’ contact-trace data—including pseudonymized information on demographics, infection information, and travel logs. This approach raises serious privacy concerns, as Lee and his colleagues outlined in the journal Frontiers in Public Health. The travel logs alone could enable observers to infer where a patient lives and works.
By comparison, public-health authorities in Europe and the United States have shied away from publicly sharing such patient data. There’s also a middle way: a person’s phone may store data identifying people or locations, and the phone’s owner can decide whether to share that information with public-health officials.
And then there’s the radical idea of not storing such data at all. That’s the approach taken by the Google/Apple Exposure Notification (GAEN) system. As these tech giants own, respectively, the Android and iOS smartphone standards, the GAEN system enables independent developers to build apps that can run on either standard. The system records Bluetooth transmissions between phones in close proximity to one another, and stores that data as anonymized beacons on each phone for a limited time. If one phone user tests positive for COVID-19 and enters that positive status in a mobile app built upon GAEN, the system will alert other phone users who have been in close proximity within the potentially infectious time period.
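The matching logic at the heart of such a decentralized scheme can be sketched in a few lines. The toy code below is a drastic simplification—real GAEN derives rotating proximity IDs from daily keys and attaches signal-strength and duration metadata, none of which is modeled here—but it shows how exposure can be detected without any central database of locations or identities.

```python
import secrets

class Phone:
    """Toy model of decentralized exposure notification. Real GAEN
    derives rotating IDs from daily keys and records signal and
    duration metadata; only the matching idea is shown here."""

    def __init__(self):
        self.my_beacons = []  # anonymized IDs this phone broadcast
        self.heard = set()    # IDs heard from nearby phones

    def broadcast(self):
        beacon = secrets.token_hex(8)  # random rolling identifier
        self.my_beacons.append(beacon)
        return beacon

    def hear(self, beacon):
        self.heard.add(beacon)

def check_exposure(phone, published_positive_beacons):
    # Matching happens on-device; no location is ever recorded.
    return bool(phone.heard & set(published_positive_beacons))
```

When a user reports a positive test, only their own broadcast beacons are published; every other phone checks the published list against what it has heard, locally.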
To protect user privacy, the system does all these things without ever recording the exact location of such encounters. It also limits the reported exposure time for each encounter to 5-minute increments, with a maximum possible total of 30 minutes. That constraint makes it more difficult for users to guess the source of their exposure.
The GAEN system also appeals to those wary of increased surveillance in the name of public health. Germany, Italy, and Switzerland have already deployed exposure-notification apps based on GAEN, and other countries will likely follow. In the United States, Virginia was the first to introduce one.
“If you collect identifying information along with Bluetooth data, it could potentially lead to new forms of surveillance,” says Tina White, founder and executive director of the nonprofit COVID Watch and a Ph.D. candidate at the Stanford Institute for Human-Centered Artificial Intelligence. “And that’s exactly what we don’t want to see.” COVID Watch is working with the University of Arizona on a privacy-centric app based on the GAEN system. Preliminary testing involving two phones placed at different indoor locations has ramped up to more real-life campus scenarios inside classrooms, dining halls, and the Cat Tran student shuttle, followed by a campuswide rollout in mid-August.
There’s one big hitch: Repurposing Bluetooth from its original communication function poses serious technical difficulties. At Trinity College Dublin, researchers found that Bluetooth can perform poorly on the crucial task of proximity detection when a phone is in the presence of reflective metal surfaces. In one experiment on a commuter bus, a Swiss COVID-19 app built on the GAEN system failed to trigger exposure notifications even though the phones were within 2 meters (a little over 6 feet) of each other for 15 minutes.
“Public transport, which seems kind of mundane, is actually one of the core use cases for contact-tracing apps, but it’s also a terrible radio environment,” says Douglas Leith, a professor of computer science and statistics at Trinity College. “All our measurements suggest that it probably won’t work on buses and trains.”
Another problem is the variation in antenna configuration over the thousands of Android phone models. Engineers must calibrate the software to make up for any loss in signal strength, Leith explains. And although Bluetooth-signal “chirps” require minimal power, simply listening for such chirps requires that the main phone processors be turned on, which can quickly drain battery power unless the apps are restricted to short listening periods.
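The calibration problem comes down to converting received signal strength (RSSI) into a distance estimate. A common heuristic is the log-distance path-loss model, sketched below. The 1-meter reference power and the path-loss exponent are exactly the parameters that vary per phone model and per environment; this is a generic textbook model, not the formula any particular app uses.

```python
def estimate_distance(rssi_dbm, tx_power_dbm=-59, path_loss_exp=2.0):
    """Log-distance path-loss estimate of range in meters from a
    received Bluetooth signal strength. tx_power_dbm is the
    calibrated RSSI at 1 meter, which varies across phone models
    (hence the per-device calibration Leith describes);
    path_loss_exp grows in cluttered, reflective environments such
    as buses. A generic heuristic, not any specific app's formula."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exp))
```

With the defaults, a reading of −79 dBm maps to roughly 10 meters in free space, but raising the path-loss exponent to 3 (a cluttered room) shrinks the same reading to under 5 meters—a hint at why uncalibrated readings mislead.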
Beyond the technical challenges faced by Bluetooth-based apps, all contact-tracing apps suffer from the same general problem: Unless a certain percentage of the population installs an app, it can’t do its work. People won’t opt in unless they believe in the public-health strategy behind an app and in the personal advantages they can hope to gain from it. Making that sale has been tough. In Germany, which has had some of the best results of any country in containing the virus, only 41 percent of the population has said it was willing to download what is known as the Corona-Warn-App.
Some researchers point to a University of Oxford study that modeled the coronavirus’s spread through a simulated city of 1 million people; it found that 60 percent adoption is needed to stop the pandemic and keep countries out of lockdown (although the study suggested that lower rates of adoption could still prove helpful).
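One reason adoption matters so much is that an encounter is only logged if both parties run the app, so coverage falls off roughly with the square of the adoption rate. The back-of-envelope model below assumes independent, uniform adoption; the Oxford study’s full simulation is far more detailed.

```python
def encounter_coverage(adoption_rate):
    """Fraction of contact events in which *both* people run the
    app, assuming independent, uniform adoption. A back-of-envelope
    model, not the Oxford study's simulation."""
    return adoption_rate ** 2
```

At 60 percent adoption, only 36 percent of encounters involve two app users; at Germany’s reported 41 percent willingness, the figure drops to about 17 percent.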
The tech giants are making widespread adoption easier by deploying an app-less Exposure Notifications Express function for iOS and Android devices. If a phone user opts in, the phone begins listening for nearby Bluetooth beacons from other phones. And later, if a stored Bluetooth beacon proves to be a match for someone confirmed to be positive for COVID-19, the system will prompt the user to download an exposure-notification app for more information.
Bluetooth is not the only way forward. The COVID Safe Paths project led by the nonprofit PathCheck Foundation, an MIT spin-off, has been developing and fielding a mobile app that uses GPS location data instead. The GPS approach provides more location data than Bluetooth does, in exchange for less user privacy. But Safe Paths also aims to build a Bluetooth-based version with the GAEN system. “We have been mostly agnostic to the technology we want to use,” says Abhishek Singh, a Ph.D. candidate in machine learning at the MIT Media Lab and a member of Safe Paths.
“It matters less what’s actually happening in the back end, and more about communication and perception,” says Kyle Towle, a member of the technology team at Safe Paths and former senior director of cloud technology at PayPal. The crucial component, he says, is the “appeal to our community members to gain that trust in the first place.”
The best path to success may come from ample preparation. South Korea’s experience with an outbreak of Middle East respiratory syndrome (MERS) in 2015 prompted the government to update national laws and lay the bureaucratic and technological foundations for an efficient contact-tracing system. The resulting public-private partnership enables human contact tracers to pull together digital data on a suspected or confirmed case’s travel history within 10 minutes.
“A country like South Korea, maybe because they went through this before with other viruses five years ago, really got a head start, and they didn’t mess around,” says Marc Zissman, associate head of the cybersecurity and information sciences division at MIT Lincoln Laboratory. The lab is among those in the PACT (Private Automated Contact Tracing) project, which is testing GAEN’s Bluetooth-based app performance.
Zissman says that developing digital contact tracing during a pandemic is like building a plane and flying it at the same time while also measuring how well everything works. “In a perfect world, something like this would have taken a couple years to implement,” he says. “There just isn’t the time, so instead what’s happening is people are doing the best they can, and making the best engineering judgments they can, with the data they have and the time that they have.”
This article appears in the October 2020 print issue as “The Dilemma of Contact-Tracing Apps.”
During the COVID-19 pandemic, digital contact tracing apps based on the Bluetooth technology found in smartphones have been deployed by various countries despite the fact that Bluetooth’s baseline performance as a proximity detector remains mostly a mystery.
That is why the U.S. National Institute of Standards and Technology organized a months-long event that leveraged the talents of AI researchers around the world to help evaluate, and potentially improve upon, Bluetooth’s baseline performance at detecting when smartphone users are standing too close to one another.
The appropriately named Too Close for Too Long (TC4TL) Challenge has yielded a mixed bag for anyone looking to be optimistic about the performance of Bluetooth-based contact tracing and exposure notification apps. The challenge attracted a diverse array of AI research teams from around the world who showed how machine learning might help boost proximity detection by analyzing the patterns in Bluetooth signals and data from other phone sensors. But the teams’ testing results presented during a final evaluation workshop held on August 28 also showed that Bluetooth’s capability alone in detecting nearby phones is shaky at best.
“We showed that if you hold both phones in hand, you’re going to get relatively better proximity detection results on this pilot dataset,” says Omid Sadjadi, a computer scientist at the National Institute of Standards and Technology. “When it comes to everyday scenarios where you put your phones in your pocket or your purse or in other carrier states and in other locations, that’s when the performance of this proximity detection technology seems to start to degrade.”
Bluetooth Low Energy (BLE) technology was never designed to accurately estimate the distance between phones. But it has been thrown into the breach to help hold the line for contact tracing during the pandemic. The main reason why many countries have gravitated toward Bluetooth-based apps is that they generally represent a more privacy-preserving option than location-based technologies such as GPS.
Given the highly improvised nature of this Bluetooth-based solution, it made sense for the U.S. National Institute of Standards and Technology (NIST) to assist in evaluating the technology’s performance. NIST has previously helped establish testing benchmarks and international standards for evaluating widely-used modern technologies such as online search engines and facial recognition. And so the agency was more than willing to step up again when asked by researchers working on digital contact tracing technologies through the MIT PACT project.
But repeating the same evaluation process during a global public health emergency proved far from easy. NIST found itself condensing the typical full-year cycle for the evaluation challenge down into just five months starting in April and ending in August. “It was a tight timeline and could have been stressful at times,” Sadjadi says. “Hopefully we were able to make it easier for the participating teams.”
The challenge did not specifically test Bluetooth-based app frameworks such as the Google Apple Exposure Notification (GAEN) protocol that are currently used in exposure notification or contact tracing apps. Instead, the challenge focused on evaluating whether teams’ machine learning models could improve on the process of detecting a phone’s proximity based on the combination of Bluetooth signal information and data from other common phone sensors such as accelerometers, gyroscopes, and magnetometers.
To provide the training data necessary for the teams’ machine learning models, MITRE Corporation and MIT Lincoln Laboratory staff members helped collect data from pairs of phones held at certain distances and heights near one another. They also included data from different scenarios such as both people holding the phones in their hands, as well as one or both people having the phones in their pockets. The latter is important given how Bluetooth signals can be weakened or deflected by a number of different materials.
“If you’re collecting data for the purpose of training and evaluating automated proximity detection technologies, you need to consider all possible scenarios and phone carriage states that could happen in everyday conditions, whether people are moving around and going shopping, or in nursing homes, or they’re sitting in a classroom or they are sitting at their desk at their work organization,” Sadjadi says.
One unexpected hiccup occurred when the original NIST development data set—based on 10 different recording sessions with MIT Lincoln Laboratory researchers holding phone pairs in different positions—led to the classic “overfitting” problem where machine learning performance is tuned too specifically to the conditions in a particular data set. The machine learning models were able to identify specific recording sessions by using air pressure information from the altitude sensors of the iPhones. That gave the models a performance boost in phone proximity detection for that specific training data set, but their performance could fall when faced with new data in real-world situations.
Luckily, one of the teams participating in the challenge reported the issue to NIST when it noticed its machine learning model prioritizing data from the altitude sensors. Once Sadjadi and his colleagues figured out what happened, they enlisted the help of the MITRE Corporation to collect new data based on the same data collection protocol and released the new training data set within a few days.
The team results on the final TC4TL leaderboard reflect the machine learning models’ performances based on the new training data set. But NIST still included a second table below the leaderboard results showing how the overfitted models performed on the original training data set. Such results are presented as a normalized decision cost function (NDCF) that represents proximity detection performance when accounting for the combination of false negatives (failing to detect a nearby phone) and false positives (falsely saying a nearby phone has been detected).
If the machine learning models only performed as accurately as flipping a coin on those binary yes-or-no questions about false positives and false negatives, their NDCF values would be 1. The fact that most of the machine learning models seemed to get values significantly below 1 represents a good sign for the promise of applying machine learning to boosting digital contact tracing down the line.
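A normalized DCF of this kind can be computed in a few lines. In the sketch below, the coin-flip baseline described above serves as the normalizer; the miss and false-alarm weights are illustrative defaults, and NIST’s exact weighting may differ.

```python
def normalized_dcf(p_miss, p_fa, w_miss=1.0, w_fa=1.0):
    """Weighted sum of the miss (false-negative) rate and the
    false-alarm (false-positive) rate, scaled so that a coin-flip
    detector (p_miss = p_fa = 0.5) scores exactly 1.0. The weights
    are illustrative defaults; NIST's exact weighting may differ."""
    cost = w_miss * p_miss + w_fa * p_fa
    coin_flip_cost = 0.5 * (w_miss + w_fa)
    return cost / coin_flip_cost
```

A perfect detector scores 0; anything meaningfully below 1 is doing better than random guessing, which is why values well under 1 on the leaderboard count as encouraging.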
However, it’s still unclear what these normalized DCF values would actually mean for a person’s infection risk in real life. For future evaluations, the NIST team may focus on figuring out the best way to weight both the false positive and false negative error measures. “The next question is ‘What’s the relative importance of false-positives and false-negatives?’” Sadjadi explains. “How can the metric be adjusted to better correlate with realistic conditions?”
It’s also hard to tell which specific machine learning models perform best at enhancing phone proximity detection. The four teams ended up trying a variety of approaches without necessarily converging on an optimal method. Still, Sadjadi seemed encouraged by the fact that even these early results suggest machine learning can improve on the baseline performance of Bluetooth signal detection alone.
“We hope that in the future the participants use our datasets and our metrics to drive the errors down further,” Sadjadi says. “But these results are far better than random.”
The fact that the baseline performance of Bluetooth signal detection for detecting nearby phones still seems quite weak may not bode well for many of the current digital contact tracing efforts using Bluetooth-based apps—especially given the much higher error rates for situations when one or both phones are in someone’s pocket or purse. But Sadjadi suggests that current Bluetooth-based apps could still provide some value for overwhelmed public health systems and humans doing manual contact tracing.
“It seems like we’re not there yet when you consider everyday scenarios and real-life scenarios,” Sadjadi says. “But again, even in case of not so strong performance, it can still be useful, and it can probably still be used to augment manual contact tracing, because as humans we don’t remember exactly who we were in contact with or where we were.”
Many future challenges remain before researchers can deliver enhanced Bluetooth-based proximity detection and a possible performance boost from machine learning. For example, Bluetooth-based proximity detection could likely become more accurate if phones spent more time listening for Bluetooth chirps from nearby phones, but tech companies such as Google and Apple have currently limited that listening time period in the interest of preserving phone battery life.
The NIST team is also thinking about how to collect more training data for what comes next beyond the TC4TL Challenge. Some groups, such as the MIT Lincoln Laboratory, have been testing the use of robots to conduct phone data collection sessions, which could improve the reliability of the reported distances and other test conditions. That may be useful for collecting training data. But Sadjadi believes it would still be best to use humans to collect the test data sets that measure machine learning models’ performances, so that the conditions match real life as closely as possible.
“This is not the first pandemic and it does not seem to be the last one,” Sadjadi says. “And given how important contact tracing is—either manual or digital contact tracing—for this kind of pandemic and health crisis, the next TC4TL challenge cycle is definitely going to be longer.”
Yi Chao likes to describe himself as an “armchair oceanographer” because he got incredibly seasick the one time he spent a week aboard a ship. So it’s maybe not surprising that the former NASA scientist has a vision for promoting remote study of the ocean on a grand scale by enabling underwater drones to recharge on the go using his company’s energy-harvesting technology.
Many of the robotic gliders and floating sensor stations currently monitoring the world’s oceans are effectively treated as disposable devices, because the research community has limited ships and funding with which to retrieve drones after they’ve accomplished their mission of beaming data back home. That’s not only a waste of money, but may also contribute to a growing assortment of abandoned lithium-ion batteries polluting the ocean with their leaking toxic materials—a decidedly unsustainable approach to studying the secrets of the underwater world.
“Our goal is to deploy our energy harvesting system to use renewable energy to power those robots,” says Chao, president and CEO of the startup Seatrec. “We’re going to save one battery at a time, so hopefully we’re not going to dispose of more toxic batteries in the ocean.”
Chao’s California-based startup claims that its SL1 Thermal Energy Harvesting System can already cut the cost of using robotic probes for oceanographic data collection by an order of magnitude. The startup is working on adapting its system to work with autonomous underwater gliders. And it has partnered with defense giant Northrop Grumman to develop an underwater recharging station for oceangoing drones that incorporates Northrop Grumman’s self-insulating electrical connector, which can operate while the powered electrical contacts are submerged.
Seatrec’s energy-harvesting system works by taking advantage of how certain substances transition from solid to liquid, and from liquid to gas, as they heat up. The company’s technology harnesses the pressure changes that result from such phase changes in order to generate electricity.
To make the phase changes happen, Seatrec’s solution taps the temperature differences between warmer water at the ocean surface and colder water at the ocean depths. Even a relatively simple robotic probe can generate additional electricity by changing its buoyancy to either float at the surface or sink down into the colder depths.
By attaching an external energy-harvesting module, Seatrec has already begun transforming robotic probes into assets that can be recharged and reused more affordably than sending out a ship each time to retrieve the probes. This renewable energy approach could keep such drones going almost indefinitely barring electrical or mechanical failures. “We just attach the backpack to the robots, we give them a cable providing power, and they go into the ocean,” Chao explains.
The early buyers of Seatrec’s products are primarily academic researchers who use underwater drones to collect oceanographic data. But the startup has also attracted military and government interest. It has already received small business innovation research contracts from both the U.S. Office of Naval Research and National Oceanic and Atmospheric Administration (NOAA).
Seatrec has also won two $10,000 prizes under the Powering the Blue Economy: Ocean Observing Prize administered by the U.S. Department of Energy and NOAA. The prizes awarded during the DISCOVER Competition phase back in March 2020 included one prize split with Northrop Grumman for the joint Mission Unlimited UUV Station concept. The startup and defense giant are currently looking for a robotics company to partner with for the DEVELOP Competition phase of the Ocean Observing Prize that will offer a total of $3 million in prizes.
In the long run, Seatrec hopes its energy-harvesting technology can support commercial ventures such as the aquaculture industry that operates vast underwater farms. The technology could also support underwater drones carrying out seabed surveys that pave the way for deep sea mining ventures, although those are not without controversy because of their projected environmental impacts.
Among all the possible applications, Chao seems especially enthusiastic about the prospect of Seatrec’s renewable power technology enabling underwater drones and floaters to collect oceanographic data for much longer periods of time. He spent the better part of two decades working at the NASA Jet Propulsion Laboratory in Pasadena, Calif., where he helped develop a satellite designed for monitoring the Earth’s oceans. But he and the JPL engineering team that developed Seatrec’s core technology believe that swarms of underwater drones can provide a continuous monitoring network to truly begin understanding the oceans in depth.
The COVID-19 pandemic has slowed production and delivery of Seatrec’s products somewhat given local shutdowns and supply chain disruptions. Still, the startup has been able to continue operating in part because it’s considered to be a defense contractor that is operating an essential manufacturing facility. Seatrec’s engineers and other staff members are working in shifts to practice social distancing.
“Rather than building one or two for the government, we want to scale up to build thousands, hundreds of thousands, hopefully millions, so we can improve our understanding and provide that data to the community,” Chao says.
Two years ago, Amazon reportedly scrapped a secret artificial intelligence hiring tool after realizing that the system had learned to prefer male job candidates while penalizing female applicants—the result of the AI training on resumes that mostly male candidates had submitted to the company. The episode raised concerns over the use of machine learning in hiring software that would perpetuate or even exacerbate existing biases.
Now, with the Black Lives Matter movement spurring new discussions about discrimination and equity issues within the workforce, a number of startups are trying to show that AI-powered recruiting tools can in fact play a positive role in mitigating human bias and help make the hiring process fairer.
These companies claim that, with careful design and training of their AI models, they were able to specifically address various sources of systemic bias in the recruitment pipeline. It’s not a simple task: AI algorithms have a long history of being unfair regarding gender, race, and ethnicity. The strategies adopted by these companies include scrubbing identifying information from applications, relying on anonymous interviews and skillset tests, and even tuning the wording of job postings to attract as diverse a field of candidates as possible.
One of these firms is GapJumpers, which offers a platform for applicants to take “blind auditions” designed to assess job-related skills. The startup, based in San Francisco, uses machine learning to score and rank each candidate without including any personally identifiable information. Co-founder and CEO Kedar Iyer says this methodology helps reduce traditional reliance on resumes, which as a source of training data is “riddled with bias,” and avoids unwittingly replicating and propagating such biases through the scaled-up reach of automated recruiting.
That deliberate approach to reducing discrimination may be encouraging more companies to try AI-assisted recruiting. As the Black Lives Matter movement gained widespread support, GapJumpers saw an uptick in queries from potential clients. “We are seeing increased interest from companies of all sizes to improve their diversity efforts,” Iyer says.
AI with humans in the loop
Another lesson from Amazon’s gender-biased AI is that paying close attention to the design and training of the system is not enough: AI software will almost always require constant human oversight. For developers and recruiters, that means they cannot afford to blindly trust the results of AI-powered tools—they need to understand the processes behind them, how different training data affects their behavior, and monitor for bias.
“One of the unintended consequences would be to continue this historical trend, particularly in tech, where underserved groups such as African Americans are not within a sector that happens to have a compensation that is much greater than others,” says Fay Cobb Payton, a professor of information technology and analytics at North Carolina State University, in Raleigh. “You’re talking about a wealth gap that persists because groups cannot enter [such sectors], be sustained, and play long term.”
Payton and her colleagues highlighted several companies—including GapJumpers—that take an “intentional design justice” approach to hiring diverse IT talent in a paper published last year in the journal Online Information Review.
According to the paper’s authors, there is a broad spectrum of possible actions that AI hiring tools can perform. Some tools may just provide general suggestions about what kind of candidate to hire, whereas others may recommend specific applicants to human recruiters, and some may even make active screening and selection decisions about candidates. But whatever the AI’s role in the hiring process, there is a need for humans to have the capability to evaluate the system’s decisions and possibly override them.
“I believe that human-in-the-loop should not be at the end of the recommendation that the algorithms suggest,” Payton says. “Human-in-the-loop means in the full process of the loop from design to hire, all the way until the experience inside of the organization.”
Each stage of an AI system’s decision point should allow for an auditing process where humans can check the results, Payton adds. And of course, it’s crucial to have a separation of duties so that the humans auditing the system are not the same as those who designed the system in the first place.
“When we talk about bias, there are so many nuances and spots along this talent acquisition process where bias and bias mitigation come into play,” says Lynette Yarger, a professor of information sciences and technology at Pennsylvania State University and lead author on the paper with Payton. She added that “those companies that are trying to mitigate these biases are interesting because they’re trying to push human beings to be accountable.”
Another example highlighted by Yarger and Payton is a Seattle-based startup called Textio that has trained its AI systems to analyze job advertisements and predict their ability to attract a diverse array of applicants. Textio’s “Tone Meter” can help companies offer job listings with more inclusive language: Phrases like “rock star” that attract more male job seekers could be swapped out for the software’s suggestion of “high performer” instead.
“We use Textio for our own recruiting communication and have from the beginning,” says Kieran Snyder, CEO and co-founder of Textio, which is based in Seattle. “But perhaps because we make the software, we know that Textio on its own is not the whole solution when it comes to building an equitable organization—it’s just one piece of the puzzle.”
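The kind of wording swap the Tone Meter suggests can be caricatured in a few lines. This toy lookup table is purely illustrative; Textio’s actual system is a trained statistical model, not a phrase dictionary.

```python
# Toy sketch of flagging masculine-coded job-ad language.
# The phrase list and alternatives are illustrative assumptions,
# not Textio's actual model or vocabulary.

MASCULINE_CODED = {
    "rock star": "high performer",
    "ninja": "expert",
    "dominate": "excel",
}

def suggest_inclusive(text):
    """Return (phrase, suggested alternative) pairs found in the text."""
    suggestions = []
    lowered = text.lower()
    for phrase, alternative in MASCULINE_CODED.items():
        if phrase in lowered:
            suggestions.append((phrase, alternative))
    return suggestions

posting = "We're hiring a rock star engineer ready to dominate the market."
for phrase, alt in suggest_inclusive(posting):
    print(f"consider replacing '{phrase}' with '{alt}'")
```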
Indeed, many tech companies, including those that develop AI-powered hiring tools, are still working on inclusion and equity. Enterprise software company Workday, founded by former PeopleSoft executives and headquartered in Pleasanton, Calif., has more than 3,700 employees worldwide and clients that include half the Fortune 100. During a company forum on diversity and racial bias in June, Workday acknowledged that Black employees make up just 2.4 percent of its U.S. workforce versus the average of 4.4 percent for Silicon Valley firms, according to SearchHRSoftware, a human resources technology news site.
AI hiring tools: not a quick fix
Another challenge for AI-powered recruiting tools is that some customers expect them to offer a quick fix to a complex problem, when in reality that is not the case. James Doman-Pipe, head of product marketing at Headstart, a recruiting software startup based in London, says any business interested in reducing discrimination with AI or other technologies will need significant buy-in from the leadership and other parts of the organization.
Headstart’s software uses machine learning to evaluate job applicants and generate a “match score” that shows how well the candidates fit with a job’s requirements for skills, education, and experience. “By generating a match score, recruiters are more likely to consider underprivileged and underrepresented minorities to move forward in the recruiting process,” Doman-Pipe says. The company claims that in tests comparing the AI-based approach to traditional recruiting methods, clients using its software saw significant improvements in the diversity makeup of new hires.
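One way to picture such a score is as a weighted overlap between a job’s requirements and a candidate’s attributes, with demographic fields simply never part of the inputs. This is a minimal sketch under that assumption; Headstart’s actual scoring model is not public.

```python
# Illustrative weighted-overlap "match score" (0-100).
# This is a hypothetical model, not Headstart's actual algorithm.

def match_score(candidate, requirements):
    """Score a candidate against weighted job requirements.

    candidate: dict of attribute -> set of values,
               e.g. {"skills": {"python", "sql"}}
    requirements: dict of attribute -> (required set, weight)
    Demographic fields are deliberately not part of either input.
    """
    total_weight = sum(weight for _, weight in requirements.values())
    score = 0.0
    for attr, (required, weight) in requirements.items():
        have = candidate.get(attr, set())
        if required:
            score += weight * len(required & have) / len(required)
    return round(100 * score / total_weight, 1)

job = {
    "skills": ({"python", "sql", "statistics"}, 3.0),  # weighted 3x
    "education": ({"bachelors"}, 1.0),
}
candidate = {"skills": {"python", "sql"}, "education": {"bachelors"}}
print(match_score(candidate, job))  # 2/3 of skills + all of education -> 75.0
```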
Still, one of the greatest obstacles AI-powered recruiting tools face before they can gain widespread trust is the lack of public data showing how different tools can help—or hinder—efforts to making tech hiring more equitable.
“I do know from interviews with software companies that they do audit, and they can go back and recalibrate their systems,” Yarger, the Pennsylvania State University professor, says. But the effectiveness of efforts to improve algorithmic equity in recruitment remains unclear. She explains that many companies remain reluctant to publicly share such information because of liability issues surrounding equitable employment and workplace discrimination. Companies using AI tools could face legal consequences if the tools were shown to discriminate against certain groups.
For North Carolina State’s Payton, it remains to be seen whether corporate commitments to addressing diversity and racial bias will have a broader and lasting impact in the hiring and retention of tech workers—and whether or not AI can prove significant in helping to create an equitable workforce.
“Association and confirmation biases and networks that are built into the system, those don’t change overnight,” she says. “So there’s much work to be done.”
To fully embrace wind and solar power, grid operators need to be able to predict and manage the variability that comes from changes in the wind or clouds dimming sunlight.
One solution may come from a $2-million project backed by the U.S. Department of Energy that aims to develop a risk dashboard for handling more complex power grid scenarios.
Grid operators now use dashboards that report the current status of the power grid and show the impacts of large disturbances—such as storms and other weather contingencies—along with regional constraints in flow and generation. The new dashboard being developed by Columbia University researchers and funded by the Advanced Research Projects Agency–Energy (ARPA-E) would improve upon existing dashboards by modeling more complex factors. This could help the grid better incorporate both renewable power sources and demand response programs that encourage consumers to use less electricity during peak periods.
“[Y]ou have to operate the grid in a way that is looking forward in time and that accepts that there will be variability—you have to start talking about what people in finance would call risk,” says Daniel Bienstock, professor of industrial engineering and operations research, and professor of applied physics and applied mathematics at Columbia University.
The new dashboard would not necessarily help grid operators prepare for catastrophic black swan events that might happen only once in 100 years. Instead, Bienstock and his colleagues hope to apply some lessons from financial modeling to measure and manage risk associated with more common events that could strain the capabilities of the U.S. regional power grids managed by independent system operators (ISOs). The team plans to build and test an alpha version of the dashboard within two years, before demonstrating the dashboard for ISOs and electric utilities in the third year of the project.
Variability already poses a challenge to modern power grids that were designed to handle steady power output from conventional power plants to meet an anticipated level of demand from consumers. Power grids usually rely on gas turbine generators to kick in during peak periods of power usage or to provide backup to intermittent wind and solar power.
But such generators may not provide a fast enough response to compensate for the expected variability in power grids that include more renewable power sources and demand response programs driven by fickle human behavior. In the worst cases, grid operators may shut down power to consumers and create deliberate blackouts in order to protect the grid’s physical equipment.
One of the dashboard project’s main goals involves developing mathematical and statistical models that can quantify the risk from having greater uncertainty in the power grid. Such models would aim to simulate different scenarios based on conditions—such as changes in weather or power demand—that could stress the power grid. Repeatedly playing out such scenarios would force grid operators to fine-tune and adapt their operational plans to handle such surprises in real life.
For example, one scenario might involve a solar farm generating 10 percent less power and a wind farm generating 30 percent more power within a short amount of time, Bienstock explains. The combination of those factors might mean too much power begins flowing on a particular power line and the line subsequently starts running hot at the risk of damage.
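Scenario analysis of this kind is often done with Monte Carlo sampling: draw many plausible combinations of renewable-output and demand deviations, push each through a grid model, and count how often a line limit is exceeded. The toy model below is an illustrative assumption, not the Columbia team’s actual dashboard; a real study would use full power-flow equations.

```python
# Minimal Monte Carlo sketch of renewable-variability risk on one line.
# The grid model and all numbers are illustrative assumptions.
import random

def line_flow_mw(solar_mw, wind_mw, demand_mw, line_share=0.4):
    """Toy model: a fixed share of net injection flows on one line."""
    return line_share * (solar_mw + wind_mw - demand_mw)

def overload_probability(trials=100_000, line_limit_mw=40.0):
    """Estimate how often the line exceeds its thermal limit."""
    random.seed(42)  # reproducible runs
    overloads = 0
    for _ in range(trials):
        # Sample deviations like the scenario above: solar down, wind up.
        solar = 100.0 * random.uniform(0.7, 1.1)    # -30% to +10% of nominal
        wind = 150.0 * random.uniform(0.8, 1.4)     # -20% to +40% of nominal
        demand = 180.0 * random.uniform(0.95, 1.05) # +/-5% of nominal
        if abs(line_flow_mw(solar, wind, demand)) > line_limit_mw:
            overloads += 1
    return overloads / trials

print(f"estimated overload probability: {overload_probability():.3f}")
```

Repeating such runs under different assumed distributions is one way operators could rehearse the kind of surprises the dashboard is meant to surface.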
Such models would only be as good as the data that trains them. Some ISOs and electric utilities have already been gathering useful data from the power grid for years. Those that already have more experience dealing with the variability of renewable power have been the most proactive. But many of the ISOs are reluctant to share such data with outsiders.
“One of the ISOs has told us that they will let us run our code on their data provided that we actually physically go to their office, but they will not give us the data to play with,” Bienstock says.
For this project, ARPA-E has been working with one ISO to produce synthetic data covering many different scenarios based on historical data. The team is also using publicly available data on factors such as solar irradiation, cloud cover, wind strength, and the power generation capabilities of solar panels and wind turbines.
“You can look at historical events and then you can design stress that’s somehow compatible with what we observe in the past,” says Agostino Capponi, associate professor of industrial engineering and operations research at Columbia University and external consultant for the U.S. Commodity Futures Trading Commission.
A second big part of the dashboard project involves developing tools that grid operators could use to help manage the risks that come from dealing with greater uncertainty. Capponi is leading the team’s effort to design customized energy volatility contracts that could allow grid operators to buy such contracts for a fixed amount and receive compensation for all the variance that occurs over a historical period of time.
But he acknowledged that financial contracts designed to help offset risk in the stock market won’t apply in a straightforward manner to the realities of the power grid that include delays in power transmission, physical constraints, and weather events.
“You cannot really directly use existing financial contracts because in finance you don’t have to take into account the physics of the power grid,” Capponi says.
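In spirit, the contracts Capponi describes resemble a variance swap from finance: the buyer pays a fixed amount and is compensated in proportion to the variance realized over a settlement window. The sketch below uses that analogy with made-up prices and terms; the team’s actual contract design would have to account for the grid physics the researchers mention.

```python
# Sketch of a variance-swap-style payoff, by analogy with finance.
# All terms and numbers are illustrative assumptions.

def realized_variance(prices):
    """Realized variance of day-over-day relative price changes."""
    returns = [(b - a) / a for a, b in zip(prices, prices[1:])]
    mean = sum(returns) / len(returns)
    return sum((r - mean) ** 2 for r in returns) / len(returns)

def variance_contract_payoff(prices, strike_variance, notional):
    """Buyer pays a fixed premium (not modeled here) and receives
    notional * (realized variance - strike), floored at zero."""
    return max(0.0, notional * (realized_variance(prices) - strike_variance))

# Hypothetical daily energy prices over one settlement window ($/MWh):
prices = [30.0, 33.0, 28.0, 35.0, 31.0, 38.0]
payoff = variance_contract_payoff(prices, strike_variance=0.001, notional=1e5)
print(f"payoff: ${payoff:,.2f}")
```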
Once the new dashboard is up and running, it could help grid operators deal with both near-term and long-term challenges for the U.S. power grid. One recent example comes from the COVID-19 pandemic, where behavioral changes—such as more people working from home—have already increased variability in energy consumption across New York City and other parts of the United States. In the future, the risk dashboard might help grid operators quickly identify areas at higher risk of suffering imbalances between supply and demand, and act quickly to avoid straining the grid or causing blackouts.
Knowing the long-term risks in specific regions might also drive more investment in additional energy storage technologies and improved transmission lines to help offset such risks. The situation is different for every grid operator’s particular region, but the researchers hope that their dashboard can eventually help level the speed bumps as the U.S. power grid moves toward using more renewable power.
“The ISOs have different levels of renewable penetration, and so they have different exposures and visibility to risk,” Bienstock says. “But this is just the right time to be doing this sort of thing.”
Confusion and skepticism may confound efforts to make use of digital contact tracing technologies during the COVID-19 pandemic. A recent survey found that just 42 percent of American respondents support using so-called contact tracing apps—an indication of a lack of confidence that could weaken or even derail effective deployment of such technologies.
As the COVID-19 outbreak swept through Manhattan and the surrounding New York City boroughs earlier this year, electricity usage dropped as businesses shuttered and people hunkered down in their homes. Those changes in human behavior became visible from space as the nighttime lights of the city that never sleeps dimmed by 40 percent between February and April.
That striking visualization of the COVID-19 impact on U.S. electricity consumption came from NASA’s “Black Marble” satellite data. U.S. and Chinese researchers are currently using such data sources in what they describe as an unprecedented effort to study how electricity consumption across the United States has been changing in response to the pandemic. One early finding suggests that mobility in the retail sector—defined as daily visits to retail establishments—is an especially significant factor in the reduction of electricity consumption seen across all major U.S. regional markets.
“I was previously not aware that there is such a strong correlation between the mobility in the retail sector and the public health data on the electricity consumption,” says Le Xie, professor of electrical and computer engineering and assistant director of energy digitization at the Texas A&M Energy Institute. “So that is a key finding.”
Xie and his colleagues from Texas A&M, MIT, and Tsinghua University in Beijing, China, are publicly sharing their Coronavirus Disease-Electricity Market Data Aggregation (COVID-EMDA) project and the software codes they have used in their analyses in an online Github repository. They first uploaded a preprint paper describing their initial analyses to arXiv on 11 May 2020.
Most previous studies that focused on public health and electricity consumption tried to examine whether changes in electricity usage could provide an early warning sign of health issues. But when the U.S. and Chinese researchers first put their heads together on studying COVID-19 impacts, they did not find other prior studies that had examined how a pandemic can affect electricity consumption.
Beyond using the NASA satellite imagery of the nighttime lights, the COVID-EMDA project also taps additional sources of data about the major U.S. electricity markets from regional transmission organizations, weather patterns, COVID-19 cases, and the anonymized GPS locations of cellphone users.
“Before when people study electricity, they look at data on the electricity domain, perhaps the weather, maybe the economy, but you would have never thought about things like your cell phone data or mobility data or the public health data from COVID cases,” Xie says. “These are traditionally totally unrelated data sets, but in these very special circumstances they all suddenly became very relevant.”
The unique compilation of different data sources has already helped the researchers spot some interesting patterns. The most notable finding suggests that the largest portion of the drop in electricity consumption likely comes from the decline in people’s daily visits to retail establishments as individuals began practicing social distancing and isolating at home. By comparison, the number of new confirmed COVID-19 cases does not seem to have a strong direct influence on changes in electricity consumption.
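The statistical core of such a finding is a cross-domain correlation: line up a daily retail-mobility index with daily electricity load and measure how tightly they move together. The series below are invented for illustration; the COVID-EMDA analysis uses real market and cellphone-mobility data.

```python
# Sketch of the kind of cross-domain correlation described above.
# The daily series here are made up for illustration only.

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical daily series as a lockdown takes hold:
retail_visits = [100, 95, 80, 60, 45, 40, 38, 42]   # visit index
load_gwh = [210, 205, 196, 180, 168, 165, 166, 170] # city load (GWh)

r = pearson(retail_visits, load_gwh)
print(f"retail mobility vs. load: r = {r:.2f}")
```

A high positive r on real data would be consistent with the team’s observation that retail mobility tracks consumption far better than case counts do.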
The Northeastern region of the U.S. electricity sector that includes New York City seems to be experiencing the most volatile changes so far during the pandemic. Xie and his colleagues hypothesize that larger cities with higher population density and commercial activity would likely see bigger COVID-19 impacts on their electricity consumption. But they plan to continue monitoring electricity consumption changes in all the major regions as new COVID-19 hotspots have emerged outside the New York City area.
The biggest limitation of such an analysis comes from the lack of available higher-resolution data on electricity consumption. Each of the major regional transmission organizations publishes power load and price numbers daily for their electricity markets, but this reflects a fairly large geographic area that often covers multiple states.
“For example, if we could know exactly how much electricity is used in each of the commercial, industrial, and residential categories in a city, we could have a much clearer picture of what is going on,” Xie says.
That could change in the near future. Some Texas utility companies have already approached the COVID-EMDA group about possibly sharing such higher-resolution data on electricity consumption for future analyses. The researchers have also heard from economists curious about analyzing and perhaps predicting near-term economic activities based on electricity consumption changes during the pandemic.
One of the next big steps is to “develop a predictive model with high confidence to estimate the impact to electricity consumption due to social-distancing policies,” Xie says. “This could potentially help the public policy people and [regional transmission organizations] to prepare for similar situations in the future.”
Flat solar panels still face big limitations when it comes to making the most of the available sunlight each day. A new spherical solar cell design aims to boost solar power harvesting potential from nearly every angle without requiring expensive moving parts to keep tracking the sun’s apparent movement across the sky.
Optical atomic clocks will likely redefine the international standard for measuring a second in time. They are far more accurate and stable than the current standard, which is based on microwave atomic clocks.
Now, researchers in the United States have figured out how to convert high-performance signals from optical clocks into a microwave signal that can more easily find practical use in modern electronic systems.
When people in the United Kingdom began dying from COVID-19, researchers saw an urgent need to understand all the possible factors contributing to such deaths. So in six weeks, a team of software developers, clinicians, and academics created an open-source platform designed to securely analyze millions of electronic health records while protecting patient privacy.
One of the sneakiest ways to spill the secrets of a computer system involves studying its pattern of power usage while it performs operations. That’s why researchers have begun developing ways to shield the power signatures of AI systems from prying eyes.
Named after a dragon from the “Game of Thrones” fantasy series, the Rhaegal-A won’t be making its mark by burninating the countryside. Instead the hybrid-electric cargo drone capable of taking off and landing like a helicopter is in the spotlight today during a U.S. Air Force conference about “flying car” technologies.
Amid the coronavirus pandemic, even privacy-conscious European governments have asked telecom companies for residents’ phone location data in hopes of understanding whether national social distancing measures such as stay-at-home orders and business closures are having any effect on the spread of COVID-19.
Some of the hardest-hit countries, including Italy and Spain, are now open to proposals for mobile apps that can make contact tracing more efficient and alert people who have come into contact with someone infected by the novel coronavirus.
Internet shutdowns that affect entire regions or countries and cost billions of dollars annually have become a widespread phenomenon, especially as various governments wield them like a blunt club to restrict citizens’ access to online information.
Some governments deploy Internet shutdowns in an attempt to suppress protests, while Iraq’s Ministry of Education even orders shutdowns to prevent cheating during national school exams. The trick for independent observers trying to keep track of it all involves figuring out the difference between government-ordered shutdowns versus other causes of Internet outages.
In early 2020, the five-person team behind the nongovernmental organization NetBlocks was watching dips in Internet connectivity happening in a particular region of China over several months. That could have sparked suspicion that China’s online censors—who restrict access to certain online content as part of China’s “Great Firewall”—were perhaps throttling some popular online services or social media networks. But the NetBlocks team’s analysis showed that such patterns likely had to do with businesses shutting down or limiting operations to comply with government efforts aimed at containing the coronavirus outbreak that has since become a pandemic.
“When you’re investigating an internet shutdown, you need to work from both ends to conclusively verify that incident has happened, and to understand why it’s happened,” says Alp Toker, executive director of NetBlocks. “This means ruling out different types of outages.”
NetBlocks is among the independent research groups trying to keep an eye on the growing prevalence of Internet shutdowns. Since it formed in 2016, the London-based NetBlocks has expanded its focus from Turkey and the Middle East to other parts of the world by using remote measurement techniques. These include analytics software that monitors how well millions of phones and other devices can access certain online websites and services, along with both hardware probes plugged into local routers and an Internet browser probe that anyone can use to check their local connectivity.
But NetBlocks also relies upon what Toker describes as a more hands-on investigation to manually check out various incidents. That could mean checking in with local engineers or Internet service providers who are in a position to help confirm or rule out certain lines of inquiry. This combined approach has helped NetBlocks investigate all sorts of causes of Internet shutdowns, including major hurricanes, nationwide power outages in Venezuela, and cuts in undersea Internet cables affecting Africa and the Middle East. Each of these types of outages provides data that NetBlocks is using to train machine learning algorithms in hopes of better automating detection and analysis of different events.
In one such update, posted 65 hours into Iran’s near-total Internet shutdown, NetBlocks reported that “some of the last remaining networks are now being cut and connectivity to the outside world has fallen further to 4% of normal levels.”
“Each of the groups that’s currently monitoring Internet censorship uses a different technical approach and can observe different aspects of what’s happening,” says Zachary Weinberg, a postdoctoral researcher at the University of Massachusetts Amherst and member of the Information Controls Lab (ICLab) project. “We’re working with them on combining all of our data sets to get a more complete picture.”
ICLab relies heavily on a network of commercial virtual private networks (VPNs) to gain observation points that provide a window into Internet connectivity in each country, along with a handful of human volunteers based around the world. These VPN observation points can do bandwidth-intensive tests and collect lots of data on network traffic without endangering volunteers in certain countries. But one limitation of this approach is that VPN locations in commercial data centers are sometimes not subject to the same Internet censorship affecting residential networks and mobile networks.
If a check turns up possible evidence of a network shutdown, ICLab’s internal monitoring alerts the team. The researchers use manual confirmation checks to make sure it’s a government-ordered shutdown action and not something like a VPN service malfunction. “We have some ad-hoc rules in our code to try to distinguish these possibilities, and plans to dig into the data [collected] so far and come up with something more principled,” Weinberg says.
The Open Observatory of Network Interference (OONI) takes a more decentralized, human-reliant approach to measuring Internet censorship and outages. OONI’s six-person team has developed and refined a computer software tool called OONI probe that people can download and run to check local Internet connectivity with a number of websites, including a global test list of internationally relevant websites (such as Facebook) and a country-specific test list.
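At its core, this kind of measurement compares what a user in the tested network observes against a control observation from an unfiltered vantage point. The classifier below is a deliberately simplified sketch of that comparison; OONI’s real Web Connectivity methodology layers DNS, TCP, TLS, and HTTP checks on top of it.

```python
# Simplified sketch of classifying one per-site censorship measurement:
# compare a local fetch result against a control result.
# OONI's actual methodology is considerably more involved.

def classify(local, control):
    """Classify one URL measurement.

    local/control: dicts like {"ok": bool, "status": int | None},
    where "ok" means the fetch completed and "status" is the HTTP code.
    """
    if not control["ok"]:
        return "inconclusive"  # control failed too; can't attribute blame
    if not local["ok"]:
        return "anomaly: connection failed"
    if local["status"] != control["status"]:
        return "anomaly: unexpected response"  # possible block page
    return "accessible"

print(classify({"ok": True, "status": 200}, {"ok": True, "status": 200}))
print(classify({"ok": False, "status": None}, {"ok": True, "status": 200}))
```

The “inconclusive” branch matters: as Toker notes above, conclusively verifying a shutdown means ruling out ordinary outages before blaming censorship.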
The OONI project began when members of the Tor Project, the nonprofit organization that oversees the Tor network designed to enable people to use the Internet anonymously, began creating “ad hoc scripts” to investigate blocking of Tor software and other examples of Internet censorship, says Arturo Filasto, lead developer of OONI. Since 2012, that has evolved into the free and open-source OONI probe with an openly-documented methodology explaining how it measures Internet censorship, along with a frequently updated database that anyone can search.
“We eventually consolidated [that] into the software that now tens of thousands of people run all over the world to collect their own evidence of Internet censorship and contribute to this growing pool of open data that anybody can use to research and investigate various forms of information controls on the Internet,” Filasto says.
Beyond the tens of thousands of active monthly users, hundreds of millions of people have downloaded the OONI probe. That probe is currently available as a mobile app and for desktop Linux and macOS users who don’t mind using the command-line interface, but the team aims to launch a more user-friendly desktop program for Windows and macOS users in April 2020.
On the low-tech side, news articles and word-of-mouth reports from ordinary people can also provide valuable Internet outage data for websites such as the Internet Shutdown Tracker run by the Software Freedom Law Centre in New Delhi, India. But the Internet Shutdown Tracker website also invites mobile users to download and install the OONI probe tool as a way of helping gather more data on regional and city-level Internet shutdowns ordered by India’s government.
Whatever their approach, most of the groups tracking Internet shutdowns and online censorship still consist of small teams with budget constraints. For example, ICLab’s team would like to speed up and automate much of their process, but their budget relies largely on grants from the U.S. National Science Foundation. They also have limited data storage, which restricts them to checking each country about two or three times a week on average to collect detailed cycles of measurements—amounting to about 500 megabytes of raw data per country.
Another challenge comes on the data collection side. People may face personal risk in downloading and using OONI probe or similar tools in some countries, especially where the government regards such actions as illegal or even akin to espionage. This is why the OONI team openly warns about the risk up front as part of what it considers its informed consent process, and even requires mobile users to complete a quiz before starting to use the OONI probe app.
“Thanks to the fact that many people are running OONI probe in China and Iran, we’ve been able to uncover a lot of really interesting and important cases of Internet censorship that we wouldn’t otherwise have known,” Filasto says. “So we are very grateful to the brave users of OONI probe that have gathered these important measurements.”
Recent trends in both government information control strategies and the broader Internet landscape may also complicate the work of such groups. Governments in countries such as China, Russia, and Iran have begun moving away from network-level censorship toward embedding censorship policies within large social media platforms and chat systems such as Tencent’s WeChat in China. Detecting more subtle censorship within these platforms represents an even bigger challenge than collecting evidence of a region-wide Internet shutdown.
“We have to create accounts on all these systems, which in some cases requires proof of physical-space identity, and then we have to automate access to them, which the platforms intentionally make as difficult as possible,” Weinberg says. “And then we have to figure out whether someone’s post isn’t showing up because of censorship, or because the ‘algorithm’ decided our test account wouldn’t be interested in it.”
In 2019, large-scale Internet shutdowns affecting entire countries occurred alongside the shift toward “more nuanced Internet disruptions that happen on different layers,” Toker says. The NetBlocks team is refining its analytical capability to home in on different types of outages by learning more about the daily pattern of Internet traffic that reflects each country’s normal economic activities. But Toker is also hoping that his group and others can continue forging international cooperation to study these issues together. For now, NetBlocks relies on contributions from the broader technical community and volunteers.
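Detecting an outage against a country's normal daily traffic rhythm can be framed as a simple anomaly check: build a per-hour baseline from days of normal traffic, then flag hours where current traffic drops far below it. The sketch below illustrates that general approach only; it is not NetBlocks' methodology, and the function names and the 50 percent threshold are assumptions.

```python
def build_hourly_baseline(history):
    """history: list of (hour_of_day, traffic_level) samples taken over
    several normal days. Returns the mean traffic observed per hour."""
    totals, counts = {}, {}
    for hour, traffic in history:
        totals[hour] = totals.get(hour, 0.0) + traffic
        counts[hour] = counts.get(hour, 0) + 1
    return {h: totals[h] / counts[h] for h in totals}

def flag_outages(baseline, current, drop_threshold=0.5):
    """Flag hours in `current` (list of (hour, traffic) pairs) where
    traffic falls below a fraction of that hour's baseline level."""
    return [hour for hour, traffic in current
            if hour in baseline and traffic < drop_threshold * baseline[hour]]
```

A real system would also have to separate censorship-driven drops from mundane causes such as power failures or holidays, which is part of why groups like NetBlocks study each country's normal economic activity patterns.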
“There are bubbles of expertise in different parts of the world, and those haven’t necessarily combined, so from where we’ve been coming I think those bridges are just starting to be built,” Toker says. “And that means really getting engineers together from different fields and different backgrounds, whether it’s electrical engineering or Internet engineering.”