Tag Archives: computing

In the Race to Hundreds of Qubits, Photons May Have “Quantum Advantage”

Post Syndicated from Charles Q. Choi original https://spectrum.ieee.org/tech-talk/computing/hardware/race-to-hundreds-of-photonic-qubits-xanadu-scalable-photon

Quantum computers based on photons may have some advantages over electron-based machines, including operating at room temperature rather than at temperatures colder than deep space. Now, say scientists at quantum computing startup Xanadu, add one more advantage to the photon side of the ledger. Their photonic quantum computer, they say, could scale up to rival or even beat the fastest classical supercomputers—at least at some tasks.

Whereas conventional computers switch transistors either on or off to symbolize data as ones and zeroes, quantum computers use quantum bits or “qubits” that, because of the bizarre nature of quantum physics, can exist in a state known as superposition where they can act as both 1 and 0. This essentially lets each qubit perform multiple calculations at once.

The more qubits are quantum-mechanically connected, or entangled, together, the more calculations they can simultaneously perform. A quantum computer with enough qubits could in theory achieve a “quantum advantage” enabling it to grapple with problems no classical computer could ever solve. For instance, a quantum computer with 300 mutually entangled qubits could theoretically perform more calculations in an instant than there are atoms in the visible universe.
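
To put that comparison in perspective: 300 entangled qubits span 2^300 joint states, while the visible universe is commonly estimated to hold on the order of 10^80 atoms. A quick back-of-the-envelope check in Python (the 10^80 figure is the usual rough estimate, not a number from Xanadu) shows the gap:

# Compare the joint state space of 300 qubits with a rough estimate
# of the number of atoms in the visible universe (~10^80).
n_qubits = 300
state_space = 2 ** n_qubits              # joint basis states of the entangled register
atoms_in_universe = 10 ** 80             # commonly cited order-of-magnitude estimate

print(len(str(state_space)) - 1)         # 90, i.e. 2^300 is roughly 10^90
print(state_space > atoms_in_universe)   # True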

Why Electronic Health Records Haven’t Helped U.S. With Vaccinations

Post Syndicated from Robert N. Charette original https://spectrum.ieee.org/riskfactor/computing/it/ehr-pandemic-promises-not-kept

The Centers for Disease Control and Prevention (CDC) website advises that if you have, or suspect you have, COVID-19, you should inform your doctor. However, whom do you tell that you have actually received your shot?

If you get the shot at a local hospital that is affiliated with your doctor, the vaccination information might show up in your electronic health record (EHR). However, if it doesn’t, or you get the shot through a pharmacy like CVS, your vaccination information may end up stranded on the little paper CDC vaccination card you receive with your shot. The onus will be on you to give your doctor your vaccination information, which will have to be manually (hopefully without error) entered into your EHR.

It was not supposed to be like this.

When President George W. Bush announced in his 2004 State of the Union address that he wanted every U.S. citizen to have an EHR by 2014, one of the motivations was to greatly improve medical care in case of a public health crisis such as a pandemic. The 2003 SARS-CoV-1 epidemic had highlighted the need for access to, interoperability of, and fusion of health information from a wide variety of governmental and private sources so that policy makers could rapidly make the best public health decisions. As one Chinese doctor remarked in a candid 2004 lessons-learned article, in the spring of 2003 SARS had become “out of control due to incorrect information.”

Today, President Bush’s goal that nearly all Americans have access to electronic health records has been met, thanks in large part to the $35 billion in Congressionally mandated incentives for EHR adoption set out in the 2009 American Recovery and Reinvestment Act. Unfortunately, EHRs have created challenges in combating the current SARS-CoV-2 pandemic, and at least one doctor is saying that they are proving to be an impediment.

In a recent National Public Radio interview, Dr. Bob Kocher, an adjunct professor at Stanford University School of Medicine and former government healthcare policy official in the Obama Administration, blamed EHRs in part for problems with Americans not being able to easily schedule their vaccinations. A core issue, he states, is that most EHRs are not universally interoperable, meaning health and other relevant patient information they contain is not easily shared either with other EHRs or government public health IT systems. In addition, Kocher notes, key patient demographic information like race and ethnicity may not be captured by an EHR to allow public health officials to understand whether all communities are being equitably vaccinated.

As a result, the information public officials need to determine who has been vaccinated, and where scarce vaccine supplies need to be allocated next, is less accurate and timely than it needs to be. With Johnson & Johnson’s Covid-19 vaccine rolling out this week, the need for precise information increases to ensure the vaccine is allocated to where it is needed most. 

It is a bit ironic that EHRs have not been central in the fight against COVID-19, given how much President Bush was obsessed with preparing the U.S. against a pandemic. Even back in 2007, researchers were outlining how EHRs should be designed to support public health crises and the criticality of seamless data interoperability. However, the requirement for universal EHR interoperability was lost in the 2009 Congressional rush to get hospitals and individual healthcare providers to move from paper medical records to adopt EHRs. Consequently, EHR designs were primarily focused on supporting patient-centric healthcare and especially on aiding healthcare provider billing.

Myriad other factors have exacerbated the interoperability situation. One is the sheer number of different EHR systems used across the thousands of healthcare providers, each with its own unique way of capturing and storing health information. Even individual hospitals operate several different EHRs, with the average hospital reportedly having some 16 disparate EHR vendors in use at its affiliated practices.

Another is the number of different federal, let alone state or local public health IT systems that data needs to flow among. For example, there are six different CDC software systems, including two brand new systems, that are used for vaccine administration and distribution alongside the scores of vaccine registry systems that the states are using. With retail pharmacies now authorized to give COVID-19 vaccinations, the number of IT systems involved keeps growing. To say the process involved in creating timely and accurate information for health policy decision makers is convoluted is being kind.

In addition, the data protocols used to link EHRs to government vaccine registries are a maddeningly tortuous mess that often impedes information sharing rather than fostering it. Some EHR vendors like Cerner and Epic have already successfully rolled out improvements to their EHR products to support state mass vaccination efforts, but they are more the exception than the rule.

Furthermore, many EHR vendors (and healthcare providers) have not really been interested in sharing patient information, worrying that it would make it too easy for patients to move to a competitor. While the federal government has recently clamped down on information blocking, it is unlikely to go away anytime soon given the exceptions in the interoperability rules.

Yet another factor has been that state and local public health department budgets have been cut steadily since the 2008 recession, along with investments in health information systems. Many, if not most, existing state public health IT systems for allocating, scheduling, monitoring and reporting vaccinations have proven not up to the task of supporting the complex health information logistical requirements of a pandemic, and have required significant upgrades or supplemental support. Even then, these supplemental efforts have been fraught with problems. Of course, the federal government’s IT efforts to support state vaccination efforts in the wake of such obstacles have hardly been stellar, either.

Perhaps the only good news is that the importance of having seamless health IT systems, from health provider EHR systems to state and federal government public health IT systems, is now fully appreciated. Late last year, Congress authorized $500 million under the Coronavirus Aid, Relief, and Economic Security (CARES) Act to the CDC for its Public Health Data Modernization Initiative. The Initiative aims to “bring together state, tribal, local, and territorial (STLT) public health jurisdictions and our private and public sector partners to create modern, interoperable, and real-time public health data and surveillance systems that will protect the American public.” The Office of the National Coordinator for Health Information Technology will also receive $62 million to increase the interoperability of public health systems.

The $560 million will likely be just a small down payment, as many existing EHRs will need to be upgraded not only to better support a future pandemic or other public health crises, but to correct existing issues with EHRs such as poor operational safety and doctor burnout. Furthermore, state and local public health organizations along with their IT infrastructure will also have to be modernized, which will take years. This assumes the states are even willing to invest the funding required given all the other funding priorities caused by the pandemic. 

One last issue that will likely come to the fore is whether it is now time for a national patient identifier that could be used for uniquely identifying patient information. It would make achieving EHR interoperability easier, but creating one has been banned since 1998 over both privacy and security concerns, and more recently, arguments that it isn’t necessary. While the U.S. House of Representatives voted to overturn the ban again last summer, the Senate has not followed suit. If they do decide to overturn the ban and boost EHR interoperability efforts, let us also hope Senators keep in mind just how vulnerable healthcare providers are to cybersecurity attacks.

Quantum Computer Error Correction Is Getting Practical

Post Syndicated from Michael J. Biercuk original https://spectrum.ieee.org/tech-talk/computing/hardware/quantum-computer-error-correction-is-getting-practical

Quantum computers are gaining traction across fields from logistics to finance. But as most users know, they remain research experiments, limited by imperfections in hardware. Today’s machines tend to suffer hardware failures—errors in the underlying quantum information carriers called qubits—in times much shorter than a second. Compare that with the approximately one billion years of continuous operation before a transistor in a conventional computer fails, and it becomes obvious that we have a long way to go.

Companies building quantum computers like IBM and Google have highlighted that their roadmaps include the use of “quantum error correction” to achieve what is known as fault-tolerant quantum computing as they scale to machines with 1,000 or more qubits.

Quantum error correction—or QEC for short—is an algorithm designed to identify and fix errors in quantum computers. It’s able to draw from validated mathematical approaches used to engineer special “radiation hardened” classical microprocessors deployed in space or other extreme environments where errors are much more likely to occur.
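
The classical intuition behind that borrowing is easy to see in code: store each bit redundantly and use a majority vote to undo isolated flips. The sketch below is a plain three-bit repetition code in Python; it is only a classical analogy for illustration, since real QEC protects fragile quantum states with stabilizer measurements rather than by copying them, which quantum mechanics forbids.

import random

def encode(bit):
    # Classical repetition code: store one logical bit as three physical copies.
    return [bit, bit, bit]

def add_noise(codeword, p):
    # Flip each physical bit independently with probability p.
    return [b ^ (random.random() < p) for b in codeword]

def decode(codeword):
    # Majority vote recovers the logical bit if at most one copy flipped.
    return int(sum(codeword) >= 2)

random.seed(0)
p, trials = 0.05, 100_000
uncoded = sum(random.random() < p for _ in range(trials)) / trials
coded = sum(decode(add_noise(encode(0), p)) != 0 for _ in range(trials)) / trials
print(uncoded)   # about 0.05: the raw error rate
print(coded)     # about 3*p**2 = 0.0075: errors occur only when two copies flip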

QEC is real and has seen many partial demonstrations in laboratories around the world—initial steps that make it clear it’s a viable approach. 2021 may just be the year when it is convincingly demonstrated to give a net benefit in real quantum-computing hardware.

Unfortunately, with corporate roadmaps and complex scientific literature highlighting an increasing number of relevant experiments, an emerging narrative is falsely painting QEC as a future panacea for the life-threatening ills of quantum computing.

QEC, in combination with the theory of fault-tolerant quantum computing, suggests that engineers can in principle build an arbitrarily large quantum computer that if operated correctly would be capable of arbitrarily long computations. This would be a stunningly powerful achievement. The prospect that it can be realized underpins the entire field of quantum computer science: Replace all quantum computing hardware with “logical” qubits running QEC, and even the most complex algorithms come into reach. For instance, Shor’s algorithm could be deployed to render Bitcoin insecure with just a few thousand error-corrected logical qubits. On its face, that doesn’t seem far from the 1,000+ qubit machines promised by 2023. (Spoiler alert: this is the wrong way to interpret these numbers).

The challenge comes when we look at the implementation of QEC in practice. The algorithm by which QEC is performed itself consumes resources—more qubits and many operations.

Returning to the promise of 1,000-qubit machines in industry, so many resources might be required that those 1,000 qubits yield only, say, 5 useful logical qubits.
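
The arithmetic behind an estimate like that is simple once you assume an overhead per logical qubit. The figures below are illustrative assumptions only (about 200 physical qubits consumed per logical qubit, roughly the scale of a small-distance surface code), not numbers from any company’s roadmap:

# Rough resource estimate under an assumed QEC overhead per logical qubit.
physical_qubits = 1_000
overhead_per_logical = 200           # assumed, not a published specification

logical_qubits = physical_qubits // overhead_per_logical
print(logical_qubits)                # 5, matching the "say, 5" figure above

# At the same assumed overhead, the "few thousand" logical qubits needed for
# Shor-style attacks would translate into hundreds of thousands of physical qubits.
print(4_000 * overhead_per_logical)  # 800000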

Even worse, the amount of extra work that must be done to apply QEC currently introduces more error than correction. QEC research has made great strides from the earliest efforts in the late 1990s, introducing mathematical tricks that relax the associated overheads or enable computations on logical qubits to be conducted more easily, without interfering with the computations being performed. And the gains have been enormous, bringing the break-even point, where it’s actually better to perform QEC than not, at least 1,000 times closer than original predictions. Still, the most advanced experimental demonstrations show it’s at least 10 times better to do nothing than to apply QEC in most cases.

This is why a major public-sector research program run by the U.S. intelligence community has spent the last four years seeking to finally cross the break-even point in experimental hardware, for just one logical qubit. We may well unambiguously achieve this goal in 2021—but that’s the beginning of the journey, not the end.

Crossing the break-even point and achieving useful, functioning QEC doesn’t mean we suddenly enter an era with no hardware errors—it just means we’ll have fewer. QEC only totally suppresses errors if we dedicate infinite resources to the process, an obviously untenable proposition. Moreover, even forgetting those theoretical limits, QEC is imperfect and relies on many assumptions about the properties of the errors it’s tasked with correcting. Small deviations from these mathematical models (which happen all the time in real labs) can reduce QEC’s effectiveness further.

Instead of thinking of QEC as a single medicine capable of curing everything that goes wrong in a quantum computer, we should instead consider it an important part of a drug cocktail.

As special as QEC is for abstract quantum computing mathematically, in practice it’s really just a form of what’s known as feedback stabilization. Feedback is the same well-studied technique used to regulate your speed while driving with cruise control or to keep walking robots from tipping over. This realization opens new opportunities to attack the problem of error in quantum computing holistically and may ultimately help us move closer to what we actually want: real quantum computers with far fewer errors.

Fortunately, there are signs that within the research community a view to practicality is emerging. For instance, there is greater emphasis on approximate approaches to QEC that help deal with the most nefarious errors in a particular system, at the expense of being a bit less effective for others.

The combination of hardware-level, open-loop quantum control with feedback-based QEC may also be particularly effective. Quantum control permits a form of “error virtualization” in which the overall properties of the hardware with respect to errors are transformed before implementation of QEC encoding. These include reduced overall error rates, better error uniformity between devices, better hardware stability against slow variations, and a greater compatibility of the error statistics with the assumptions of QEC. Each of these benefits can reduce the resource overheads needed to implement QEC efficiently. Such a holistic view of the problem of error in quantum computing—from quantum control at the hardware level through to algorithmic QEC encoding—can improve net quantum computational performance with fixed hardware resources.

None of this discussion means that QEC is somehow unimportant for quantum computing. And there will always remain a central role for exploratory research into the mathematics of QEC, because you never know what a clever colleague might discover. Still, a drive to practical outcomes might even lead us to totally abandon the abstract notion of fault-tolerant quantum computing and replace it with something more like fault-tolerant-enough quantum computing. That might be just what the doctor ordered.

Michael J. Biercuk, a professor of quantum physics and quantum technology at the University of Sydney, is the founder and CEO of Q-CTRL.

AI Recodes Legacy Software to Operate on Modern Platforms

Post Syndicated from Dexter Johnson original https://spectrum.ieee.org/tech-talk/computing/software/ai-legacy-software-analysis-tool

Last year, IBM demonstrated how AI can perform the tedious job of software maintenance through the updating of legacy code. Now Big Blue has introduced AI-based methods for re-coding old applications so that they can operate on today’s computing platforms.

The latest IBM initiatives, dubbed Mono2Micro and Application Modernization Accelerator (AMA), give app architects new tools for updating legacy applications and extracting new value from them. These initiatives represent a step towards a day when AI could automatically translate a program written in COBOL into Java, according to Nick Fuller, director of hybrid cloud services at IBM Research.

Fuller cautions that these latest AI approaches are currently only capable of breaking the legacy machine code of non-modular monolithic programs into standalone microservices. There still remains another step in translating the programming language because, while the AMA toolkit is in fact designed to modernize COBOL, at this point it only provides an incremental step in the modernization process, according to Fuller. “Language translation is a fundamental challenge for AI that we’re working on to enable some of that legacy code to run in a modern software language,” he added.

In the meantime, IBM’s latest AI tools offer some new capabilities. In the case of Mono2Micro, it first analyzes the old code to reveal all the hidden connections within it that application architects would find extremely difficult and time consuming to uncover on their own, such as the multiple components in the underlying business logic that contain numerous calls and connections to each other. 

Mono2Micro leverages AI clustering techniques to group similar code together, revealing more clearly how groups of code interact. Once Mono2Micro ingests the code, it analyzes the source and object code both statically (analyzing the program before it runs) and dynamically (analyzing the program while it’s running).

The tool then refactors monolithic Java-based programs and their associated business logic and user interfaces into microservices. This refactoring of the monolith into standalone microservices with specific functions minimizes the connections that existed in the software when it was a monolithic program, changing the application’s structure without altering its external behavior.
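
IBM does not detail Mono2Micro’s clustering here, but the general idea of grouping code by interaction patterns can be sketched with off-the-shelf tools. In the toy example below, classes of a hypothetical monolith are clustered by their call-frequency profiles using k-means from scikit-learn; the class names, call counts, and cluster count are all invented for illustration and are not IBM’s method.

# Toy illustration (not IBM's Mono2Micro): cluster classes of a monolith
# by how often they call one another, as a first cut at microservice boundaries.
import numpy as np
from sklearn.cluster import KMeans

classes = ["Cart", "Checkout", "Payment", "Catalog", "Search", "Inventory"]

# Rows: classes. Columns: call counts to each class (a tiny, made-up call graph).
call_matrix = np.array([
    [0, 9, 2, 1, 0, 0],   # Cart mostly talks to Checkout
    [8, 0, 7, 0, 0, 1],   # Checkout talks to Cart and Payment
    [1, 6, 0, 0, 0, 0],   # Payment talks to Checkout
    [0, 0, 0, 0, 8, 5],   # Catalog talks to Search and Inventory
    [0, 0, 0, 7, 0, 3],   # Search talks to Catalog
    [0, 1, 0, 6, 2, 0],   # Inventory talks to Catalog
])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(call_matrix)
for name, label in zip(classes, labels):
    print(f"{name}: candidate microservice {label}")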

The objective of the AMA toolkit is to both analyze and refactor legacy applications written in even older languages (COBOL, PL/I). For the AMA toolkit, static analysis of the source code coupled with an understanding of the application structure is used to create a graph that represents the legacy application. When used in conjunction with deep-learning methods, this graph-based approach facilitates data retention as AMA goes through deep-learning processes.

IBM’s AI strategy addresses the key challenges for machine learning when the data input is code and the function is analysis: volume and multiple meanings. Legacy, mission-critical applications are typically hundreds of thousands to millions of lines of code. In this context, applying machine learning (ML) techniques to such large volumes of data can be made more efficient through the concept of embeddings.

These embedding layers represent a way to translate the data into numerical values. The power of embeddings comes from them mapping a large volume of code with multiple possible meanings to numerical values. This is what is done, for example, in translating natural human language to numerical values using “word” embeddings. It is also done in a graph context as it relates to code analysis.
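
A minimal sketch of those mechanics: each token in a code vocabulary is mapped to a dense numeric vector of fixed size, so downstream models operate on numbers rather than raw text. The example below uses a randomly initialized embedding table in NumPy purely to show the idea; IBM’s actual vocabularies, training procedure, and vector dimensions are not described in the article.

# Minimal mechanics of an embedding layer (illustrative, not IBM's model):
# each token in a code vocabulary maps to a dense vector of fixed size.
import numpy as np

vocab = {"public": 0, "void": 1, "processOrder": 2, "Invoice": 3, "save": 4}
embedding_dim = 8
rng = np.random.default_rng(0)

# In a real system these weights are learned; here they are random placeholders.
embedding_table = rng.normal(size=(len(vocab), embedding_dim))

def embed(tokens):
    # Turn a token sequence into a (len(tokens), embedding_dim) matrix.
    return embedding_table[[vocab[t] for t in tokens]]

snippet = ["public", "void", "processOrder", "Invoice", "save"]
vectors = embed(snippet)
print(vectors.shape)             # (5, 8): one 8-dimensional vector per token
print(np.mean(vectors, axis=0))  # a crude fixed-size summary of the snippet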

“Embedding layers are tremendous because without them you would struggle to get anything approaching an efficiently performing machine-learning system,” said Fuller.

He added that in the case of code analysis, the ML system gets better at recommending microservices for the refactored legacy application by replicating the application’s functionality.

Fuller noted: “Once you get to that point, you’re not quite home free, but you’re essentially 70 percent done in terms of what you’re looking to gain, namely a mission critical application that is refactored into a microservices architecture.”

Coding for Qubits: How to Program in Quantum Computer Assembly Language

Post Syndicated from W. Wayt Gibbs original https://spectrum.ieee.org/tech-talk/computing/software/qscout-sandia-open-source-quantum-computer-and-jaqal-quantum-assembly-language

Quantum computing arguably isn’t quite full-fledged computing till there’s quantum software as well as hardware. One open-source quantum computer project at Sandia National Laboratories in Albuquerque, New Mexico, aims to address this disparity with a custom-made assembly language for quantum computation.

Over the next several years, physicist Susan Clark and her team at Sandia plan to use a $25 million, 5-year grant they won from the U.S. Department of Energy to run code provided by academic, commercial, and independent researchers around the world on their “QSCOUT” platform as they steadily upgrade it from 3 qubits today to as many as 32 qubits by 2023. 

QSCOUT stands for the Quantum Scientific Computing Open User Testbed and consists of ionized ytterbium atoms levitating inside a vacuum chamber. Flashes of ultraviolet laser light spin these atoms about, executing algorithms written in the team’s fledgling quantum assembly code—which they’ve named Just Another Quantum Assembly Language or JAQAL. (They’ve in fact trademarked the name as Jaqal with lowercase letters “aqal,” so all subsequent references will use that handle instead.) 

Although Google, IBM, and some other companies have built bigger quantum machines and produced their own programming languages, Clark says that QSCOUT offers some advantages to those keen to explore this frontier of computer science. Superconducting gates, like those in the Google and IBM machines, are certainly fast. But they’re also unstable, losing coherence (and data) in less than a second.

Thanks to ion-trapping technology similar to that developed by the company IonQ (which has published a nice explainer here), Clark says QSCOUT can maintain its computation’s coherence—think of it like a computational equivalent of retaining a train of thought—over as much as 10 seconds. “That’s the best out there,” Clark says. “But our gates are a little slower.” 

The real advantage of QSCOUT is not performance, however, but the ability it gives users to control as much or as little of the computer’s operation as they want to—even adding new or altered operations to the basic instruction set architecture of the machine. “QSCOUT is like a breadboard, while what companies are offering are like printed circuits,” says Andrew Landahl, who leads the QSCOUT software team.

“Our users are scientists who want to do controlled experiments. When they ask for two quantum gates to happen at the same time, they mean it,” he says. Commercial systems tend to optimize users’ programs to improve their performance. “But they don’t give you a lot of details of what’s going on under the hood,” Clark says. In these early days, when it is still so unclear how best to deal with major problems of noise, data persistence, and scalability, there’s a role for a quantum machine that just does what you tell it to do.

To deliver that combination of precision and flexibility, Landahl says, they created Jaqal, which includes commands to initialize the ions as qubits, rotate them individually or together into various states, entangle them into superpositions, and read out their end states as output data. (See “A ‘Hello World’ Program in Jaqal,” below.) 

The first line of any Jaqal program, e.g.

from qscout.v1.std usepulses *

loads a gate pulse file that defines the standard operations (“gates,” in the lingo of quantum computing). This scheme allows for easy extensibility. Landahl says that the next version will add new instructions to support more than 10 qubits and will add new functions. Plus, he says, users can even write their own functions.

One addition high on the wish list, Clark says, is a feature taken for granted in classical computing: the ability to do a partial measurement of a computation in progress and to then make adjustments based on the intermediate state. The interconnectedness of qubits makes such partial measurements tricky in the quantum realm, but experimentalists have shown it can be done.

Practical programs will intermix quantum and classical operations, so the QSCOUT team has also released on Github a Python package called JaqalPaq that provides a Jaqal emulator as well as commands to include Jaqal code as an object inside a larger Python program.
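
As a rough sketch of how that embedding might look, the snippet below hands the six-line Bell-state program (shown later in this article) to the emulator from Python. The import path and result attributes are recalled from the JaqalPaq documentation and should be treated as assumptions to verify against the released package.

# Sketch of running Jaqal code from Python via JaqalPaq. The import path and
# result attributes below are assumptions based on the JaqalPaq docs; verify
# them against the installed package before relying on this.
from jaqalpaq.emulator import run_jaqal_string

bell_program = """
from qscout.v1.std usepulses *

register q[2]

loop 1024 {
    prepare_all
    Sxx q[0] q[1]
    measure_all
}
"""

results = run_jaqal_string(bell_program)
# Assumed result structure: per-subcircuit output probabilities keyed by bitstring.
print(results.subcircuits[0].probability_by_str)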

Most of the first five project proposals that Sandia accepted from an initial batch of 15 applicants will perform benchmarking of various kinds against other quantum computers. But, Clark says, “One of the teams [led by Phil Richerme at Indiana University, Bloomington] is solving a small quantum chemistry problem by finding the ground states of a particular molecule.”

She says she plans to invite a second round of proposals in March, after the team has upgraded the machine from 3 to 10 qubits.

A “Hello World” Program in Jaqal

One of the simplest non-trivial programs typically run on a new quantum computer, Landahl says, is code that entangles two qubits into one of the so-called Bell states, which are superpositions of the classical 0 and 1 binary states. The Jaqal documentation gives an example of a 15-line program that defines two textbook operations, executes those instructions to prepare a Bell state, and then reads out measurements of the two qubits’ resulting states. 

But as a trapped-ion computer, QSCOUT supports a nifty operation called a Mølmer–Sørensen gate that offers a shortcut. Exploiting that allows the 6-line program below to accomplish the same task—and to repeat it 1024 times:

register q[2]        // Define a 2-qubit register

loop 1024 {          // Sequential statements, repeated 1024x
    prepare_all     // Prepare each qubit in the |0⟩ state
    Sxx q[0] q[1]    // Perform the Mølmer–Sørensen gate
    measure_all     // Measure each qubit and output results
}

Darpa Hacks Its Secure Hardware, Fends Off Most Attacks

Post Syndicated from Samuel K. Moore original https://spectrum.ieee.org/tech-talk/computing/embedded-systems/darpa-hacks-its-secure-hardware-fends-off-most-attacks

Last summer, Darpa asked hackers to take their best shots at a set of newly designed hardware architectures. After 13,000 hours of hacking by 580 cybersecurity researchers, the results are finally in: just 10 vulnerabilities. Darpa is calling it a win, not because the new hardware fought off every attack, but because it “proved the value of the secure hardware architectures developed under its System Security Integration Through Hardware and Firmware (SSITH) program while pinpointing critical areas to further harden defenses,” says the agency.

Researchers in SSITH, which is part of Darpa’s multibillion dollar Electronics Resurgence Initiative, are now in the third and final phase of developing security architectures and tools that guard systems against common classes of hardware vulnerabilities that can be exploited by malware. [See “How the Spectre and Meltdown Hacks Really Worked.”] The idea is to find a way past the long-standing security model of “patch and pray”, where vulnerabilities are found and software is updated.

In an essay introducing the bug bounty, Keith Rebello, the project’s leader, wrote that patching and praying is a particularly ineffective strategy for IoT hardware, because of the cost and inconsistency of updating and qualifying a hugely diverse set of systems. [See “DARPA: Hack Our Hardware”]

Rebello described the common classes of vulnerabilities as buffer errors, privilege escalations, resource management attacks, information leakage attacks, numeric errors, code injection attacks, and cryptographic attacks. SSITH teams came up with RISC-V-based architectures meant to render them impossible. These were then emulated using FPGAs. A full stack of software, including a set of applications known to be vulnerable, ran on the FPGAs. The teams also allowed outsiders to add their own vulnerable applications. The Defense Department then loosed hackers upon the emulated systems using a crowdsourced security platform provided by Synack in a bug bounty effort called Finding Exploits to Thwart Tampering (FETT).

“Knowing that virtually no system is unhackable, we expected to discover bugs within the processors. But FETT really showed us that the SSITH technologies are quite effective at protecting against classes of common software-based hardware exploits,” said Rebello, in a press release. “The majority of the bug reports did not come from exploitation of the vulnerable software applications that we provided to the researchers, but rather from our challenge to the researchers to develop any application with a vulnerability that could be exploited in contradiction with the SSITH processors’ security claims. We’re clearly developing hardware defenses that are raising the bar for attackers.”

Of the 10 vulnerabilities discovered, four were fixed during the bug bounty, which ran from July to October 2020. Seven of those 10 were deemed critical, according to the Common Vulnerability Scoring System 3.0 standards. Most of those resulted from weaknesses introduced by interactions between the hardware, firmware, and the operating system software. For example, one hacker managed to steal the Linux password authentication manager from a protected enclave by hacking the firmware that monitors security, Rebello explains.

In the program’s third and final phase, research teams will work on boosting the performance of their technologies and then fabricating a silicon system-on-chip that implements the security enhancements. They will also take the security tech, which was developed for the open-source RISC-V instruction set architecture, and adapt it to processors with the much more common Arm and x86 instruction set architectures. How long that last part will take depends on the approach the research team took, says Rebello. However, he notes that three teams have already ported their architectures to Arm processors in a fraction of the time it took to develop the initial RISC-V version.

Practical Guide to Security in the AWS Cloud, by the SANS Institute

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/whitepaper/practical-guide-to-security-in-the-aws-cloud-by-the-sans-institute

Security in the AWS Cloud

AWS Marketplace would like to present you with a digital copy of the new book, Practical Guide to Security in the AWS Cloud, by the SANS Institute. This complimentary book is a collection of knowledge from 18 contributing authors, who share their tactics, techniques, and procedures for securely operating in the cloud. 

Smart City Video Platform Finds Crimes and Suspects

Post Syndicated from Michelle Hampson original https://spectrum.ieee.org/tech-talk/computing/networks/smart-tracking-platform-anveshak-proves-effective-on-the-streets-of-bangalore

In many cities, whenever a car is stolen or someone is abducted, there are video cameras on street corners and in public spaces that could help solve those crimes. However, investigators and police departments typically don’t have the time or resources required to sift through hundreds of video feeds and tease out the evidence they need. 

Aakash Khochare, a doctoral student at the Indian Institute of Science, in Bangalore, has been working for several years on a platform that could help. Khochare’s Anveshak, which means “Investigator” in Hindi, won an award at the 2019 IEEE TCSC SCALE Challenge, an annual competition sponsored by the IEEE Technical Committee on Scalable Computing. Last month, Anveshak was described in detail in a study published in IEEE Transactions on Parallel and Distributed Systems.

Anveshak features a novel tracking algorithm that narrows down the range of cameras likely to have captured video of the object or person of interest. Positive identifications further reduce the number of camera feeds under scrutiny. “This reduces the computational cost of processing the video feeds—from thousands of cameras to often just a few cameras,” explains Khochare.

The algorithm selects which cameras to analyze based on factors such as the local road network, camera location, and the last-seen position of the entity being tracked.

Anveshak also decides how long to buffer a video feed (i.e., download a certain amount of data) before analyzing it, which helps reduce delays in computer processing.

Last, if the computer becomes overburdened with processing data, Anveshak begins to intelligently cut out some of the video frames it deems least likely to be of interest. 
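
The paper’s full scheme is more sophisticated, but the core idea of expanding outward from the last confirmed sighting over the road network, and analyzing only cameras the target could plausibly have reached, can be sketched in a few lines. The road graph, camera placement, and speed assumption below are invented for illustration and are not from the Anveshak code.

# Toy sketch of tracking-driven camera selection (not the Anveshak code):
# expand outward from the last-seen camera over the road network and keep
# only cameras the target could plausibly have reached.
from collections import deque

# Road network as an adjacency list with edge lengths in meters (made up).
roads = {
    "A": [("B", 400), ("C", 650)],
    "B": [("A", 400), ("D", 300)],
    "C": [("A", 650), ("D", 500)],
    "D": [("B", 300), ("C", 500), ("E", 700)],
    "E": [("D", 700)],
}

def candidate_cameras(last_seen, seconds_elapsed, max_speed_mps=15.0):
    # Return cameras within the distance the target could have traveled.
    budget = seconds_elapsed * max_speed_mps
    best = {last_seen: 0.0}
    queue = deque([last_seen])
    while queue:
        node = queue.popleft()
        for neighbor, length in roads[node]:
            dist = best[node] + length
            if dist <= budget and dist < best.get(neighbor, float("inf")):
                best[neighbor] = dist
                queue.append(neighbor)
    return sorted(best)

print(candidate_cameras("A", seconds_elapsed=60))  # ['A', 'B', 'C', 'D']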

Khochare and his colleagues tested the platform using open dataset images generated by 1,000 virtual cameras covering a 7 square kilometer region across the Indian Institute of Science’s Bangalore campus. They simulated an object of interest moving through the area and used Anveshak to track it within 15 seconds. 

Khochare says Anveshak brings cities closer to a practical and scalable tracking application. However, he says future versions of the program will need to take privacy issues into account.

“Privacy is an important consideration,” he says. “We are working on incorporating privacy restrictions within the platform, for example by allowing analytics to track vehicles, but not people. Or, analytics that track adults but not children. Anonymization and masking of entities who are not of interest or should not be tracked can also be examined.”

Next, he says, the team plans to extend Anveshak so that it is capable of tracking multiple objects at a time. “The use of camera feeds from drones, whose location can be dynamically controlled, is also an emerging area of interest,” says Khochare.

He says the goal is to eventually make Anveshak an open source platform for researchers to use.

Application Note – Advanced Time-Domain Measurements

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/whitepaper/application-note-advanced-timedomain-measurements

Advanced time-domain measurements

Detection of weak signals in the presence of strong disturbers is challenging and requires a high dynamic range. In this application note, we show how high-performance digitizers with built-in FPGA help overcome these challenges using real-time noise suppression techniques such as baseline stabilization, linear filtering, non-linear threshold, and waveform averaging.

Get Your Interpreters Ready—This Year’s BASIC 10-Liner Competition Is Open For Entries

Post Syndicated from Stephen Cass original https://spectrum.ieee.org/tech-talk/computing/software/get-your-interpreters-readythis-years-basic-10-liner-competition-is-open-for-entries

Tired of slicing Python lists? Can’t face coding up another class in C++? Take a break and try your hand at writing a little BASIC—very little, in fact, because the goal here is to write a game in just 10 lines!

The annual BASIC 10-Liner competition is now taking entries for the 2021 contest. Run almost every year by Gunnar Kanold since 2011, the contest requires entrants to submit game programs written for any 8-bit home computer for which an emulator is available. Using the program to load a machine code payload is forbidden.

Some of the entries in previous years have been nothing short of astounding, leveraging the abilities of classic computers like the TRS-80, Commodore 64, and Atari 800. Back in 2019, IEEE Spectrum interviewed Kanold about the contest, and he gave some tips if you’re thinking about throwing your hat into the ring:

While some contestants are expert programmers who have written “special development tools to perfect their 10 Liners,” the contest is still accessible to neophytes: “The barriers to participation are low. Ten lines [is] a manageable project.” … Kanold points to programming tutorials that can be found on YouTube or classic manuals [PDF] and textbooks that have been archived online. “But the best resource is the contest itself,” he says. “You can find the 10 Liners from the previous years on the home page, most of them with comprehensive descriptions. With these excellent breakdowns, you can learn how the code works.”

So take a look, download an emulator of your favorite machine (if you don’t already have a favorite, I’m partial to the BBC Micro myself) and start programming! This year’s deadline is 27 March, 2021.

North Korea-Backed Hackers Targeting Security Researchers

Post Syndicated from Charles Q. Choi original https://spectrum.ieee.org/tech-talk/computing/networks/northkorea-security

According to Google, North Korean-backed hackers are pretending to be security researchers, complete with a fake research blog and bogus Twitter profiles. These actions are supposedly part of spying efforts against actual security experts.

In an online report posted on 25 January, Google’s Threat Analysis Group, which focuses on government-backed attacks, provided an update on this ongoing campaign, which Google identified over the past several months. The North Korean hackers are apparently focusing on computer scientists at different companies and organizations working on research and development into computer security vulnerabilities.

“Sophisticated security researchers and professionals often have important information of value to attackers, and they often have special access privileges to sensitive systems. It is no wonder they are targets,” says Salvatore Stolfo, a computer scientist specializing in computer security at Columbia University, who did not take part in this research. “Successfully penetrating the security these people employ would provide access to very valuable secrets.”

How Patents Made Facebook’s Filter Bubble

Post Syndicated from Mark Harris original https://spectrum.ieee.org/computing/networks/the-careful-engineering-of-facebooks-filter-bubble

When Mark Zuckerberg said “Move fast and break things,” this is surely not what he meant.

Nevertheless, at a technological level the 6 January attacks on the U.S. Capitol could be contextualized by a line of patents filed or purchased by Facebook, tracing back 20 years. This portfolio arguably sheds light on how the most powerful country in the world was brought low by a rampaging mob nurtured on lies and demagoguery.

While Facebook’s arsenal of over 9,000 patents spans a bewildering range of topics, at its heart are technologies that allow individuals to see an exclusive feed of content uniquely curated to their interests, their connections, and increasingly, their prejudices.

Algorithms create intensely personal “filter bubbles,” which are powerfully addictive to users, irresistible to advertisers, and a welcoming environment for rampant misinformation and disinformation such as QAnon, antivaxxer propaganda, and election conspiracy theories.

As Facebook turns 17—it was “born” 4 February 2004—a close reading of the company’s patent history shows how the social network has persistently sought to attract, categorize, and retain users by giving them more and more of what keeps them engaged on the site. In other words, hyperpartisan communities on Facebook that grow disconnected from reality are arguably less a bug than a feature.

Anyone who has used social media in recent years will likely have seen both misinformation (innocently shared false information) and disinformation (lies and propaganda). Last March, in a survey of over 300 social media users by the Center for an Informed Public at the University of Washington (UW), published on Medium, almost 80 percent reported seeing COVID-19 misinformation online, with over a third believing something false themselves. A larger survey covering the United States and Brazil in 2019, by the University of Liverpool and others, found that a quarter of Facebook users had accidentally shared misinformation. Nearly one in seven admitted sharing fake news on purpose.

“Misinformation tends to be more compelling than journalistic content, as it’s easy to make something interesting and fun if you have no commitment to the truth,” says Patricia Rossini, the social-media researcher who conducted the Liverpool study.

In December, a complaint filed by dozens of U.S. states asserted, “Due to Facebook’s unlawful conduct and the lack of competitive constraints…there has been a proliferation of misinformation and violent or otherwise objectionable content on Facebook’s properties.”

When a platform is open, like Twitter, most users can see almost everyone’s tweets. Therefore, tracking the source and spread of misinformation is comparatively straightforward. Facebook, on the other hand, has spent a decade and a half building a mostly closed information ecosystem.

Last year, Forbes estimated that the company’s 15,000 content moderators make some 300,000 bad calls every day. Precious little of that process is ever open to public scrutiny, although Facebook recently referred its decision to suspend Donald Trump’s Facebook and Instagram accounts to its Oversight Board. This independent 11-member “Supreme Court” is designed to review thorny content moderation decisions.

Meanwhile, even some glimpses of sunlight prove fleeting: After the 2020 U.S. presidential election, Facebook temporarily tweaked its algorithms to promote authoritative, fact-based news sources like NPR, a U.S. public-radio network. According to The New York Times, it soon reversed that decision, though, effectively cutting short its ability to curtail what a spokesperson called “inaccurate claims about the election.”

The company began filing patents soon after it was founded in 2004. A 2006 patent described how to automatically track your activity to detect relationships with other users, while another the same year laid out how those relationships could determine which media content and news might appear in your feed.

In 2006, Facebook patented a way to “characterize major differences between two sets of users.” In 2009, Mark ­Zuckerberg himself filed a patent that showed how ­Facebook “and/or external parties” could “target information delivery,” including political news, that might be of particular interest to a group.

This automated curation can drive people down partisan rabbit holes, fears Jennifer Stromer-Galley, a professor in the School of Information Studies at Syracuse University. “When you see perspectives that are different from yours, it requires thinking and creates aggravation,” she says. “As a for-profit company that’s selling attention to advertisers, Facebook doesn’t want that, so there’s a risk of algorithmic reinforcement of homogeneity, and filter bubbles.”

In the run-up to Facebook’s IPO in 2012, the company moved to protect its rapidly growing business from intellectual property lawsuits. A 2011 Facebook patent describes how to filter content according to biographic, geographic, and other information shared by a user. Another patent, bought by Facebook that year from the consumer electronics company Philips, concerns “an intelligent information delivery system” that, based on someone’s personal preferences, collects, prioritizes, and “selectively delivers relevant and timely” information.

In recent years, as the negative consequences of Facebook’s drive to serve users ever more attention-grabbing content emerged, the company’s patent strategy seems to have shifted. Newer patents appear to be trying to rein in the worst excesses of the filter bubbles Facebook pioneered.

The word “misinformation” appeared in a Facebook patent for the first time in 2015, for technology designed to demote “objectionable material that degrades user experience with the news feed and otherwise compromises the integrity of the social network.” A pair of patents in 2017 described providing users with more diverse content from both sides of the political aisle and adding contextual tags to help rein in misleading “false news.”

Such tags using information from independent fact-checking organizations could help, according to a study by Ethan Porter, coauthor of False Alarm: The Truth About Political Mistruths in the Trump Era (Cambridge University Press, 2019). “It’s no longer a controversy that fact-checks reliably improve factual accuracy,” he says. “And contrary to popular misconception, there is no evident exception for controversial or highly politicized topics.”

Franziska Roesner, a computer scientist and part of the UW team, was involved in a similar, qualitative study last year that also gave a glimmer of hope. “People are now much more aware of the spread and impact of misinformation than they were in 2016 and can articulate robust strategies for vetting content,” she says. “The problem is that they don’t always follow them.”

Rossini’s Liverpool study also found that behaviors usually associated with democratic gains, such as discussing politics and being exposed to differing opinions, were associated with dysfunctional information sharing. Put simply, the worst offenders for sharing fake news were also the best at building online communities; they shared a lot of information, both good and bad.

Moreover, Rossini doubts the very existence of filter bubbles. Because many Facebook users have more and more varied digital friends than they do in-person connections, she says “most social media users are systematically exposed to more diversity than they would be in their offline life.”

The problem is that some of that diversity includes hate speech, lies, and propaganda that very few of us would ever seek out voluntarily—but that goes on to radicalize some.

“I personally quit Facebook two and a half years ago when the Cambridge Analytica scandal happened,” says Lalitha Agnihotri, formerly a data scientist for the Dutch company Philips, who in 2001 was part of a team that filed a related patent. In 2011, Facebook then acquired that Philips patent. “I don’t think Facebook treats my data right. Now that I realize that IP generated by me enabled Facebook to do things wrong, I feel terrible about it.”

Agnihotri says that she has been contacted by Facebook recruiters several times over the years but has always turned them down. “My 12-year-old suggested that maybe I need to join them, to make sure they do things right,” she says. “But it will be hard, if not impossible, to change a culture that comes from their founder.”

This article appears in the February 2021 print issue as “The Careful Engineering of Facebook’s Filter Bubble.”

Three Frosty Innovations for Better Quantum Computers

Post Syndicated from Rahul Rao original https://spectrum.ieee.org/tech-talk/computing/hardware/three-super-cold-devices-quantum-computers

For most quantum computers, heat is the enemy. Heat creates error in the qubits that make a quantum computer tick, scuttling the operations the computer is carrying out. So quantum computers need to be kept very cold, just a tad above absolute zero.

“But to operate a computer, you need some interface with the non-quantum world,” says Jan Cranickx, a research scientist at imec. Today, that means a lot of bulky backend electronics that sit at room temperature. To make better quantum computers, scientists and engineers are looking to bring more of those electronics into the dilution refrigerator that houses the qubits themselves.

At December’s IEEE International Electron Devices Meeting (IEDM), researchers from more than a half dozen companies and universities presented new ways to run circuits at cryogenic temperatures. Here are three such efforts:

Google’s cryogenic control circuit could start shrinking quantum computers

At Google, researchers have developed a cryogenic integrated circuit for controlling the qubits and connecting them with other electronics. The Google team first unveiled the work back in 2019, but they’re continuing to scale up the technology, with an eye toward building larger quantum computers.

This cryo-CMOS circuit isn’t much different from its room-temperature counterparts, says Joseph Bardin, a research scientist with Google Quantum AI and a professor at the University of Massachusetts, Amherst. But designing it isn’t so straightforward. Existing simulations and models of components aren’t tailored for cryogenic operation. Much of the researchers’ challenge comes in adapting those models for cold temperatures.

Google’s device operates at 4 kelvins inside the refrigerator, just slightly warmer than the qubits that are about 50 centimeters away. That could drastically shrink what are now room-sized racks of electronics. Bardin claims that their cryo-IC approach “could also eventually bring the cost of the control electronics way down.” Efficiently controlling quantum computers, he says, is crucial as they reach 100 qubits or more.

Cryogenic low-noise amplifiers make reading qubits easier

A key part of a quantum computer is the electronics that read out the qubits. On their own, those qubits emit weak RF signals. Enter the low-noise amplifier (LNA), which can boost those signals and make the qubits far easier to read. It’s not just quantum computers that benefit from cryogenic LNAs; radio telescopes and deep-space communications networks use them, too.

Researchers at Chalmers University of Technology in Gothenburg, Sweden, are among those trying to make cryo-LNAs. Their circuit uses high-electron-mobility transistors (HEMTs), which are especially useful for rapidly switching and amplifying current. The Chalmers researchers use transistors made from indium phosphide (InP), a familiar material for LNAs, though gallium arsenide is more common commercially. Jan Grahn, a professor at Chalmers University of Technology, states that InP HEMTs are ideal for the deep freeze, because the material does an even better job of conducting electrons at low temperatures than at room temperature.

Researchers have tinkered with InP HEMTs in LNAs for some time, but the Chalmers group are pushing their circuits to run at lower temperatures and to use less power than ever. Their devices operate as low as 4 kelvins, a temperature which makes them at home in the upper reaches of a quantum computer’s dilution refrigerator.

imec researchers are pruning those cables

Any image of a quantum computer is dominated by its byzantine cabling. Those cables connect the qubits to their control electronics, reading out the states of the qubits and feeding back inputs. Some of those cables can be weeded out by an RF multiplexer (RF MUX), a circuit that can control the signals to and from multiple qubits. And researchers at imec have developed an RF MUX that can join the qubits in the fridge.

Unlike many experimental cryogenic circuits, which work at 4 kelvins, imec’s RF MUX can operate down to millikelvins. Jan Cranickx says that getting an RF MUX to work at that temperature meant entering a world where the researchers and device physicists had no models to work from. He describes fabricating the device as a process of “trial and error,” of cooling components down to millikelvins and seeing how well they still work. “It’s totally unknown territory,” he says. “Nobody’s ever done that.”

This circuit sits right next to the qubits, deep in the cold heart of the dilution refrigerator. Further up and away, researchers can connect other devices, such as LNAs, and other control circuits. This setup could make it less necessary for each individual qubit to have its own complex readout circuit, and make it much easier to build complex quantum computers with much larger numbers of qubits—perhaps even thousands.

Beyond Bitcoin: China’s Surveillance Cash

Post Syndicated from Mark Pesce original https://spectrum.ieee.org/computing/software/beyond-bitcoin-chinas-surveillance-cash

Of all the technological revolutions we’ll live through in this next decade, none will be more fundamental or pervasive than the transition to digital cash. Money touches nearly everything we do, and although all those swipes and taps and PIN-entry moments may make it seem as though cash is already digital, all of that tech merely eases access to our bank accounts. Cash remains stubbornly physical. But that’s soon going to change.

As with so much connected to digital payments, the Chinese got there first. Prompted by the June 2019 public announcement of Facebook’s Libra (now Diem)—the social media giant’s private form of digital cash—the People’s Bank of China unveiled Digital Currency/Electronic Payments, or DCEP. Having rolled out the new currency to tens of millions of users who now hold digital yuan in electronic wallets, the bank expects that by the time Beijing hosts the 2022 Winter Olympics, DCEP will be in widespread use across what some say is the world’s biggest economy. Indeed, the bank confidently predicts that by the end of the decade digital cash will displace nearly all of China’s banknotes.

Unlike the anarchic and highly sought-after Bitcoin—whose creation marked the genesis of digital cash—DCEP gives up all its secrets. The People’s Bank records every DCEP transaction of any size across the entire Chinese economy, enabling a form of economic surveillance that was impossible to conceive of before the advent of digital cash but is now baked into the design of this new kind of money.

China may have been first, but nearly every other major national economy has its central bankers researching their own forms of digital cash. That makes this an excellent moment to consider the tensions at the intersection of money, technology, and society.

Nearly every country tracks the movement of large sums of money in an effort to thwart terrorism and tax evasion. But most nations have been content to preserve the anonymity of cash when the amount conveyed falls below some threshold. Will that still be the case in 2030, or will our money watch us as we spend it? Just because we can use the technology to create an indelible digital record of our every transaction, should we? And if we do, who gets to see that ledger?

Digital cash also means that high-tech items will rapidly become bearers of value. We’ll have more than just smartphone-based wallets: Our automobiles will pay for their own bridge tolls or the watts they gulp to charge their batteries. It also means that such a car can be paid directly if it returns some of those electrons to the grid at times of peak demand. Within this decade, most of our devices could become fully transactional, not just in exchanging data, but also as autonomous financial entities.

Pervasive digital cash will demand a new ecosystem of software services to manage it. Here we run headlong into a fundamental challenge: You can claw back a mistaken or fraudulent credit card transaction with a charge-back, but a cash transfer is forever, whether that’s done by exchanging bills or bytes. That means someone’s buggy code will cost someone else real money. Ever-greater attention will have to be paid to testing, before the code managing these transactions is deployed. Youthful cries of “move fast and break things” ring hollow in a more mature connected world where software plays such a central role in the economy.

This article appears in the February 2021 print issue as “Surveillance Cash.”

Smart Algorithm Bursts Social Networks’ “Filter Bubbles”

Post Syndicated from Michelle Hampson original https://spectrum.ieee.org/tech-talk/computing/networks/finally-a-means-for-bursting-social-media-bubbles


Social media today depends on building echo chambers (a.k.a. “filter bubbles”) that wall off users into like-minded digital communities. These bubbles create higher levels of engagement, but come with pitfalls—including limiting people’s exposure to diverse views and driving polarization among friends, families, and colleagues. This effect isn’t a coincidence either. Rather, it’s directly related to the profit-maximizing algorithms used by social media and tech giants on platforms like Twitter, Facebook, TikTok and YouTube.

In a refreshing twist, one team of researchers in Finland and Denmark has a different vision for how social media platforms could work. They developed a new algorithm that increases the diversity of exposure on social networks, while still ensuring that content is widely shared.

Antonis Matakos, a PhD student in the Computer Science Department at Aalto University in Espoo, Finland, helped develop the new algorithm. He expresses concern about the social media algorithms being used now.

While current algorithms mean that people more often encounter news stories and information that they enjoy, the effect can decrease a person’s exposure to diverse opinions and perspectives. “Eventually, people tend to forget that points of view, systems of values, ways of life, other than their own exist… Such a situation corrodes the functioning of society, and leads to polarization and conflict,” Matakos says.

“Additionally,” he says, “people might develop a distorted view of reality, which may also pave the way for the rapid spread of fake news and rumors.”

Matakos’ research is focused on reversing these harmful trends. He and his colleagues describe their new algorithm in a study published in November in IEEE Transactions on Knowledge and Data Engineering.

The approach involves assigning numerical values to both social media content and users. The values represent a position on an ideological spectrum, for example far left or far right. These numbers are used to calculate a diversity exposure score for each user. Essentially, the algorithm identifies the users whose sharing of a given piece of content would spread the broadest variety of news and information perspectives.

Then, diverse content is presented to a select group of people with a given diversity score who are most likely to help the content propagate across the social media network—thus maximizing the diversity scores of all users.
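The paper's formal objective and algorithm are more involved than this, but a minimal sketch of the underlying idea, assuming a toy random graph, hypothetical ideological scores in [-1, 1], and a simple greedy seed-selection heuristic (none of which come from the paper itself), might look like the following:

```python
import random
import networkx as nx  # assumed dependency; any adjacency structure would do

def expected_diversity_gain(user, content_score, leaning, exposure):
    # Crude proxy: content far from a user's usual leaning, reaching users
    # with little diverse exposure so far, contributes the most gain.
    return max(0.0, abs(content_score - leaning[user]) - exposure[user])

def pick_seed_users(graph, content_score, leaning, exposure, k=5):
    """Greedily choose k users whose followers stand to gain the most
    diversity if the content is recommended to (and shared by) them."""
    chosen = []
    for _ in range(k):
        best = max(
            (u for u in graph.nodes if u not in chosen),
            key=lambda u: sum(
                expected_diversity_gain(v, content_score, leaning, exposure)
                for v in graph.neighbors(u)
            ),
        )
        chosen.append(best)
    return chosen

# Toy usage: a random network, random leanings in [-1, 1], no prior diverse exposure.
random.seed(0)
g = nx.erdos_renyi_graph(n=100, p=0.05, seed=0)
leaning = {u: random.uniform(-1, 1) for u in g.nodes}
exposure = {u: 0.0 for u in g.nodes}
print(pick_seed_users(g, content_score=0.9, leaning=leaning, exposure=exposure))
```

In the actual study the exposure model, the propagation model, and the optimization all matter; the point of the sketch is only that seeding a handful of well-chosen users, rather than simply the best-connected ones, is what raises diversity network-wide.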

In their study, the researchers compare their new social media algorithm to several other models in a series of simulations. One of these other models was a simpler method that selects the most well-connected users and recommends content that maximizes a person’s individual diversity exposure score.

Matakos says his group’s algorithm provides a feed for social media users that is at least three times more diverse (according to the researchers’ metric) than this simpler method, and even more so for baseline methods used for comparison in the study.  

These results suggest that targeting a strategic group of social media users and feeding them the right content is more effective for propagating diverse views through a social media network than focusing on the most well-connected users. Importantly, the simulations completed in the study also suggest that the new model is scalable.

A major hurdle, of course, is whether social media networks would be open to incorporating the algorithm into their systems. Matakos says that, ideally, his new algorithm could be an opt-in feature that social media networks offer to their users.

“I think [the social networks] would be potentially open to the idea,” says Matakos. “However, in practice we know that the social network algorithms that generate users’ news feeds are orientated towards maximizing profit, so it would need to be a big step away from that direction.”

 

SANS and AWS Marketplace Webinar: Learn to improve your Cloud Threat Intelligence program through cloud-specific data sources

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/webinar/sans-and-aws-marketplace-webinar-learn-to-improve-your-cloud-threat-intelligence-program-through-cloudspecific-data-sources


You’re Invited!

SANS and AWS Marketplace will discuss CTI detection and prevention metrics, finding effective intelligence data feeds and sources, and determining how best to integrate them into security operations functions.

Attendees of this webinar will learn how to:

  • Understand cloud-specific data sources for threat intelligence, such as static indicators and TTPs.
  • Efficiently search for compromised assets based on indicators provided, events generated on workloads and within the cloud infrastructure, or communications with known malicious IP addresses and domains.
  • Place intelligence and automation at the core of security workflows and decision making to create a comprehensive security program.

Presenters:

Dave Shackleford

Dave Shackleford, SANS Analyst, Senior Instructor

Dave Shackleford, a SANS analyst, senior instructor, course author, GIAC technical director and member of the board of directors for the SANS Technology Institute, is the founder and principal consultant with Voodoo Security. He has consulted with hundreds of organizations in the areas of security, regulatory compliance, and network architecture and engineering. A VMware vExpert, Dave has extensive experience designing and configuring secure virtualized infrastructures. He previously worked as chief security officer for Configuresoft and CTO for the Center for Internet Security. Dave currently helps lead the Atlanta chapter of the Cloud Security Alliance.

Nam Le

Nam Le, Specialist Solutions Architect, AWS

Nam Le is a Specialist Solutions Architect at AWS covering AWS Marketplace, Service Catalog, Migration Services, and Control Tower. He helps customers implement security and governance best practices using native AWS Services and Partner products. He is an AWS Certified Solutions Architect, and his skills include security, compliance, cloud computing, enterprise architecture, and software development. Nam has also worked as a consulting services manager, cloud architect, and as a technical marketing manager.

Why Aren’t COVID Tracing Apps More Widely Used?

Post Syndicated from Michelle Hampson original https://spectrum.ieee.org/tech-talk/computing/software/why-arent-covid-tracing-apps-more-widely-used

As the COVID-19 pandemic began to sweep around the globe in early 2020, many governments quickly mobilized to launch contact tracing apps to track the spread of the virus. If enough people downloaded and used the apps, it would be much easier to identify people who had potentially been exposed. In theory, contact tracing apps could play a critical role in stemming the pandemic.

In reality, adoption of contact tracing apps by citizens was largely sporadic and unenthusiastic. A trio of researchers in Australia decided to explore why contact tracing apps weren’t more widely adopted. Their results, published on 23 December in IEEE Software, emphasize the importance of social factors such as trust and transparency.

Muneera Bano is a senior lecturer of software engineering at Deakin University, in Melbourne. Bano and her co-authors study human aspects of technology adoption. “Coming from a socio-technical research background, we were intrigued initially to study the contact tracing apps when the Australian Government launched the CovidSafe app in April 2020,” explains Bano. “There was a clear resistance from many citizens in Australia in downloading the app, citing concerns regarding trust and privacy.”

To better understand the satisfaction—or dissatisfaction—of app users, the researchers analyzed data from the Apple and Google app stores. At first, they looked at average star ratings and numbers of downloads, and they conducted a sentiment analysis of app reviews.
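The authors don't publish their pipeline, so the following is only a rough sketch of how review-level sentiment might be scored, assuming the NLTK VADER analyzer and a hypothetical handful of review strings:

```python
# pip install nltk
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)

reviews = [  # hypothetical review texts pulled from an app store listing
    "Drains my battery and Bluetooth keeps dropping.",
    "Easy to set up, glad to do my part.",
    "Why does it need so many permissions? Not installing.",
]

analyzer = SentimentIntensityAnalyzer()
scores = [analyzer.polarity_scores(text)["compound"] for text in reviews]
print(sum(scores) / len(scores))  # average compound sentiment, roughly -1 to 1
```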

However, just because a person downloads an app doesn’t guarantee that they will use it. What’s more, Bano’s team found that sentiment scores—which are often indicative of an app’s popularity, success, and adoption—were not an effective means of capturing the success of COVID-19 contact tracing apps.

“We started to dig deeper into the reviews to analyze the voices of users for particular requirements of these apps,” says Bano. “More or less all the apps had issues related to the Bluetooth functionality, battery consumption, reliability and usefulness during pandemic.”

For example, apps that relied on Bluetooth for tracing had issues related to range, proximity, signal strength, and connectivity. A significant number of users also expressed frustration over battery drainage. Some efforts have been made to address this issue; for example, Singapore launched an updated version of its TraceTogether app that allows it to operate with Bluetooth while running in the background, with the goal of improving battery life.

But, technical issues were just one reason for lack of adoption. Bano emphasizes that, “The major issues around the apps were social in nature, [related to] trust, transparency, security, and privacy.”

In particular, the researchers found that resistance to downloading and using the apps was high in countries with a voluntary adoption model and low levels of trust in government, such as Australia, the United Kingdom, and Germany.

“We observed slight improvement only in the case of Germany because the government made sincere efforts to increase trust. This was achieved by increasing transparency during ‘Corona-Warn-App’ development by making it open source from the outset and by involving a number of reputable organizations,” says Bano. “However, even as the German officials were referring to their contact tracing app as the ‘best app’ in the world, Germany was struggling to avoid the second wave of COVID-19 at the time we were analyzing the data, in October 2020.”

In some cases, even when governments and app developers took measures to improve trust and address privacy issues, people were hesitant to adopt the apps. For example, a Canadian contact tracing app called COVID Alert is open source, requires no identifiable information from users, and deletes all data after 14 days. Nevertheless, a survey of Canadians found that two thirds would not download any contact tracing app, still finding the technology “too invasive.” (The survey covered tracing apps in general and was not specific to the COVID Alert app.)

Bano plans to continue studying how politics and culture influence the adoption of these apps in different countries around the world. She and her colleagues are interested in exploring how contact tracing apps can be made more inclusive for diverse groups of users in multi-cultural countries.

It’s Too Easy to Hide Bias in Deep-Learning Systems

Post Syndicated from Matthew Hutson original https://spectrum.ieee.org/computing/software/its-too-easy-to-hide-bias-in-deeplearning-systems

If you’re on Facebook, click on “Why am I seeing this ad?” The answer will look something like “[Advertiser] wants to reach people who may be similar to their customers” or “[Advertiser] is trying to reach people ages 18 and older” or “[Advertiser] is trying to reach people whose primary location is the United States.” Oh, you’ll also see “There could also be more factors not listed here.” Such explanations started appearing on Facebook in response to complaints about the platform’s ad-placing artificial intelligence (AI) system. For many people, it was their first encounter with the growing trend of explainable AI, or XAI. 

But something about those explanations didn’t sit right with Oana Goga, a researcher at the Grenoble Informatics Laboratory, in France. So she and her colleagues coded up AdAnalyst, a browser extension that automatically collects Facebook’s ad explanations. Goga’s team also became advertisers themselves. That allowed them to target ads to the volunteers they had running AdAnalyst. The result: “The explanations were often incomplete and sometimes misleading,” says Alan Mislove, one of Goga’s collaborators at Northeastern University, in Boston.

When advertisers create a Facebook ad, they target the people they want to view it by selecting from an expansive list of interests. “You can select people who are interested in football, and they live in Cote d’Azur, and they were at this college, and they also like drinking,” Goga says. But the explanations Facebook provides typically mention only one interest, and the most general one at that. Mislove assumes that’s because Facebook doesn’t want to appear creepy; the company declined to comment for this article, so it’s hard to be sure.

Google and Twitter ads include similar explanations. With this gesture toward transparency, all three platforms are probably hoping to allay users’ suspicions about the mysterious advertising algorithms they use, while keeping any unsettling practices obscured. Or maybe they genuinely want to give users a modicum of control over the ads they see—the explanation pop-ups offer a chance for users to alter their list of interests. In any case, these features are probably the most widely deployed example of algorithms being used to explain other algorithms. In this case, what’s being revealed is why the algorithm chose a particular ad to show you.

The world around us is increasingly choreographed by such algorithms. They decide what advertisements, news, and movie recommendations you see. They also help to make far more weighty decisions, determining who gets loans, jobs, or parole. And in the not-too-distant future, they may decide what medical treatment you’ll receive or how your car will navigate the streets. People want explanations for those decisions. Transparency allows developers to debug their software, end users to trust it, and regulators to make sure it’s safe and fair.

The problem is that these automated systems are becoming so frighteningly complex that it’s often very difficult to figure out why they make certain decisions. So researchers have developed algorithms for understanding these decision-making automatons, forming the new subfield of explainable AI.

In 2017, the Defense Advanced Research Projects Agency launched a US $75 million XAI project. Since then, new laws have sprung up requiring such transparency, most notably Europe’s General Data Protection Regulation, which stipulates that when organizations use personal data for “automated decision-making, including profiling,” they must disclose “meaningful information about the logic involved.” One motivation for such rules is a concern that black-box systems may be hiding evidence of illegal, or perhaps just unsavory, discriminatory practices.

As a result, XAI systems are much in demand. And better policing of decision-making algorithms would certainly be a good thing. But even if explanations are widely required, some researchers worry that systems for automated decision-making may appear to be fair when they really aren’t fair at all.

For example, a system that judges loan applications might tell you that it based its decision on your income and age, when in fact it was your race that mattered most. Such bias might arise because it reflects correlations in the data that was used to train the AI, but it must be excluded from decision-making algorithms lest they act to perpetuate unfair practices of the past.

The challenge is how to root out such unfair forms of discrimination. While it’s easy to exclude information about an applicant’s race or gender or religion, that’s often not enough. Research has shown, for example, that job applicants with names that are common among African Americans receive fewer callbacks, even when they possess the same qualifications as someone else.

A computerized résumé-screening tool might well exhibit the same kind of racial bias, even if applicants were never presented with checkboxes for race. The system may still be racially biased; it just won’t “admit” to how it really works, and will instead provide an explanation that’s more palatable.
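To make that concrete, here is a small synthetic sketch (not any real screening tool): the protected attribute is never shown to the model, yet a correlated proxy feature lets the historical bias leak straight through.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Synthetic world: a hidden group label, a proxy feature correlated with it
# (think of signals extracted from a name or a zip code), and a genuine skill score.
group = rng.integers(0, 2, n)              # protected attribute, never given to the model
proxy = group + rng.normal(0, 0.3, n)      # leaks the group label
skill = rng.normal(0, 1, n)

# Historical (biased) decisions penalized group 1 regardless of skill.
hired = ((skill - 0.8 * group + rng.normal(0, 0.5, n)) > 0).astype(int)

# Train only on the "permissible" features: skill and the innocuous-looking proxy.
model = LogisticRegression().fit(np.column_stack([skill, proxy]), hired)

pred = model.predict(np.column_stack([skill, proxy]))
for g in (0, 1):
    print(f"group {g}: predicted hire rate = {pred[group == g].mean():.2f}")
# The gap persists even though 'group' was excluded from training.
```

Dropping the proxy helps only if you can identify it, which is exactly what makes this kind of bias hard to root out.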

Regardless of whether the algorithm explicitly uses protected characteristics such as race, explanations can be specifically engineered to hide problematic forms of discrimination. Some AI researchers describe this kind of duplicity as a form of “fairwashing”: presenting a possibly unfair algorithm as being fair.

 Whether deceptive systems of this kind are common or rare is unclear. They could be out there already but well hidden, or maybe the incentive for using them just isn’t great enough. No one really knows. What’s apparent, though, is that the application of more and more sophisticated forms of AI is going to make it increasingly hard to identify such threats.

No company would want to be perceived as perpetuating antiquated thinking or deep-rooted societal injustices. So a company might hesitate to share exactly how its decision-making algorithm works to avoid being accused of unjust discrimination. Companies might also hesitate to provide explanations for decisions rendered because that information would make it easier for outsiders to reverse engineer their proprietary systems. Cynthia Rudin, a computer scientist at Duke University, in Durham, N.C., who studies interpretable machine learning, says that the “explanations for credit scores are ridiculously unsatisfactory.” She believes that credit-rating agencies obscure their rationales intentionally. “They’re not going to tell you exactly how they compute that thing. That’s their secret sauce, right?”

And there’s another reason to be cagey. Once people have reverse engineered your decision-making system, they can more easily game it. Indeed, a huge industry called “search engine optimization” has been built around doing just that: altering Web pages superficially so that they rise to the top of search rankings.

Why then are some companies that use decision-making AI so keen to provide explanations? Umang Bhatt, a computer scientist at the University of Cambridge, and his collaborators interviewed 50 scientists, engineers, and executives at 30 organizations to find out. They learned that some executives had asked their data scientists to incorporate explainability tools just so the company could claim to be using transparent AI. The data scientists weren’t told whom this was for, what kind of explanations were needed, or why the company was intent on being open. “Essentially, higher-ups enjoyed the rhetoric of explainability,” Bhatt says, “while data scientists scrambled to figure out how to implement it.”

The explanations such data scientists produce come in all shapes and sizes, but most fall into one of two categories: explanations for how an AI-based system operates in general and explanations for particular decisions. These are called, respectively, global and local explanations. Both can be manipulated.

Ulrich Aïvodji at the Université du Québec, in Montreal, and his colleagues showed how global explanations can be doctored to look better. They used an algorithm they called (appropriately enough for such fairwashing) LaundryML to examine a machine-learning system whose inner workings were too intricate for a person to readily discern. The researchers applied LaundryML to two challenges often used to study XAI. The first task was to predict whether someone’s income is greater than $50,000 (perhaps making the person a good loan candidate), based on 14 personal attributes. The second task was to predict whether a criminal will re-offend within two years of being released from prison, based on 12 attributes.

Unlike the algorithms typically applied to generate explanations, LaundryML includes certain tests of fairness, to make sure the explanation—a simplified version of the original system—doesn’t prioritize such factors as gender or race to predict income and recidivism. Using LaundryML, these researchers were able to come up with simple rule lists that appeared much fairer than the original biased system but gave largely the same results. The worry is that companies could proffer such rule lists as explanations to argue that their decision-making systems are fair.
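LaundryML itself does more than this (as the study describes, it searches for simplified models that also pass fairness checks), but the basic maneuver, training an innocent-looking surrogate that tracks a biased black box while never mentioning the sensitive feature, can be sketched on synthetic data:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(1)
n = 4000
income = rng.normal(50, 15, n)     # in thousands of dollars
age = rng.normal(40, 12, n)
race = rng.integers(0, 2, n)       # sensitive attribute

# A biased "black box": approval depends heavily on race.
approved = ((0.03 * income + 0.01 * age - 0.9 * race
             + rng.normal(0, 0.3, n)) > 1.4).astype(int)
black_box = RandomForestClassifier(random_state=0).fit(
    np.column_stack([income, age, race]), approved)

# The "explanation": a shallow rule list trained to mimic the black box
# using only the palatable features.
bb_preds = black_box.predict(np.column_stack([income, age, race]))
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0).fit(
    np.column_stack([income, age]), bb_preds)
sur_preds = surrogate.predict(np.column_stack([income, age]))

print(f"fidelity to black box: {(sur_preds == bb_preds).mean():.0%}")
for r in (0, 1):
    print(f"race={r}: black box approval rate {bb_preds[race == r].mean():.2f}, "
          f"surrogate approval rate {sur_preds[race == r].mean():.2f}")
print(export_text(surrogate, feature_names=["income", "age"]))
```

The real attack searches over many candidate rule lists to land on one that scores well on both fidelity and fairness metrics; this sketch only shows the shape of the trick.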

Another way to explain the overall operations of a machine-learning system is to present a sampling of its decisions. Last February, Kazuto Fukuchi, a researcher at the Riken Center for Advanced Intelligence Project, in Japan, and two colleagues described a way to select a subset of previous decisions such that the sample would look representative to an auditor who was trying to judge whether the system was unjust. But the craftily selected sample met certain fairness criteria that the overall set of decisions did not.
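In toy form (again, not the authors' construction), the trick amounts to careful subsampling: the full decision log violates demographic parity, but the curated audit sample does not.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 10000
group = rng.integers(0, 2, n)
# Full log of past decisions: group 1 is accepted far less often.
accepted = np.where(group == 0, rng.random(n) < 0.6, rng.random(n) < 0.3)

def curated_audit_sample(group, accepted, size=1000, shown_rate=0.45):
    """Pick a 'representative-looking' sample in which both groups appear
    to be accepted at the same rate, hiding the disparity in the full log."""
    idx = []
    for g in (0, 1):
        pos = np.flatnonzero((group == g) & accepted)
        neg = np.flatnonzero((group == g) & ~accepted)
        half = size // 2
        n_pos = int(half * shown_rate)
        idx.extend(rng.choice(pos, n_pos, replace=False))
        idx.extend(rng.choice(neg, half - n_pos, replace=False))
    return np.array(idx)

sample = curated_audit_sample(group, accepted)
for g in (0, 1):
    true_rate = accepted[group == g].mean()
    shown = accepted[sample][group[sample] == g].mean()
    print(f"group {g}: true acceptance {true_rate:.2f}, acceptance in audit sample {shown:.2f}")
```

The paper treats this far more rigorously; the sketch only illustrates why a hand-picked sample proves little on its own.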

Organizations need to come up with explanations for individual decisions more often than they need to explain how their systems work in general. One technique relies on something XAI researchers call attention, which reflects the relationship between parts of the input to a decision-making system (say, single words in a résumé) and the output (whether the applicant appears qualified). As the name implies, attention values are thought to indicate how much the final judgment depends on certain attributes. But Zachary Lipton of Carnegie Mellon and his colleagues have cast doubt on the whole concept of attention.

These researchers trained various neural networks to read short biographies of physicians and predict which of these people specialized in surgery. The investigators made sure the networks would not allocate attention to words signifying the person’s gender. An explanation that considers only attention would then make it seem that these networks were not discriminating based on gender. But oddly, if words like “Ms.” were removed from the biographies, accuracy suffered, revealing that the networks were, in fact, still using gender to predict the person’s specialty.

“What did the attention tell us in the first place?” Lipton asks. The lack of clarity about what the attention metric actually means opens space for deception, he argues.

Johannes Schneider at the University of Liechtenstein and others recently described a system that examines a decision it made, then finds a plausible justification for an altered (incorrect) decision. Classifying Internet Movie Database (IMDb) film reviews as positive or negative, a faithful model categorized one review as positive, explaining itself by highlighting words like “enjoyable” and “appreciated.” But Schneider’s system could label the same review as negative and point to words that seem scolding when taken out of context.

Another way of explaining an automated decision is to use a technique that researchers call input perturbation. If you want to understand which inputs caused a system to approve or deny a loan, you can create several copies of the loan application with the inputs modified in various ways. Maybe one version ascribes a different gender to the applicant, while another indicates slightly different income. If you submit all of these applications and record the judgments, you can figure out which inputs have influence.
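Here is a bare-bones sketch of that kind of probing; the loan scorer below is a made-up stand-in, and in practice the model being interrogated would be the opaque one.

```python
import math

class ToyLoanModel:
    """Hypothetical stand-in for an opaque loan scorer (illustration only)."""
    def predict_proba(self, app):
        score = 0.00005 * app["income"] - 0.01 * app["age"]
        score -= 0.8 if app["gender"] == "female" else 0.0   # hidden bias
        return 1.0 / (1.0 + math.exp(-score))

def perturbation_influence(model, application, variations):
    """Re-score modified copies of a single application to see which
    inputs move the decision the most."""
    base = model.predict_proba(application)
    return {
        feature: max(
            abs(model.predict_proba({**application, feature: alt}) - base)
            for alt in alternatives
        )
        for feature, alternatives in variations.items()
    }

app = {"income": 48000, "age": 37, "gender": "female"}
probes = {"income": [40000, 60000], "age": [25, 55], "gender": ["male"]}
print(perturbation_influence(ToyLoanModel(), app, probes))
```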

That could provide a reasonable explanation of how some otherwise mysterious decision-making systems work. But a group of researchers at Harvard University led by Himabindu Lakkaraju have developed a decision-making system that detects such probing and adjusts its output accordingly. When it is being tested, the system remains on its best behavior, ignoring off-limits factors like race or gender. At other times, it reverts to its inherently biased approach. Sophie Hilgard, one of the authors on that study, likens the use of such a scheme, which is so far just a theoretical concern, to what Volkswagen actually did to detect when a car was undergoing emissions tests, temporarily adjusting the engine parameters to make the exhaust cleaner than it would normally be.
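In caricature (not the Harvard group's actual construction), such a two-faced system needs only some heuristic for deciding that it is being audited; in the sketch below, assuming the same toy loan-application format as above, the tell is a pair of near-identical queries that differ only in a protected attribute.

```python
import math

class TwoFacedScorer:
    """Toy illustration of a scorer that behaves fairly only when it
    suspects it is being probed."""
    def __init__(self):
        self.recent = []

    def _looks_like_probe(self, app):
        # Audit heuristic: someone recently asked about the "same" applicant
        # with only the gender field changed.
        return any(
            prev["gender"] != app["gender"]
            and all(prev[k] == app[k] for k in app if k != "gender")
            for prev in self.recent[-50:]
        )

    def predict_proba(self, app):
        fair = 1.0 / (1.0 + math.exp(-(0.00005 * app["income"] - 0.01 * app["age"])))
        probed = self._looks_like_probe(app)
        self.recent.append(dict(app))
        if probed:
            return fair                                  # best behavior under audit
        penalty = 0.2 if app["gender"] == "female" else 0.0
        return max(0.0, fair - penalty)                  # biased behavior otherwise

scorer = TwoFacedScorer()
print(scorer.predict_proba({"income": 48000, "age": 37, "gender": "female"}))  # biased score
print(scorer.predict_proba({"income": 48000, "age": 37, "gender": "male"}))    # triggers the probe check
print(scorer.predict_proba({"income": 48000, "age": 37, "gender": "female"}))  # now on best behavior
```

As the study's authors note, this remains a theoretical concern rather than an observed practice, and that is how the sketch should be read.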

Another way of explaining a judgment is to output a simple decision tree: a list of if-then rules. The tree doesn’t summarize the whole algorithm, though; instead it includes only the factors used to make the one decision in question. In 2019, Erwan Le Merrer and Gilles Trédan at the French National Center for Scientific Research described a method that constructs these trees in a deceptive way, so that they could explain a credit rating in seemingly objective terms, while hiding the system’s reliance on the applicant’s gender, age, and immigration status.

Whether any of these deceptions have or ever will be deployed is an open question. Perhaps some degree of deception is already common, as in the case for the algorithms that explain how advertisements are targeted. Schneider of the University of Liechtenstein says that the deceptions in place now might not be so flagrant, “just a little bit misguiding.” What’s more, he points out, current laws requiring explanations aren’t hard to satisfy. “If you need to provide an explanation, no one tells you what it should look like.”

Despite the possibility of trickery in XAI, Duke’s Rudin takes a hard line on what to do about the potential problem: She argues that we shouldn’t depend on any decision-making system that requires an explanation. Instead of explainable AI, she advocates for interpretable AI—algorithms that are inherently transparent. “People really like their black boxes,” she says. “For every data set I’ve ever seen, you could get an interpretable [system] that was as accurate as the black box.” Explanations, meanwhile, she says, can induce more trust than is warranted: “You’re saying, ‘Oh, I can use this black box because I can explain it. Therefore, it’s okay. It’s safe to use.’ ”

What about the notion that transparency makes these systems easier to game? Rudin doesn’t buy it. If you can game them, they’re just poor systems, she asserts. With product ratings, for example, you want transparency. When ratings algorithms are left opaque, because of their complexity or a need for secrecy, everyone suffers: “Manufacturers try to design a good car, but they don’t know what good quality means,” she says. And the ability to keep intellectual property private isn’t required for AI to advance, at least for high-stakes applications, she adds. A few companies might lose interest if forced to be transparent with their algorithms, but there’d be no shortage of others to fill the void.

Lipton, of Carnegie Mellon, disagrees with Rudin. He says that deep neural networks—the blackest of black boxes—are still required for optimal performance on many tasks, especially those used for image and voice recognition. So the need for XAI is here to stay. But he says that the possibility of deceptive XAI points to a larger problem: Explanations can be misleading even when they are not manipulated.

Ultimately, human beings have to evaluate the tools they use. If an algorithm highlights factors that we would ourselves consider during decision-making, we might judge its criteria to be acceptable, even if we didn’t gain additional insight and even if the explanation doesn’t tell the whole story. There’s no single theoretical or practical way to measure the quality of an explanation. “That sort of conceptual murkiness provides a real opportunity to mislead,” Lipton says, even if we humans are just misleading ourselves.

In some cases, any attempt at interpretation may be futile. The hope that we’ll understand what some complex AI system is doing reflects anthropomorphism, Lipton argues, whereas these systems should really be considered alien intelligences—or at least abstruse mathematical functions—whose inner workings are inherently beyond our grasp. Ask how a system thinks, and “there are only wrong answers,” he says.

And yet explanations are valuable for debugging and enforcing fairness, even if they’re incomplete or misleading. To borrow an aphorism sometimes used to describe statistical models: All explanations are wrong—including simple ones explaining how AI black boxes work—but some are useful.

This article appears in the February 2021 print issue as “Lyin’ AIs.”

New and Hardened Quantum Crypto System Notches “Milestone” Open-Air Test

Post Syndicated from Jeremy Hsu original https://spectrum.ieee.org/tech-talk/computing/hardware/quantum-crypto-mdi-qkd-satellites-security

Quantum cryptography is often billed as impervious, yet researchers keep uncovering new hacks and potential vulnerabilities, which may ultimately call its alleged unhackability into question.

Yet now comes word of a successful Chinese open-air test of a new generation quantum crypto system that allows for untrustworthy receiving stations. 

The previous iteration of quantum cryptography did not have this feature. Then again, the previous iteration did have successful ground and satellite experiments, establishing quantum links to transmit secret messages over hundreds of kilometers.

The present new and more secure quantum crypto system is, perhaps not surprisingly, much more challenging to implement.  

It creates its secret cryptographic keys based on the quantum interference of single photons—light particles—that are made to be indistinguishable despite being generated by independent lasers. As long as sender and receiver use trusted laser sources, they can securely communicate with each other regardless of whether they trust the detectors performing the measurements of the photons’ quantum interference.

Because the cryptographic key can be securely transmitted even in the case of a potentially hacked receiver, the new method is called measurement-device independent quantum key distribution (MDI-QKD).
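The optics are the hard part; the classical bookkeeping that makes an untrusted middle node possible is comparatively simple. Below is a highly idealized, noiseless toy of MDI-QKD key sifting (no decoy states, no error correction, no privacy amplification), meant only to show how the two senders turn the relay's announced Bell-state outcomes into a shared key without trusting its detectors. The outcome rules follow the textbook behavior of an ideal linear-optics Bell-state measurement on BB84 states.

```python
import random

def ideal_bsm(bit_a, bit_b, basis):
    """Idealized Bell-state measurement on two BB84 photons prepared in the
    same basis. Returns 'psi+' or 'psi-' on success, None otherwise
    (a linear-optics BSM can only identify those two Bell states)."""
    if basis == "Z":
        if bit_a == bit_b:
            return None                       # |00>/|11> have no psi± component
        return random.choice(["psi+", "psi-"])
    # X basis: equal bits overlap with psi+, unequal bits with psi-,
    # each projected successfully only half of the time.
    if random.random() < 0.5:
        return None
    return "psi+" if bit_a == bit_b else "psi-"

def sift_key(n_rounds=20000):
    key_a, key_b = [], []
    for _ in range(n_rounds):
        basis_a, basis_b = random.choice("ZX"), random.choice("ZX")
        bit_a, bit_b = random.randint(0, 1), random.randint(0, 1)
        if basis_a != basis_b:
            continue                          # discarded during basis sifting
        outcome = ideal_bsm(bit_a, bit_b, basis_a)
        if outcome is None:
            continue                          # relay announced no usable result
        # Standard MDI-QKD correction: the second party flips their bit
        # except when the relay announces psi+ for an X-basis round.
        if basis_a == "Z" or outcome == "psi-":
            bit_b ^= 1
        key_a.append(bit_a)
        key_b.append(bit_b)
    return key_a, key_b

ka, kb = sift_key()
print(len(ka), "sifted bits; keys agree:", ka == kb)
```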

“It is not far-fetched to say that MDI-QKD could be the de-facto [quantum cryptography] protocol in future quantum networks, be it in terrestrial networks or across satellite communications,” says Charles Ci Wen Lim, an assistant professor in electrical and computer engineering and principal investigator at the Centre for Quantum Technologies at the National University of Singapore; he was not involved in the recent experiment.

In their unprecedented demonstration, Chinese researchers figured out how to overcome many of the experimental challenges of implementing MDI-QKD in the open atmosphere. Their paper detailing the experiment was published last month in the journal Physical Review Letters.

Such an experimental system bodes well for future demonstrations involving quantum links between ground stations and experimental quantum communication satellites such as China’s Micius, says Qiang Zhang, a professor of physics at the University of Science and Technology of China and an author of the recent paper.

The experiment demonstrated quantum interference between photons even in the face of atmospheric turbulence. Such turbulence is typically stronger across horizontal rather than vertical distances. The fact that the present experiment traverses horizontal distances bodes well for future ground-to-satellite systems. And the 19.2-kilometer distance involved in the demonstration already exceeds that of the thickest part of the Earth’s atmosphere.

To cross so much open air, the Chinese researchers developed an adaptive optics system similar to the technology that helps prevent atmospheric disturbances from interfering with astronomers’ telescope observations.

Even MDI-QKD is not 100 percent secure—it remains vulnerable to hacking based on attackers compromising the quantum key-generating lasers. Still, the MDI-QKD security scheme offers, Zhang claims, “near perfect information theoretical security.” It’s entirely secure, in other words, in theory. 

The remaining security vulnerabilities on the photon source side can be “pretty well taken care of by solid countermeasures,” Lim says. He and his colleagues at the National University of Singapore described one possible countermeasure in the form of a “quantum optical fuse” that can limit the input power of untrusted photon sources. Their paper was recently accepted for presentation during the QCRYPT 2020 conference.

All in, Lim says, the Chinese team’s “experiment demonstrated that adaptive optics will be essential in ensuring that MDI-QKD works properly over urban free-space channels, and it represents an important step towards deploying MDI-QKD over satellite channels.” From his outside perspective, he described the Chinese team’s work as a “milestone experiment for MDI-QKD.”

Get a method for common-mode and differential-mode separation using two oscilloscope channels.

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/whitepaper/get-a-method-to-separate-commonmode-and-differentialmode-separation-using-two-oscilloscope-channels

Learn how to distinguish between common-mode (CM) and differential-mode (DM) noise. Knowing which mode dominates lets you optimize input filters far more efficiently.
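The arithmetic underneath is straightforward: when both line voltages are captured single-ended on two channels, their average is the common-mode component and their difference is the differential-mode component (conventions vary on whether the difference is halved). A minimal sketch, with hypothetical NumPy arrays standing in for the captured waveforms:

```python
import numpy as np

def cm_dm_split(ch1, ch2):
    """Split two simultaneously captured line voltages into common-mode
    and differential-mode components."""
    common_mode = (ch1 + ch2) / 2.0
    differential_mode = ch1 - ch2
    return common_mode, differential_mode

# Toy waveforms: a shared 150 kHz common-mode disturbance plus an
# antisymmetric 10 kHz differential signal.
t = np.linspace(0, 1e-3, 10000)
cm_noise = 0.2 * np.sin(2 * np.pi * 150e3 * t)
dm_signal = 0.5 * np.sin(2 * np.pi * 10e3 * t)
ch1 = cm_noise + dm_signal / 2
ch2 = cm_noise - dm_signal / 2
cm, dm = cm_dm_split(ch1, ch2)   # recovers cm_noise and dm_signal
```

A real two-channel measurement also has to contend with probe matching and channel skew, which this sketch ignores.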