Tag Archives: computing

Three Frosty Innovations for Better Quantum Computers

Post Syndicated from Rahul Rao original https://spectrum.ieee.org/tech-talk/computing/hardware/three-super-cold-devices-quantum-computers

For most quantum computers, heat is the enemy. Heat creates error in the qubits that make a quantum computer tick, scuttling the operations the computer is carrying out. So quantum computers need to be kept very cold, just a tad above absolute zero.

“But to operate a computer, you need some interface with the non-quantum world,” says Jan Cranickx, a research scientist at imec. Today, that means a lot of bulky backend electronics that sit at room temperature. To make better quantum computers, scientists and engineers are looking to bring more of those electronics into the dilution refrigerator that houses the qubits themselves.

At December’s IEEE International Electron Devices Meeting (IEDM), researchers from more than a half dozen companies and universities presented new ways to run circuits at cryogenic temperatures. Here are three such efforts:

Google’s cryogenic control circuit could start shrinking quantum computers

At Google, researchers have developed a cryogenic integrated circuit for controlling the qubits, connecting them with other electronics. The Google team first unveiled the work back in 2019, but they’re continuing to scale up the technology, with an eye toward building larger quantum computers.

This cryo-CMOS circuit isn’t much different from its room-temperature counterparts, says Joseph Bardin, a research scientist with Google Quantum AI and a professor at the University of Massachusetts, Amherst. But designing it isn’t so straightforward. Existing simulations and models of components aren’t tailored for cryogenic operation. Much of the researchers’ challenge comes in adapting those models for cold temperatures.

Google’s device operates at 4 kelvins inside the refrigerator, just slightly warmer than the qubits that are about 50 centimeters away. That could drastically shrink what are now room-sized racks of electronics. Bardin claims that their cryo-IC approach “could also eventually bring the cost of the control electronics way down.” Efficiently controlling quantum computers, he says, is crucial as they reach 100 qubits or more.

Cryogenic low-noise amplifiers make reading qubits easier

A key part of any quantum computer is the electronics that read out the qubits. On their own, those qubits emit weak RF signals. Enter the low-noise amplifier (LNA), which can boost those signals and make the qubits far easier to read. It’s not just quantum computers that benefit from cryogenic LNAs; radio telescopes and deep-space communications networks use them, too.

Researchers at Chalmers University of Technology in Gothenburg, Sweden, are among those trying to make cryo-LNAs. Their circuit uses high-electron-mobility transistors (HEMTs), which are especially useful for rapidly switching and amplifying current. The Chalmers researchers use transistors made from indium phosphide (InP), a familiar material for LNAs, though gallium arsenide is more common commercially. Jan Grahn, a professor at Chalmers University of Technology, states that InP HEMTs are ideal for the deep freeze, because the material does an even better job of conducting electrons at low temperatures than at room temperature.

Researchers have tinkered with InP HEMTs in LNAs for some time, but the Chalmers group is pushing its circuits to run at lower temperatures and use less power than ever. The devices operate at temperatures as low as 4 kelvins, which makes them at home in the upper reaches of a quantum computer’s dilution refrigerator.

imec researchers are pruning those cables

Any image of a quantum computer is dominated by its byzantine cabling. Those cables connect the qubits to their control electronics, reading out the states of the qubits and feeding back inputs. Some of those cables can be weeded out by an RF multiplexer (RF MUX), a circuit that can control the signals to and from multiple qubits. And researchers at imec have developed an RF MUX that can join the qubits in the fridge.

Unlike many experimental cryogenic circuits, which work at 4 kelvins, imec’s RF MUX can operate down to millikelvins. Jan Cranickx says that getting an RF MUX to work at that temperature meant entering a world where the researchers and device physicists had no models to work from. He describes fabricating the device as a process of “trial and error,” of cooling components down to millikelvins and seeing how well they still work. “It’s totally unknown territory,” he says. “Nobody’s ever done that.”

This circuit sits right next to the qubits, deep in the cold heart of the dilution refrigerator. Further up and away, researchers can connect other devices, such as LNAs, and other control circuits. This setup could make it less necessary for each individual qubit to have its own complex readout circuit, and make it much easier to build complex quantum computers with much larger numbers of qubits—perhaps even thousands.

Beyond Bitcoin: China’s Surveillance Cash

Post Syndicated from Mark Pesce original https://spectrum.ieee.org/computing/software/beyond-bitcoin-chinas-surveillance-cash

Of all the technological revolutions we’ll live through in this next decade, none will be more fundamental or pervasive than the transition to digital cash. Money touches nearly everything we do, and although all those swipes and taps and PIN-entry moments may make it seem as though cash is already digital, all of that tech merely eases access to our bank accounts. Cash remains stubbornly physical. But that’s soon going to change.

As with so much connected to digital payments, the Chinese got there first. Prompted by the June 2019 public announcement of Facebook’s Libra (now Diem)—the social media giant’s private form of digital cash—the People’s Bank of China unveiled Digital Currency/Electronic Payments, or DCEP. Having rolled out the new currency to tens of millions of users who now hold digital yuan in electronic wallets, the bank expects that by the time Beijing hosts the 2022 Winter Olympics, DCEP will be in widespread use across what some say is the world’s biggest economy. Indeed, the bank confidently predicts that by the end of the decade digital cash will displace nearly all of China’s banknotes.

Unlike the anarchic and highly sought-after Bitcoin—whose creation marked the genesis of digital cash—DCEP gives up all its secrets. The People’s Bank records every DCEP transaction of any size across the entire Chinese economy, enabling a form of economic surveillance that was impossible to conceive of before the advent of digital cash but is now baked into the design of this new kind of money.

China may have been first, but nearly every other major national economy has its central bankers researching their own forms of digital cash. That makes this an excellent moment to consider the tensions at the intersection of money, technology, and society.

Nearly every country tracks the movement of large sums of money in an effort to thwart terrorism and tax evasion. But most nations have been content to preserve the anonymity of cash when the amount conveyed falls below some threshold. Will that still be the case in 2030, or will our money watch us as we spend it? Just because we can use the technology to create an indelible digital record of our every transaction, should we? And if we do, who gets to see that ledger?

Digital cash also means that high-tech items will rapidly become bearers of value. We’ll have more than just smartphone-based wallets: Our automobiles will pay for their own bridge tolls or the watts they gulp to charge their batteries. It also means that such a car can be paid directly if it returns some of those electrons to the grid at times of peak demand. Within this decade, most of our devices could become fully transactional, not just in exchanging data, but also as autonomous financial entities.

Pervasive digital cash will demand a new ecosystem of software services to manage it. Here we run headlong into a fundamental challenge: You can claw back a mistaken or fraudulent credit card transaction with a charge-back, but a cash transfer is forever, whether that’s done by exchanging bills or bytes. That means someone’s buggy code will cost someone else real money. Ever-greater attention will have to be paid to testing, before the code managing these transactions is deployed. Youthful cries of “move fast and break things” ring hollow in a more mature connected world where software plays such a central role in the economy.

This article appears in the February 2021 print issue as “Surveillance Cash.”

Smart Algorithm Bursts Social Networks’ “Filter Bubbles”

Post Syndicated from Michelle Hampson original https://spectrum.ieee.org/tech-talk/computing/networks/finally-a-means-for-bursting-social-media-bubbles


Social media today depends on building echo chambers (a.k.a. “filter bubbles”) that wall off users into like-minded digital communities. These bubbles create higher levels of engagement, but they come with pitfalls—including limiting people’s exposure to diverse views and driving polarization among friends, families, and colleagues. This effect isn’t a coincidence, either. Rather, it’s directly related to the profit-maximizing algorithms used by social media and tech giants on platforms like Twitter, Facebook, TikTok, and YouTube.

In a refreshing twist, one team of researchers in Finland and Denmark has a different vision for how social media platforms could work. They developed a new algorithm that increases the diversity of exposure on social networks, while still ensuring that content is widely shared.

Antonis Matakos, a PhD student in the Computer Science Department at Aalto University in Espoo, Finland, helped develop the new algorithm. He expresses concern about the social media algorithms being used now.

While current algorithms mean that people more often encounter news stories and information that they enjoy, the effect can decrease a person’s exposure to diverse opinions and perspectives. “Eventually, people tend to forget that points of view, systems of values, ways of life, other than their own exist… Such a situation corrodes the functioning of society, and leads to polarization and conflict,” Matakos says.

“Additionally,” he says, “people might develop a distorted view of reality, which may also pave the way for the rapid spread of fake news and rumors.”

Matakos’ research is focused on reversing these harmful trends. He and his colleagues describe their new algorithm in a study published in November in IEEE Transactions on Knowledge and Data Engineering.

The approach involves assigning numerical values to both social media content and users. The values represent a position on an ideological spectrum—for example, far left or far right. These numbers are used to calculate a diversity exposure score for each user. Essentially, the algorithm identifies the social media users whose shares would lead to the maximum spread of a broad variety of news and information perspectives.

Then, diverse content is presented to a select group of people with a given diversity score who are most likely to help the content propagate across the social media network—thus maximizing the diversity scores of all users.
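To make the mechanics more concrete, here is a minimal sketch of that kind of seed-selection step. It is an illustration only, not the authors’ algorithm: it assumes a single ideological axis, a fixed follower list per user, and a “diversity exposure” score equal to the spread of content scores a user has seen.

    # Illustrative sketch only, not the algorithm from the IEEE TKDE paper.
    # Assumptions (all hypothetical): one ideological axis in [-1, 1], shares
    # reach a user's direct followers, diversity = spread of scores seen.
    import random

    random.seed(0)

    NUM_USERS = 200
    users = {u: random.uniform(-1, 1) for u in range(NUM_USERS)}            # ideology score per user
    followers = {u: random.sample(range(NUM_USERS), k=10) for u in users}   # who sees u's shares
    seen = {u: [users[u]] for u in users}                                   # scores each user has been exposed to

    def diversity(scores):
        return max(scores) - min(scores)        # spread of viewpoints a user has seen

    def total_diversity(exposure):
        return sum(diversity(s) for s in exposure.values())

    def gain(seed, item_score):
        """Increase in network-wide diversity if `seed` shares an item with `item_score`."""
        g = 0.0
        for f in [seed] + followers[seed]:
            g += diversity(seen[f] + [item_score]) - diversity(seen[f])
        return g

    def pick_seeds(item_score, k):
        """Greedily pick k users whose shares most increase total diversity exposure."""
        chosen = []
        for _ in range(k):
            best = max((u for u in users if u not in chosen), key=lambda u: gain(u, item_score))
            chosen.append(best)
            for f in [best] + followers[best]:
                seen[f].append(item_score)      # simulate the share reaching followers
        return chosen

    before = total_diversity(seen)
    seeds = pick_seeds(item_score=0.9, k=5)     # an item from the far end of the spectrum
    print("selected seed users:", seeds)
    print("total diversity gain:", round(total_diversity(seen) - before, 2))

In this toy network, the greedy loop favors seed users whose followers currently see the narrowest range of viewpoints, which is the intuition behind targeting a strategic group rather than simply the best-connected accounts.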

In their study, the researchers compare their new social media algorithm to several other models in a series of simulations. One of these other models was a simpler method that selects the most well-connected users and recommends content that maximizes a person’s individual diversity exposure score.

Matakos says his group’s algorithm provides a feed for social media users that is at least three times more diverse (according to the researchers’ metric) than this simpler method, and even more so for baseline methods used for comparison in the study.  

These results suggest that targeting a strategic group of social media users and feeding them the right content is more effective for propagating diverse views through a social media network than focusing on the most well-connected users. Importantly, the simulations completed in the study also suggest that the new model is scalable.

A major hurdle, of course, is whether social media networks would be open to incorporating the algorithm into their systems. Matakos says that, ideally, his new algorithm could be an opt-in feature that social media networks offer to their users.

“I think [the social networks] would be potentially open to the idea,” says Matakos. “However, in practice we know that the social network algorithms that generate users’ news feeds are orientated towards maximizing profit, so it would need to be a big step away from that direction.”

 

SANS and AWS Marketplace Webinar: Learn to improve your Cloud Threat Intelligence program through cloud-specific data sources

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/webinar/sans-and-aws-marketplace-webinar-learn-to-improve-your-cloud-threat-intelligence-program-through-cloudspecific-data-sources


You’re Invited!

SANS and AWS Marketplace will discuss CTI detection and prevention metrics, finding effective intelligence data feeds and sources, and determining how best to integrate them into security operations functions.

Attendees of this webinar will learn how to:

  • Understand cloud-specific data sources for threat intelligence, such as static indicators and TTPs.
  • Efficiently search for compromised assets based on indicators provided, events generated on workloads and within the cloud infrastructure, or communications with known malicious IP addresses and domains.
  • Place intelligence and automation at the core of security workflows and decision making to create a comprehensive security program.

Presenters:

Dave Shackleford

Dave Shackleford, SANS Analyst, Senior Instructor

Dave Shackleford, a SANS analyst, senior instructor, course author, GIAC technical director and member of the board of directors for the SANS Technology Institute, is the founder and principal consultant with Voodoo Security. He has consulted with hundreds of organizations in the areas of security, regulatory compliance, and network architecture and engineering. A VMware vExpert, Dave has extensive experience designing and configuring secure virtualized infrastructures. He previously worked as chief security officer for Configuresoft and CTO for the Center for Internet Security. Dave currently helps lead the Atlanta chapter of the Cloud Security Alliance.

Nam Le

Nam Le, Specialist Solutions Architect, AWS

Nam Le is a Specialist Solutions Architect at AWS covering AWS Marketplace, Service Catalog, Migration Services, and Control Tower. He helps customers implement security and governance best practices using native AWS Services and Partner products. He is an AWS Certified Solutions Architect, and his skills include security, compliance, cloud computing, enterprise architecture, and software development. Nam has also worked as a consulting services manager, cloud architect, and as a technical marketing manager.

It’s Too Easy to Hide Bias in Deep-Learning Systems

Post Syndicated from Matthew Hutson original https://spectrum.ieee.org/computing/software/its-too-easy-to-hide-bias-in-deeplearning-systems

If you’re on Facebook, click on “Why am I seeing this ad?” The answer will look something like “[Advertiser] wants to reach people who may be similar to their customers” or “[Advertiser] is trying to reach people ages 18 and older” or “[Advertiser] is trying to reach people whose primary location is the United States.” Oh, you’ll also see “There could also be more factors not listed here.” Such explanations started appearing on Facebook in response to complaints about the platform’s ad-placing artificial intelligence (AI) system. For many people, it was their first encounter with the growing trend of explainable AI, or XAI. 

But something about those explanations didn’t sit right with Oana Goga, a researcher at the Grenoble Informatics Laboratory, in France. So she and her colleagues coded up AdAnalyst, a browser extension that automatically collects Facebook’s ad explanations. Goga’s team also became advertisers themselves. That allowed them to target ads to the volunteers they had running AdAnalyst. The result: “The explanations were often incomplete and sometimes misleading,” says Alan Mislove, one of Goga’s collaborators at Northeastern University, in Boston.

When advertisers create a Facebook ad, they target the people they want to view it by selecting from an expansive list of interests. “You can select people who are interested in football, and they live in Cote d’Azur, and they were at this college, and they also like drinking,” Goga says. But the explanations Facebook provides typically mention only one interest, and the most general one at that. Mislove assumes that’s because Facebook doesn’t want to appear creepy; the company declined to comment for this article, so it’s hard to be sure.

Google and Twitter ads include similar explanations. With this gesture toward transparency, all three platforms are probably hoping to allay users’ suspicions about the mysterious advertising algorithms they use, while keeping any unsettling practices obscured. Or maybe they genuinely want to give users a modicum of control over the ads they see—the explanation pop-ups offer a chance for users to alter their list of interests. In any case, these features are probably the most widely deployed example of algorithms being used to explain other algorithms. In this case, what’s being revealed is why the algorithm chose a particular ad to show you.

The world around us is increasingly choreographed by such algorithms. They decide what advertisements, news, and movie recommendations you see. They also help to make far more weighty decisions, determining who gets loans, jobs, or parole. And in the not-too-distant future, they may decide what medical treatment you’ll receive or how your car will navigate the streets. People want explanations for those decisions. Transparency allows developers to debug their software, end users to trust it, and regulators to make sure it’s safe and fair.

The problem is that these automated systems are becoming so frighteningly complex that it’s often very difficult to figure out why they make certain decisions. So researchers have developed algorithms for understanding these decision-making automatons, forming the new subfield of explainable AI.

In 2017, the Defense Advanced Research Projects Agency launched a US $75 million XAI project. Since then, new laws have sprung up requiring such transparency, most notably Europe’s General Data Protection Regulation, which stipulates that when organizations use personal data for “automated decision-making, including profiling,” they must disclose “meaningful information about the logic involved.” One motivation for such rules is a concern that black-box systems may be hiding evidence of illegal, or perhaps just unsavory, discriminatory practices.

As a result, XAI systems are much in demand. And better policing of decision-making algorithms would certainly be a good thing. But even if explanations are widely required, some researchers worry that systems for automated decision-making may appear to be fair when they really aren’t fair at all.

For example, a system that judges loan applications might tell you that it based its decision on your income and age, when in fact it was your race that mattered most. Such bias might arise because it reflects correlations in the data that was used to train the AI, but it must be excluded from decision-making algorithms lest they act to perpetuate unfair practices of the past.

The challenge is how to root out such unfair forms of discrimination. While it’s easy to exclude information about an applicant’s race or gender or religion, that’s often not enough. Research has shown, for example, that job applicants with names that are common among African Americans receive fewer callbacks, even when they possess the same qualifications as someone else.

A computerized résumé-screening tool might well exhibit the same kind of racial bias, even if applicants were never presented with checkboxes for race. The system may still be racially biased; it just won’t “admit” to how it really works, and will instead provide an explanation that’s more palatable.

Regardless of whether the algorithm explicitly uses protected characteristics such as race, explanations can be specifically engineered to hide problematic forms of discrimination. Some AI researchers describe this kind of duplicity as a form of “fairwashing”: presenting a possibly unfair algorithm as being fair.

 Whether deceptive systems of this kind are common or rare is unclear. They could be out there already but well hidden, or maybe the incentive for using them just isn’t great enough. No one really knows. What’s apparent, though, is that the application of more and more sophisticated forms of AI is going to make it increasingly hard to identify such threats.

No company would want to be perceived as perpetuating antiquated thinking or deep-rooted societal injustices. So a company might hesitate to share exactly how its decision-making algorithm works to avoid being accused of unjust discrimination. Companies might also hesitate to provide explanations for decisions rendered because that information would make it easier for outsiders to reverse engineer their proprietary systems. Cynthia Rudin, a computer scientist at Duke University, in Durham, N.C., who studies interpretable machine learning, says that the “explanations for credit scores are ridiculously unsatisfactory.” She believes that credit-rating agencies obscure their rationales intentionally. “They’re not going to tell you exactly how they compute that thing. That’s their secret sauce, right?”

And there’s another reason to be cagey. Once people have reverse engineered your decision-making system, they can more easily game it. Indeed, a huge industry called “search engine optimization” has been built around doing just that: altering Web pages superficially so that they rise to the top of search rankings.

Why then are some companies that use decision-making AI so keen to provide explanations? Umang Bhatt, a computer scientist at the University of Cambridge, and his collaborators interviewed 50 scientists, engineers, and executives at 30 organizations to find out. They learned that some executives had asked their data scientists to incorporate explainability tools just so the company could claim to be using transparent AI. The data scientists weren’t told whom this was for, what kind of explanations were needed, or why the company was intent on being open. “Essentially, higher-ups enjoyed the rhetoric of explainability,” Bhatt says, “while data scientists scrambled to figure out how to implement it.”

The explanations such data scientists produce come in all shapes and sizes, but most fall into one of two categories: explanations for how an AI-based system operates in general and explanations for particular decisions. These are called, respectively, global and local explanations. Both can be manipulated.

Ulrich Aïvodji at the Université du Québec, in Montreal, and his colleagues showed how global explanations can be doctored to look better. They used an algorithm they called (appropriately enough for such fairwashing) LaundryML to examine a machine-learning system whose inner workings were too intricate for a person to readily discern. The researchers applied LaundryML to two challenges often used to study XAI. The first task was to predict whether someone’s income is greater than $50,000 (perhaps making the person a good loan candidate), based on 14 personal attributes. The second task was to predict whether a criminal will re-offend within two years of being released from prison, based on 12 attributes.

Unlike the algorithms typically applied to generate explanations, LaundryML includes certain tests of fairness, to make sure the explanation—a simplified version of the original system—doesn’t prioritize such factors as gender or race to predict income and recidivism. Using LaundryML, these researchers were able to come up with simple rule lists that appeared much fairer than the original biased system but gave largely the same results. The worry is that companies could proffer such rule lists as explanations to argue that their decision-making systems are fair.
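To see how such a whitewashed explanation could be constructed in principle, consider the following sketch. It is a hypothetical illustration of the general fairwashing idea, not the LaundryML algorithm: a black box that secretly keys on a sensitive attribute is approximated by a shallow rule-like surrogate trained only on innocuous (but correlated) features, and that surrogate can then be offered as a tidy, seemingly neutral explanation.

    # Hypothetical sketch of the "fairwashing" idea, not the LaundryML tool itself.
    # A black-box model secretly keys on a sensitive attribute; a shallow decision
    # tree fit only on innocuous (but correlated) features reproduces most of its
    # decisions and never mentions the sensitive attribute at all.
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier, export_text

    rng = np.random.default_rng(0)
    n = 5000
    group = rng.integers(0, 2, n)                     # sensitive attribute
    income = rng.normal(40 + 20 * group, 8, n)        # innocuous proxy, correlated with group
    age = rng.integers(20, 70, n)

    # The hidden model: its decision is essentially the sensitive attribute.
    black_box = (2.0 * group + 0.02 * income > 2.5).astype(int)

    # The fairwashed surrogate never sees `group`.
    X_public = np.column_stack([income, age])
    surrogate = DecisionTreeClassifier(max_depth=2).fit(X_public, black_box)

    fidelity = (surrogate.predict(X_public) == black_box).mean()
    print(f"surrogate matches the black box on {fidelity:.0%} of decisions")
    print(export_text(surrogate, feature_names=["income", "age"]))

Because the surrogate never mentions the sensitive attribute, it looks fair on paper even while it faithfully reproduces the biased decisions.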

Another way to explain the overall operations of a machine-learning system is to present a sampling of its decisions. Last February, Kazuto Fukuchi, a researcher at the Riken Center for Advanced Intelligence Project, in Japan, and two colleagues described a way to select a subset of previous decisions such that the sample would look representative to an auditor who was trying to judge whether the system was unjust. But the craftily selected sample met certain fairness criteria that the overall set of decisions did not.

Organizations need to come up with explanations for individual decisions more often than they need to explain how their systems work in general. One technique relies on something XAI researchers call attention, which reflects the relationship between parts of the input to a decision-making system (say, single words in a résumé) and the output (whether the applicant appears qualified). As the name implies, attention values are thought to indicate how much the final judgment depends on certain attributes. But Zachary Lipton of Carnegie Mellon and his colleagues have cast doubt on the whole concept of attention.

These researchers trained various neural networks to read short biographies of physicians and predict which of these people specialized in surgery. The investigators made sure the networks would not allocate attention to words signifying the person’s gender. An explanation that considers only attention would then make it seem that these networks were not discriminating based on gender. But oddly, if words like “Ms.” were removed from the biographies, accuracy suffered, revealing that the networks were, in fact, still using gender to predict the person’s specialty.

“What did the attention tell us in the first place?” Lipton asks. The lack of clarity about what the attention metric actually means opens space for deception, he argues.

Johannes Schneider at the University of Liechtenstein and others recently described a system that examines a decision it made, then finds a plausible justification for an altered (incorrect) decision. Classifying Internet Movie Database (IMDb) film reviews as positive or negative, a faithful model categorized one review as positive, explaining itself by highlighting words like “enjoyable” and “appreciated.” But Schneider’s system could label the same review as negative and point to words that seem scolding when taken out of context.

Another way of explaining an automated decision is to use a technique that researchers call input perturbation. If you want to understand which inputs caused a system to approve or deny a loan, you can create several copies of the loan application with the inputs modified in various ways. Maybe one version ascribes a different gender to the applicant, while another indicates slightly different income. If you submit all of these applications and record the judgments, you can figure out which inputs have influence.
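Here is a minimal sketch of that kind of perturbation probing, with a made-up scoring function standing in for the opaque loan model (in practice you could only call the real system, not read it):

    # Minimal sketch of input perturbation. The "black box" below is a made-up
    # stand-in for an opaque loan model; flip or nudge one field at a time and
    # watch which changes move the decision.
    def black_box(app):
        # Hypothetical opaque scorer; in reality you would only see its outputs.
        score = 0.02 * app["income"] + 0.5 * (app["gender"] == "male") - 0.01 * app["debt"]
        return "approve" if score > 1.0 else "deny"

    applicant = {"income": 48, "debt": 20, "gender": "female"}   # income and debt in $1,000s

    perturbations = {
        "income": lambda a: {**a, "income": a["income"] * 1.1},
        "debt":   lambda a: {**a, "debt": a["debt"] * 0.5},
        "gender": lambda a: {**a, "gender": "male" if a["gender"] == "female" else "female"},
    }

    baseline = black_box(applicant)
    print("baseline decision:", baseline)
    for field, perturb in perturbations.items():
        decision = black_box(perturb(applicant))
        flag = "CHANGES decision" if decision != baseline else "no change"
        print(f"perturbing {field:6s} -> {decision:7s} ({flag})")

In this toy case, only the gender flip changes the outcome, revealing a dependence that the system’s own explanation might never mention.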

That could provide a reasonable explanation of how some otherwise mysterious decision-making systems work. But a group of researchers at Harvard University led by Himabindu Lakkaraju have developed a decision-making system that detects such probing and adjusts its output accordingly. When it is being tested, the system remains on its best behavior, ignoring off-limits factors like race or gender. At other times, it reverts to its inherently biased approach. Sophie Hilgard, one of the authors on that study, likens the use of such a scheme, which is so far just a theoretical concern, to what Volkswagen actually did to detect when a car was undergoing emissions tests, temporarily adjusting the engine parameters to make the exhaust cleaner than it would normally be.

Another way of explaining a judgment is to output a simple decision tree: a list of if-then rules. The tree doesn’t summarize the whole algorithm, though; instead it includes only the factors used to make the one decision in question. In 2019, Erwan Le Merrer and Gilles Trédan at the French National Center for Scientific Research described a method that constructs these trees in a deceptive way, so that they could explain a credit rating in seemingly objective terms, while hiding the system’s reliance on the applicant’s gender, age, and immigration status.

Whether any of these deceptions have or ever will be deployed is an open question. Perhaps some degree of deception is already common, as in the case for the algorithms that explain how advertisements are targeted. Schneider of the University of Liechtenstein says that the deceptions in place now might not be so flagrant, “just a little bit misguiding.” What’s more, he points out, current laws requiring explanations aren’t hard to satisfy. “If you need to provide an explanation, no one tells you what it should look like.”

Despite the possibility of trickery in XAI, Duke’s Rudin takes a hard line on what to do about the potential problem: She argues that we shouldn’t depend on any decision-making system that requires an explanation. Instead of explainable AI, she advocates for interpretable AI—algorithms that are inherently transparent. “People really like their black boxes,” she says. “For every data set I’ve ever seen, you could get an interpretable [system] that was as accurate as the black box.” Explanations, meanwhile, she says, can induce more trust than is warranted: “You’re saying, ‘Oh, I can use this black box because I can explain it. Therefore, it’s okay. It’s safe to use.’ ”

What about the notion that transparency makes these systems easier to game? Rudin doesn’t buy it. If you can game them, they’re just poor systems, she asserts. With product ratings, for example, you want transparency. When ratings algorithms are left opaque, because of their complexity or a need for secrecy, everyone suffers: “Manufacturers try to design a good car, but they don’t know what good quality means,” she says. And the ability to keep intellectual property private isn’t required for AI to advance, at least for high-stakes applications, she adds. A few companies might lose interest if forced to be transparent with their algorithms, but there’d be no shortage of others to fill the void.

Lipton, of Carnegie Mellon, disagrees with Rudin. He says that deep neural networks—the blackest of black boxes—are still required for optimal performance on many tasks, especially those used for image and voice recognition. So the need for XAI is here to stay. But he says that the possibility of deceptive XAI points to a larger problem: Explanations can be misleading even when they are not manipulated.

Ultimately, human beings have to evaluate the tools they use. If an algorithm highlights factors that we would ourselves consider during decision-making, we might judge its criteria to be acceptable, even if we didn’t gain additional insight and even if the explanation doesn’t tell the whole story. There’s no single theoretical or practical way to measure the quality of an explanation. “That sort of conceptual murkiness provides a real opportunity to mislead,” Lipton says, even if we humans are just misleading ourselves.

In some cases, any attempt at interpretation may be futile. The hope that we’ll understand what some complex AI system is doing reflects anthropomorphism, Lipton argues, whereas these systems should really be considered alien intelligences—or at least abstruse mathematical functions—whose inner workings are inherently beyond our grasp. Ask how a system thinks, and “there are only wrong answers,” he says.

And yet explanations are valuable for debugging and enforcing fairness, even if they’re incomplete or misleading. To borrow an aphorism sometimes used to describe statistical models: All explanations are wrong—including simple ones explaining how AI black boxes work—but some are useful.

This article appears in the February 2021 print issue as “Lyin’ AIs.”

Why Aren’t COVID Tracing Apps More Widely Used?

Post Syndicated from Michelle Hampson original https://spectrum.ieee.org/tech-talk/computing/software/why-arent-covid-tracing-apps-more-widely-used

As the COVID-19 pandemic began to sweep around the globe in early 2020, many governments quickly mobilized to launch contact tracing apps to track the spread of the virus. If enough people downloaded and used the apps, it would be much easier to identify people who had potentially been exposed. In theory, contact tracing apps could play a critical role in stemming the pandemic.

In reality, adoption of contact tracing apps by citizens was largely sporadic and unenthusiastic. A trio of researchers in Australia decided to explore why contact tracing apps weren’t more widely adopted. Their results, published on 23 December in IEEE Software, emphasize the importance of social factors such as trust and transparency.

Muneera Bano is a senior lecturer of software engineering at Deakin University, in Melbourne. Bano and her co-authors study human aspects of technology adoption. “Coming from a socio-technical research background, we were intrigued initially to study the contact tracing apps when the Australian Government launched the CovidSafe app in April 2020,” explains Bano. “There was a clear resistance from many citizens in Australia in downloading the app, citing concerns regarding trust and privacy.”

To better understand the satisfaction—or dissatisfaction—of app users, the researchers analyzed data from the Apple and Google app stores. At first, they looked at average star ratings and numbers of downloads, and they conducted a sentiment analysis of app reviews.

However, just because a person downloads an app doesn’t guarantee that they will use it. What’s more, Bano’s team found that sentiment scores—which are often indicative of an app’s popularity, success, and adoption—were not an effective means of capturing the success of COVID-19 contact tracing apps.

“We started to dig deeper into the reviews to analyze the voices of users for particular requirements of these apps,” says Bano. “More or less all the apps had issues related to the Bluetooth functionality, battery consumption, reliability and usefulness during pandemic.”

For example, apps that relied on Bluetooth for tracing had issues related to range, proximity, signal strength, and connectivity. A significant number of users also expressed frustration over battery drainage. Some efforts have been made to address this issue; for example, Singapore launched an updated version of its TraceTogether app that allows it to operate with Bluetooth while running in the background, with the goal of improving battery life.
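To see why those proximity estimates are so touchy, consider a back-of-the-envelope sketch of how apps typically infer distance from Bluetooth received signal strength (RSSI) with a log-distance path-loss model. The constants here are illustrative assumptions, not values taken from any particular app.

    # Rough sketch of why Bluetooth-based proximity is hard: distance is inferred
    # from received signal strength (RSSI) via a log-distance path-loss model, and
    # small changes in the assumed environment swing the estimate widely.
    # The constants below are illustrative assumptions, not values from a real app.
    def estimated_distance_m(rssi_dbm, tx_power_dbm=-59, path_loss_exponent=2.0):
        """Log-distance path-loss model: rssi = tx_power - 10 * n * log10(d)."""
        return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exponent))

    for rssi in (-60, -70, -80):
        for n in (1.8, 2.0, 3.0):   # open air vs. cluttered indoor environments
            print(f"RSSI {rssi} dBm, n={n}: ~{estimated_distance_m(rssi, path_loss_exponent=n):.1f} m")

A different path-loss exponent, or a few decibels of attenuation from a pocket or a body, can shift the estimated distance by several meters, which is part of why signal-strength-based exposure detection proved unreliable.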

But technical issues were just one reason for the lack of adoption. Bano emphasizes that “the major issues around the apps were social in nature, [related to] trust, transparency, security, and privacy.”

In particular, the researchers found that resistance to downloading and using the apps was high in countries with a voluntary adoption model and a low level of trust in their governments, such as Australia, the United Kingdom, and Germany.

“We observed slight improvement only in the case of Germany because the government made sincere efforts to increase trust. This was achieved by increasing transparency during ‘Corona-Warn-App’ development by making it open source from the outset and by involving a number of reputable organizations,” says Bano. “However, even as the German officials were referring to their contact tracing app as the ‘best app’ in the world, Germany was struggling to avoid the second wave of COVID-19 at the time we were analyzing the data, in October 2020.”

In some cases, even when governments and app developers took measures to improve trust and address privacy issues, people were hesitant to adopt the apps. For example, a Canadian contact tracing app called COVID Alert is open source, requires no identifiable information from users, and deletes all data after 14 days. Nevertheless, a survey of Canadians found that two-thirds would not download any contact tracing app because they still considered such apps “too invasive.” (The survey covered tracing apps in general, and was not specific to the COVID Alert app.)

Bano plans to continue studying how politics and culture influence the adoption of these apps in different countries around the world. She and her colleagues are interested in exploring how contact tracing apps can be made more inclusive for diverse groups of users in multi-cultural countries.

New and Hardened Quantum Crypto System Notches “Milestone” Open-Air Test

Post Syndicated from Jeremy Hsu original https://spectrum.ieee.org/tech-talk/computing/hardware/quantum-crypto-mdi-qkd-satellites-security

Quantum cryptography remains an ostensibly impervious technology, and yet new hacks and potential vulnerabilities continue to be uncovered—perhaps ultimately calling its alleged unhackability into question.

Yet now comes word of a successful Chinese open-air test of a new-generation quantum crypto system that allows for untrustworthy receiving stations.

The previous iteration of quantum cryptography did not have this feature. Then again, the previous iteration did have successful ground and satellite experiments, establishing quantum links to transmit secret messages over hundreds of kilometers.

The new, more secure quantum crypto system is, perhaps not surprisingly, much more challenging to implement.

It creates its secret cryptographic keys based on the quantum interference of single photons—light particles—that are made to be indistinguishable despite being generated by independent lasers. As long as sender and receiver use trusted laser sources, they can securely communicate with each other regardless of whether they trust the detectors performing the measurements of the photons’ quantum interference.

Because the cryptographic key can be securely transmitted even in the case of a potentially hacked receiver, the new method is called measurement-device independent quantum key distribution (MDI-QKD).

“It is not far-fetched to say that MDI-QKD could be the de facto [quantum cryptography] protocol in future quantum networks, be it in terrestrial networks or across satellite communications,” says Charles Ci Wen Lim, an assistant professor in electrical and computer engineering and principal investigator at the Centre for Quantum Technologies at the National University of Singapore; he was not involved in the recent experiment.

In their unprecedented demonstration, Chinese researchers figured out how to overcome many of the experimental challenges of implementing MDI-QKD in the open atmosphere. Their paper detailing the experiment was published last month in the journal Physical Review Letters.

Such an experimental system bodes well for future demonstrations involving quantum links between ground stations and experimental quantum communication satellites such as China’s Micius, says Qiang Zhang, a professor of physics at the University of Science and Technology of China and an author of the recent paper.

The experiment demonstrated quantum interference between photons even in the face of atmospheric turbulence. Such turbulence is typically stronger across horizontal rather than vertical distances. The fact that the present experiment traverses horizontal distances bodes well for future ground-to-satellite systems. And the 19.2-kilometer distance involved in the demonstration already exceeds that of the thickest part of the Earth’s atmosphere.

To cross so much open air, the Chinese researchers developed an adaptive optics system similar to the technology that helps prevent atmospheric disturbances from interfering with astronomers’ telescope observations.

Even MDI-QKD is not 100 percent secure—it remains vulnerable to hacking based on attackers compromising the quantum key-generating lasers. Still, the MDI-QKD security scheme offers, Zhang claims, “near perfect information theoretical security.” It’s entirely secure, in other words, in theory. 

The remaining security vulnerabilities on the photon source side can be “pretty well taken care of by solid countermeasures,” Lim says. He and his colleagues at the National University of Singapore described one possible countermeasure in the form of a “quantum optical fuse” that can limit the input power of untrusted photon sources. Their paper was recently accepted for presentation during the QCRYPT 2020 conference.

All in, Lim says, the Chinese team’s “experiment demonstrated that adaptive optics will be essential in ensuring that MDI-QKD works properly over urban free-space channels, and it represents an important step towards deploying MDI-QKD over satellite channels.” From his outside perspective, he described the Chinese team’s work as a “milestone experiment for MDI-QKD.”

Get a method to separate common-mode and differential-mode noise using two oscilloscope channels.

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/whitepaper/get-a-method-to-separate-commonmode-and-differentialmode-separation-using-two-oscilloscope-channels



Learn how to distinguish between common-mode (CM) and differential-mode (DM) noise. This additional information about the dominant mode provides the capability to optimize input filters very efficiently.
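As a quick illustration of the underlying math (standard definitions, not the whitepaper’s specific method): if the two single-ended signals are captured on two scope channels, the differential-mode component is their difference and the common-mode component is their average.

    # Minimal sketch using the standard CM/DM definitions, not the whitepaper's method.
    # v1 and v2 are the two single-ended signals captured on two scope channels.
    import numpy as np

    t = np.linspace(0, 1e-3, 10_000)
    v1 = 0.5 * np.sin(2 * np.pi * 10e3 * t) + 0.1 * np.sin(2 * np.pi * 150e3 * t)
    v2 = -0.5 * np.sin(2 * np.pi * 10e3 * t) + 0.1 * np.sin(2 * np.pi * 150e3 * t)

    v_dm = v1 - v2          # differential-mode component (the wanted signal here)
    v_cm = (v1 + v2) / 2    # common-mode component (the shared 150-kHz "noise")

    print(f"peak DM: {np.max(np.abs(v_dm)):.2f} V, peak CM: {np.max(np.abs(v_cm)):.2f} V")

Knowing which of the two components dominates is what lets you pick the right filter topology for the job.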

Superconducting Microprocessors? Turns Out They’re Ultra-Efficient

Post Syndicated from Michelle Hampson original https://spectrum.ieee.org/tech-talk/computing/hardware/new-superconductor-microprocessor-yields-a-substantial-boost-in-efficiency

Computers use a staggering amount of energy today. According to one recent estimate, data centers alone consume two percent of the world’s electricity, a figure that’s expected to climb to eight percent by the end of the decade. To buck that trend, though, perhaps the microprocessor, at the center of the computer universe, could be streamlined in entirely new ways. 

One group of researchers in Japan has taken this idea to the limit, creating a superconducting microprocessor—one with zero electrical resistance. The new device, the first of its kind, is described in a study published last month in the IEEE Journal of Solid-State Circuits.

Superconductor microprocessors could offer a solution for more energy-efficient computing—were it not for the fact that, at present, these designs require ultra-cold temperatures below 10 kelvins (about -263 degrees Celsius). The research group in Japan sought to create a superconductor microprocessor that’s adiabatic, meaning that, in principle, energy is not gained or lost from the system during the computing process.

While adiabatic semiconductor microprocessors exist, the new microprocessor prototype, called MANA (Monolithic Adiabatic iNtegration Architecture), is the world’s first adiabatic superconductor microprocessor. It’s composed of superconducting niobium and relies on hardware components called adiabatic quantum-flux-parametrons (AQFPs). Each AQFP is composed of a few fast-acting Josephson junction switches, which require very little energy to support superconductor electronics. The MANA microprocessor consists of more than 20,000 Josephson junctions (or more than 10,000 AQFPs) in total.

Christopher Ayala is an Associate Professor at the Institute of Advanced Sciences at Yokohama National University, in Japan, who helped develop the new microprocessor. “The AQFPs used to build the microprocessor have been optimized to operate adiabatically such that the energy drawn from the power supply can be recovered under relatively low clock frequencies up to around 10 GHz,” he explains. “This is low compared to the hundreds of gigahertz typically found in conventional superconductor electronics.”

This doesn’t mean that the group’s current-generation device hits 10 GHz speeds, however. In a press statement, Ayala added, “We also show on a separate chip that the data processing part of the microprocessor can operate up to a clock frequency of 2.5 GHz making this on par with today’s computing technologies. We even expect this to increase to 5-10 GHz as we make improvements in our design methodology and our experimental setup.” 

The price of entry for the niobium-based microprocessor is of course the cryogenics and the energy cost for cooling the system down to superconducting temperatures.  

“But even when taking this cooling overhead into account,” says Ayala, “the AQFP is still about 80 times more energy-efficient when compared to the state-of-the-art semiconductor electronic device, [such as] 7-nm FinFET, available today.”

Since the MANA microprocessor requires liquid helium-level temperatures, it’s better suited for large-scale computing infrastructures like data centers and supercomputers, where cryogenic cooling systems could be used.

The technology still has hurdles to clear before it can be scaled up, Ayala acknowledges. “Most of these hurdles—namely area efficiency and improvement of latency and power clock networks—are research areas we have been heavily investigating, and we already have promising directions to pursue,” he says.

Deep Learning at the Speed of Light

Post Syndicated from David Schneider original https://spectrum.ieee.org/computing/software/deep-learning-at-the-speed-of-light

In 2011, Marc Andreessen, general partner of venture capital firm Andreessen Horowitz, wrote an influential article in The Wall Street Journal titled “Why Software Is Eating the World.” A decade later, it’s deep learning that’s eating the world.

Deep learning, which is to say artificial neural networks with many hidden layers, is regularly stunning us with solutions to real-world problems. And it is doing that in more and more realms, including natural-language processing, fraud detection, image recognition, and autonomous driving. Indeed, these neural networks are getting better by the day.

But these advances come at an enormous price in the computing resources and energy they consume. So it’s no wonder that engineers and computer scientists are making huge efforts to figure out ways to train and run deep neural networks more efficiently.

An ambitious new strategy that’s coming to the fore this year is to perform many of the required mathematical calculations using photons rather than electrons. In particular, one company, Lightmatter, will late this year begin marketing a neural-network accelerator chip that calculates with light. It will be a refinement of the prototype Mars chip that the company showed off at the virtual Hot Chips conference last August.

While the development of a commercial optical accelerator for deep learning is a remarkable accomplishment, the general idea of computing with light is not new. Engineers regularly resorted to this tactic in the 1960s and ’70s, when electronic digital computers were too feeble to perform the complex calculations needed to process synthetic-aperture radar data. So they processed the data in the analog domain, using light.

Because of the subsequent Moore’s Law gains in what could be done with digital electronics, optical computing never really caught on, despite the ascendancy of light as a vehicle for data communications. But all that may be about to change: Moore’s Law may be nearing an end, just as the computing demands of deep learning are exploding.

There aren’t many ways to deal with this problem. Deep-learning researchers may develop more efficient algorithms, sure, but it’s hard to imagine those gains will be sufficient. “I challenge you to lock a bunch of theorists in a room and have them come up with a better algorithm every 18 months,” says Nicholas Harris, CEO of Lightmatter. That’s why he and his colleagues are bent on “developing a new compute technology that doesn’t rely on the transistor.”

So what then does it rely on?

The fundamental component in Lightmatter’s chip is a Mach-Zehnder interferometer. This optical device was jointly invented by Ludwig Mach and Ludwig Zehnder in the 1890s. But only recently have such optical devices been miniaturized to the point where large numbers of them can be integrated onto a chip and used to perform the matrix multiplications involved in neural-network calculations.
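For a feel of the arithmetic, here is a small numerical sketch. It uses one common textbook convention for the device, not Lightmatter’s actual design: a single Mach-Zehnder interferometer built from two 50:50 beam splitters and two phase shifters implements a programmable 2 x 2 unitary transform on its optical inputs.

    # Illustrative sketch only (a common textbook convention, not Lightmatter's design):
    # a Mach-Zehnder interferometer = phase shifter, 50:50 splitter, phase shifter, splitter.
    # Varying the two phases (theta, phi) programs a 2x2 unitary acting on the optical inputs.
    import numpy as np

    BS = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)      # 50:50 beam splitter

    def phase(p):
        return np.diag([np.exp(1j * p), 1.0])           # phase shifter on the top arm

    def mzi(theta, phi):
        return BS @ phase(theta) @ BS @ phase(phi)      # one Mach-Zehnder interferometer

    U = mzi(theta=0.7, phi=1.3)
    print(np.allclose(U.conj().T @ U, np.eye(2)))       # True: the transform is unitary

    x = np.array([0.6, 0.8])                            # amplitudes on the two input waveguides
    print(U @ x)                                        # a 2x2 matrix-vector product, done with light

Cascading many such 2 x 2 blocks, with the phases set from a network’s trained weights, is the basic idea behind letting a photonic mesh carry out a full matrix-vector product as the light propagates through it.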

Keren Bergman, a professor of electrical engineering and the director of the Lightwave Research Laboratory at Columbia University, in New York City, explains that these feats have become possible only in the last few years because of the maturing of the manufacturing ecosystem for integrated photonics, needed to make photonic chips for communications. “What you would do on a bench 30 years ago, now they can put it all on a chip,” she says.

Processing analog signals carried by light slashes energy costs and boosts the speed of calculations, but the precision can’t match what’s possible in the digital domain. “We have an 8-bit-equivalent system,” says Harris. This limits his company’s chip to neural-network inference calculations—the ones that are carried out after the network has been trained. Harris and his colleagues hope their technology might one day be applied to training neural networks, too, but training demands more precision than their optical processor can now provide.

Lightmatter is not alone in the quest to harness light for neural-network calculations. Other startups working along these lines include Fathom Computing, LightIntelligence, LightOn, Luminous, and Optalysis. One of these, Luminous, hopes to apply optical computing to spiking neural networks, which take advantage of the way the neurons of the brain process information—perhaps accounting for why the human brain can do the remarkable things it does using just a dozen or so watts.

Luminous expects to develop practical systems sometime between 2022 and 2025. So we’ll have to wait a few years yet to see what pans out with its approach. But many are excited about the prospects, including Bill Gates, one of the company’s high-profile investors.

It’s clear, though, that the computing resources being dedicated to artificial-intelligence systems can’t keep growing at the current rate, doubling every three to four months. Engineers are now keen to harness integrated photonics to address this challenge with a new class of computing machines that are dramatically different from conventional electronic chips yet are now practical to manufacture. Bergman boasts: “We have the ability to make devices that in the past could only be imagined.”

This article appears in the January 2021 print issue as “Deep Learning at the Speed of Light.”

Designing PCBs 5x Faster Despite the Pandemic

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/webinar/designing-pcbs-5x-faster-despite-the-pandemic


Join our webinar on January 20th to learn how Kinetic Vision uses Altium’s platform to enable a connected and frictionless PCB design experience, increasing their productivity fivefold even in the midst of COVID-19.

For over 30 years, Kinetic Vision has provided technology solutions to over 50 of the top Fortune 500 companies in the world. Hear about how their embedded development team needed a better design and collaboration solution to satisfy the increasing needs of their demanding clients and how Altium helped solve their toughest issues.

Altium Designer is their tool of choice when it comes to designing PCBs, as it provides the most connected PCB design experience – removing the common points of friction that occur throughout a typical design flow. With the Covid situation, using Altium’s platform became even more essential as it enabled seamless remote working and has actually increased their levels of productivity to 5 times their pre-Covid rate.

Join Jeremy Jarrett, Executive Vice President at Kinetic Vision, and Michael Weston, Team Lead Engineer at Kinetic Vision, as we discuss what exactly sets Altium apart from the rest of the PCB design solutions out there today – and why it is an essential tool for staying ahead of the game.

Register today!
January 20th | 10:00 AM PST | 1:00 PM EST

Virtual event: How AWS Marketplace innovators enhance security for a remote workforce

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/webinar/virtual-event-how-aws-marketplace-innovators-enhance-security-for-a-remote-workforce


Dispersed workforces require changes in security parameters and requirements for connecting business-critical resources. In this virtual event, remote workforce security thought leaders, strategists, and technologists will discuss key innovations enabling AWS customers to transform their security for a remote and hybrid workforce.

IBM Makes Encryption Paradox Practical

Post Syndicated from Dan Garisto original https://spectrum.ieee.org/tech-talk/computing/software/ibm-makes-cryptographic-paradox-practical

How do you access the contents of a safe without ever opening its lock or otherwise getting inside? This riddle may seem confounding, but its digital equivalent is now so solvable that it’s becoming a business plan. 

IBM is the latest innovator to tackle the well-studied cryptographic technique called fully homomorphic encryption (FHE), which allows for the processing of encrypted files without ever needing to decrypt them first. Earlier this month, in fact, Big Blue introduced an online demo for companies to try out with their own confidential data. IBM’s FHE protocol is inefficient, but it’s still workable enough to give users a chance to take it for a spin.

Today’s public cloud services, for all their popularity, nevertheless typically present a tacit tradeoff between security and utility. To secure data, it must stay encrypted; to process data, it must first be decrypted. Even something as simple as a search function has required data owners to relinquish security to providers whom they may not trust.

Yet with a workable and reasonably efficient FHE system, even the most heavily encrypted data can still be securely processed. A customer could, for instance, upload their encrypted genetic data to a website, have their genealogy matched and sent back to them—all without the company ever knowing anything about their DNA or family tree. 

At the beginning of 2020, IBM reported the results of a test with a Brazilian bank, which showed that FHE could be used for a task as complex as machine learning. Using transaction data from Banco Bradesco, IBM trained two models—one with FHE and one with unencrypted data—to make predictions such as when customers would need loans.

Even though the data was encrypted, the FHE scheme made predictions with accuracy equal to the unencrypted model. Other companies, such as Microsoft and Google, have also invested in the technology and developed open-source toolkits that allow users to try out FHE. These software libraries, however, are difficult to implement for anyone but a cryptographer, a problem IBM hopes to remedy with its new service.

“This announcement right now is really about making that first level very consumable for the people [who] are maybe not quite as crypto-savvy,” said Michael Osborne, a security researcher at IBM.

One of the problems with bringing FHE to market is that it must be tailor-made for each situation. What works for Banco Bradesco can’t necessarily be transferred seamlessly over to Bank of America, for example.

“It’s not a generic service,” said Christiane Peters, a senior cryptographic researcher at IBM. “You have to package it up. And that’s where we hope from the clients that they guide us a little bit.”

It is not clear whether IBM’s scheme for FHE is any better than that of its competitors. However, by offering a service to clients, the company may have gotten the lead on tackling some of the first practical implementations of the technology, which has been in development for years.

Since the 1970s, cryptographers had considered what it would mean to process encrypted data, but no one was sure whether such an encryption scheme could exist even in theory. In 2009, Craig Gentry, then a Stanford graduate student, proved FHE was possible in his PhD dissertation.

Over the past decade, algorithmic advances have improved the efficiency of FHE by a factor of about a billion. The technique is still anywhere from 100 to a million times slower than traditional data processing—depending on the data and the processing task. But in some cases, Osborne says, FHE could still be attractive.

One way to understand a key principle behind FHE is to consider how an adversary might break it. Suppose Alice wants to put her grocery list on the cloud, but she’s concerned about her privacy. If Alice encrypts items on her list by shifting each letter one position forward, she can encode APPLES as BQQMFT. This is easily broken, so Alice adds noise in the form of a random letter: APPLES instead becomes BQQZMFT. That makes it much harder for an attacker to guess the grocery items, because they have to account for the noise. Alice must strike a balance: too much noise and operations take too long; too little noise and the list is unsecured. Gentry’s 2009 breakthrough was to introduce a specific, manageable amount of noise.
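To make Alice’s example concrete, here is a minimal Python sketch of the shift-plus-noise idea. It is a cartoon of the noise concept rather than real encryption, and because the extra letter and its position are chosen at random, the output will not necessarily match the BQQZMFT above; the point is simply that only someone who knows where the noise went can strip it out and decode the list.

```python
import random
import string

ALPHABET = string.ascii_uppercase

def shift(word, offset=1):
    """Alice's naive cipher: move every letter forward by `offset` positions."""
    return "".join(ALPHABET[(ALPHABET.index(c) + offset) % 26] for c in word)

def add_noise(ciphertext, rng):
    """Insert one random letter at a random position, as in the APPLES example."""
    pos = rng.randrange(len(ciphertext) + 1)
    return ciphertext[:pos] + rng.choice(ALPHABET) + ciphertext[pos:], pos

rng = random.Random(7)           # fixed seed so the run is repeatable
encoded = shift("APPLES")        # -> "BQQMFT"
noisy, noise_pos = add_noise(encoded, rng)

# Only Alice knows where the noise landed, so only she can remove it and shift back.
stripped = noisy[:noise_pos] + noisy[noise_pos + 1:]
recovered = shift(stripped, offset=-1)
print(encoded, noisy, recovered)  # BQQMFT, the noisy ciphertext, APPLES
```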

While FHE may be of interest to many individual consumers interested in data privacy, its early corporate adopters are mainly limited to the finance and healthcare sectors, according to Peters. 

FHE’s applications may be increasing with time, though. In a data-rich, privacy-poor world, it’s not hard to recognize the appeal of a novel technology that lets people have their secret cake and eat it too.

3D-printing hollow structures with “xolography”

Post Syndicated from Charles Q. Choi original https://spectrum.ieee.org/tech-talk/computing/hardware/xolography-printing

By shining light beams in liquid resin, a new 3-D printing technique dubbed “xolography” can generate complex hollow structures, including simple machines with moving parts, a new study finds.

“I like to imagine that it’s like the replicator from Star Trek,” says study co-author Martin Regehly, an experimental physicist at the Brandenburg University of Applied Science in Germany. “As you see the light sheet moving, you can see something created from nothing.”

Revisiting 2020’s Most Popular Blog Posts

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/computing/software/revisiting-2020s-most-popular-blog-posts

Spectrum first began publishing an online edition in 1996. And in the quarter century since, our website has tried to serve IEEE members as well as the larger worldwide base of tech-savvy readers across the Internet. In 2020, four of Spectrum’s top 10 blog posts were about COVID-19; another four were about robots. (One was about both.) Two discussed programming languages, another popular item on our site. Here we revisit five of those favorite postings from the past year, updating readers on new developments, among them promising COVID-19 tests and therapeutics, no-code programming, and an incredibly versatile robotic third leg. All of which, if the tremendous challenges of the past year offer any guidance, could be a useful survival kit for enduring whatever 2021 has in store.

  • COVID-19 Study: Quell the “Bradykinin Storm”

    Precisely how the novel coronavirus causes COVID-19 may still be a mystery. But one year into the pandemic, it’s no longer quite a mystery wrapped inside an enigma. This was the upshot of a landmark coronavirus study from July conducted by a team of American scientists using the Summit supercomputer at the Oak Ridge National Laboratory, in Tennessee. Their genetic-data mining paper, published in the journal eLife, concluded that one lesser-studied biomolecule arguably lies at the heart of how the SARS-CoV-2 virus causes COVID-19.

    Bradykinin is a peptide that regulates blood pressure and causes blood vessels to become permeable. The Oak Ridge study concluded that the novel coronavirus effectively hacks the body’s bradykinin system, leading to a sort of molecular landslide. In so many words, a “bradykinin storm” overdilates blood vessels in the lungs, leading to fluid buildup, congestion, and difficulty breathing. And because an overabundance of bradykinin can trigger heart, kidney, neurological, and circulatory problems, the bradykinin hypothesis may lead to yet more coronavirus treatments.

    Daniel Jacobson, Oak Ridge chief scientist for computational systems biology, says his team’s eLife study has been partly vindicated in the months since publication. Their paper highlighted a dozen compounds they said could be effective for some COVID-19 patients. Three of those drugs in particular have since proved, in early clinical trials, to show significant promise: Icatibant (a bradykinin blocker), calcifediol (a vitamin D analogue that targets a pathway related to bradykinin), and dexamethasone (a steroid that blocks signaling from bradykinin receptors).

    “Our focus is on getting the work out in ways that are going to help people,” Jacobson says. “We’re excited about these other data points that keep confirming the model.”

    The above is an update to a blog post (2020’s second most popular) that originally appeared on 2 August at spectrum.ieee.org/covidcode-aug2020

  • Third Leg Lends a Hand

    Need an extra hand? How about an extra foot? Roboticists in Canada, from the Université de Sherbrooke, in Quebec, have been developing supernumerary robotic limbs that are designed to explore what humans can do with three arms, or even three legs. The robotic limbs are similar in weight to human limbs, and are strong and fast thanks to magnetorheological clutches that feed pressurized water through a hydrostatic transmission system. This system, coupled to a power source inside a backpack, keeps the limb’s inertia low while also providing high torque.

    Mounted at a user’s hips, a supernumerary robotic arm can do things like hold tools, pick apples, play badminton, and even smash through a wall, all while under the remote control of a nearby human. The supernumerary robotic leg is more autonomous, able to assist with several different human gaits at a brisk walk and add as much as 84 watts of power. The leg could also be used to assist with balance, acting as a sort of hands-free cane. It can even move quickly enough to prevent a fall—far more quickly than a biological leg. Adding a second robotic leg opposite the first suggests even more possibilities, including a human-robot quadruped gait, which would be a completely new kind of motion.

    Eventually, the researchers hope to generalize these extra robotic limbs so that a single limb could function as either an arm, a leg, or perhaps even a tail, depending on what you need it to do. Their latest work was presented in October at the 2020 International Conference on Intelligent Robots and Systems (IROS), cosponsored by IEEE and the Robotics Society of Japan.

    The above is an update to a blog post (2020’s fifth most popular) that originally appeared on 4 June at spectrum.ieee.org/thirdarm-jun2020

  • The Hello Robot Arm Offers a Leg Up

    Last summer was a challenging time to launch a new robotics company. But Hello Robot, which announced its new mobile manipulator this past July, has been working hard to provide its robot (called Stretch) to everyone who wants one. Over the last six months, Hello Robot, based in Martinez, Calif., has shipped dozens of the US $17,950 robots to customers, an even mix of academia and industry.

    One of these early adopters of Stretch is Microsoft, which used the robot as part of a company-wide hackathon last summer. A Microsoft developer, Sidh, has cerebral palsy, and while Sidh has no trouble writing code with his toes, there are some everyday tasks—like getting a drink of water—that he regularly needs help with. Sidh started a hackathon team with Microsoft employees and interns to solve this problem with Stretch. Although most of the team knew very little about robotics, over just three days of remote work they were able to program Stretch to operate semiautonomously through voice control. Now Stretch can manipulate objects (including cups of water) at Sidh’s request. It’s still just a prototype, but Microsoft has already made the code open source, so that others can benefit from the work. Sidh is still working with Stretch to teach it to be even more useful.

    In the past, Hello Robot cofounder Charlie Kemp’s robot of choice has been a $400,000, 227-kilogram robot called PR2. Stretch offers many of the same mobile manipulation capabilities. But its friendly size and much lower cost mean that people who before might not have considered buying a robot are now giving Stretch a serious look.

    The above is an update to a blog post (2020’s sixth most popular) that originally appeared on 14 July at spectrum.ieee.org/hellorobot-jul2020

  • At-Home COVID-19 Test Hits Snags

    When last we heard from the maverick biotech entrepreneur Jonathan Rothberg, he’d just invented a rapid diagnostic test for COVID-19 that was as accurate as today’s best lab tests but easy enough for regular people to use in their own homes. Rothberg had pivoted one of his companies, the synthetic biology startup Homodeus, to develop a home test kit. During the first months of the pandemic, he worked with academic and clinical collaborators to test his team’s designs. In March, he optimistically projected a ready date of “weeks to months.” By late August, when The New Yorker published an article about his crash project, he spoke of getting the tests “out there by Thanksgiving.”

    Unfortunately, the so-called Detect kits haven’t yet made it to doctors’ offices or drugstore shelves. As of press time, Rothberg hoped to receive emergency use authorization from the U.S. Food and Drug Administration in late December, which would enable Homodeus to distribute the kits to health professionals. The kit could then be approved for consumers early in 2021.

    The Homodeus team got slowed down by their insistence on simplicity and scalability, Rothberg tells IEEE Spectrum. As they finalized the prototype, they also secured their supply chains. Once they receive FDA approval they’ll be able to “deliver upwards of 10 million tests per month,” Rothberg says.

    The above is an update to a blog post (2020’s eighth most popular) that originally appeared on 13 March at spectrum.ieee.org/covidtest-mar2020

  • Toward a World Without Code

    No-code development—building software without writing code—gained momentum in 2020 as a result of the COVID-19 pandemic. Governments and organizations needed swift action for a fast-moving crisis. They turned to no-code platforms to rapidly develop and deploy essential software, including a COVID-19 management hub that allowed New York City and Washington, D.C., to deliver critical services to residents; a loan-processing system for a bank so it could receive Paycheck Protection Program applications from small businesses; and a workforce safety solution to aid the return of employees to their workplaces.

    Tech companies capitalized on this trend too. In June 2020, Amazon Web Services released its no-code tool, Honeycode, in beta. A month later, Microsoft launched Project Oakdale, a built-in low-code data platform for Microsoft Teams. With Project Oakdale, users can create custom data tables, apps, and bots within the chat and videoconferencing platform using Power Apps, Microsoft’s no-code software.

    The no-code movement is also reaching the frontiers of artificial intelligence. Popular no-code machine-learning platforms include Apple’s Create ML, Google’s AutoML, Obviously AI, and Teachable Machine. These platforms make it easier for those with little to no coding expertise to train and deploy machine-learning models, as well as quickly categorize, extract, and analyze data.

    No-code development is set to go mainstream over the coming years, with the market research company Forrester predicting the emergence of hybrid teams of business users and software developers building apps together using no-code platforms. As the trends noted above take root in both the public and private sectors, there is little doubt today that—to modify an old programmer’s maxim—the future increasingly will be written in no-code.

    The above is an update to a blog post (2020’s most popular) that originally appeared on 11 March at spectrum.ieee.org/nocode-mar2020

This article appears in the January 2021 print issue as “2020’s Most Popular Blog Posts.”

Quantum Computers Will Speed Up the Internet’s Most Important Algorithm

Post Syndicated from Jeremy Hsu original https://spectrum.ieee.org/computing/software/quantum-computers-will-speed-up-the-internets-most-important-algorithm

The fast Fourier transform (FFT) is the unsung digital workhorse of modern life. It’s a clever mathematical shortcut that makes it possible to process the many signals in our device-connected world. Every minute of every video stream, for instance, entails computing some hundreds of FFTs. The FFT’s importance to practically every data-processing application in the digital age explains why some researchers have begun exploring how quantum computing can run the FFT algorithm more efficiently still.
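For readers who have never run one, here is what a classical FFT looks like in practice, sketched with NumPy; the test signal and tone frequencies are my own choices, not tied to any particular streaming codec. A thousand time samples go in, their frequency content comes out after O(n log n) operations, and the two test tones fall straight out of the spectrum.

```python
import numpy as np

# A 1-second test signal sampled at 1,000 Hz: a 50 Hz tone plus a quieter 120 Hz tone.
fs = 1000
t = np.arange(fs) / fs
signal = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

# The real-input FFT converts the 1,000 time samples into 501 frequency bins.
spectrum = np.fft.rfft(signal)
freqs = np.fft.rfftfreq(len(signal), d=1 / fs)

# The two strongest bins sit at 50 Hz and 120 Hz, recovering the tones.
peaks = freqs[np.argsort(np.abs(spectrum))[-2:]]
print(sorted(peaks))  # [50.0, 120.0]
```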

“The fast Fourier transform is an important algorithm that’s had lots of applications in the classical world,” says Ian Walmsley, a physicist at Imperial College London. “It also has many applications in the quantum domain. [So] it’s important to figure out effective ways to be able to implement it.”

The first proposed killer app for quantum computers—finding a number’s prime factors—was discovered by mathematician Peter Shor at AT&T Bell Laboratories in 1994. Shor’s algorithm factors numbers more efficiently, and scales more gracefully as the numbers grow, than any classical computer anyone could ever design. And at the heart of Shor’s phenomenal quantum engine is a subroutine called—you guessed it—the quantum Fourier transform (QFT).

Here is where the terminology gets a little out of hand. There is the QFT at the center of Shor’s algorithm, and then there is the QFFT—the quantum fast Fourier transform. They represent different computations that produce different results, although both are based on the same core mathematical concept, known as the discrete Fourier transform.
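For the curious, the QFT at the heart of Shor’s algorithm has a compact, well-known circuit: Hadamard gates interleaved with controlled phase rotations, followed by a reversal of the qubit order. The sketch below builds that textbook circuit with Qiskit; the toolkit is my choice for illustration, not one used by the researchers discussed here, and this is the plain QFT, not the Tokyo group’s QFFT.

```python
from math import pi
from qiskit import QuantumCircuit

def qft(n_qubits: int) -> QuantumCircuit:
    """Textbook quantum Fourier transform on n qubits."""
    qc = QuantumCircuit(n_qubits, name="QFT")
    for target in range(n_qubits):
        qc.h(target)  # Hadamard on the current qubit
        # Controlled phase rotations of pi/2, pi/4, ... from the remaining qubits.
        for k, control in enumerate(range(target + 1, n_qubits), start=1):
            qc.cp(pi / 2**k, control, target)
    # Reverse the qubit order to match the usual output convention.
    for i in range(n_qubits // 2):
        qc.swap(i, n_qubits - 1 - i)
    return qc

print(qft(3).draw())
```

On n qubits this circuit needs on the order of n² gates, whereas a classical FFT on 2ⁿ samples needs on the order of n·2ⁿ operations; that gap is why the QFT is such a powerful subroutine when the data is already encoded in quantum amplitudes.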

The QFT is poised to find technological applications first, though neither appears destined to become the new FFT. Instead, QFT and QFFT seem more likely to power a new generation of quantum applications.

The quantum circuit for QFFT is just one part of a much bigger puzzle that, once complete, will lay the foundation for future quantum algorithms, according to researchers at the Tokyo University of Science. The QFFT algorithm would process a single stream of data at the same speed as a classical FFT. However, the QFFT’s strength comes not from processing a single stream of data on its own but from handling multiple data streams at once. The quantum property that makes this possible, called superposition, allows a single group of quantum bits (qubits) to encode multiple states of information simultaneously. So, by representing multiple streams of data at once, the QFFT appears poised to deliver faster and more power-efficient information processing.

The Tokyo researchers’ quantum-circuit design uses qubits efficiently without producing so-called garbage bits, which can interfere with quantum computations. One of their next big steps involves developing quantum random-access memory for preprocessing large amounts of data. They laid out their QFFT blueprints in a recent issue of the journal Quantum Information Processing.

“QFFT and our arithmetic operations in the paper demonstrate their power only when used as subroutines in combination with other parts,” says Ryo Asaka, a physics graduate student at Tokyo University of Science and lead author on the study.

Greg Kuperberg, a mathematician at the University of California, Davis, says the Japanese group’s work provides a scaffolding for future quantum algorithms. However, he adds, “it’s not destined by itself to be a magical solution to anything. It’s trundling out the equipment for somebody else’s magic show.”

It is also unclear how well the proposed QFFT would perform when running on a quantum computer under real-world constraints, says Imperial’s Walmsley. But he suggested it might benefit from running on one kind of quantum computer versus another (for example, a magneto-optical trap versus nitrogen vacancies in diamond) and could eventually become a specialized coprocessor in a quantum-classical hybrid computing system.

University of Warsaw physicist Magdalena Stobińska, a main coordinator for the European Commission’s AppQInfo project—which will train young researchers in quantum information processing starting in 2021—notes that one of the project’s main topics involves developing new quantum algorithms such as the QFFT.

“The real value of this work lies in proposing a different data encoding for computing the [FFT] on quantum hardware,” she says, “and showing that such out-of-box thinking can lead to new classes of quantum algorithms.”

This article appears in the January 2021 print issue as “A Quantum Speedup for the Fast Fourier Transform.”

IBM Makes Tape Storage Better Than Ever

Post Syndicated from Dexter Johnson original https://spectrum.ieee.org/nanoclast/computing/hardware/tape-is-back-and-better-than-ever

Introduced by strains of the Strauss waltz that served as the soundtrack for “2001: A Space Odyssey,” IBM demonstrated a new world record in magnetic tape storage capabilities in a live presentation this week from its labs in Zurich, Switzerland.

A small group of journalists looked on virtually as IBM scientists showed off a 29-fold increase in the storage capacity of its current data-tape cartridge, from 20 terabytes (TB) to 580 TB. That’s roughly 32 times the capacity of LTO-Ultrium (Linear Tape-Open, version 9), the latest industry standard in magnetic tape products.

While these figures may sound quite impressive, some may wonder whether this story might have mistakenly come from a time capsule buried in the 1970s. But the fact is tape is back, by necessity.

Magnetic tape storage has been undergoing a renaissance in recent years, according to Mark Lantz, manager of CloudFPGA and tape technologies at IBM Zurich. This resurgence, Lantz argues, has been driven by the convergence of two trends: exponential data growth and a simultaneous slowdown in the areal-density growth of hard-disk drives (HDDs).

In fact, HDD areal density has been improving at a compound annual rate of under 8% over the last several years, according to Lantz. Meanwhile, the world’s data keeps growing and is expected to hit 175 zettabytes by 2025, representing a 61% annual growth rate.

This lack of HDD scaling has resulted in the price per gigabyte of HDD rising dramatically. Estimates put HDD bytes at four times the cost of tape bytes. This creates a troublesome imbalance at an extremely inopportune moment: just as the amount of data being produced is increasing exponentially, data centers can’t afford to store it.

Fortunately, a large portion of the data being stored is what’s termed “cold,” meaning it hasn’t been accessed in a long time and is not needed frequently. This type of data can tolerate higher retrieval latencies, making magnetic tape well suited for the job.

Magnetic tape is also inherently more secure from cybercrime, requires less energy, provides long-term durability, and has a lower cost per gigabyte than HDD. Because of these factors, IBM estimates that more than 345,000 exabytes (EB) of data already resides in tape storage systems. In the midst of these market realities for data storage, IBM believes that its record-setting demonstration will enable tape to meet its scaling roadmap for the next decade.

This new record caps a 15-year journey that IBM has undertaken with Fujifilm to continuously push the capabilities of tape technology. After setting six new records since 2006, IBM and Fujifilm achieved this latest big leap by improving three main areas of the technology: the tape medium itself; a new tape head paired with a read detector adapted from hard-disk drives; and servo-mechanical technologies that ensure the tape tracks precisely.

For the new tape medium, Fujifilm set aside the current industry standard of barium ferrite particles and incorporated smaller strontium ferrite particles in a new tape coating, allowing for higher density storage on the same amount of tape.

With Fujifilm’s strontium ferrite particulate magnetic tape in hand, IBM developed a new low-friction tape head technology that could work with the very smooth surfaces of the new tape. IBM also used an ultranarrow, 29-nanometer-wide tunnel-magnetoresistance (TMR) read sensor that enables reliable detection of data written on the strontium ferrite media at a linear density of 702 kilobits per inch.
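To put that linear density in perspective, a quick back-of-the-envelope conversion (my own arithmetic, not a figure from IBM) shows how little tape each recorded bit occupies along the track, which is the same tens-of-nanometers scale as the 29-nm read sensor:

```python
# One inch is 25.4 mm, or 25,400,000 nanometers.
INCH_IN_NM = 25_400_000
bits_per_inch = 702_000          # the reported linear density of 702 kilobits per inch

bit_length_nm = INCH_IN_NM / bits_per_inch
print(f"{bit_length_nm:.1f} nm of tape per bit")  # roughly 36.2 nm
```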

IBM also developed a family of new servo-mechanical technologies for the system. This suite of technologies measures the position of the tape head on the tape and adjusts that position so the data is written in the correct location; transducers then scan the center of the tracks during read-back. Taken together, these servo technologies achieved head positioning with a world-record accuracy of 3.2 nanometers, even as the tape streamed over the read head at a speed of about 15 km/h.

Alberto Pace, head of data storage for the European Organization for Nuclear Research (CERN), put the development in context: “Only 20 years ago, all the data produced by the old Large Electron-Positron (LEP) collider had to be held in the big data center. Today all that data from the old LEP fits into a cabinet in my office. I expect that in less than 20 years we will have all the data from the Large Hadron Collider that now resides in our current data center fitting into a small cabinet in my office.”

Augmented Reality and the Surveillance Society

Post Syndicated from Mark Pesce original https://spectrum.ieee.org/computing/hardware/augmented-reality-and-the-surveillance-society

First articulated in a 1965 white paper by Ivan Sutherland, titled “The Ultimate Display,” augmented reality (AR) lay beyond our technical capacities for 50 years. That changed when smartphones began providing people with a combination of cheap sensors, powerful processors, and high-bandwidth networking—the trifecta needed for AR to generate its spatial illusions. Among today’s emerging technologies, AR stands out as particularly demanding—for computational power, for sensed data, and, I’d argue, for attention to the danger it poses.

Unlike virtual-reality (VR) gear, which creates a completely synthetic experience for the user, AR gear adds to the user’s perception of her environment. To do that effectively, AR systems need to know where in space the user is located. VR systems originally relied on expensive and fragile outside-in tracking, which often required external sensors to be set up in the room. The new generation of headsets instead uses a set of techniques collectively known as simultaneous localization and mapping (SLAM). These systems harvest a rich stream of observational data—mostly from cameras affixed to the user’s headgear, but sometimes also from sonar, lidar, structured light, and time-of-flight sensors—using those measurements to update a continuously evolving model of the user’s spatial environment.

For safety’s sake, VR systems must be restricted to certain tightly constrained areas, lest someone blinded by VR goggles tumble down a staircase. AR doesn’t hide the real world, though, so people can use it anywhere. That’s important because the purpose of AR is to add helpful (or perhaps just entertaining) digital illusions to the user’s perceptions. But AR has a second, less appreciated, facet: It also functions as a sophisticated mobile surveillance system.

This second quality is what makes Facebook’s recent Project Aria experiment so unnerving. Nearly four years ago, Mark Zuckerberg announced Facebook’s goal to create AR “spectacles”—consumer-grade devices that could one day rival the smartphone in utility and ubiquity. That’s a substantial technical ask, so Facebook’s research team has taken an incremental approach. Project Aria packs the sensors necessary for SLAM within a form factor that resembles a pair of sunglasses. Wearers collect copious amounts of data, which is fed back to Facebook for analysis. This information will presumably help the company to refine the design of an eventual Facebook AR product.

The concern here is obvious: When it comes to market in a few years, these glasses will transform their users into data-gathering minions for Facebook. Tens, then hundreds of millions of these AR spectacles will be mapping the contours of the world, along with all of its people, pets, possessions, and peccadilloes. The prospect of such intensive surveillance at planetary scale poses some tough questions about who will be doing all this watching and why.

To work well, AR must look through our eyes, see the world as we do, and record what it sees. There seems no way to avoid this hard reality of augmented reality. So we need to ask ourselves whether we’d really welcome such pervasive monitoring, why we should trust AR providers not to misuse the information they collect, or how they can earn our trust. Sadly, there’s not been a lot of consideration of such questions in our rush to embrace technology’s next big thing. But it still remains within our power to decide when we might allow such surveillance—and to permit it only when necessary.

This article appears in the January 2021 print issue as “AR’s Prying Eyes.”

Full-Wave EM Simulations: Electrically Large Antenna Placement and RCS Scenarios

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/whitepaper/fullwave-em-simulations-electrically-large-antenna-placement-and-rcs-scenarios

Handling various complex simulation scenarios with a single simulation method is a rather challenging task for any software suite.

We will show you how our software, based on Method-of-Moments, can analyze several scenarios including complicated and electrically large models (for instance, antenna placement and RCS) using desktop workstations. 

Systematic Cybersecurity Threat Analysis and Risk Assessment

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/webinar/systematic-cybersecurity-threat-analysis-and-risk-assessment

Cybersecurity

The digital revolution has given transportation companies and consumers a host of new features and benefits, but it has also introduced significant complexity and unprecedented connectivity that leave vehicles vulnerable to cyberattack. To safeguard these systems, engineers must examine a dizzying array of components and interconnected subsystems to identify any vulnerabilities. Traditional workflows and outdated tools will not be enough to ensure products meet the upcoming ISO 21434 and other applicable cybersecurity standards. Achieving a level of cybersecurity high enough to protect supply chains and personal vehicles requires high-quality threat analysis.

Ansys medini for Cybersecurity helps secure in-vehicle systems and substantially improves time to market for critical security-related functions. Addressing the increasing market needs for systematic analysis and assessment of security threats to cyber-physical systems, medini for Cybersecurity starts early in the system design. Armed with this proven tool, engineers will replace outdated processes reliant on error-prone human analysis.

In this webinar, we dynamically demonstrate how systematic cybersecurity analysis enables engineers to mitigate vulnerabilities to hacking and cyberattacks.

  • Learn how to identify assets in the system and their important security attributes.
  • Discover new methods for systematically identifying system vulnerabilities that can be exploited to execute attacks.
  • Understand the consequences of a potentially successful attack.
  • Receive expert tips on how to estimate the potential of an attack.
  • Learn how to associate a risk with each threat.
  • Leverage new tools for avoiding overengineering and underestimation.

Presenter

Mario Winkler, Lead Product Manager

After finishing his studies in computer science at Humboldt University Berlin and working for the Fraunhofer Research Institute, Mario joined the medini team in 2001. Over the past 15 years he has gained expertise in functional safety, especially in the automotive domain, by helping various customers, both OEMs and suppliers, apply medini analyze to their functional safety processes according to ISO 26262. As medini extended its focus to cybersecurity, Mario took on the role of product manager to drive the development of medini analyze in that direction.